Suppress "Failed to save key " warning messages in Log - symfony

Is there any way to suppress these log messages in Symfony 4:
cache.WARNING: Failed to save key "%5B%5BC%5DApp%5CController%5CAgencyController%23about%5D%5B1%5D" '(integer) {"key":"%5B%5BC%5DApp%5CController%5CAgencyController%23about%5D%5B1%5D","type":"integer","exception":"[object] (ErrorException(code: 0): touch(): Utime failed: Operation not permitted at /mnt/c/Users/...../vendor/symfony/cache/Traits/FilesystemCommonTrait.php:95)"} []
There are hundreds of them in the log (Monolog) per request, which is really annoying! I have tried changing permissions to 777, as answers to similar questions suggested, but that has no effect (maybe because I'm on WSL). Also, I do not have APC installed.

Are you sure you are using PHP 7+? It seems like the file you are accessing is on a Windows filesystem; touch() will fail with PHP 5.4 (or 5.3, I don't remember which) on Windows filesystems. Also, try changing the owner of your cache files (not just 777) so they are owned by your webserver user: sudo chown -R user:usergroup directory/
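For a Symfony 4 project that would look roughly like the following (the var/cache path and the www-data user are assumptions; substitute your project directory and webserver user):
sudo chown -R www-data:www-data var/cache/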

Are you using vagrant?
I answered the same here
I had the same problem.
All you need to do is change the type of the synced_folder to nfs, but that option only works with Mac hosts.
To be able to use it in Windows, you need to install vagrant-winnfsd
$ vagrant plugin install vagrant-winnfsd
Then change the type of the synchronisation in your Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/var/www", type: "nfs"
end
The documentation says that it is also needed to change the type of the network to dhcp, but I didn't need to do that to solve my problem.
config.vm.network "private_network", type: "dhcp"
I hope this helped

Related

SELinux and cryptsetup: chown failed and can't access temporary keystore

I am trying to set up SELinux and an encrypted additional partition that I mount at startup using a systemd service.
If I run SELinux in permissive mode, everything runs ok (partition is correctly mounted, data can be accessed and service runs properly).
If I run SELinux in enforcing mode (enforcing=1), I am not able to mount such partition with the error:
/dev/mapper/temporary-cryptsetup-1808: chown failed: Permission denied
sh[1777]: Failed to open temporary keystore device.
sh[1777]: Command failed with code 5: Input/output error
Any ideas to fix that?
Audit2allow does not return any additional rules to be added
Solved by assigning cryptsetup the lvm_exec_t context.
In the lvm.fc file, cryptsetup was defined as /bin/cryptsetup, but I had to change it to /usr/sbin/cryptsetup, where the binary actually was.
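If you prefer not to edit lvm.fc directly, a roughly equivalent approach (my assumption, not part of the original answer) is to add the file context with semanage and relabel the binary:
sudo semanage fcontext -a -t lvm_exec_t '/usr/sbin/cryptsetup'
sudo restorecon -v /usr/sbin/cryptsetup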

Using ROBOCOPY through Salt Master

I have SLS files set up to copy things from a network folder to a local directory on a minion.
Looks a little like this:
cmd-test:
  cmd.run:
    - name: 'ROBOCOPY \\\CygwinSource C:\CygwinSource /E'
and get the following output:
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Tuesday, December 6, 2016 10:50:35 AM
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Getting File System Type of Source \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Source - \\<Server>\<program>\<folder>\
Dest : C:\<path>\<folder>\
Files : *.*
Options : *.* /S /E /DCOPY:DA /COPY:DATS /PURGE /MIR /NP /R:1 /W:1
------------------------------------------------------------------------------
NOTE : NTFS Security may not be copied - Source may not be NTFS.
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Accessing Source Directory \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Waiting 1 seconds... Retrying...
When I ran the same thing locally on the command line ('ROBOCOPY \\\CygwinSource C:\CygwinSource /E'), it worked perfectly. I have no idea how to fix the 'use local user account' error that Robocopy gives when run through Salt.
I also tried adding /MIR and /SEC, which didn't work.
Running Windows 10, Minion 2016.3.3
Master: Red Hat, 2016.3.3
Salt seems to be connecting to the network resource with a computer account. A few possible solutions:
Try changing the Salt Service on the Client (if that's how salt is executing the commands) to run as a domain user.
Try using the salt file server (see the sketch after this list)
Implement this hacky workaround where a scheduled task is created - discussed in the github issue that seems related to your problem: https://github.com/saltstack/salt/issues/16340
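A minimal sketch of the file-server option as a state, assuming the CygwinSource directory has been placed in the master's file roots (the state id and paths mirror the question but are otherwise assumptions):
copy-cygwin-source:
  file.recurse:
    - name: 'C:\CygwinSource'
    - source: salt://CygwinSource
Because the files are then served by the Salt master itself, the minion never has to authenticate against the network share with its computer account.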

mount.nfs: requested NFS version or transport protocol is not supported

NFS Mount is not working in my RHEL 7 AWS instance.
When I do a
mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/
I get the error:
mount.nfs: requested NFS version or transport protocol is not supported
Can anyone point me where I am wrong?
Thanks.
Check whether the NFS service is started, or restart it.
sudo systemctl status nfs-kernel-server
In my case this service was not running, and the issue was in the /etc/exports file, where I had the same IP address for two machines (sketched below).
So I commented out one of the entries and restarted nfs-kernel-server using
sudo systemctl restart nfs-kernel-server and then rebooted the machine.
It worked.
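To illustrate the duplicate-entry problem, a sketch of /etc/exports (the paths and the client address are placeholders, not taken from the original answer):
/srv/share1 192.168.1.10(rw,sync,no_subtree_check)
# /srv/share2 192.168.1.10(rw,sync,no_subtree_check)  <- same client IP listed twice; commenting one line out fixed it
Then restart nfs-kernel-server as above.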
A point which might be useful for the clueless (like me): systemctl status nfs-server.service and systemctl start nfs-server.service must be executed on the server!
Some additional information:
If, like me, you've deleted a VM without shutting it down properly, you might also need to manually edit /etc/exports, because NFS tries to connect to the missing export, fails, and doesn't continue with the next one; it just dies.
After that you can manually restart as mentioned in other answers.
In my case, a simple reload didn't suffice. I had to perform a full restart:
sudo systemctl restart nfs-kernel-server
In my case, it didn't work correctly with NFS version 4.1.
So in the Vagrantfile, in each place where there is type: 'nfs', I added a comma and nfs_version: 4, nfs_udp: false (shown below).
Here is a more detailed explanation: NFS
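Applied to the synced_folder line from the earlier answer, the change looks roughly like this (nfs_version and nfs_udp are standard Vagrant NFS options; the paths are from that answer):
config.vm.synced_folder ".", "/var/www", type: "nfs", nfs_version: 4, nfs_udp: false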
If you're giving a specific protocol to connect with, also check to make sure your NFS server has that protocol enabled.
I got this error when trying to start up a Vagrant box, and my nfs server was running. It turns out that the command Vagrant uses is:
mount -o vers=3,udp,rw,actimeo=1 192.168.56.1:/dir/on/host /vagrant
Which specifically asks for UDP. My server was running but it was not configured to enable connecting over UDP. After consulting /etc/nfs.conf, I created /etc/nfs.conf.d/10-enable-udp.conf with the following contents to enable udp:
[nfsd]
udp=y
The name of the file doesn't matter, as long as it's in the conf.d directory and ends in .conf. Depending on your distribution it may be configured differently. You can directly edit nfs.conf, but using a conf.d file is more likely to preserve the changes after upgrading your system.
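After adding the file, restart the NFS server so it picks up the change (the unit name varies by distribution; nfs-server is the common systemd name, so treat it as an assumption):
sudo systemctl restart nfs-server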
Try to ping the IP address of the server from the client; if you get a reply, then install the NFS server on the host. Then edit the /etc/exports file, and don't forget to add the port along with the IP address.
I got the solution: make an entry in the NFS server's /etc/nfsmount.conf with Defaultvers=3.
There will be a commented line # Defaultvers=3; just uncomment it (as shown below) and then mount from the NFS client.
The issue will be resolved!
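In other words, in /etc/nfsmount.conf on the server the existing commented line
# Defaultvers=3
becomes
Defaultvers=3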

mount: nfs access denied by server

I am trying to mount an NFS share on my Linux machine.
My /etc/fstab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS in my NAS device.
When I run mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/ I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the nfs server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so worth doing before looking for other problems. Note that, if you do have to change any entries then the nfs-server has to be stopped and re-started, as it reads the hosts file only when it is started.
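For example, on the NFS server you would check that the client's line in /etc/hosts is correct (the hostname and address below are placeholders):
192.168.1.1    client.example.lan    client
and then restart the NFS server (e.g. sudo systemctl restart nfs-server) so it re-reads the hosts file.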
Is there a config file on the NAS where you can put allowances for clients? E.g. on Debian-based OSes the config file is "/etc/exports"; you would put there "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" and activate it with "exportfs -a" (your NAS may do this automatically if you update the config via a web interface, I guess). Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might restart nfs config and nfs service on the NFS server as well as run export again.
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted to access a physical partition from the hard drive in the VM, so I mounted the physical drive on the host and exported it. I was not able to mount it on the guest, continually getting an access denied error.
The solution, after many hours, was to add the no_all_squash option in the exports file. This is supposed to be the default, but I needed to add it explicitly. As soon as I did that, the problem went away and I could mount the file system. Unfortunately, I could not see the files on the fs.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the client I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the file system.
I saw this error presumably due to an older NFS client and adding -o nfsvers=3 fixed the issue for me e.g. mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/

Devstack - Changing IP address after installation

I have devstack installed on Ubuntu 12.04 and I could log into the Dashboard. Then I changed the IP of my Ubuntu machine, and after changing the IP I couldn't log into the Dashboard anymore.
I get the following error message, and I can see my old IP in it.
ConnectionError at /auth/login/
HTTPConnectionPool(host='OLD_IP_ADDRESS', port=35357): Max retries exceeded with url: /v2.0/tokens (Caused by <class 'socket.error'>: [Errno 113] No route to host)
Request Method: POST
Request URL: http://NEW_IP_ADDRESS/auth/login/
Django Version: 1.4.5
Exception Type: ConnectionError
Exception Value:
HTTPConnectionPool(host='OLD_IP_ADDRESS', port=35357): Max retries exceeded with url: /v2.0/tokens (Caused by <class 'socket.error'>: [Errno 113] No route to host)
Exception Location: /usr/local/lib/python2.7/dist-packages/requests/adapters.py in send, line 246
Python Executable: /usr/bin/python
Python Version: 2.7.3
Python Path:
['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
'/opt/stack/python-keystoneclient',
'/usr/local/lib/python2.7/dist-packages',
'/opt/stack/python-glanceclient/setuptools_git-1.0b1-py2.7.egg',
'/opt/stack/python-glanceclient',
'/opt/stack/python-cinderclient',
Is there a documented procedure available to change the IP address manually?
My new IP doesn't have a connection to the internet, so I wouldn't be able to redeploy devstack.
Thanks guys for your answers.
I forgot to update my answer; I fixed that issue in an easy way.
The solution is to first run unstack.sh and then run stack.sh once more. It will do the necessary fixes. Since I hadn't made much progress with devstack after installation, I was comfortable re-running stack.sh.
The second time you run stack.sh it doesn't need to connect to the internet, so my issue is fixed.
Feel free to share your thoughts on this.
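In commands, the sequence is roughly the following (assuming the devstack checkout is in /opt/stack/devstack; adjust the path to wherever you cloned it):
cd /opt/stack/devstack
./unstack.sh
./stack.sh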
You will need to change the IP address hard-coded in OpenStack configuration files generated by devstack. They are stored in /etc/ and elsewhere.
http://xmodulo.com/2013/04/how-to-change-ip-address-after-openstack-installation-via-devstack.html
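A quick way to find the files that still contain the old address is a recursive grep (OLD_IP_ADDRESS is a placeholder for your previous IP, and the directories are the usual devstack locations, so treat them as assumptions):
grep -rl 'OLD_IP_ADDRESS' /etc/ /opt/stack/ 2>/dev/null
Then edit each match and restart the affected services.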
Here are a few steps I've taken to get back online.
Back up the answers file:
cp packstack-answers-20130417.txt packstack-answers.txt-SAVE
Replace the IP addresses:
sed -i 's/10\.10\.248\.11/10\.32\.70\.10/g' packstack-answers-20130417.txt
Delete the cinder loopback device; the installer fails if it exists:
losetup -d /dev/loop0
List what's left mounted via the loop.
losetup -a
rm /var/lib/cinder/cinder-volumes
Now rerun the deploy scripts
packstack --answer-file=packstack-answers-20130417.txt
Fix up other IP addressing concerns with nova-manage in the CLI.
Should work from here.
