Placing log files in multiple directories (Asterisk)

I have a quick question: we want to write the Asterisk logs to multiple directories. Is this possible with an Asterisk server?
We were thinking about something like this (our example):
astlogdir => /var/log/asterisk, /var/log/remote-asterisk
Right now it is this:
astlogdir => /var/log/asterisk
Does anybody know whether this is possible, and what the right way to do it would be?
Thank you in advance.

I use a symlink. The logger sees the link as a file, and the file system is responsible for writing the data to the linked location instead. The link is not a copy of the log, but just a pointer to another path on the file system.
Go to your logging folder:
cd /var/log/asterisk/
Make a link to a new file in /var/log/asterisk-remote (adjust the target path to your needs).
sudo ln -s /var/log/asterisk-remote/other-log /var/log/asterisk/link-log
View the link:
ls -l
lrwxrwxrwx 1 root root 34 Oct 2 16:12 link-log -> /var/log/asterisk-remote/other-log
Then in /etc/asterisk/logger.conf just add the link's name (with whatever types of log messages you want it to receive):
[logfiles]
messages => notice,warning,error
link-log => notice,warning,error
Be sure to reload your config to apply the changes (this is done in the Asterisk CLI, which you can access via asterisk -r in the shell):
core reload

Yes, it is possible, but you would have to open logger.c and write the code yourself.
As shipped, Asterisk does not do this; apparently nobody has needed it.
As a second option, you can always use a symlink.

Related

Symbolic link is getting a permission error

I'm trying to create a symbolic link to a file owned by another user.
My file is located at /home/serviceA/logs/a.txt, and I want to create a symbolic link to it at /home/centos/logs/a.txt.
Here is the command I ran as the root user:
ln -s /home/serviceA/logs/a.txt /home/centos/logs/a.txt
The filename shows up in red, and I still get the permission denied error.
The link itself looks correct:
lrwxrwxrwx 1 root root 47 Feb 12 01:49 /home/centos/logs/a.txt -> /home/serviceA/logs/a.txt
Eventually, I want to forward the /home/centos/logs/a.txt log file to Splunk.
Why am I getting the permission error after creating the symbolic link, and how do I fix it? (chmod 777 didn't help.)
Unfortunately, that isn't how symlinks work on Linux systems. You can't create a symlink to a file, then change the permissions on the symlink and have it change the permissions of the actual file. Think of the security issues with this approach!
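A quick way to see where the access check actually fails is to walk each component of the target path; namei ships with util-linux, so it should be available here. A missing execute (x) bit on any parent directory, such as /home/serviceA, causes permission denied even when the file itself is readable:
namei -l /home/serviceA/logs/a.txt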
If you want Splunk to be able to monitor /home/serviceA/logs/a.txt, you will need to either:
change the file to be world readable (chmod a+r /home/serviceA/logs/a.txt), OR
add splunk (assuming Splunk is running as user splunk) to the group that owns the file, and make the file group readable (chmod g+r /home/serviceA/logs/a.txt; see the sketch after this list), OR
run Splunk as root, BUT THIS IS VERY BAD, DO NOT DO THIS IN PRODUCTION, ONLY DO THIS FOR TESTING, AND EVEN THEN, IT'S VERY BAD
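A minimal sketch of the group-based option, assuming the file's owning group is serviceA and Splunk runs as user splunk (both names are assumptions; substitute your own):
# add the splunk user to the file's owning group (assumed: serviceA)
sudo usermod -aG serviceA splunk
# make the file group-readable and its parent directories traversable
sudo chmod g+r /home/serviceA/logs/a.txt
sudo chmod g+x /home/serviceA /home/serviceA/logs
Restart Splunk afterwards so the new group membership takes effect.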

How to open another file from within Vim as a different user (sudoedit | vim)

So I've recently learned about sudoedit and how I can edit a file more safely than with the standard "sudo vim".
The problem is that when I'm in Vim and use :vsplit or :tabnew, I open the new file as my own user account (no root privileges).
sudoedit launches a separate instance of Vim because it has to manage the lifecycle of the editing session, i.e. write back the edited temporary file with root privileges. It cannot achieve that from a running Vim session.
However, there are plugins that achieve sudoedit-like functionality, for example the aptly named SudoEdit.
Maybe you just want an option to save a file as sudo.
You can find mappings for writing a file as sudo, or use tpope's eunuch plugin.
It gives you the :SudoWrite and :SudoEdit commands and a couple more.
vim-eunuch
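For reference, a commonly used standalone mapping that writes the current buffer through sudo tee, no plugin required; it goes in ~/.vimrc, after which typing :w!! writes the current file with root privileges:
cmap w!! w !sudo tee % > /dev/null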

Testing plugins live with Varying Vagrant Vagrants

I'm currently trying to use VVV to develop and test my plugins. My host OS is Win10.
My plugins are in D:\Workshop\projects\vendor\module. I've used this folder structure for a long time, and it is really convenient, especially for use with Composer and friends.
Now I've installed VVV and created a site with VV. I want to test a plugin whose source code is in D:\Workshop\projects\XedinUnknown\my-project. So I create a symlink in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\htdocs\wp-content\plugins that points to that project's folder. Alas, it doesn't work. If I SSH into VVV and ls /srv/www/my-test-site/htdocs/wp-content/plugins, I can see my-project there, but it points to ../../../../../../../XedinUnknown/my-project, which, of course, doesn't exist. If instead of a symlink I create a junction, it's just an empty file.
I suspect that this has to do with how the Linux environment handles Windows symlinks, but I'm not entirely sure. Is it possible to make this work somehow? I really don't wanna copy the whole project folder into VVV.
This is also addressed here.
So, it would seem I've found somewhat of a solution. I added a synced folder that maps to my projects home, then created a symlink to that folder from the WP plugins directory, inside the VM.
Step 1 - Add Shared Folder
This should be done in a Customfile as explained here. This file should go into the same directory as the Vagrantfile, e.g. it will become the Vagrantfile's sibling. In my case, if you're following along from my question, it is in D:\Workshop\projects\XedinUnknown\vvv-local. Anything put here becomes global for the whole of VVV. This also gives you the ability to use different combinations of your projects in different websites. Add these contents to your Customfile, creating it if it does not exist.
config.vm.synced_folder "D:/Workshop/projects", "/srv/projects", :owner => "www-data", :mount_options => [ "dmode=775", "fmode=774" ]
Of course, you should replace D:/Workshop/projects with the path to where you store your projects. Note the forward slashes (/). This works on Win/Nix. For a Windows-only configuration, I suspect you'd have to replace them with \\, because this is an escape sequence.
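To apply the Customfile change and verify the share from the host, one possible check (standard Vagrant commands; the /srv/projects path is from this example):
vagrant reload --provision
vagrant ssh -c "ls -ld /srv/projects"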
Step 2 - Add Link to Project
This should be done in your site's vvv-init.sh file. In my case, this file was in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\, because I want to create this symlink specifically for the my-test-site site. Please note that your VVV path will probably be different, and it doesn't have to be inside the projects directory. It's wherever you cloned VVV into. Add the below lines to your site's vvv-init.sh file.
# Create the symlink only if it does not already exist. Test with -L
# rather than -f: the link points at a directory, so the regular-file
# test -f would never succeed and ln would be re-run (and fail) on
# every provision.
if [ ! -L "htdocs/wp-content/plugins/my-project" ]; then
    echo 'Creating symlink to plugin project...'
    cd ./htdocs/wp-content/plugins
    ln -s /srv/projects/XedinUnknown/my-project my-project
    cd -
fi
In the above snippet, change the path to your desired project path, keeping in mind that /srv/projects/ now maps live to the projects root in your host OS. You can also replace the second occurrence (the last word) of my-project in ln -s /srv/projects/XedinUnknown/my-project my-project with whatever you want. As long as you don't change it later, your plugin should not suddenly get deactivated.
Also, from what I understood, vvv-init.sh runs during provisioning, not every time the machine is brought up. So if you want to run the code in there, you have to run vagrant up --provision from the VVV directory. If you don't want to re-provision, you can run it manually: SSH into VVV with vagrant ssh, then cd /srv/www/my-test-site (replace my-test-site with the name of your site) and run . vvv-init.sh.
Afterword
I am quite new to Bash scripting, and I don't know if my solution is the best one, so please feel free to suggest better versions of the Bash script. I also don't know Ruby, and am new to Vagrant, so please feel free to suggest improvements to the Customfile - this is in essence the same as the Vagrantfile.
One possible issue that I can anticipate with this solution (and this is inherently by design of the filesystem architecture) is that if WordPress decides to make changes to your plugin, e.g. if you run a WP update, it will effectively delete all files in your project, including the repository. So, on the testing site I would recommend using something like this. I am in no way associated with this plugin.

How to redirect rsyslog messages to some other path instead of /var/log

I am using the rsyslog facility for logging. Everything works fine; I am able to log messages to the path /var/log/MYlog.log.
But now my requirement is to log messages to some other path, like /opt/log/Somepath.log, instead of /var/log.
I tried modifying the path in the /etc/rsyslog.conf file, but it only works if I give a log path under /var/log/. Nothing else seems to work. I want the log path to be configurable, e.g. /opt/log/somePath.log.
I have an entry like this in the file and it works fine:
local6.* /var/log/Mylog.log
Now if I change it like this:
local6.* /opt/log/Mylog.log
it does not generate the Mylog.log file in /opt/log. The directory /opt/log is present.
After modifying the configuration file /etc/rsyslog.conf, I restart the daemon:
/etc/init.d/rsyslog restart
There should be no permission or security issue, since /var/log and /opt/log have the same permissions (I changed the permissions of /opt/log to match /var/log).
I am using CentOS 6.3. It is my local VM, and NFS is not involved.
Is there any way or trick so that I can achieve this?
The problem is SELinux. SELinux prevents processes labeled syslogd_t from writing to files that are (probably) labeled default_t, so we need to give the file a label that syslogd_t can write to. Files in /var/log are labeled var_log_t, a type syslogd_t can certainly write to.
You can achieve this temporarily by changing the label of the /opt/log directory:
chcon -R -t var_log_t /opt/log
You can check the modified labeling using:
ls -Z /opt/log
which will give output something like this:
drwxrwxrwx. root root unconfined_u:object_r:var_log_t:s0 log
After this you will be able to redirect syslog to other directories. For a permanent solution you need to write an SELinux policy, or adjust the file-context mapping as shown below.
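A permanent alternative to writing a full policy is to record the label in the SELinux file-context database, assuming the semanage tool is installed (package policycoreutils-python on CentOS 6):
semanage fcontext -a -t var_log_t "/opt/log(/.*)?"
restorecon -Rv /opt/log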

rsync error: failed to set times on "/foo/bar": Operation not permitted

I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will avoid it trying to set modification times on directories.
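For example (hypothetical paths and host):
rsync -av --omit-dir-times /src/dir/ user@nfs-host:/foo/bar/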
The issue is probably that /foo/bar is not owned by the writing process on a remote Darwin (OS X) system.
A solution to the issue is to set adequate owner on the remote site.
Since this answer has been voted, and therefore has been hopefully useful to someone, I'm extending it to make it clearer.
The reason why this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
To do this, Darwin's utime() system function requires that the writing process's effective uid either matches the file's uid or is the superuser's; see the Open Group's utime page.
Check this discussion on rsync mailing list as reference.
As @racl101 commented on an answer, this problem might be related to the folder's owner. The rsync command should be run as the same user that owns the folder. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
I had the same problem. For me the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the receiving mount point was incorrectly mounted: it was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options to defaults, re-mounted the file system, and ran rsync again. All fine then.
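A quick way to check whether a destination is mounted read-only (findmnt is part of util-linux; replace /foo/bar with your destination path):
findmnt -T /foo/bar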
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps it was SELinux-related?
I came across this problem as well; in my case it was a permissions issue with the root folder that contained the files I was trying to send over. I don't care about that root folder being included with rsync, I just care about what's in it. The error came from my command, where I needed to specify an additional / at the end. Without that trailing slash, rsync will attempt to set times on the folder itself.
Example:
This will attempt to set times on html
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up if you run the rsync process while files are still being modified in the source or destination, because rsync can't set the time on files that are changing underneath it.
I ran into this error trying to fix timestamps on a new MacOS Monterey, after the Migration Assistant decided to set all of them to the time the copy operation occurred, instead of the original file's.
anddam's answer did not help me, as the remote user used in the rsync command did match the owner of the directories and files.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error: ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av ". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes will help, but I threw it in too, just for good measure.
