rsyslog not creating log file - syslog

Currently, all my syslog messages are logged to /var/log/syslog, and I want local1.* to go to a separate log file.
My rsyslog.d/50-default.conf file looks as follows:
# Default rules for rsyslog.
#
# For more information see rsyslog.conf(5) and /etc/rsyslog.conf
#
# First some standard log files. Log by facility.
#
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
#daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
#lpr.* -/var/log/lpr.log
mail.* -/var/log/mail.log
#user.* -/var/log/user.log
local1.* -/mnt/log/*/*/python/syslog-analytics.log
But the syslog-analytics.log file is not getting created in the desired location.
Later, I also created the file manually with syslog:adm as the user:group, but all my logs still go to /var/log/syslog.

I had the same problem - rsyslogd was creating files under /var/log/ and /tmp/, but refused to create files in /data/log/ or to log into those files when I created them manually. File system permissions were never an issue; everything was world-writable. This almost drove me mad until I found out that I was on an SELinux-enforced system (CentOS), where some processes are restricted to certain file system locations.
If you don't care about SELinux, or just want to check whether it is the cause of your problem, simply turn it off by setting SELINUX=disabled in /etc/selinux/config (needs a reboot!).
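If you would rather confirm the cause first, or scope the fix to the log directory instead of disabling SELinux entirely, something like the following should work (a sketch; /data/log is the directory from this answer, and semanage comes from the policycoreutils-python package):
getenforce                                            # "Enforcing" means SELinux is active
setenforce 0                                          # temporarily switch to permissive as a quick test
semanage fcontext -a -t var_log_t "/data/log(/.*)?"   # label the custom log directory as a log location
restorecon -Rv /data/log                              # apply the new context to existing files
If rsyslogd starts writing after setenforce 0, the file contexts (not the file system permissions) were the problem.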

Sparklyr spark_connect error in prepare_windows_environment: FindFileOwnerAndPermission error (1789) [duplicate]

> D:\>echo %HADOOP_HOME%
> D:\Apps\winutils\hadoop-2.7.1
Create the tmp\hive folder on the same disk as HADOOP_HOME:
D:\>dir tmp\hive
Directory of D:\tmp\hive
06/13/2016 01:13 PM <DIR> .
06/13/2016 01:13 PM <DIR> ..
0 File(s) 0 bytes
2 Dir(s) 227,525,246,976 bytes free
Try to figure out what permissions are set:
D:\>winutils.exe ls \tmp\hive
FindFileOwnerAndPermission error (1789): The trust relationship between this workstation and the primary domain failed.
When I try chmod on this folder, it seems to work:
winutils.exe chmod 777 \tmp\hive
but ls shows the same exception.
Does anyone have an idea what is going on? Moreover, it worked for me a couple of hours ago, but now my Spark application fails with the exception:
java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
I am quite late here, but I am still posting this so it might help someone in the future.
While setting the permission, make sure you are using the correct path for winutils.exe (try to use the complete path). For me, winutils.exe was on the C drive:
C:\path\to\winutils.exe chmod -R 777 C:\tmp\hive
Run the below command to check the permission; it should show full permissions for the folder.
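A sketch of the check, reusing the assumed winutils.exe path from above (for a folder with 777 permissions the listing starts with drwxrwxrwx):
C:\path\to\winutils.exe ls C:\tmp\hive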
If this is your corporate system, then you must be on the same network, using VPN, FortiClient, or whatever other tool your organisation uses.
https://support.microsoft.com/en-us/kb/2771040
This looks like a domain access issue; please make sure you can access the domain and try again.
After ensuring domain access, the error below disappears:
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
I'm late here and I just encountered this issue; I'm writing this so it helps somebody.
If you are using your office laptop, make sure you are connected to the office network and retry. The domain in the "Member of Domain" setting points to your office network, so connecting to it should solve the issue.
1. Log on to Windows 10 using a local Administrator account.
2. Hold the Windows logo key and press E to open File Explorer.
3. On the right side of File Explorer, right-click on This PC and choose Properties, then click Advanced System Settings.
4. Choose the Computer Name tab and select Change to see the value configured.
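If you prefer the command line, the configured domain (or workgroup) can also be checked with something like the following (an aside, not part of the original steps):
systeminfo | findstr /B /C:"Domain"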
I am a newbie here, so I might be wrong, but I think you need to add -R to the command, as below:
winutils chmod -R 777 \tmp\hive

the "user" directive makes sense only if the master process runs with super-user privileges

I had to remove a folder on my LOCAL COMPUTER containing my project (because of corrupted git files) and re-clone it with git (successfully).
It runs on MAMP's NGINX server.
Now, when I try to open the project's main page in the browser, I get "HTTP ERROR 500".
/Applications/MAMP/logs/nginx_error.log:
2018/06/08 18:58:29 [warn] 3218#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /Applications/MAMP/conf/nginx/nginx.conf:7
/Applications/MAMP/conf/nginx/nginx.conf:7:
user sergeyfomin staff;
(sergeyfomin is my User name on my Mac)
I guess it has something to do with user privileges I need to reset on my project after git-cloning it?
Would appreciate any help.
The 500 error and the log warning are probably unrelated, since the warning says that it just ignored the directive. You'll probably have to dig elsewhere for the cause.
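If the warning itself is a nuisance, one option is to comment out the directive, since it only applies when the nginx master process runs as root (a sketch based on the nginx.conf path quoted in the question):
# /Applications/MAMP/conf/nginx/nginx.conf, line 7
# Ignored when MAMP starts nginx as your own user, so it can safely be commented out:
#user sergeyfomin staff;
The 500 error itself still needs its own investigation, as noted above.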

Error while uploading DICOM image to dcm4chee

When I try to upload a DICOM (.dcm) image to the PACS using the dcmsnd utility, it generates the error below.
2016-07-21 12:24:46,017 WARN STORESCU->DCM4CHEE (TCPServer-1-1) [org.dcm4chex.archive.mbean.FileSystemMgt2Service] Failed to create directory /var/lib/bahmni/dcm4chee-2.18.1-psql/server/default/archive - try to switch to next configured storage directory
2016-07-21 12:24:46,026 ERROR STORESCU->DCM4CHEE (TCPServer-1-1) [org.dcm4chex.archive.mbean.FileSystemMgt2Service] High Water Mark reached on storage file system FileSystem[pk=1, archive, groupID=ONLINE_STORAGE, aet=DCM4CHEE, ONLINE, RW+, userinfo=null] - no alternative storage file system configured for file system group ONLINE_STORAGE
2016-07-21 12:24:46,027 WARN STORESCU->DCM4CHEE (TCPServer-1-1) [org.dcm4chex.archive.dcm.storescp.StoreScpService] org.dcm4che.net.DcmServiceException
Here is some of the log shown by dcm4chee.
I tried to give permission to the directory, but it still gives me an error.
I have not found any solution for this error; if anyone has one, please share it.
Thanks.
It looks like your current file system is full (the log shows the High Water Mark has been reached for the ONLINE_STORAGE group).
You should try adding another file system; the dcm4chee documentation has additional instructions on doing this through the MBeans-based management interface.
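As a quick first check that the storage volume really has hit its limit, looking at free space on the archive directory from the log above is a reasonable start (a minimal sketch; the path comes from the error message):
df -h /var/lib/bahmni/dcm4chee-2.18.1-psql/server/default/archive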

Lucene index gets broken segments after every restart of liferay-tomcat

I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay on Tomcat. I am fixing the index on one server and restarting it whilst the other is running. This is a production environment, so I cannot bring them both down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
If you access your search index from more than one application server, I would suggest integrating a Solr server, so you don't have the problem of two app servers trying to write to the same files. As you have already found out, that is error-prone.
To get Solr up and running, you have to follow these steps:
1. Install a Solr server on any machine you like. A machine running only Solr would be preferable.
2. Install the Solr search portlet in Liferay.
3. Adjust the config files according to the setup document of the Solr search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr
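For step 3, the usual change is pointing the plugin's Spring configuration at your Solr URL. The file, bean, and class names below are recalled from the 6.x solr-web plugin and may differ in your version, so treat them as assumptions rather than the definitive setup:
<!-- WEB-INF/classes/META-INF/solr-spring.xml inside the deployed solr-web plugin -->
<bean class="com.liferay.portal.search.solr.server.BasicAuthSolrServer"
      id="com.liferay.portal.search.solr.server.BasicAuthSolrServer">
    <!-- Point Liferay at the standalone Solr instance instead of its local Lucene index -->
    <constructor-arg type="java.lang.String" value="http://solr-host:8983/solr" />
</bean>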

PSFTP open for write: failure

I am uploading a file from a collection of different servers to one data server. I am using psftp and one out of 20+ servers is producing a permissions problem.
Remote working directory is /
psftp> cd Remote_Directory\
Remote directory is now /Remote_Directory/
psftp> put C:\folders\containing\file\FILE.zip
/Remote_Directory/: open for write: failure
psftp> quit
It appears to be a permissions issue on the remote directory; however, why am I only getting the issue on one server? The batch file is identical on all of the 20+ servers.
The put command expects a file name at the end of the destination path.
Please try the following:
put C:\folders\containing\file\FILE.zip /Remote_Directory/FILE.zip
The path in the error message is the exact path to the remote file that psftp tried to create. See outfname in the code snippet below:
req = fxp_open_send(outfname,
SSH_FXF_WRITE | SSH_FXF_CREAT | SSH_FXF_TRUNC,
&attrs);
...
printf("%s: open for write: %s\n", outfname, fxp_error());
As the path is obviously not correct (it lacks the file name), it seems that psftp got confused somehow. I believe it's likely due to the wrong (back)slash you used in the cd command.
Try cd Remote_Directory/ instead.
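A minimal sketch of the corrected session, reusing the paths from the question:
psftp> cd Remote_Directory/
Remote directory is now /Remote_Directory/
psftp> put C:\folders\containing\file\FILE.zip
psftp> quit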
In my case, it was a permissions issue on the remote server, i.e. the account used to log on didn't have write permission for the remote folder.
