I'm using OpenJDK 11 (the JRE) and Jetty 9 as installed on Ubuntu Server LTS. With this source:
$ find .
./build.gradle
./gradlew
./src
./src/main
./src/main/java
./src/main/java/com.example
./src/main/java/com.example/servlet
./src/main/java/com.example/servlet/ExampleServlet.java
./src/main/webapp
./src/main/webapp/WEB-INF
./src/main/webapp/WEB-INF/jetty-web.xml
./src/main/webapp/WEB-INF/web.xml
build.gradle:
plugins {
    id 'java'
    id 'war'
}

repositories {
    jcenter()
}

group 'com.example'
version '0.1'

// For Ubuntu's openjdk-11-jre-headless
sourceCompatibility = 1.11

dependencies {
    def jsp_dep = 'org.eclipse.jetty:apache-jsp:9.4.15.v20190215'
    if (gradle.startParameter.taskNames.contains('war')) {
        // Bundled in Ubuntu's libjetty9-java, so not embedded
        compileOnly jsp_dep
    }
    // This should not need to be 'implementation'.
    // When hosted in Jetty, this alone is enough to support JSTL. This is a thin reference to
    // org.apache.taglibs:taglibs-standard-{spec,impl}:1.2.5
    implementation 'org.eclipse.jetty:apache-jstl:9.4.15.v20190215'
}
ExampleServlet.java:
package com.example.servlet;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/example")
public class ExampleServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
    }
}
jetty-web.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!-- configure-9_4 is missing -->
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_3.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
</Configure>
web.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!-- https://download.oracle.com/otn-pub/jcp/servlet-4-final-eval-spec/servlet-4_0_FINAL.pdf
-->
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd"
version="4.0">
</web-app>
Compilation and installation:
$ ./gradlew war
BUILD SUCCESSFUL in 1s
2 actionable tasks: 2 executed
$ unzip -l build/libs/jetty-example-0.1.war
Archive: build/libs/jetty-example-0.1.war
Length Date Time Name
--------- ---------- ----- ----
0 2019-06-26 14:08 META-INF/
25 2019-06-26 14:08 META-INF/MANIFEST.MF
0 2019-06-26 14:08 WEB-INF/
0 2019-06-26 14:08 WEB-INF/classes/
0 2019-06-26 14:08 WEB-INF/classes/com/
0 2019-06-26 14:08 WEB-INF/classes/com/example/
0 2019-06-26 14:08 WEB-INF/classes/com/example/servlet/
696 2019-06-26 14:08 WEB-INF/classes/com/example/servlet/ExampleServlet.class
0 2019-06-26 14:08 WEB-INF/lib/
12585 2019-05-04 18:20 WEB-INF/lib/apache-jstl-9.4.15.v20190215.jar
40153 2019-04-15 11:56 WEB-INF/lib/taglibs-standard-spec-1.2.5.jar
206430 2019-04-15 11:56 WEB-INF/lib/taglibs-standard-impl-1.2.5.jar
255 2019-06-26 13:55 WEB-INF/jetty-web.xml
444 2019-06-26 14:03 WEB-INF/web.xml
--------- -------
260588 14 files
$ mv jetty-example-0.1.war root.war
$ chmod 644 root.war
# chown jetty:adm root.war
# mv root.war /var/lib/jetty9/webapps/
# ls -la /tmp/systemd-private-*jetty9.service*/tmp/*root.war*
total 12
drwxr-xr-x 3 jetty jetty 4096 Jun 26 14:08 .
drwxrwxrwt 6 root root 4096 Jun 26 14:08 ..
drwxr-xr-x 2 jetty jetty 4096 Jun 26 14:08 jsp
When I then access 127.0.0.1:8080/example, I get Jetty's default 404 page, leading me to believe that package scanning is not happening by default. In /var/log/syslog I get these messages:
Jun 26 14:07:07 web jetty9[475]: 2019-06-26 14:07:07.980:INFO:oejshC.root:Scanner-0: Warning: No org.apache.tomcat.JarScanner set in ServletContext. Falling back to default JarScanner implementation.
Jun 26 14:07:08 web jetty9[475]: 2019-06-26 14:07:08.162:INFO:oajs.TldScanner:Scanner-0: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Jun 26 14:07:08 web jetty9[475]: 2019-06-26 14:07:08.204:INFO:oejsh.ContextHandler:Scanner-0: Started o.e.j.w.WebAppContext#156a87be{root,/,file:///var/lib/jetty9/webapps/root/,AVAILABLE}{/root}
Jun 26 14:07:08 web jetty9[475]: 2019-06-26 14:07:08.280:INFO:oejsh.ContextHandler:Scanner-0: Stopped o.e.j.w.WebAppContext#7f3b84b8{root,/,null,UNAVAILABLE}{/root.war}
Jun 26 14:08:29 web jetty9[475]: 2019-06-26 14:08:29.532:INFO:oeja.AnnotationConfiguration:Scanner-0: Scanning elapsed time=28ms
Jun 26 14:08:29 web jetty9[475]: 2019-06-26 14:08:29.564:INFO:oejshC.root:Scanner-0: Warning: No org.apache.tomcat.JarScanner set in ServletContext. Falling back to default JarScanner implementation.
Jun 26 14:08:29 web jetty9[475]: 2019-06-26 14:08:29.725:INFO:oajs.TldScanner:Scanner-0: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Jun 26 14:08:29 web jetty9[475]: 2019-06-26 14:08:29.782:INFO:oejsh.ContextHandler:Scanner-0: Started o.e.j.w.WebAppContext#42e87462{root,/,file:///var/lib/jetty9/webapps/root/,AVAILABLE}{/root.war}
Why is default jar scanning not working, and how can I fix it?
As it turns out, it isn't package scanning that's broken; it was a conflicting context. Jetty apparently lets two overlapping contexts fail silently. In my particular case, Jetty's webapps directory contained a default root directory with a sample web application, which conflicted with root.war. Deleting that directory fixed the problem.
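On the Ubuntu package layout used here, that amounts to removing the pre-deployed sample context and restarting the service (a sketch; the path and service name are taken from the logs above, so adjust to your setup):

# remove the exploded default/sample context that conflicts with root.war
sudo rm -rf /var/lib/jetty9/webapps/root
sudo systemctl restart jetty9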
I have Artifactory 6.20.1 running in a Docker container. I'm trying to install the artifactCleanup plugin (https://github.com/jfrog/artifactory-user-plugins/tree/master/cleanup/artifactCleanup).
I have put the artifactCleanup.groovy file in the corresponding folder:
$ ls -all /opt/jfrog/artifactory/var/etc/artifactory/plugins/
total 36
drwxr-xr-x 2 artifact artifact 4096 Feb 24 10:28 .
drwxr-xr-x 3 artifact artifact 4096 Feb 23 15:24 ..
-rwxr-xr-x 1 artifact artifact 5829 Feb 23 15:25 README.md
-rwxr-xr-x 1 artifact artifact 14043 Feb 23 15:26 artifactCleanup.groovy
-rwxr-xr-x 1 artifact artifact 325 Feb 24 10:28 artifactCleanup.json
However, when I try to list my loaded plugins, I get an empty response:
curl -X GET -u "admin:password" http://localhost:8081/artifactory/api/plugins
{}
The server was restarted before running that request. All commands were run inside the Docker container. I have looked at the documentation (https://www.jfrog.com/confluence/display/JFROG/User+Plugins) on how to install plugins. The user account used for the REST calls is an admin account.
Now I am out of clues. Why is that plugin not loading?
You can reload the plugins using the Reload Plugins REST API endpoint:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-ReloadPlugins
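For example, a minimal sketch (assuming the default port and the same admin credentials used in the question):

# ask Artifactory to re-scan the plugins directory and load any new plugins
curl -X POST -u "admin:password" http://localhost:8081/artifactory/api/plugins/reload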
Please comment here if you are running into any issues.
It turns out I had created the wrong directory. The correct directory is
/var/opt/jfrog/artifactory/etc/plugins
which already existed.
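From there it was just a matter of moving the plugin files from the directory I had created into the existing one and reloading (a sketch; the source path is the wrong directory shown in the question):

mv /opt/jfrog/artifactory/var/etc/artifactory/plugins/artifactCleanup.* \
   /var/opt/jfrog/artifactory/etc/plugins/
curl -X POST -u "admin:password" http://localhost:8081/artifactory/api/plugins/reload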
Steps to reproduce
dotnet build or dotnet run
Expected behavior
Run or Build app
Actual Behavior
Getting ready...
The template "ASP.NET Core with Angular" was created successfully.
Processing post-creation actions...
Running 'dotnet restore' on /home/limup/Documents/Projetos/Limup/salao/salao.csproj...
/usr/share/dotnet/sdk/3.1.101/NuGet.targets(123,5): error : Unable to obtain lock file access on '/tmp/NuGetScratch/lock/b19d3901039706ea82571abad7c98ec690508d4b' for operations on '/home/limup/Documents/Projetos/Limup/salao/obj/salao.csproj.nuget.cache'. This may mean that a different user or administator is holding this lock and that this process does not have permission to access it. If no other process is currently performing an operation on this file it may mean that an earlier NuGet process crashed and left an inaccessible lock file, in this case removing the file '/tmp/NuGetScratch/lock/b19d3901039706ea82571abad7c98ec690508d4b' will allow NuGet to continue. [/home/limup/Documents/Projetos/Limup/salao/salao.csproj]
Restore failed.
Post action failed.
Description: Restore NuGet packages required by this project.
Manual instructions: Run 'dotnet restore'
Environment Data
dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 3.1.101
Commit: b377529961
Runtime Environment:
OS Name: fedora
OS Version: 31
OS Platform: Linux
RID: fedora.31-x64
Base Path: /usr/share/dotnet/sdk/3.1.101/
Host (useful for support):
Version: 3.1.1
Commit: a1388f194c
.NET Core SDKs installed:
3.1.101 [/usr/share/dotnet/sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.App 3.1.1 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download
Notes
Tried the fixes suggested for dotnet restore, but I received the same error.
Did not have this problem with the .NET Core SDK 2.0.
In my case, the problem was caused by ownership of the "lock file" (in Linux).
I was running dotnet build under my user (without sudo) but my project was created using sudo.
Option A)
Use sudo again
sudo dotnet build
Option B)
Change the ownership of /tmp/NuGetScratch/ (including its lock directory):
sudo chown -R <user>:<user> /tmp/NuGetScratch/
Then, the user can run dotnet build without sudo.
I fixed that error with the commands below:
export TMPDIR=/tmp/NuGetScratch/
mkdir -p ${TMPDIR}
However, I then received a different error and opened another question: Post
I was getting the same error in Visual Studio 2022 when trying to build a new console project.
Severity Code Description Project File Line Suppression State
Error Error occurred while restoring NuGet packages: Unable to obtain
lock file access on for operations on 'NuGet.Config'. This may mean
that a different user or administrator is holding this lock and that
this process does not have permission to access it. If no other
process is currently performing an operation on this file it may mean
that an earlier NuGet process crashed and left an inaccessible lock
file, in this case removing the file will allow NuGet to continue.
The easy fix is to open Visual Studio as an administrator and build your project.
@Omair Majid, see:
[limup#localhost tmp]$ ll -a
total 12
drwxrwxrwt. 22 root root 480 Jan 15 16:59 .
dr-xr-xr-x. 18 root root 4096 Jan 10 18:51 ..
drwx------. 2 limup limup 60 Jan 15 16:54 .esd-1000
drwxrwxrwt. 2 root root 40 Jan 15 16:49 .font-unix
drwxrwxrwt. 2 root root 60 Jan 15 16:54 .ICE-unix
drwx------. 3 root root 60 Jan 15 16:49 systemd-private-5f8b1b99edb949b1864fa2e580380675-adb.service-WzMSag
drwx------. 3 root root 60 Jan 15 16:49 systemd-private-5f8b1b99edb949b1864fa2e580380675-chronyd.service-Rw6V8i
drwx------. 3 root root 60 Jan 15 16:50 systemd-private-5f8b1b99edb949b1864fa2e580380675-colord.service-9QFuBh
drwx------. 3 root root 60 Jan 15 16:49 systemd-private-5f8b1b99edb949b1864fa2e580380675-dbus-broker.service-DqpFAf
drwx------. 3 root root 60 Jan 15 16:54 systemd-private-5f8b1b99edb949b1864fa2e580380675-fwupd.service-eBNBLh
drwx------. 3 root root 60 Jan 15 16:54 systemd-private-5f8b1b99edb949b1864fa2e580380675-geoclue.service-O6YLYg
drwx------. 3 root root 60 Jan 15 16:49 systemd-private-5f8b1b99edb949b1864fa2e580380675-ModemManager.service-hHCfHf
drwx------. 3 root root 60 Jan 15 16:49 systemd-private-5f8b1b99edb949b1864fa2e580380675-rtkit-daemon.service-3A8e9f
drwx------. 3 root root 60 Jan 15 16:49 systemd-private-5f8b1b99edb949b1864fa2e580380675-switcheroo-control.service-zSjrXg
drwx------. 3 root root 60 Jan 15 16:49 systemd-private-5f8b1b99edb949b1864fa2e580380675-systemd-logind.service-ohP23f
drwx------. 3 root root 60 Jan 15 16:50 systemd-private-5f8b1b99edb949b1864fa2e580380675-upower.service-kMAiIg
drwx------. 2 limup limup 40 Jan 15 16:58 Temp-31248c7a-04ad-429c-ab49-4c2ed74a1986
drwx------. 2 limup limup 40 Jan 15 16:58 Temp-8fa994cb-0334-4cf9-886d-103347596108
drwxrwxrwt. 2 root root 40 Jan 15 16:49 .Test-unix
drwx------. 2 limup limup 40 Jan 15 17:00 tracker-extract-files.1000
-r--r--r--. 1 limup limup 11 Jan 15 16:54 .X0-lock
-r--------. 1 gdm gdm 11 Jan 15 16:50 .X1024-lock
drwxrwxrwt. 2 root root 80 Jan 15 16:54 .X11-unix
drwxrwxrwt. 2 root root 40 Jan 15 16:49 .XIM-unix
For some reason, in our CI, we need to run the Node tests inside a Docker container (including fetching dependencies, etc.). So I am trying to have the UI tests run as part of the Docker build.
This is what my Dockerfile looks like:
FROM testcafe/testcafe:1.3.3
USER root
#some packages needed for some dependencies
RUN apk add --no-cache yarn python make build-base vim curl
RUN ln -s /opt/testcafe/docker/testcafe-docker.sh /usr/local/bin/testcafe-docker
WORKDIR /usr/src/app
RUN yarn config set registry https://private-npm-registry --global
COPY package*.json ./
RUN yarn
COPY . .
RUN yarn test:ui:ci
# "test:ui:clean": "rm -rf uitests/reports"
# "test:ui:ci-debug": "yarn test:ui:clean; testcafe-docker 'chromium --no-sandbox' uitests/tests -S -s uitests/reports/screenshots --video uitests/reports/videos -r spec,json:uitests/reports/report.json,html:uitests/reports/report.html",
# "test:ui:ci": "start-server-and-test serve http://127.0.0.1:8080 test:ui:ci-debug"
I get ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
Also, I tried using the user user, but it gives a permission error when creating the reports folder inside the uitests folder before running the tests.
I tried it with and without the --no-sandbox option and got the same issue. I also tried chromium:headless --no-sandbox, with the same error.
Any suggestions, please? Thanks.
UPDATE:
Also tried with user: user (avoiding the permission issue by using the /tmp folder for the reports) and got the same issue:
20-Jul-2019 23:53:33 > yarn test:ui:clean; whoami; ls -sail; testcafe-docker 'chromium --no-sandbox' uitests/tests -S -s /tmp/uitests/reports/screenshots --video /tmp/uitests/reports/videos -r spec,json:/tmp/uitests/reports/report.json,html:/tmp/uitests/reports/report.html
20-Jul-2019 23:53:33
20-Jul-2019 23:53:34 $ rm -rf /tmp/uitests/reports
20-Jul-2019 23:53:34 user
20-Jul-2019 23:53:34 total 528
20-Jul-2019 23:53:34 13 4 drwxr-xr-x 10 user user 4096 Jul 20 13:52 .
20-Jul-2019 23:53:34 12 4 drwxr-xr-x 8 root root 4096 Jul 20 13:52 ..
20-Jul-2019 23:53:34 79 4 -rw-r--r-- 1 root root 20 Jul 19 07:06 .dockerignore
20-Jul-2019 23:53:34 75 4 -rw-r--r-- 1 root root 45 Jul 19 07:06 .eslintignore
20-Jul-2019 23:53:34 83 4 -rw-r--r-- 1 root root 790 Jul 19 07:06 .eslintrc
20-Jul-2019 23:53:34 78 4 drwxr-xr-x 8 root root 4096 Jul 20 13:48 .git
20-Jul-2019 23:53:34 82 4 -rw-r--r-- 1 root root 326 Jul 20 13:48 .gitignore
20-Jul-2019 23:53:34 87 4 -rw-r--r-- 1 root root 189 Jul 19 07:06 Dockerfile
20-Jul-2019 23:53:34 81 4 -rw-r--r-- 1 root root 592 Jul 20 13:48 DockerfileUITest
20-Jul-2019 23:53:34 89 4 -rw-r--r-- 1 root root 451 Jul 19 07:06 README.md
20-Jul-2019 23:53:34 90 4 drwxr-xr-x 3 root root 4096 Jul 19 07:06 backend
20-Jul-2019 23:53:34 4096 4 drwxr-xr-x 3 user user 4096 Jul 20 13:53 build
20-Jul-2019 23:53:34 85 4 -rwxr-xr-x 1 root root 959 Jul 19 07:06 build.sh
20-Jul-2019 23:53:34 84 4 -rw-r--r-- 1 root root 1124 Jul 19 07:06 deploy.yaml
20-Jul-2019 23:53:34 91 4 drwxr-xr-x 1348 user user 4096 Jul 20 13:52 node_modules
20-Jul-2019 23:53:34 74 4 -rw-r--r-- 1 root root 2959 Jul 20 13:48 package.json
20-Jul-2019 23:53:34 76 4 drwxr-xr-x 2 root root 4096 Jul 19 07:06 public
20-Jul-2019 23:53:34 88 4 -rwxr-xr-x 1 root root 742 Jul 19 07:06 runUITestsInCI.sh
20-Jul-2019 23:53:34 77 4 drwxr-xr-x 8 root root 4096 Jul 19 07:06 src
20-Jul-2019 23:53:34 80 4 drwxr-xr-x 7 root root 4096 Jul 19 07:06 uitests
20-Jul-2019 23:53:34 86 448 -rw-r--r-- 1 root root 454847 Jul 20 13:48 yarn.lock
20-Jul-2019 23:53:34 Using locally installed version of TestCafe.
20-Jul-2019 23:55:36 ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
20-Jul-2019 23:55:36
20-Jul-2019 23:55:36 Type "testcafe -h" for help.
Update-2:
Tried with Firefox as well, and got the same issue: ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
Update-3:
Sorry for the delay; I got distracted by something else. I tried it both in CI and on my local machine and saw the same behaviour. I also tried the suggestion from the comments, echo -e '#!/bin/sh\n/usr/bin/chromium-browser --no-sandbox --remote-debugging-port=9222 --headless' > /usr/local/bin/testcafe-docker, and got the following output in both the CI and local environments.
02-Dec-2019 18:45:17 DevTools listening on ws://127.0.0.1:9222/devtools/browser/711c6409-be9a-4e08-959e-0c994c8c5742
02-Dec-2019 18:45:17 [1202/074517.060637:ERROR:gl_implementation.cc(282)] Failed to load /usr/lib/chromium/swiftshader/libGLESv2.so: Error loading shared library /usr/lib/chromium/swiftshader/libGLESv2.so: No such file or directory
02-Dec-2019 18:45:17 [1202/074517.064824:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
02-Dec-2019 18:45:17 [1202/074517.072317:WARNING:dns_config_service_posix.cc(341)] Failed to read DnsConfig.
02-Dec-2019 18:45:17 [1202/074517.072782:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 1 time(s)
02-Dec-2019 18:45:17 [1202/074517.135149:ERROR:gl_implementation.cc(282)] Failed to load /usr/lib/chromium/swiftshader/libGLESv2.so: Error loading shared library /usr/lib/chromium/swiftshader/libGLESv2.so: No such file or directory
02-Dec-2019 18:45:17 [1202/074517.139203:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
02-Dec-2019 18:45:17 [1202/074517.143251:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 2 time(s)
02-Dec-2019 18:45:17 [1202/074517.208950:WARNING:gpu_process_host.cc(990)] Reinitialized the GPU process after a crash. The reported initialization time was 0 ms
02-Dec-2019 18:45:17 [1202/074517.209227:ERROR:gpu_channel_manager.cc(397)] ContextResult::kFatalFailure: Failed to create shared context for virtualization.
Recent Chromium versions do not allow you to run them under the root user. Change the user from root to user before running the TestCafe tests.
You also need to set up permissions so the tests can create new folders.
See the detailed explanation in write in shared volumes docker
...
USER user
RUN yarn test:ui:ci
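For the folder permissions, one possible addition before the USER user line (a sketch; "user" is the non-root account shipped with the testcafe/testcafe image, and the paths come from the question's Dockerfile):

# create the reports directory up front and hand the app tree to the non-root account
RUN mkdir -p uitests/reports \
 && chown -R user:user /usr/src/app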
It was because of the proxy. We use a proxy to reach the internet, and I was setting http_proxy, https_proxy and no_proxy (127.0.0.1) in the Docker image. The browser in the Docker container was trying to reach the TestCafe server (running in the same container) through the proxy, because inside the container it uses 172.17.0.2, not 127.0.0.1/localhost, as the TestCafe server host. So adding 172.17.0.2 to no_proxy fixed it.
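For illustration, the Dockerfile change looked roughly like this (a sketch; proxy.internal:3128 is a placeholder for the real proxy address):

ENV http_proxy=http://proxy.internal:3128 \
    https_proxy=http://proxy.internal:3128 \
    no_proxy=127.0.0.1,localhost,172.17.0.2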
Guys, I have looked, searched, and read. A yum update changed a permission somewhere, but I can't find where. Nagios on CentOS starts correctly and I can view the page, but for some reason I don't see any hosts or services, only 403 Forbidden in the corner.
I've checked my nagios.cfg and there are no errors or warnings. I have started Nagios as a daemon, with the same result. Any other suggestions?
total 160
drwxrwxr-x 5 root root 4096 May 7 18:14 .
drwxr-xr-x. 78 root root 4096 May 8 22:38 ..
-rw-rw-r-- 1 root root 11339 Sep 23 2014 cgi.cfg
-rw-rw-r-- 1 root root 11658 Aug 30 2013 cgi.cfg.rpmnew
drwxr-x--- 5 root nagios 4096 Aug 30 2013 conf.d
-rw-rw-r-- 1 root root 43443 Oct 2 2014 nagios.cfg
-rw-rw-r-- 1 root root 44533 Aug 30 2013 nagios.cfg.rpmnew
-rw-r--r-- 1 root root 960 Jul 24 2016 nrpe.cfg
-rw-r--r-- 1 root root 899 Mar 31 2015 nrpe.cfg.rpmsave
-rw-r--r-- 1 root root 5332 Feb 24 2015 nsca.cfg
drwxr-x--- 2 root nagios 4096 May 7 17:39 objects
-rw-r----- 1 root apache 27 Aug 30 2013 passwd
drwxr-x--- 2 root nagios 4096 May 7 18:14 private
-rw-r----- 1 root root 1340 Aug 30 2013 resource.cfg
-rw-r--r-- 1 root root 1628 Mar 20 2013 send_nsca.cfg
The configuration check:
Nagios Core 3.5.1
Copyright (c) 2009-2011 Nagios Core Development Team and Community Contributors
Copyright (c) 1999-2009 Ethan Galstad
Last Modified: 08-30-2013
License: GPL
Website: http://www.nagios.org
Reading configuration data...
Read main config file okay...
Processing object config directory '/etc/nagios/conf.d'...
Processing object config directory '/etc/nagios/conf.d/servicegroups'...
Processing object config file '/etc/nagios/conf.d/servicegroups/jira-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/servicegroups/routers-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/servicegroups/ups-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/servicegroups/backup-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/servicegroups/clone-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/servicegroups/perforce-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/servicegroups/linux-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/servicegroups/web-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/hostgroups.cfg'...
Processing object config directory '/etc/nagios/conf.d/hosts'...
Processing object config file '/etc/nagios/conf.d/hosts/servers.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/test.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/diskstation.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/clone-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/wifi.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/cloud.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/perforce-servers.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/printers.cfg'...
Processing object config file '/etc/nagios/conf.d/hosts/switches.cfg'...
Processing object config file '/etc/nagios/conf.d/contacts.cfg'...
Processing object config directory '/etc/nagios/conf.d/commands'...
Processing object config file '/etc/nagios/conf.d/commands/notifications.cfg'...
Processing object config file '/etc/nagios/conf.d/commands/perfdata.cfg'...
Processing object config file '/etc/nagios/conf.d/commands/checks.cfg'...
Processing object config file '/etc/nagios/conf.d/commands/nrpe.cfg'...
Processing object config file '/etc/nagios/conf.d/templates.cfg'...
Read object config files okay...
Running pre-flight check on configuration data...
Checking services...
Checked 124 services.
Checking hosts...
Checked 23 hosts.
Checking host groups...
Checked 8 host groups.
Checking service groups...
Checked 8 service groups.
Checking contacts...
Checked 1 contacts.
Checking contact groups...
Checked 1 contact groups.
Checking service escalations...
Checked 0 service escalations.
Checking service dependencies...
Checked 0 service dependencies.
Checking host escalations...
Checked 0 host escalations.
Checking host dependencies...
Checked 0 host dependencies.
Checking commands...
Checked 27 commands.
Checking time periods...
Checked 1 time periods.
Checking for circular paths between hosts...
Checking for circular host and service dependencies...
Checking global event handlers...
Checking obsessive compulsive processor commands...
Checking misc settings...
Total Warnings: 0
Total Errors: 0
Things look okay - No serious problems were detected during the pre-flight check
Finally, this is what I see (screenshot: the web UI shows no hosts or services, only 403 Forbidden).
Thanks in advance.
Looks like your permissions are all messed up!
When you installed it, was it from source? If so, did you use the --with-nagios-user= flag during ./configure?
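(For reference, a typical source build sets the user and group explicitly at configure time, roughly like the sketch below; exact flags depend on your version and layout.)

./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-command-group=nagcmd
make all
sudo make install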
On one of my boxes I have a combination of apache and nagios as the /usr/local/nagios owners. Try this:
chown -R nagios:nagios /usr/local/nagios
chown -R apache:nagios /usr/local/nagios/etc
chmod +x -R /usr/local/nagios/bin /usr/local/nagios/libexec
You'll also want to make sure that the nagios user and group are set in the main configuration file (/usr/local/nagios/etc/nagios.cfg), like this:
nagios_user=nagios
nagios_group=nagios
Also, did you remember to set up your htpasswd file?
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
Anyway, hope this helps get you started!
This might be a simple problem, but I have been stuck on it for weeks now.
We have an AIX server on which we are facing this issue: I am not able to run programs inside a specific directory and its subdirectories.
I get proper output from the java and scp2 commands in the /opt/FileNet directory, but when I am in the /opt/FileNet/RM directory these commands stop working. The outputs are below.
Java - JVMXM008: Error occured while initialising System ClassException in thread "main" Could not create the Java virtual machine.
SCP2 - Failed to parse installation path.
I have no idea why this is happening. Your thoughts, please.
drwxr-xr-x 24 root system 4096 Feb 21 2012 opt
drwxr-xr-x 17 jxadmin wasadmin 4096 Aug 14 08:40 FileNet
drwxrwxr-x 17 jxadmin wasadmin 4096 Aug 14 08:45 RM
drwxrwxr-x 37 jxadmin wasadmin 4096 Feb 13 2012 AE (/opt/FileNet/AE, This directory is working as expected)
Couldn't find any ACLs.