I want to create a daily report with Endeca. I have the Log Server running on port 15010, but when I start the WeeklyReportGenerator something seems to be wrong; I think there is a problem with the Log Server. I checked the log and this is the error:
Oct 12, 2012 10:19:17 AM com.endeca.forge.base.Pipeline$Engine$1 handle
WARNING: Error in pipeline: No log files to process
Oct 12, 2012 10:19:17 AM com.endeca.rg.components.input.FileSystemMultiInput$Engine$Statistics log
INFO: LogFileInput/FileSystemInput/com.endeca.rg.components.input.FileSystemMultiInput: Progress: 1/1 (100%), 0:00:00 remaining
Oct 12, 2012 10:19:17 AM com.endeca.rg.ReportGenerator main
SEVERE: Unable to proceed
Pipeline execution interrupted by exception
No log files to process
java.lang.RuntimeException: No log files to process
at com.endeca.rg.components.input.LogFileInput$Substitution$1$Engine.portClosed(LogFileInput.java:269)
Any clue about what is wrong?
The reporting processes need log files in order to produce reports. By default, no log messages are sent to the log server.
If you look at the orange reference app (http://:8006/endeca_jspref), you'll see that it does implement logging. In logging_functions.jsp (C:\Endeca\ToolsAndFrameworks\11.1.0\reference\endeca_jspref\logging_functions.jsp) you can see a good basic implementation of how to send log messages.
If you're using the Assembler API, it will handle most logging for you. Make sure you have the correct hostname and port configured. If you need to extend or replace the logging, look for the com.endeca.infront.navigation.event.LogServerAdapter in the assembler-context.xml.
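As a quick sanity check, you can also confirm that the Log Server is listening and that it has actually written log files; the report generator fails with "No log files to process" when its input directory is empty. A rough sketch (the output path is an assumption based on a default deployment-template layout, so adjust it to your app):
# Is the Log Server listening on its configured port (15010 here)?
netstat -an | grep 15010
# Has it written any log files yet? If this directory is empty, the report
# generator has nothing to process. Path is assumed; adjust to your deployment.
ls -l /usr/local/endeca/apps/<your_app>/logs/logserver_output/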
I have been trying to run ClickHouse on an EC2 instance provisioned with Terraform. So far the EC2 instance runs well and I have access to http://localhost:8123. However, when I try to access localhost:8123/play I get the following message:
There is no handle /play
Use / or /ping for health checks.
Or /replicas_status for more sophisticated health checks.
Send queries from your program with POST method or GET /?query=...
Use clickhouse-client:
For interactive data analysis:
clickhouse-client
For batch query processing:
clickhouse-client --query='SELECT 1' > result
clickhouse-client < query > result
I don't understand why this is happening, as I was not getting that error when running locally.
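For what it's worth, the endpoints that message lists can be exercised directly with curl (default HTTP port 8123 assumed), which at least confirms whether the HTTP interface itself is healthy:
# Health check and a trivial query over the HTTP interface (port 8123 assumed).
curl 'http://localhost:8123/ping'
curl 'http://localhost:8123/?query=SELECT%201'
# The same query sent with POST, as the message suggests:
echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-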
When I check the status of the clickhouse server I get the following output:
● clickhouse-server.service - ClickHouse Server
Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
Mar 25 12:14:35 systemd[1]: Started ClickHouse Server.
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_compression
Mar 25 12:14:35 clickhouse-server[11774]: Logging warning to /var/log/clickhouse-server/clickhouse-server.log
Mar 25 12:14:35 clickhouse-server[11774]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_compression
I don't know if this will help, but maybe it is related to the problem (the log files are empty).
Another question I have, which has nothing to do with the problem above, is about how ClickHouse actually works. We hear about ClickHouse in many different articles, but none of them seem very clear to me, and the ones I've been reading often talk about "nodes". So far my understanding is that ClickHouse runs on servers on which we define clusters; inside those clusters we define shards, and inside each shard we put replicas, the so-called "nodes". As we will be running in production, I just want to make sure that when we talk about "nodes" we are talking about containers that act as compute units, or whether it is something else entirely.
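To make that vocabulary concrete, here is a rough sketch of how a cluster is usually declared in the server configuration (the hostnames and file name are made up, and older releases use <yandex> instead of <clickhouse> as the root tag). Each <replica> entry points at one clickhouse-server process, and that process is what people usually mean by a "node", whether it runs directly on an EC2 instance or inside a container:
# Hypothetical cluster declaration; hostnames and file name are made up.
sudo tee /etc/clickhouse-server/config.d/remote_servers.xml >/dev/null <<'EOF'
<clickhouse>
  <remote_servers>
    <my_cluster>
      <shard> <!-- shard 1: two replicas holding the same data -->
        <replica><host>ch-node-1.internal</host><port>9000</port></replica>
        <replica><host>ch-node-2.internal</host><port>9000</port></replica>
      </shard>
      <shard> <!-- shard 2: a different slice of the data -->
        <replica><host>ch-node-3.internal</host><port>9000</port></replica>
        <replica><host>ch-node-4.internal</host><port>9000</port></replica>
      </shard>
    </my_cluster>
  </remote_servers>
</clickhouse>
EOF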
Back to the /play problem: so far I've tried opening all ports for ingress and egress, but it did not fix the problem. I've also checked the ClickHouse documentation, which mentions custom HTTP endpoints, but nothing there talks about this error.
I've encountered a problem deploying my Shiny app on Ubuntu 16.04 LTS.
After I run sudo systemctl start shiny-server and point my browser at http://192.168.*.*:3838/StockVis/, the web page greys out within a second.
I found the warnings below in the web console, and I have been searching the web for about two weeks, but I still have no solution. :(
***"Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [INF]: Connection opened. http://192.168.**.***:3838/StockVis/"
Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [DBG]: Open channel 0
The application unexpectedly exited.
Diagnostic information is private. Please ask your system admin for permission if you need to check the R logs.
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [INF]: Connection closed. Info: {"type":"close","code":4503,"reason":"The application unexpectedly exited","wasClean":true}
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: SockJS connection closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Channel 0 is closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Removed channel 0, 0 left
Please give me some suggestions on how to move forward.
This can indicate that something in your R code is causing an error. Since that R error could be anything, this answer is about how to gather that information; the browser console messages will not tell you what it is. To see the error, you need to configure Shiny Server not to delete the log when the application exits.
Assuming you have sudo access:
$ sudo vi /etc/shiny-server/shiny-server.conf
Place the following line in the file after run_as shiny; :
preserve_logs true;
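For context, on a stock Ubuntu install the top of the file then looks roughly like this (the server/location block below is the default one and may differ on your machine):
run_as shiny;
preserve_logs true;

server {
  listen 3838;

  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}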
Restart shiny:
sudo systemctl restart shiny-server
Reload your Shiny app.
In the /var/log/shiny-server/ directory there will be a log file named after your application. Viewing that file will give you more information about what is going on.
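For example, assuming the app is called StockVis as in the question, something like this will show the newest logs (the exact file name also includes the run_as user and a timestamp):
# List the most recent Shiny Server logs, then inspect the newest one for the app.
sudo ls -lt /var/log/shiny-server/ | head
sudo tail -n 50 /var/log/shiny-server/StockVis-*.log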
Warning: after you are done, take the preserve_logs true; line back out of the conf file and restart Shiny Server. Otherwise you will keep generating a bunch of log files you don't want.
I'm trying to get Spark, and SparkR, running on a small EC2 cluster using the provided scripts and directions. Whenever I ask for an operation that requires computation on an RDD (e.g., collect(), reduce()), I get the error logged below. Workers do appear to start up correctly; if I only parallelize, I see the workers running via the master's web UI.
The error I get is similar to the one in "Intermittent Timeout Exception using Spark", and I've been through all of the solutions there (modifying the conf file URLs, disabling the firewall, etc.) with no luck.
Here is the error log, thank you in advance for your help:
15/02/17 19:10:22 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/02/17 19:10:22 INFO spark.SecurityManager: Changing view acls to: root,-
15/02/17 19:10:22 INFO spark.SecurityManager: Changing modify acls to: root,-
15/02/17 19:10:22 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, -); users with modify permissions: Set(root, -)
15/02/17 19:10:23 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/02/17 19:10:23 INFO Remoting: Starting remoting
15/02/17 19:10:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher#-.ec2.internal:60218]
15/02/17 19:10:23 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 60218.
15/02/17 19:10:53 ERROR security.UserGroupInformation: PriviledgedActionException as:- cause:java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1134)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:59)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:115)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:161)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
... 4 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:127)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:60)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:59)
... 7 more
This was ultimately resolved by a combination of:
- Updates to SparkR, which resolved a number of serialization issues.
- Recognizing that the spark-ec2 scripts require the control node and the master node to be the same machine.
- Replacing calls to parallelize() with first distributing the data via Hadoop and then loading it from HDFS (see the sketch below).
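A rough sketch of that last point, assuming the data starts as a local file and HDFS is running on the cluster (file names and paths are made up); the call in the final comment is the RDD-style SparkR textFile() API of that era:
# Push the data into HDFS instead of parallelize()-ing a large local object.
hadoop fs -mkdir -p /user/root/input
hadoop fs -put mydata.csv /user/root/input/mydata.csv
# Then, in the SparkR session, load it from HDFS rather than from the driver, e.g.:
#   lines <- textFile(sc, "hdfs:///user/root/input/mydata.csv")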
I am writing an intro to SparkR for R programmers that I hope will help people with things like this in the future.
My requirement is to create a heap dump of a JVM running on a remote server using jmap.
Locally, I did it this way:
jmap -dump:file=remoteDump.txt,format=b 3104
This worked fine, as 3104 is the PID of a process on my local machine.
How do I do the same with a remote server?
I tried
jmap -dump:file=remoteDump.txt,format=b 3104 54.197.228.33:8080
But it failed.
I tried creating a debug server using jsadebugd, as below.
1. Started rmiregistry:
rmiregistry -J-Xbootclasspath/p:$JAVA_HOME/lib/sa-jdi.jar
2. Ran jsadebugd:
>jsadebugd 11594 54.197.228.33:9009
But step 2 throws the following error:
Error attaching to process or starting server: sun.jvm.hotspot.debugger.DebuggerException: Windbg Error: WaitForEvent failed!
        at sun.jvm.hotspot.debugger.windbg.WindbgDebuggerLocal.attach0(Native Method)
        at sun.jvm.hotspot.debugger.windbg.WindbgDebuggerLocal.attach(WindbgDebuggerLocal.java:152)
        at sun.jvm.hotspot.HotSpotAgent.attachDebugger(HotSpotAgent.java:...)
        at sun.jvm.hotspot.HotSpotAgent.setupDebuggerWin32(HotSpotAgent.java:...)
        at sun.jvm.hotspot.HotSpotAgent.setupDebugger(HotSpotAgent.java:3...)
        at sun.jvm.hotspot.HotSpotAgent.go(HotSpotAgent.java:313)
        at sun.jvm.hotspot.HotSpotAgent.startServer(HotSpotAgent.java:220)
        at sun.jvm.hotspot.DebugServer.run(DebugServer.java:106)
        at sun.jvm.hotspot.DebugServer.main(DebugServer.java:45)
        at sun.jvm.hotspot.jdi.SADebugServer.main(SADebugServer.java:55)
Please help me get past this.
The reason you cannot attach to the process could be that it is already attached to some other debugger, or that it is executing on a different virtual machine than the one your jmap is running against.
Try to make sure the process is not attached to any other debugger and that you are attaching with the same VM.
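If the goal is simply to get the dump from the remote box, one way to sidestep remote attaching entirely is to run jmap on the remote machine itself over SSH and copy the file back. A sketch, assuming you have SSH access; the user, PID, and paths are placeholders:
# Run jmap on the remote host against the PID of the JVM there, then copy the dump back.
ssh ec2-user@54.197.228.33 'jmap -dump:format=b,file=/tmp/remote.hprof <remote-pid>'
scp ec2-user@54.197.228.33:/tmp/remote.hprof .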
Could someone please help me figure out this issue? I have installed the JasperServer trial version on Windows XP. The Tomcat server seems to work fine, but when I try to connect to JasperServer I get an HTTP 404 error.
SEVERE: A web application appears to have started a TimerThread named
[adhocCache] via the java.util.Timer API but has failed to stop it.
To prevent a memory leak, the timer (and hence the associated thread)
has been forcibly cancelled.
Jul 15, 2013 4:22:04 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Server startup in 30926 ms
In the localhost log file under the Tomcat logs directory I could also see an error:
Caused by: net.sf.ehcache.config.InvalidConfigurationException: There is one error in your configuration:
* CacheManager configuration: You've assigned more memory to the on-heap than the VM can sustain, please adjust your -Xmx setting accordingly
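For reference, the -Xmx value the error refers to is the JVM heap limit passed to Tomcat. On a Windows JasperServer install it can typically be raised in the Tomcat startup options, for example via a setenv.bat next to catalina.bat (the path and numbers below are assumptions, so adjust them to your install and available RAM):
rem File: <jasperserver-install>\apache-tomcat\bin\setenv.bat (hypothetical example)
set "CATALINA_OPTS=%CATALINA_OPTS% -Xms512m -Xmx1024m"
If Tomcat runs as a Windows service instead, the same options go into the service's Java settings.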