While working through the first step of the Cassandra Playlist tutorial, I encountered an exception.
I created a VM on Google Compute Engine and installed Cassandra 3.0.10 (this link), then followed the tutorial's steps.
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:240)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:86)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1455)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:329)
at com.datastax.driver.core.Cluster.connect(Cluster.java:279)
at playlist.model.CassandraData.createSession(CassandraData.java:66)
at playlist.model.CassandraData.getSession(CassandraData.java:50)
at playlist.model.CassandraInfo.<init>(CassandraInfo.java:25)
at playlist.controller.HomeServlet.doGet(HomeServlet.java:23)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:595)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:191)
at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:72)
at org.apache.jasper.runtime.PageContextImpl.doForward(PageContextImpl.java:742)
at org.apache.jasper.runtime.PageContextImpl.forward(PageContextImpl.java:712)
at org.apache.jsp.index_jsp._jspService(index_jsp.java:123)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:438)
... 38 more
Someone suggested that the com.datastax.cassandra driver version doesn't match the Cassandra version.
In the pom.xml, the com.datastax.cassandra version is 2.1.10.
Even after I changed it to 3.1.0, the exception still appeared.
Which version of com.datastax.cassandra should I use?
By the way, I could use com.datastax.cassandra 3.1.0 to access Cassandra 3.7,
but using com.datastax.cassandra 2.1.10 against Cassandra 3.7 produced the same exception.
The problem was indeed a version mismatch between the driver and the server.
When I deployed the application with Tomcat, I could access it from the web browser successfully. However, the Cargo plugin that the tutorial uses didn't work for me; I'm not familiar with it, so I may have done something wrong.
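For what it's worth, the "unconfigured table schema_keyspaces" message is the classic symptom of an old driver talking to a new server: driver 2.1.x reads schema metadata from system.schema_keyspaces, which was removed in Cassandra 3.0 (schema tables moved to the system_schema keyspace), so a 3.x release of the driver is the right choice for Cassandra 3.0+. Here is a quick standalone check that the driver/server pairing works; it is just a sketch, and the contact point, port, and driver version are assumptions based on your setup:

// Groovy sanity check: connect with driver 3.1.0 and read the server version.
@Grab('com.datastax.cassandra:cassandra-driver-core:3.1.0')
import com.datastax.driver.core.Cluster

def cluster = Cluster.builder().addContactPoint('127.0.0.1').withPort(9042).build()
def session = cluster.connect()
def version = session.execute('SELECT release_version FROM system.local').one().getString('release_version')
println "Connected to Cassandra ${version}"
cluster.close()

If this prints the server version but the web app still fails under Cargo, the remaining issue is probably in the deployment rather than the driver version.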
My SonarQube version is sonarqube-7.5 community edition.
Sonar Scanner version is sonar-scanner-3.3.0.1492-windows
I downloaded sonar-plsql-plugin-3.3.0.2273.jar and placed it in \sonarqube-7.5\extensions\plugins\ folder.
My operating system is Windows.
When I try to start SonarQube, I get the exception below in the web.log file.
The PLSQL plugin I am using is compatible with SonarQube 6.7+, and I am using version 7.5 (https://docs.sonarqube.org/display/PLUG/SonarPLSQL).
How could I resolve this issue and start the server?
2019.01.28 16:00:00 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2019.01.28 16:00:01 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
java.lang.IllegalStateException: Fail to load plugin SonarPLSQL [plsql]
at org.sonar.server.plugins.ServerExtensionInstaller.installExtensions(ServerExtensionInstaller.java:82)
at org.sonar.server.platform.platformlevel.PlatformLevel4.start(PlatformLevel4.java:586)
at org.sonar.server.platform.Platform.start(Platform.java:211)
at org.sonar.server.platform.Platform.startLevel34Containers(Platform.java:185)
at org.sonar.server.platform.Platform.access$500(Platform.java:46)
at org.sonar.server.platform.Platform$1.lambda$doRun$0(Platform.java:119)
at org.sonar.server.platform.Platform$AutoStarterRunnable.runIfNotAborted(Platform.java:371)
at org.sonar.server.platform.Platform$1.doRun(Platform.java:119)
at org.sonar.server.platform.Platform$AutoStarterRunnable.run(Platform.java:355)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NoClassDefFoundError: com/sonarsource/plugins/license/api/LicensedPluginRegistration
at com.sonar.plsql.plugin.PlSqlPlugin.define(Unknown Source)
at org.sonar.server.plugins.ServerExtensionInstaller.installExtensions(ServerExtensionInstaller.java:72)
... 9 common frames omitted
Caused by: java.lang.ClassNotFoundException: com.sonarsource.plugins.license.api.LicensedPluginRegistration
at org.sonar.classloader.ParentFirstStrategy.loadClass(ParentFirstStrategy.java:39)
at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:87)
at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:76)
... 11 common frames omitted
2019.01.28 16:00:02 INFO web[][o.s.p.StopWatcher] Stopping process
The Sonar PLSQL plugin is a commercial product; you cannot install it on SonarQube Community Edition. You need at least Developer Edition. Read more at Plans & Pricing.
After running a Groovy script as a task to create a role with:
security.addRole(
    // id
    roleDeveloper,
    // name
    roleDeveloper,
    // description
    "A developer on ${repoCap} group",
    // privileges
    ["nx-repository-view-maven2-${repo}-dependencies-browse",
     "nx-repository-view-maven2-${repo}-dependencies-read"],
    // roles
    ["dw-all-public-repos"])
I can't access the Roles menu anymore. I get the following error:
com.orientechnologies.orient.core.exception.ODatabaseException: Error on deserialization of Serializable DB name="security"
[...]
Caused by: java.lang.ClassNotFoundException: org.codehaus.groovy.runtime.GStringImpl
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) [na:1.8.0_91]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) [na:1.8.0_91]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) [na:1.8.0_91]
at org.apache.felix.framework.BundleWiringImpl.doImplicitBootDelegation(BundleWiringImpl.java:1782) [na:na]
at org.apache.felix.framework.BundleWiringImpl.searchDynamicImports(BundleWiringImpl.java:1717) [na:na]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1552) [na:na]
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79) [na:na]
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018) [na:na]
After running several tests (with and without string interpolation) on several versions of Nexus (3.x), it looks like string interpolation is supported for some parameters but not for the privileges parameter.
Is this a known issue?
Now that my Roles menu is inaccessible because of the error above, is there a way to fix it? (I tried to remove the role with a script, but that failed because the delete performs a load first.)
Sorry for the problems, Alexandre. It looks like you'll have to connect to the database directly in order to fix the problematic records. Instructions for how to do this with Nexus offline are here: https://support.sonatype.com/hc/en-us/articles/115002930827-Accessing-the-OrientDB-Console
In particular, the database you're looking to connect to is 'security':
connect plocal:data/db/security admin admin
And the tables you will need to inspect/delete from are 'privilege' and 'role'.
I'll keep an eye out here in case you run into problems or have any followup questions.
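As a preventive workaround for future runs (just a sketch, reusing the same security API call from the question), you can convert each interpolated GString to a plain java.lang.String before passing it to addRole, so OrientDB never has to deserialize org.codehaus.groovy.runtime.GStringImpl:

// Call toString() on the interpolated privilege IDs so plain Strings,
// not GStringImpl instances, get persisted.
def privileges = [
    "nx-repository-view-maven2-${repo}-dependencies-browse".toString(),
    "nx-repository-view-maven2-${repo}-dependencies-read".toString()
]
security.addRole(
    roleDeveloper,                          // id
    roleDeveloper,                          // name
    "A developer on ${repoCap} group",      // description
    privileges,                             // privileges
    ["dw-all-public-repos"])                // roles

Plain concatenation ("nx-repository-view-maven2-" + repo + "-dependencies-browse") would achieve the same thing.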
I'm a newbie with Storm, and I have set up Storm-on-YARN on an HDP cluster using the instructions at the HDP Storm-on-YARN page and the storm-yarn-master from anfeng's storm-yarn Git project.
I'm able to get Nimbus running and even submit topologies and see them in the Storm UI. However, the spouts and the bolts don't seem to be "working" (0 tuples emitted).
I did some digging around and realized that my worker daemons are not starting. The supervisor log keeps printing lines like this:
2014-03-13 11:22:03 b.s.d.supervisor [INFO] 18bf93a1-1cea-4e99-93da-8f36a4e9c056 still hasn't started
I tried launching the worker command from the "Launching worker with command" line in the supervisor log, and I got this error:
Exception in thread "main" java.lang.NoClassDefFoundError: backtype/storm/daemon/worker
Caused by: java.lang.ClassNotFoundException: backtype.storm.daemon.worker
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: backtype.storm.daemon.worker. Program will exit.
It looks like it can’t find the worker class although it’s present in the storm-core jar.
Any ideas on how I can proceed with troubleshooting this? I’ve attached the nimbus and the supervisor logs. The worker logs don't seem to have been created.
Nimbus Log - http://paste.ubuntu.com/7089418/
Supervisor Log - http://paste.ubuntu.com/7089422/
Hadoop Version - 2.2
Storm Version - 0.9.0-wip21
I've had an issue like this when the JAR file I was building did not exclude the Storm binaries; i.e., in the pom.xml, make sure you have the storm-core dependency set with:
<scope>provided</scope>
I also had issues where multiple versions of Netty were installed in the Storm lib folder (I had to delete the older JAR). That was also causing NoClassDefFoundErrors, albeit different ones from the error you are experiencing.
I would suggest looking at the classpath that shows up when you submit the topologies (you can check it with ps -Af | grep storm).
To uninstall an application, I called uninstall-application app-name from the Cloudify prompt in a local cloud environment. However, the uninstall was unsuccessful. The log file shows the following exception.
2013-10-14 13:06:50,537 rest [1] INFO [org.cloudifysource.rest.controllers.ServiceController] - Removing all application scope attributes for application
2013-10-14 13:06:50,542 rest [1] WARNING [org.openspaces.admin.internal.admin.DefaultAdmin] - Failed to execute: org.openspaces.admin.internal.gsm.DefaultGridServiceManager$3#70b1ec8b - org.openspaces.admin.AdminException: Failed to undeploy processing unit [app-name]; Caused by: org.openspaces.admin.AdminException: Failed to undeploy processing unit [app-name]
at org.openspaces.admin.internal.gsm.DefaultGridServiceManager.undeployProcessingUnit(DefaultGridServiceManager.java:279)
at org.openspaces.admin.internal.gsm.DefaultGridServiceManager$3.run(DefaultGridServiceManager.java:799)
at org.openspaces.admin.internal.admin.DefaultAdmin$LoggerRunnable.run(DefaultAdmin.java:2077)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.jini.rio.core.OperationalStringException: GSM not found
at org.jini.rio.monitor.ProvisionMonitorImpl.undeploy(ProvisionMonitorImpl.java:601)
at org.jini.rio.monitor.ProvisionMonitorAdminImpl.undeploy(ProvisionMonitorAdminImpl.java:126)
at org.jini.rio.monitor.DeployAdminGigaspacesMethodinternalInvoke7.internalInvoke(Unknown Source)
at com.gigaspaces.internal.reflection.fast.AbstractMethod.invoke(AbstractMethod.java:41)
at com.gigaspaces.lrmi.LRMIRuntime.invoked(LRMIRuntime.java:450)
at com.gigaspaces.lrmi.nio.Pivot.consumeAndHandleRequest(Pivot.java:557)
at com.gigaspaces.lrmi.nio.Pivot.handleRequest(Pivot.java:658)
at com.gigaspaces.lrmi.nio.Pivot$ChannelEntryTask.run(Pivot.java:196)
... 3 more
2013-10-14 13:06:51,544 rest [1] INFO [org.cloudifysource.rest.util.RestPollingRunnable] - undeployAndWait for processing unit has not finished yet
Eventually the operation times out. After that I cannot even tear down the local cloud; the only way to recover is to reboot the system. I'd appreciate some help with this one.
The following error:
Caused by: org.jini.rio.core.OperationalStringException: GSM not found at org.jini.rio.monitor.ProvisionMonitorImpl.undeploy
indicates that one of the Cloudify management components was missing. It may have crashed earlier, or perhaps the local machine was running at 100% CPU, causing local components to not respond to each other.
In an actual cloud deployment, this would cause the Cloudify agent to restart the failed component, but in the local-cloud environment the agent and the other management components run in the same process to conserve memory and speed up start-up time.
I'm having trouble running a custom JAR on Elastic MapReduce.
I'm using JDK 1.6.0_26 and Hadoop 0.20.205, compiling with Eclipse on my computer, and everything works perfectly fine.
For example, if I run the following on my computer, it succeeds:
hadoop jar MaxTemperature.jar input/temperature.txt output
On AWS, I specified the JAR as
s3n://chrishadoop/MaxTemperature.jar
and I specified the arguments as
s3n://chrishadoop/input/temperature.txt s3n://chrishadoop/output
I did not specify the main class because I pointed to it in the manifest.
Here is the JAR I'm using; I will keep it public for a little while:
https://s3.amazonaws.com/chrishadoop/MaxTemperature.jar
Here is the error I'm getting:
2012-07-08 19:31:39,824 INFO com.amazonaws.elasticmapreduce.statepusher.StatePusher (main): Pusher awoke, starting to push data into simpledb...
2012-07-08 19:31:40,552 FATAL com.amazonaws.elasticmapreduce.statepusher.StatePusher (main): Fatal Exception raised while extracting data from hadoop and pushing to simpledb
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException
at com.amazonaws.elasticmapreduce.statepusher.StatePusher.run(StatePusher.java:65)
at com.amazonaws.elasticmapreduce.statepusher.StatePusher.main(StatePusher.java:205)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.map.JsonMappingException
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 2 more
There is a version of Jackson installed as part of the AMI; is it possible you're bundling a different version of Jackson? The error seems to be happening in the support code that makes "enable debugging" work.
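If you want to confirm where (or whether) Jackson is being picked up, one quick check is to ask the JVM which jar a class was loaded from. This is only a sketch, runnable from any Groovy script or shell on the machine, not an EMR-specific tool:

// Print the jar that provides a class, to spot duplicate or missing versions.
def name = 'org.codehaus.jackson.map.JsonMappingException'
try {
    def clazz = Class.forName(name)
    println clazz.protectionDomain?.codeSource?.location
} catch (ClassNotFoundException e) {
    println "$name is not on this classpath"
}

If your bundled copy shows up instead of (or alongside) the AMI's, excluding Jackson from the custom JAR, or marking it as provided in the build, is a common way to avoid the clash.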