After running a Groovy script as a task to create a role with:
security.addRole(// id
        roleDeveloper,
        // name
        roleDeveloper,
        // description
        "A developer on ${repoCap} group",
        // privileges
        ["nx-repository-view-maven2-${repo}-dependencies-browse",
         "nx-repository-view-maven2-${repo}-dependencies-read"],
        // roles
        ["dw-all-public-repos"])
I can't access the Roles menu anymore. I get the following error:
com.orientechnologies.orient.core.exception.ODatabaseException: Error on deserialization of Serializable DB name="security"
[...]
Caused by: java.lang.ClassNotFoundException: org.codehaus.groovy.runtime.GStringImpl
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) [na:1.8.0_91]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) [na:1.8.0_91]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) [na:1.8.0_91]
at org.apache.felix.framework.BundleWiringImpl.doImplicitBootDelegation(BundleWiringImpl.java:1782) [na:na]
at org.apache.felix.framework.BundleWiringImpl.searchDynamicImports(BundleWiringImpl.java:1717) [na:na]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1552) [na:na]
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79) [na:na]
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018) [na:na]
After running several tests (with and without string interpolation) on several versions of Nexus 3.x, it looks like string interpolation is supported for some parameters but not for the privileges parameter.
Is this a known issue?
Now that my Roles menu is inaccessible due to the error above, is there a way to fix it? (I tried to remove the role with a script, but that failed because delete performs a load first.)
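For what it's worth, a workaround sketch I plan to try is to force the interpolated values to plain Strings before they reach the security API (untested; roleDeveloper, repo and repoCap are the variables from my script):
// Convert GString values to java.lang.String so OrientDB never has to
// serialize org.codehaus.groovy.runtime.GStringImpl
def privileges = ["nx-repository-view-maven2-${repo}-dependencies-browse",
                  "nx-repository-view-maven2-${repo}-dependencies-read"]*.toString()
security.addRole(roleDeveloper,
        roleDeveloper,
        "A developer on ${repoCap} group".toString(),
        privileges,
        ["dw-all-public-repos"])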
Sorry for the problems, Alexandre. It looks like you'll have to connect to the database directly in order to fix the problematic records. Instructions for how to do this with Nexus offline are here: https://support.sonatype.com/hc/en-us/articles/115002930827-Accessing-the-OrientDB-Console
In particular the database you're looking to connect to is 'security':
connect plocal:data/db/security admin admin
And the tables you will need to inspect/delete from are 'privilege' and 'role'.
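As a rough sketch of what that console session could look like (the field name used below is an assumption based on the role id you passed; inspect the records before deleting anything):
connect plocal:data/db/security admin admin
select from role
delete from role where id = 'roleDeveloper'
select from privilege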
I'll keep an eye out here in case you run into problems or have any followup questions.
I migrated to API Manager 2.1.0 with Identity Server 5.3.0 as the Key Manager.
I followed the document https://docs.wso2.com/display/AM210/Upgrading+from+the+Previous+Release+when+WSO2+IS+is+the+Key+Manager
I have a custom domain name mapped to all URLs.
I get the error below in the WSO2 API Manager logs when I try to invoke an API. How do I debug this? Which file is the likely culprit?
In an earlier version I faced this issue when I forgot to update jndi.properties, but that has been done properly here, as shown below.
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.TopicConnectionFactory = amqp://admin:admin#clientid/carbon?brokerlist='tcp://mydevwso2.ca:5672'
connectionfactory.QueueConnectionFactory = amqp://admin:admin#clientID/test?brokerlist='tcp://mydevwso2.ca:5672'
What could cause the below error?
Caused by: org.wso2.andes.AMQConnectionFailureException: Could not open connection
at org.wso2.andes.client.AMQConnection.<init>(AMQConnection.java:486)
at org.wso2.andes.client.AMQConnectionFactory.createConnection(AMQConnectionFactory.java:351)
... 13 more
Caused by: org.wso2.andes.transport.TransportException: Could not open connection
at org.wso2.andes.transport.network.mina.MinaNetworkTransport$IoConnectorCreator.connect(MinaNetworkTransport.java:216)
at org.wso2.andes.transport.network.mina.MinaNetworkTransport.connect(MinaNetworkTransport.java:74)
at org.wso2.andes.client.AMQConnectionDelegate_8_0.makeBrokerConnection(AMQConnectionDelegate_8_0.java:130)
at org.wso2.andes.client.AMQConnection$2.run(AMQConnection.java:631)
at org.wso2.andes.client.AMQConnection$2.run(AMQConnection.java:628)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.andes.client.AMQConnection.makeBrokerConnection(AMQConnection.java:628)
at org.wso2.andes.client.AMQConnection.<init>(AMQConnection.java:409)
... 14 more
TID: [-1] [] [2017-11-14 14:12:08,925] ERROR {org.wso2.carbon.event.output.adapter.jms.internal.util.JMSMessageSender} - {org.wso2.carbon.event.output.adapter.jms.internal.util.JMSMessageSender}
java.lang.NullPointerException
at org.wso2.carbon.event.output.adapter.jms.internal.util.JMSMessageSender.send(JMSMessageSender.java:88)
at org.wso2.carbon.event.output.adapter.jms.JMSEventAdapter$JMSSender.run(JMSEventAdapter.java:284)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2017-11-14 14:12:22,207] INFO {org.wso2.andes.client.AMQConnection} - Unable to connect to broker at tcp://mydevwso2.ca:5672 {org.wso2.andes.client.AMQConnection}
org.wso2.andes.transport.TransportException: Could not open connection
at org.wso2.andes.transport.network.mina.MinaNetworkTransport$IoConnectorCreator.connect(MinaNetworkTransport.java:216)
at org.wso2.andes.transport.network.mina.MinaNetworkTransport.connect(MinaNetworkTransport.java:74)
at org.wso2.andes.client.AMQConnectionDelegate_8_0.makeBrokerConnection(AMQConnectionDelegate_8_0.java:130)
at org.wso2.andes.client.AMQConnection$2.run(AMQConnection.java:631)
at org.wso2.andes.client.AMQConnection$2.run(AMQConnection.java:628)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.andes.client.AMQConnection.makeBrokerConnection(AMQConnection.java:628)
at org.wso2.andes.client.AMQConnection.<init>(AMQConnection.java:409)
Check whether the super admin user credentials are correct in the jndi.properties file and also in the api-manager.xml file. In api-manager.xml, the relevant configuration looks similar to the example below (your credentials may differ).
<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>amqp://admin:admin2#clientid/carbon?brokerlist='${jms.url}'</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>
I observed the same issue, resulting in:
org.wso2.andes.AMQConnectionFailureException: Could not open connection
I was able to resolve it by changing the above configuration to the correct super admin credentials. The correct admin credentials can be found in the <APIM_Home>/repository/conf/user-mgt.xml file. An example configuration is as follows:
<AdminUser>
    <UserName>admin</UserName>
    <Password>admin2</Password>
</AdminUser>
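In other words, the credentials embedded in the AMQP connection URLs should match the super admin user everywhere they appear. A sketch of a jndi.properties kept consistent with the above (reusing the admin/admin2 credentials and the mydevwso2.ca host from this thread; adjust to your environment):
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.TopicConnectionFactory = amqp://admin:admin2#clientid/carbon?brokerlist='tcp://mydevwso2.ca:5672'
connectionfactory.QueueConnectionFactory = amqp://admin:admin2#clientID/test?brokerlist='tcp://mydevwso2.ca:5672'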
I am using Solr 4.10.2. I tried to perform a shard split on my SolrCloud test cluster. It fails every time if the index lock type is set to "native" or "simple".
Is that normal? I can perform the shard split if the lock type is set to "single" or "none".
Shard splitting is advertised as something that can be done while Solr is running, and I can hardly imagine poking around changing the lock type of a production server...
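For reference, the lock type I am changing is the one under indexConfig in solrconfig.xml, roughly:
<indexConfig>
  <!-- write lock implementation; "native" is the default in the stock 4.x config -->
  <lockType>${solr.lock.type:native}</lockType>
</indexConfig>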
Here is the test environment:
1 shard, 2 nodes, 1 collection.
Initially the collection was empty. I added a few documents and verified that they had been replicated. All worked.
I issued the split shard command:
server1:port/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1&async=myhandle
I then verified that the operation had finished by calling
server1:port/solr/admin/collections?action=REQUESTSTATUS&requestid=myhandle
and the status was "complete".
Here is the log:
OverseerCollectionProcessor.processMessage : splitshard , {
"operation":"splitshard",
"shard":"shard1",
"collection":"mycollection",
"async":"myhandle"}
1/26/2015, 1:49:02 PM  ERROR  CoreContainer  Error creating core [mycollection_shard1_0_replica1]: Error opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
1/26/2015, 1:49:26 PM  ERROR  SolrIndexWriter  SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
1/26/2015, 1:49:26 PM  ERROR  SolrIndexWriter  Error closing IndexWriter
java.lang.NullPointerException
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3230)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3203)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:907)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:984)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:954)
at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:129)
at org.apache.solr.update.SolrIndexWriter.finalize(SolrIndexWriter.java:182)
at java.lang.System$2.invokeFinalize(System.java:1213)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:98)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:210)
I fixed the problem. Here is how:
When I created the SolrCloud environment, I used the -Dsolr.data.dir property to map the collection storage to a different file system, because I was running VMs with limited storage capacity. Once I removed this property, everything started working.
I think Solr tries to use the same solr.data.dir path for the new cores created by the shard split, causing the lock problem.
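If the index really does need to live on a different file system, an alternative I have not verified against shard splitting is to give each core its own data directory via its core.properties instead of one global -Dsolr.data.dir (the paths below are made up, based on the /nfs/solr path in the log):
# core.properties for one replica
name=mycollection_shard1_replica1
# per-core data directory, so the child cores created by SPLITSHARD don't collide on the same write.lock
dataDir=/nfs/solr/mycollection_shard1_replica1/data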
I followed https://github.com/imduffy15/GSoC-2014/ and installed Apache CloudStack with basic networking on my local machine, and everything worked smoothly.
I can add instances from the built-in tiny Linux template (CentOS 5.6 64-bit) without any issue.
I took a snapshot of a running instance's volume and created a template from it. When I try to add an instance using that template, I always get an InsufficientServerCapacityException.
According to my dashboard there is enough capacity to add the instance.
Any ideas?
console:
WARN [o.a.c.alerts] (API-Job-Executor-5:ctx-4692c763 job-166 ctx-b141500e) alertType:: 8 // dataCenterId:: 1 // podId:: null // clusterId:: null // message:: Failed to deploy Vm with Id: 29, on Host with Id: null
INFO [o.a.c.a.c.a.v.DeployVMCmdByAdmin] (API-Job-Executor-5:ctx-4692c763 job-166 ctx-b141500e) com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-29-VM]Scope=interface com.cloud.dc.DataCenter; id=1
INFO [o.a.c.a.c.a.v.DeployVMCmdByAdmin] (API-Job-Executor-5:ctx-4692c763 job-166 ctx-b141500e) Unable to create a deployment for VM[User|i-2-29-VM]
com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-29-VM]Scope=interface com.cloud.dc.DataCenter; id=1
at org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:214)
at org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve(VirtualMachineEntityImpl.java:200)
at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:3515)
at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:3166)
at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:3154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy224.startVirtualMachine(Unknown Source)
at org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin.execute(DeployVMCmdByAdmin.java:48)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:141)
at com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:503)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:460)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I replied to your private email; I'll reply here as well for anybody googling the issue.
This is most likely due to the service offering you are using.
The default CloudStack service offerings use shared storage, while the setup for my GSoC project is all local storage.
You can create a new service offering via the UI and just specify local storage.
Alternatively, open up the SQL database, look at the service_offering_view, and set the use_local_storage column to 1.
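A rough sketch of the inspection query (only use_local_storage comes from the advice above; the id and name columns are assumptions, so check the actual schema of your CloudStack version before changing anything):
-- see which service offerings are flagged for local storage
SELECT id, name, use_local_storage FROM service_offering_view;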
EDIT:
Can you look at the template_view for the template you created from the volume? Check that the HVM column flag is set to 0.
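Something along these lines, assuming template_view exposes the flag as an hvm column and with 'my-volume-template' standing in for whatever name you gave the template (both are placeholders):
-- check the HVM flag on the template created from the volume
SELECT id, name, hvm FROM template_view WHERE name = 'my-volume-template';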
I'm a newbie with Storm, and I have set up Storm-on-Yarn on an HDP cluster using the instructions at the HDP Storm-on-Yarn page and the storm-yarn-master from anfeng's storm-yarn git project.
I'm able to get Nimbus running and even submit topologies and see them on the Storm UI. However, the spouts and the bolts don't seem to be "working" (zero tuples emitted).
I did some digging around and realized that my worker daemons are not starting. The supervisor log spits out entries like this:
2014-03-13 11:22:03 b.s.d.supervisor [INFO] 18bf93a1-1cea-4e99-93da-8f36a4e9c056 still hasn't started
I tried launching the worker command from the "Launching worker with command" line in the supervisor log, and I got this error:
Exception in thread "main" java.lang.NoClassDefFoundError: backtype/storm/daemon/worker
Caused by: java.lang.ClassNotFoundException: backtype.storm.daemon.worker
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: backtype.storm.daemon.worker. Program will exit.
It looks like it can’t find the worker class although it’s present in the storm-core jar.
Any ideas on how I can proceed with troubleshooting this? I’ve attached the nimbus and the supervisor logs. The worker logs don't seem to have been created.
Nimbus Log - http://paste.ubuntu.com/7089418/
Supervisor Log - http://paste.ubuntu.com/7089422/
Hadoop Version - 2.2
Storm Version - 0.9.0-wip21
I've had an issue like this when the JAR file I was creating did not exclude the Storm binaries; i.e., in the pom.xml file, make sure that you have the storm-core dependency set with:
<scope>provided</scope>
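For example, something along these lines (the groupId and version are placeholders; use the coordinates that match the Storm build you deploy against, e.g. 0.9.0-wip21 in your case):
<dependency>
    <!-- provided: the cluster supplies storm-core at runtime, so it must not be bundled into the topology jar -->
    <groupId>storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>0.9.0-wip21</version>
    <scope>provided</scope>
</dependency>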
Also, I had issues where multiple versions of Netty were installed in the Storm lib folder (I had to delete the old JAR). This was also causing NoClassDefFoundErrors to be thrown (albeit different from the one you are experiencing).
I would suggest looking at the classpath that shows up when you submit the topologies (you can see it with ps -Af | grep storm).
I'm having trouble running a custom jar on Elastic MapReduce.
I'm using JDK 1.6.0_26 and Hadoop 0.20.205, compiling with Eclipse on my computer, and everything works perfectly fine.
For example, if I run the following on my computer it succeeds:
hadoop jar MaxTemperature.jar input/temperature.txt output
On AWS, I specified the jar as
s3n://chrishadoop/MaxTemperature.jar
and I specified the arguments as
s3n://chrishadoop/input/temperature.txt s3n://chrishadoop/output
I did not specify the main class because I pointed to it in the manifest.
Here is the jar I'm using; I will make it public for a little while:
https://s3.amazonaws.com/chrishadoop/MaxTemperature.jar
Here is the error I'm getting:
2012-07-08 19:31:39,824 INFO com.amazonaws.elasticmapreduce.statepusher.StatePusher (main): Pusher awoke, starting to push data into simpledb...
2012-07-08 19:31:40,552 FATAL com.amazonaws.elasticmapreduce.statepusher.StatePusher (main): Fatal Exception raised while extracting data from hadoop and pushing to simpledb
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException
at com.amazonaws.elasticmapreduce.statepusher.StatePusher.run(StatePusher.java:65)
at com.amazonaws.elasticmapreduce.statepusher.StatePusher.main(StatePusher.java:205)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.map.JsonMappingException
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 2 more
There is a version of Jackson installed as part of the AMI, and I'm guessing you're bundling a different version of Jackson? The error seems to be happening in the support code that makes "enable debugging" work.
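One quick way to check whether your jar bundles Jackson is to list its contents (a sketch, run against your local copy of the jar):
# look for bundled Jackson classes inside the MapReduce jar
jar tf MaxTemperature.jar | grep -i jackson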