Nexus indexing results in java.io.IOException: No space left on device

When running a re-index on our public group, which contains the Maven Central proxy among others, I get the error below; however, running df -k I cannot see any device that is anywhere near full... indeed, the partition Nexus is installed on has 156 GB free.
Is there something I can do to prevent this happening?
2013-06-07 12:04:18 WARN [pool-1-thread-7] - org.sonatype.nexus.tasks.UpdateIndexTask - Scheduled task (UpdateIndexTask) failed :: Updating repository index "Central" from path / and below. (started 2013-06-07T12:02:26+01:00, runtime 0:01:52.193)
java.io.IOException: background merge hit exception: _a(3.6.2):C360466 _l(3.6.2):C357408 _w(3.6.2):C329033 _x(3.6.2):c32252 _y(3.6.2):c32813 _z(3.6.2):c33077 _10(3.6.2):c33145 _11(3.6.2):c32795 _12(3.6.2):c17849 into _13 [maxNumSegments=1]
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:2555) ~[na:na]
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2400) ~[na:na]
at org.apache.maven.index.updater.IndexDataReader.readIndex(IndexDataReader.java:98) ~[na:na]
at org.apache.maven.index.updater.DefaultIndexUpdater.unpackIndexData(DefaultIndexUpdater.java:509) ~[na:na]
at org.apache.maven.index.updater.DefaultIndexUpdater.loadIndexDirectory(DefaultIndexUpdater.java:197) ~[na:na]
at org.apache.maven.index.updater.DefaultIndexUpdater.access$300(DefaultIndexUpdater.java:76) ~[na:na]
at org.apache.maven.index.updater.DefaultIndexUpdater$LuceneIndexAdaptor.setIndexFile(DefaultIndexUpdater.java:642) ~[na:na]
at org.apache.maven.index.updater.DefaultIndexUpdater.fetchAndUpdateIndex(DefaultIndexUpdater.java:862) ~[na:na]
at org.apache.maven.index.updater.DefaultIndexUpdater.fetchAndUpdateIndex(DefaultIndexUpdater.java:157) ~[na:na]
at org.sonatype.nexus.index.DefaultIndexerManager.updateRemoteIndex(DefaultIndexerManager.java:1311) ~[na:na]
at org.sonatype.nexus.index.DefaultIndexerManager.access$800(DefaultIndexerManager.java:186) ~[na:na]
at org.sonatype.nexus.index.DefaultIndexerManager$7.run(DefaultIndexerManager.java:1061) ~[na:na]
at org.sonatype.nexus.index.DefaultIndexerManager.sharedSingle(DefaultIndexerManager.java:2459) ~[na:na]
at org.sonatype.nexus.index.DefaultIndexerManager.reindexRepository(DefaultIndexerManager.java:1091) ~[na:na]
at org.sonatype.nexus.index.DefaultIndexerManager.reindexRepository(DefaultIndexerManager.java:1013) ~[na:na]
at org.sonatype.nexus.index.DefaultIndexerManager.reindexRepository(DefaultIndexerManager.java:989) ~[na:na]
at org.sonatype.nexus.tasks.ReindexTaskHandlerLegacy.reindexRepository(ReindexTaskHandlerLegacy.java:54) ~[na:na]
at org.sonatype.nexus.tasks.AbstractIndexerTask.doRun(AbstractIndexerTask.java:69) ~[na:na]
at org.sonatype.nexus.scheduling.AbstractNexusTask.call(AbstractNexusTask.java:179) ~[nexus-core-2.4.0-09.jar:2.4.0-09]
at org.sonatype.scheduling.DefaultScheduledTask.call(DefaultScheduledTask.java:459) [sisu-task-scheduler-1.7.jar:na]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [na:1.6.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:138) [na:1.6.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98) [na:1.6.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206) [na:1.6.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [na:1.6.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [na:1.6.0_45]
at java.lang.Thread.run(Thread.java:662) [na:1.6.0_45]
java.io.IOException: No space left on device
at java.io.RandomAccessFile.writeBytes(Native Method) ~[na:1.6.0_45]
at java.io.RandomAccessFile.write(RandomAccessFile.java:482) ~[na:1.6.0_45]
at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:448) ~[na:na]
at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:99) ~[na:na]
at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:88) ~[na:na]
at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:113) ~[na:na]
at org.apache.lucene.store.FSDirectory$FSIndexOutput.close(FSDirectory.java:458) ~[na:na]
at org.apache.lucene.util.IOUtils.close(IOUtils.java:141) ~[na:na]
at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:139) ~[na:na]
at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:232) ~[na:na]
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:107) ~[na:na]
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4263) ~[na:na]
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3908) ~[na:na]
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388) ~[na:na]
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456) ~[na:na]

You should be aware that a Lucene index needs up to 3-5 times the disk space of the resulting index while the index segments are being merged.
So if the Central index comes out at 3 GB, your disk should provide at least 15 GB of free space.
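A quick way to sanity-check this is to compare the size of the on-disk Lucene indexes with the free space on the partitions Nexus actually writes to. The paths below assume a default sonatype-work layout next to the Nexus installation; adjust them to your setup:
du -sh sonatype-work/nexus/indexer/*   # size of each repository's Lucene index context
df -h sonatype-work/nexus              # free space on the partition holding the work directory
df -h $NEXUS_HOME/tmp                  # temporary space used while the downloaded index is unpacked and merged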

I finally solved the problem!
I moved the NEXUS_HOME/tmp folder to the 1 TB partition.
Apparently, during the indexing process Nexus needs a large amount of temporary space in that folder.
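For anyone hitting the same issue, a minimal sketch of that relocation, assuming a Unix install, Nexus stopped first, and /data being the larger 1 TB partition (the mount point and layout are assumptions, not from the original post):
mv $NEXUS_HOME/tmp /data/nexus-tmp        # move the temporary directory onto the big partition
ln -s /data/nexus-tmp $NEXUS_HOME/tmp     # leave a symlink so Nexus keeps using the same path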

Related

Upgrade to Artifactory OSS 5.4, and Artifactory won't start

I have upgraded from Artifactory 5.1 to 5.4. Now Artifactory will not start. I am getting the following error:
INFO: Starting ProtocolHandler ["ajp-nio-8019"]
2017-06-22 09:59:34,971 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:243) - Got response from Access server after 2687 ms, continuing.
2017-06-22 09:59:35,387 [art-init] [ERROR] (o.a.w.s.ArtifactoryContextConfigListener:97) - Application could not be initialized: HTTP response status 401:{"errors":[{"code":"UNAUTHORIZED","detail":"Bad credentials","message":"HTTP 401 Unauthorized"}]}
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_91]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_91]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_91]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_91]
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:222) ~[artifactory-web-application-5.4.0.jar:na]
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.access$2(ArtifactoryContextConfigListener.java:184) ~[artifactory-web-application-5.4.0.jar:na]
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:93) ~[artifactory-web-application-5.4.0.jar:na]
Caused by: org.springframework.beans.factory.BeanInitializationException: Failed to initialize bean 'org.artifactory.security.access.AccessService'.; nested exception is java.lang.RuntimeException: Failed to generate service admin token using bootstrap credentials.
at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:230) ~[artifactory-core-5.4.0.jar:na]
at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:114) ~[artifactory-core-5.4.0.jar:na]
... 7 common frames omitted
Caused by: java.lang.RuntimeException: Failed to generate service admin token using bootstrap credentials.
at org.jfrog.access.client.AccessClientBootstrap.createAndStoreServiceAdminToken(AccessClientBootstrap.java:110) ~[access-client-core-2.0.0.jar:na]
at org.jfrog.access.client.AccessClientBootstrap.bootstrapServiceAdminToken(AccessClientBootstrap.java:79) ~[access-client-core-2.0.0.jar:na]
at org.jfrog.access.client.AccessClientBootstrap.<init>(AccessClientBootstrap.java:42) ~[access-client-core-2.0.0.jar:na]
at org.artifactory.security.access.AccessServiceImpl.initAccessService(AccessServiceImpl.java:227) ~[artifactory-core-5.4.0.jar:na]
at org.artifactory.security.access.AccessServiceImpl.initIfNeeded(AccessServiceImpl.java:216) ~[artifactory-core-5.4.0.jar:na]
at org.artifactory.security.access.AccessServiceImpl.init(AccessServiceImpl.java:211) ~[artifactory-core-5.4.0.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_91]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_91]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) ~[spring-aop-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) ~[spring-aop-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99) ~[spring-tx-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:281) ~[spring-tx-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at org.artifactory.storage.fs.lock.aop.LockingAdvice.invoke(LockingAdvice.java:76) ~[artifactory-storage-common-5.4.0.jar:na]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) ~[spring-aop-4.1.5.RELEASE.jar:4.1.5.RELEASE]
at com.sun.proxy.$Proxy144.init(Unknown Source) ~[na:na]
at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:228) ~[artifactory-core-5.4.0.jar:na]
... 8 common frames omitted
Caused by: org.jfrog.access.client.AccessClientHttpException: HTTP response status 401:{"errors":[{"code":"UNAUTHORIZED","detail":"Bad credentials","message":"HTTP 401 Unauthorized"}]}
at org.jfrog.access.client.http.AccessHttpClient.createRestResponse(AccessHttpClient.java:312) ~[access-client-core-2.0.0.jar:na]
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:299) ~[access-client-core-2.0.0.jar:na]
at org.jfrog.access.client.http.AccessHttpClient.createToken(AccessHttpClient.java:133) ~[access-client-core-2.0.0.jar:na]
at org.jfrog.access.client.token.TokenClientImpl.create(TokenClientImpl.java:36) ~[access-client-core-2.0.0.jar:na]
at org.jfrog.access.client.AccessClientBootstrap.createAndStoreServiceAdminToken(AccessClientBootstrap.java:103) ~[access-client-core-2.0.0.jar:na]
... 28 common frames omitted
2017-06-22 09:59:41,768 [http-nio-8081-exec-7] [ERROR] (o.a.w.s.ArtifactoryFilter:188) - Artifactory failed to initialize: Context is null
Jun 22, 2017 10:04:14 AM org.apache.catalina.core.StandardServer await
This error occurs due to a missing step during the upgrade process. As mentioned in the wiki page, as part of the upgrade you need to remove the existing $ARTIFACTORY_HOME/bin folder and copy over the new one from the extracted zip file.
This step is crucial, as the new bin folder contains a property that makes the Access service run bundled with Artifactory. When this property is missing, the Access service creates a new database with a new admin token, different from the existing one. This results in the 401 error you're seeing, which prevents Artifactory from starting.
To overcome this issue, follow the steps of the upgrade process, including the removal of the existing bin folder.
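Roughly, the missing step looks like this (the paths and the service commands are assumptions; adapt them to how Artifactory is installed on your host):
service artifactory stop                                                    # stop Artifactory before touching its home
mv $ARTIFACTORY_HOME/bin $ARTIFACTORY_HOME/bin.old                          # keep the old bin folder as a backup
cp -r /path/to/extracted/artifactory-oss-5.4.0/bin $ARTIFACTORY_HOME/bin    # bin folder from the new distribution zip
service artifactory start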

Nexus OSS Yum: generate meta data 'option --no-database not recognized'

I am using Nexus Repository Manager OSS 2.14.4-03 on RHEL5.
When using the Yum: Generate Meta Data capability, the task fails with the Nexus log saying:
'org.sonatype.nexus.yum.internal.task.CommandLineExecutor - Options Error: option --no-database not recognized.'
I know that RHEL5 only ships createrepo v0.4.9, which does not recognize the --no-database option. This thread, however, https://issues.sonatype.org/browse/NEXUS-6801, raises the issue and claims it has been solved. According to the comments on this thread, it seems there is a parameter within the yum plugin declared as @Named("${nexus.yum.useNoDatabaseSwitch:-true}") boolean useNoDatabaseSwitch,
but I can't figure out how to set this variable. I think all I need to do is set this boolean to false.
Also, the plugin configuration mentioned in that thread might be outdated because the Yum plugin is now included with Nexus. I can't seem to find any configuration options for the Yum plugin, no yum.xml to be seen.
Any help would be great, thanks!
The full Nexus log from the Yum: Generate Meta Data task is included below:
2017-05-09 16:18:23,812-0700 INFO [pxpool-1-thread-12] scitegicuser org.sonatype.nexus.yum.internal.task.GenerateMetadataTask - Scheduled task (Generate Biovia rpm yum metadata) started :: Generate Yum metadata of repository 'biovia-rpms'
2017-05-09 16:18:24,069-0700 ERROR [pxpool-1-thread-12] scitegicuser org.sonatype.nexus.yum.internal.task.CommandLineExecutor - Options Error: option --no-database not recognized.
2017-05-09 16:18:24,072-0700 WARN [pxpool-1-thread-12] scitegicuser org.sonatype.nexus.yum.internal.task.GenerateMetadataTask - Yum metadata generation failed
org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:377) ~[nexus-yum-repository-plugin-2.14.4-03/:na]
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160) ~[nexus-yum-repository-plugin-2.14.4-03/:na]
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147) ~[nexus-yum-repository-plugin-2.14.4-03/:na]
at org.sonatype.nexus.yum.internal.task.CommandLineExecutor.exec(CommandLineExecutor.java:68) ~[nexus-yum-repository-plugin-2.14.4-03/:na]
at org.sonatype.nexus.yum.internal.task.CommandLineExecutor.exec(CommandLineExecutor.java:43) ~[nexus-yum-repository-plugin-2.14.4-03/:na]
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:162) [nexus-yum-repository-plugin-2.14.4-03/:na]
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:69) [nexus-yum-repository-plugin-2.14.4-03/:na]
at org.sonatype.nexus.scheduling.AbstractNexusTask.call(AbstractNexusTask.java:163) [nexus-core-2.14.4-03.jar:2.14.4-03]
at org.sonatype.scheduling.DefaultScheduledTask.call(DefaultScheduledTask.java:418) [nexus-scheduler-2.14.4-03.jar:2.14.4-03]
at org.sonatype.nexus.threads.MDCAwareCallable.call(MDCAwareCallable.java:44) [nexus-core-2.14.4-03.jar:2.14.4-03]
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.3.2.jar:1.3.2]
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.3.2.jar:1.3.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
2017-05-09 16:18:24,073-0700 WARN [pxpool-1-thread-12] scitegicuser org.sonatype.nexus.yum.internal.task.GenerateMetadataTask - Scheduled task (Generate Biovia rpm yum metadata) failed :: Generate Yum metadata of repository 'biovia-rpms' (started 2017-05-09T16:18:23-07:00, runtime 0:00:00.260)
java.io.IOException: Yum metadata generation failed
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:166) ~[na:na]
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:69) ~[na:na]
at org.sonatype.nexus.scheduling.AbstractNexusTask.call(AbstractNexusTask.java:163) ~[nexus-core-2.14.4-03.jar:2.14.4-03]
at org.sonatype.scheduling.DefaultScheduledTask.call(DefaultScheduledTask.java:418) [nexus-scheduler-2.14.4-03.jar:2.14.4-03]
at org.sonatype.nexus.threads.MDCAwareCallable.call(MDCAwareCallable.java:44) [nexus-core-2.14.4-03.jar:2.14.4-03]
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.3.2.jar:1.3.2]
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.3.2.jar:1.3.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:377) ~[na:na]
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160) ~[na:na]
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147) ~[na:na]
at org.sonatype.nexus.yum.internal.task.CommandLineExecutor.exec(CommandLineExecutor.java:68) ~[na:na]
at org.sonatype.nexus.yum.internal.task.CommandLineExecutor.exec(CommandLineExecutor.java:43) ~[na:na]
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:162) ~[na:na]
... 12 common frames omitted
2017-05-09 16:18:24,407-0700 WARN [pxpool-1-thread-12] scitegicuser org.sonatype.scheduling.DefaultScheduledTask - Exception in call method of scheduled task Generate Biovia rpm yum metadata
java.io.IOException: Yum metadata generation failed
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:166) ~[na:na]
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:69) ~[na:na]
at org.sonatype.nexus.scheduling.AbstractNexusTask.call(AbstractNexusTask.java:163) ~[nexus-core-2.14.4-03.jar:2.14.4-03]
at org.sonatype.scheduling.DefaultScheduledTask.call(DefaultScheduledTask.java:418) ~[nexus-scheduler-2.14.4-03.jar:2.14.4-03]
at org.sonatype.nexus.threads.MDCAwareCallable.call(MDCAwareCallable.java:44) [nexus-core-2.14.4-03.jar:2.14.4-03]
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.3.2.jar:1.3.2]
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.3.2.jar:1.3.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:377) ~[na:na]
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160) ~[na:na]
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147) ~[na:na]
at org.sonatype.nexus.yum.internal.task.CommandLineExecutor.exec(CommandLineExecutor.java:68) ~[na:na]
at org.sonatype.nexus.yum.internal.task.CommandLineExecutor.exec(CommandLineExecutor.java:43) ~[na:na]
at org.sonatype.nexus.yum.internal.task.GenerateMetadataTask.doRun(GenerateMetadataTask.java:162) ~[na:na]
... 12 common frames omitted
Yum support in Nexus requires RHEL 6 or higher; it won't work with the createrepo version shipped with RHEL 5.
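For what it's worth, if you are on a platform where flipping that switch would actually help, the @Named("${nexus.yum.useNoDatabaseSwitch:-true}") default can normally be overridden with a JVM system property; in Nexus 2.x those are added in $NEXUS_HOME/bin/jsw/conf/wrapper.conf. Whether this particular property is honoured by your exact plugin version is an assumption based on NEXUS-6801, not something verified here:
# add to wrapper.conf, using an index not already taken by another wrapper.java.additional entry
wrapper.java.additional.15=-Dnexus.yum.useNoDatabaseSwitch=false
Restart Nexus afterwards for the property to take effect.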

Can I connect a NiFi docker container to an HBase container over a docker user-defined bridge network?

My objective:
Use NiFi running on an HDF docker container to store data into HBase running on an HDP docker container.
Progress:
I am running two docker containers: NiFi and HBase. I have configured NiFi's PutHBaseJSON processor to write data to HBase (puthbasejson_configuration.png).
Below are the configurations I updated in the processor:
PutHBaseJSON = HBase_1_1_2_ClientService
Table Name = publictrans
Row Identifier Field Name = Vehicle_ID
Row Identifier Encoding Strategy = String
Column Family = trafficpatterns
Batch Size = 25
Complex Field Strategy = Text
Field Encoding Strategy = String
HBase Client Service
I also configured NiFi's HBase client service on that processor, so NiFi knows the IP address of ZooKeeper and can ask ZooKeeper where the HBase Master is located (hbaseclientservice_configuration.png).
Configure Controller Service for HBaseClient:
ZooKeeper Quorum = 172.25.0.3
ZooKeeper Client Port = 2181
ZooKeeper ZNode Parent = /hbase-unsecure
HBase Client Retries = 1
Problem:
The problem I am facing is that NiFi is unable to make the connection to the HBase Master. I get the following message: failed to invoke "#OnEnabled method due to ... hbase.client.RetriesExhaustedException ... hbase.MasterNotRunningException ... java.net.ConnectException: Connection refused." The full stack trace from the HBase client service is included below (hbaseMasterNotRunningException Stack Trace).
Configurations I made to troubleshoot the problem:
In the HDF container, I updated /etc/hosts with 172.25.0.3 -> hdp.hortonworks.com. In the HDP container, I updated the hosts file with 172.25.0.2 -> hdf.hortonworks.com. So both containers are aware of each other's hostnames.
I forwarded the needed ports for NiFi, ZooKeeper and HBase when I built the HDF and HDP containers. I checked that all HBase ports were exposed on the HDP container, and the image shows all the ports HDP is listening on, including HBase's ports (ports_hdp_listening_on.png). Here is an image of all the ports needed by HBase; I filtered for the port keyword in Ambari (hbase_ports_needed.png).
Ports 16000 and 16020 both looked suspicious, since all the others followed the pattern :::port while those two had text preceding it. So I checked whether I could connect to HDP from HDF using telnet 172.25.0.3 16000 and received the output:
Trying 172.25.0.3...
Connected to 172.25.0.3.
Escape character is '^]'.
So I was able to connect to the HDP container.
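For context, the two containers sit on a user-defined bridge network that was created along these lines (the network name, container names and image tags below are placeholders, not my exact commands):
docker network create --driver bridge hdf-hdp-net                                                    # user-defined bridge with built-in DNS
docker run -d --name hdp --hostname hdp.hortonworks.com --network hdf-hdp-net <hdp-sandbox-image>
docker run -d --name hdf --hostname hdf.hortonworks.com --network hdf-hdp-net <hdf-sandbox-image>
On a user-defined bridge the containers can also resolve each other by container name, in addition to the /etc/hosts entries described above.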
hbaseMasterNotRunningException Stack Trace:
2017-01-25 22:23:03,342 ERROR [StandardProcessScheduler Thread-7] o.a.n.c.s.StandardControllerServiceNode HBase_1_1_2_ClientService[id=d3eaf393-0159-1000-ffff-ffffa95f1940] Failed to invoke #OnEnabled method due to org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Wed Jan 25 22:23:03 UTC 2017, RpcRetryingCaller{globalStartTime=1485382983338, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
2017-01-25 22:23:03,348 ERROR [StandardProcessScheduler Thread-7] o.a.n.c.s.StandardControllerServiceNode
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Wed Jan 25 22:23:03 UTC 2017, RpcRetryingCaller{globalStartTime=1485382983338, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147) ~[na:na]
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917) ~[na:na]
at org.apache.hadoop.hbase.client.HBaseAdmin.listTableNames(HBaseAdmin.java:413) ~[na:na]
at org.apache.hadoop.hbase.client.HBaseAdmin.listTableNames(HBaseAdmin.java:397) ~[na:na]
at org.apache.nifi.hbase.HBase_1_1_2_ClientService.onEnabled(HBase_1_1_2_ClientService.java:187) ~[na:na]
at sun.reflect.GeneratedMethodAccessor568.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_111]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_111]
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137) ~[na:na]
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125) ~[na:na]
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70) ~[na:na]
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47) ~[na:na]
at org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:345) ~[na:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_111]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1533) ~[na:na]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1553) ~[na:na]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1704) ~[na:na]
at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) ~[na:na]
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124) ~[na:na]
... 19 common frames omitted
Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223) ~[na:na]
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287) ~[na:na]
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:50918) ~[na:na]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1564) ~[na:na]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1502) ~[na:na]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1524) ~[na:na]
... 23 common frames omitted
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_111]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_111]
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[na:na]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) ~[na:na]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:424) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:748) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:889) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1222) ~[na:na]
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213) ~[na:na]
... 28 common frames omitted
2017-01-25 22:23:03,348 ERROR [StandardProcessScheduler Thread-7] o.a.n.c.s.StandardControllerServiceNode Failed to invoke #OnEnabled method of HBase_1_1_2_ClientService[id=d3eaf393-0159-1000-ffff-ffffa95f1940] due to org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Wed Jan 25 22:23:03 UTC 2017, RpcRetryingCaller{globalStartTime=1485382983338, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
I am currently still dealing with this problem.
Has anyone set up a NiFi HDF docker container to store data into an HBase HDP docker container?

In Oozie installation, while creating the Oozie DB I am getting the following error

Command:
wcbdd#ubuntu:~/apache/oozie-4.1.0/distro/target/oozie-4.1.0/bin$ sudo -u oozie ./ooziedb.sh create -sqlfile oozie.sql -run
setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
Validate DB Connection
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/ReflectionUtils
at org.apache.oozie.service.Services.setServiceInternal(Services.java:374)
at org.apache.oozie.service.Services.<init>(Services.java:110)
at org.apache.oozie.tools.OozieDBCLI.getJdbcConf(OozieDBCLI.java:163)
at org.apache.oozie.tools.OozieDBCLI.createConnection(OozieDBCLI.java:845)
at org.apache.oozie.tools.OozieDBCLI.validateConnection(OozieDBCLI.java:853)
at org.apache.oozie.tools.OozieDBCLI.createDB(OozieDBCLI.java:181)
at org.apache.oozie.tools.OozieDBCLI.run(OozieDBCLI.java:125)
at org.apache.oozie.tools.OozieDBCLI.main(OozieDBCLI.java:76)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.ReflectionUtils
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 8 more
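One thing worth checking (this is an assumption about the environment, not something stated above): the ClassNotFoundException for org.apache.hadoop.util.ReflectionUtils means the Hadoop client jars are not on the classpath that ooziedb.sh builds. A common workaround is to make the Hadoop jars visible to Oozie, for example by copying them into the libext/ directory before running the DB setup, roughly:
# the exact directory scanned depends on the Oozie build/distribution; libext/ is the conventional place
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar ~/apache/oozie-4.1.0/distro/target/oozie-4.1.0/libext/
sudo -u oozie ./ooziedb.sh create -sqlfile oozie.sql -run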

Magnolia CMS 5.4 (activation)

I have a problem making changes to my CSS file (Hotfix resource, Publish). When I want to publish changes in my style.css, it gives me this error:
2015-07-29 09:49:45,592 ERROR info.magnolia.module.activation.ExchangeTask : Failed to deactivate content.
info.magnolia.cms.exchange.ExchangeException: Not able to send the activation request [http:// localhost:8080/magnoliaPublic/.magnolia/activation]: http:// localhost:8080/magnoliaPublic/.magnolia/activation
at info.magnolia.module.activation.BaseSyndicatorImpl.activate(BaseSyndicatorImpl.java:443)
at info.magnolia.module.activation.SimpleSyndicator$2.runTask(SimpleSyndicator.java:132)
at info.magnolia.module.activation.ExchangeTask.run(ExchangeTask.java:75)
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: http:// localhost:8080/magnoliaPublic/.magnolia/activation
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1889)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1884)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1883)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1456)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.URLConnection.getContent(URLConnection.java:739)
at info.magnolia.module.activation.BaseSyndicatorImpl.activate(BaseSyndicatorImpl.java:428)
... 4 more
Caused by: java.io.FileNotFoundException: http:// localhost:8080/magnoliaPublic/.magnolia/activation
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1835)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2942)
at info.magnolia.module.activation.BaseSyndicatorImpl.transportActivatedData(BaseSyndicatorImpl.java:482)
at info.magnolia.module.activation.BaseSyndicatorImpl.activate(BaseSyndicatorImpl.java:407)
... 4 more
2015-07-29 09:49:45,596 ERROR info.magnolia.module.activation.SimpleSyndicator : Not able to send the activation request [http:// localhost:8080/magnoliaPublic/.magnolia/activation]: http:// localhost:8080/magnoliaPublic/.magnolia/activation
info.magnolia.cms.exchange.ExchangeException: Not able to send the activation request [http:// localhost:8080/magnoliaPublic/.magnolia/activation]: http:// localhost:8080/magnoliaPublic/.magnolia/activation
at info.magnolia.module.activation.BaseSyndicatorImpl.activate(BaseSyndicatorImpl.java:443)
at info.magnolia.module.activation.SimpleSyndicator$2.runTask(SimpleSyndicator.java:132)
at info.magnolia.module.activation.ExchangeTask.run(ExchangeTask.java:75)
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: http:// localhost:8080/magnoliaPublic/.magnolia/activation
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1889)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1884)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1883)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1456)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.URLConnection.getContent(URLConnection.java:739)
at info.magnolia.module.activation.BaseSyndicatorImpl.activate(BaseSyndicatorImpl.java:428)
... 4 more
Caused by: java.io.FileNotFoundException: http:// localhost:8080/magnoliaPublic/.magnolia/activation
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1835)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2942)
at info.magnolia.module.activation.BaseSyndicatorImpl.transportActivatedData(BaseSyndicatorImpl.java:482)
at info.magnolia.module.activation.BaseSyndicatorImpl.activate(BaseSyndicatorImpl.java:407)
... 4 more
I do not know what this error is or how to fix it.
Any suggestions?
It seems to me you have misconfigured the subscriber: the stack trace contains http:// localhost:8080/magnoliaPublic/.magnolia/activation several times, with a space between http:// and localhost:8080. Check the subscriber configuration, remove the space, and let us know. ;-)
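For reference, once the stray space is removed, the activation URL in the subscriber configuration should read:
http://localhost:8080/magnoliaPublic/.magnolia/activation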
