Flume not processing keywords from Twitter source with flume-ng on Hadoop 2.5 CDH 5.3 - flume-ng

I am trying to process some Twitter keywords using a memory channel (MemChannel) and an HDFS sink, but flume-ng shows no further progress on the console after the HDFS started status.
Here are the /etc/flume-ng/conf/flume-env.sh file contents.
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced during Flume startup.
# Environment variables can be set here.
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
# export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"
# Note that the Flume conf directory is always included in the classpath.
#FLUME_CLASSPATH=""
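For reference, JAVA_OPTS is left commented out above, so the agent runs with Flume's small default heap (note the -Xmx20m in the exec line later in the output). If the agent needed more memory, one could uncomment it, for example reusing the values from the comment:
export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"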
Here are the contents of the Twitter agent configuration file.
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
#TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TwitterAgent.sources.Twitter.consumerSecret = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TwitterAgent.sources.Twitter.accessToken = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TwitterAgent.sources.Twitter.accessTokenSecret = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://uat.cloudera:8020/user/root/flume/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
I am running the below command on the CentOS console.
flume-ng agent -c /etc/flume-ng/conf -f /etc/flume-ng/conf/twitter.conf -n TwitterAgent -Dflume.root.logger=INFO,console
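As a quick sanity check (paths and agent name taken from the command above), the top-level source/channel/sink declarations that this agent will read can be listed with:
grep -E '^TwitterAgent\.(sources|channels|sinks)[[:space:]]*=' /etc/flume-ng/conf/twitter.conf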
When I run the command, here is the output.
Info: Sourcing environment configuration script /etc/flume-ng/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12.jar from classpath
+ exec /usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp r/lib/flume-ng/../search/lib/xmlbeans-2.3.0.jar:/usr/lib/flume-ng/../search/lib/xmlenc-0.52.jar:/usr/lib/flume-ng/../search/lib/xmpcore-5.1.2.jar:/usr/lib/flume-ng/../search/lib/xz-1.0.jar:/usr/lib/flume-ng/../search/lib/zookeeper.jar' -Djava.library.path=:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native org.apache.flume.node.Application -f /etc/flume-ng/conf/farrukh.conf -n TwitterAgent
2015-09-24 12:05:38,876 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2015-09-24 12:05:38,885 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:/etc/flume-ng/conf/farrukh.conf
2015-09-24 12:05:38,896 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,896 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,897 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,897 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:931)] Added sinks: HDFS Agent: TwitterAgent
2015-09-24 12:05:38,897 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,897 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,897 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,897 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,898 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:HDFS
2015-09-24 12:05:38,911 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSources(FlumeConfiguration.java:508)] Agent configuration for 'TwitterAgent' has no sources.
2015-09-24 12:05:38,919 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:141)] Post-validation flume configuration contains configuration for agents: [TwitterAgent]
2015-09-24 12:05:38,920 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:145)] Creating channels
2015-09-24 12:05:38,939 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel MemChannel type memory
2015-09-24 12:05:38,957 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:200)] Created channel MemChannel
2015-09-24 12:05:38,963 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: HDFS, type: hdfs
2015-09-24 12:05:40,019 (conf-file-poller-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:559)] Hadoop Security enabled: false
2015-09-24 12:05:40,022 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:114)] Channel MemChannel connected to [HDFS]
2015-09-24 12:05:40,031 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{} sinkRunners:{HDFS=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#3c1cefaa counterGroup:{ name:null counters:{} } }} channels:{MemChannel=org.apache.flume.channel.MemoryChannel{name: MemChannel}} }
2015-09-24 12:05:40,040 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel MemChannel
2015-09-24 12:05:40,218 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: CHANNEL, name: MemChannel: Successfully registered new MBean.
2015-09-24 12:05:40,218 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: CHANNEL, name: MemChannel started
2015-09-24 12:05:40,219 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink HDFS
2015-09-24 12:05:40,221 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: HDFS: Successfully registered new MBean.
2015-09-24 12:05:40,221 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: HDFS started
Here are the details of my environment.
JDK
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
OS
CentOS release 6.4 (Final)
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
cat: /etc/lsb-release.d: Is a directory
cpe:/o:centos:linux:6:GA
Flume-ng
Flume 1.5.0-cdh5.3.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: cc2139f386f7fccc9a6e105e2026228af58c6e9f
Compiled by jenkins on Tue Dec 16 20:25:18 PST 2014
From source with checksum 0b02653a07c9e96af03ce2189b8d51c3
Hadoop
Hadoop 2.5.0-cdh5.3.0
Subversion http://github.com/cloudera/hadoop -r f19097cda2536da1df41ff6713556c8f7284174d
Compiled by jenkins on 2014-12-17T03:05Z
Compiled with protoc 2.5.0
From source with checksum 9c4267e6915cf5bbd4c6e08be54d54e0
This command was run using /usr/lib/hadoop/hadoop-common-2.5.0-cdh5.3.0.jar
Here is the output of the hdfs dfsadmin -report command.
Configured Capacity: 20506943488 (19.10 GB)
Present Capacity: 20506943488 (19.10 GB)
DFS Remaining: 20057721155 (18.68 GB)
DFS Used: 449222333 (428.41 MB)
DFS Used%: 2.19%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (1):
Name: 127.0.0.1:50010 (uat.cloudera)
Hostname: uat.cloudera
Rack: /default
Decommission Status : Normal
Configured Capacity: 20506943488 (19.10 GB)
DFS Used: 449222333 (428.41 MB)
Non DFS Used: 0 (0 B)
DFS Remaining: 20057721155 (18.68 GB)
DFS Used%: 2.19%
DFS Remaining%: 97.81%
Configured Cache Capacity: 4294967296 (4 GB)
Cache Used: 0 (0 B)
Cache Remaining: 4294967296 (4 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 6
Last contact: Thu Sep 25 12:09:42 PDT 2015

You are missing the ".sources" property of the agent; Flume cannot work without knowing the source. You are missing the following line:
TwitterAgent.sources = Twitter
See the source, channel, and sink relationship diagram in the Flume User Guide for more detail:
https://flume.apache.org/FlumeUserGuide.html
Always remember there are three main things in a Flume configuration file (sources, channels, sinks). The first three lines set these three properties:
TwitterAgent.sources = Twitter
TwitterAgent.sinks = HDFS
TwitterAgent.channels = MemChannel
The rest of the configuration file sets the detailed properties of these three main components (sources, channels, sinks).
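In general, the naming pattern looks like this (illustrative placeholders, not specific property names):
<agent-name>.sources = <source-name>
<agent-name>.channels = <channel-name>
<agent-name>.sinks = <sink-name>
<agent-name>.sources.<source-name>.<property> = <value>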
Check the corrected configuration file contents below.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
#TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = xxxxx
TwitterAgent.sources.Twitter.consumerSecret = xxxxxx
TwitterAgent.sources.Twitter.accessToken = xxxxx
TwitterAgent.sources.Twitter.accessTokenSecret = xxxxx
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://uat.cloudera:8020/user/root/flume/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 10
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10
TwitterAgent.channels.MemChannel.transactionCapacity = 10
Besides setting the sources property, I have also changed the below properties so that we can quickly see the results on HDFS as temporary files.
TwitterAgent.sinks.HDFS.hdfs.batchSize = 10
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10
TwitterAgent.channels.MemChannel.capacity = 10
TwitterAgent.channels.MemChannel.transactionCapacity = 10
Copy the contents and save them in a configuration file, e.g. sample.conf in the /etc/flume-ng/conf/ folder, and then use the below command.
flume-ng agent -c /etc/flume-ng/conf -f /etc/flume-ng/conf/sample.conf -n TwitterAgent -Dflume.root.logger=INFO,console
After the HDFS started status, it should show processing messages like this.
2015-09-25 13:44:18,045 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.source.twitter.TwitterSource.start(TwitterSource.java:139)] Twitter source Twitter started.
2015-09-25 13:44:18,045 (Twitter Stream consumer-1[initializing]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] Establishing connection.
2015-09-25 13:44:19,931 (Twitter Stream consumer-1[Establishing connection]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] Connection established.
2015-09-25 13:44:19,931 (Twitter Stream consumer-1[Establishing connection]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] Receiving status stream.
2015-09-25 13:44:20,283 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
2015-09-25 13:44:20,557 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:261)] Creating hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860284.tmp
2015-09-25 13:44:22,435 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 100 docs
2015-09-25 13:44:25,383 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 200 docs
2015-09-25 13:44:28,178 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 300 docs
2015-09-25 13:44:30,505 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:413)] Closing hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860284.tmp
2015-09-25 13:44:30,506 (hdfs-HDFS-call-runner-2) [INFO - org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:339)] Close tries incremented
2015-09-25 13:44:30,526 (hdfs-HDFS-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:673)] Renaming hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860284.tmp to hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860284
2015-09-25 13:44:30,607 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:261)] Creating hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860285.tmp
2015-09-25 13:44:31,157 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 400 docs
2015-09-25 13:44:33,330 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 500 docs
2015-09-25 13:44:36,131 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 600 docs
2015-09-25 13:44:38,298 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 700 docs
2015-09-25 13:44:40,465 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 800 docs
2015-09-25 13:44:41,158 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:413)] Closing hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860285.tmp
2015-09-25 13:44:41,158 (hdfs-HDFS-call-runner-6) [INFO - org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:339)] Close tries incremented
2015-09-25 13:44:41,166 (hdfs-HDFS-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:673)] Renaming hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860285.tmp to hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860285
2015-09-25 13:44:41,230 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:261)] Creating hdfs://uat.cloudera:8020/user/root/flume/FlumeData.1443213860286.tmp
2015-09-25 13:44:43,238 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 900 docs
2015-09-25 13:44:46,118 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:178)] Processed 1,000 docs
2015-09-25 13:44:46,118 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:300)] Total docs indexed: 1,000, total skipped docs: 0
2015-09-25 13:44:46,118 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:302)] 35 docs/second
2015-09-25 13:44:46,118 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:304)] Run took 28 seconds and processed:
2015-09-25 13:44:46,118 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:306)] 0.009 MB/sec sent to index
2015-09-25 13:44:46,119 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:308)] 0.255 MB text sent to index
2015-09-25 13:44:46,119 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:310)] There were 0 exceptions ignored:
^C2015-09-25 13:44:46,666 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:79)] Stopping lifecycle supervisor 10
2015-09-25 13:44:46,673 (agent-shutdown-hook) [INFO - org.apache.flume.source.twitter.TwitterSource.stop(TwitterSource.java:150)] Twitter source Twitter stopping...
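If the agent is writing as expected, the FlumeData files named in the log above should also be visible in HDFS; a quick check (path taken from the sink configuration) is:
hdfs dfs -ls hdfs://uat.cloudera:8020/user/root/flume/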
Let me know if your issue is still not resolved.

Related

Cannot run flow in corda node

I am running a Corda network on Kubernetes (Corda version 4.4) and I am trying to install and run a CorDapp.
The CorDapp I am trying to run is the Heartbeat one (from the GitHub Corda samples folder).
But whenever I try to start the flow using the command start StartHeartbeatFlow
I get the following error message:
[INFO] 11:00:32+0200 [pool-2-thread-11] shell.StartShellCommand.main - Executing command "start StartHeartbeatFlow <no arguments>",
start StartHeartbeatFlow: exception: com.heartbeat.StartHeartbeatFlow
Tue Apr 07 11:00:32 CEST 2020>>> [ERROR] 11:00:32+0200 [pool-2-thread-11] command.CRaSHSession.execute - Error while evaluating request 'start StartHeartbeatFlow' start StartHeartbeatFlow: exception: com.heartbeat.StartHeartbeatFlow [errorCode=1oe81or, moreInformationAt=https://errors.corda.net/OS/4.4/1oe81or]
Which doesn't really help me on how to solve the issue :/
Running flow list lists the StartHeartbeatFlow, so it's not an issue with the installation of the cordapp...
Has anyone encountered the same kind of issue?
Thanks!
Edit: Here are the logs from the Corda node when I execute the flow start StartHeartbeatFlow command.
corda#corda-node-corda-node-0:~/logs$ tail -f corda-node.log | grep -A 10 -B 10 "heartbeat"
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] realm.AuthenticatingRealm. - Looked up AuthenticationInfo [rpcuser] from doGetAuthenticationInfo
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] realm.AuthenticatingRealm. - AuthenticationInfo caching is disabled for info [rpcuser]. Submitted token: [org.apache.shiro.authc.UsernamePasswordToken - rpcuser, rememberMe=false].
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] credential.SimpleCredentialsMatcher. - Performing credentials equality check for tokenCredentials of type [[C and accountCredentials of type [java.lang.String]
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] credential.SimpleCredentialsMatcher. - Both credentials arguments can be easily converted to byte arrays. Performing array equals comparison
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] authc.AbstractAuthenticator. - Authentication successful for token [org.apache.shiro.authc.UsernamePasswordToken - rpcuser, rememberMe=false]. Returned account [rpcuser]
[DEBUG] 2020-04-07T13:21:09,768Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] artemis.BrokerJaasLoginModule. - Login for rpcuser succeeded
[DEBUG] 2020-04-07T13:21:09,771Z [Thread-12 (ActiveMQ-client-global-threads)] rpc.RPCServer. - -> RPC by rpcuser -> registeredFlows
[DEBUG] 2020-04-07T13:21:09,772Z [Thread-12 (ActiveMQ-client-global-threads)] rpc.RPCServer. - Arguments: [] {actor_id=rpcuser, actor_owning_identity=OU=Regular Node, O=organization, L=Brussels, C=BE, actor_store_id=NODE_CONFIG, invocation_id=aac88106-cf60-4c63-b1b9-c5fac224b89a, invocation_timestamp=2020-04-07T13:21:09.771Z, origin=rpcuser, session_id=df6cc401-6f9f-41b5-9a18-790c28e33b06, session_timestamp=2020-04-07T13:11:30.204Z}
[DEBUG] 2020-04-07T13:21:09,772Z [rpc-server-handler-pool-1] realm.AuthorizingRealm. - No authorizationCache instance set. Checking for a cacheManager... {actor_id=rpcuser, actor_owning_identity=OU=Regular Node, O=organization, L=Brussels, C=BE, actor_store_id=NODE_CONFIG, invocation_id=aac88106-cf60-4c63-b1b9-c5fac224b89a, invocation_timestamp=2020-04-07T13:21:09.771Z, origin=rpcuser, session_id=df6cc401-6f9f-41b5-9a18-790c28e33b06, session_timestamp=2020-04-07T13:11:30.204Z}
[DEBUG] 2020-04-07T13:21:09,772Z [rpc-server-handler-pool-1] realm.AuthorizingRealm. - No cache or cacheManager properties have been set. Authorization cache cannot be obtained. {actor_id=rpcuser, actor_owning_identity=OU=Regular Node, O=organization, L=Brussels, C=BE, actor_store_id=NODE_CONFIG, invocation_id=aac88106-cf60-4c63-b1b9-c5fac224b89a, invocation_timestamp=2020-04-07T13:21:09.771Z, origin=rpcuser, session_id=df6cc401-6f9f-41b5-9a18-790c28e33b06, session_timestamp=2020-04-07T13:11:30.204Z}
[DEBUG] 2020-04-07T13:21:09,773Z [rpc-server-sender] rpc.RPCServer. - <- RPC <- RpcReply(id=10ea96d9-5c19-4200-a64e-1eb3903835ce, timestamp: 2020-04-07T13:21:09.748Z, entityType: Invocation, result=Success([com.heartbeat.StartHeartbeatFlow, net.corda.core.flows.ContractUpgradeFlow$Authorise, net.corda.core.flows.ContractUpgradeFlow$Deauthorise, net.corda.core.flows.ContractUpgradeFlow$Initiate]), deduplicationIdentity=9c974c01-08af-44d0-bdef-c609efee11a8)
[DEBUG] 2020-04-07T13:21:09,775Z [Thread-62 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] artemis.BrokerJaasLoginModule. - Processing login for SystemUsers/NodeRPC
[DEBUG] 2020-04-07T13:21:09,775Z [Thread-62 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] artemis.BrokerJaasLoginModule. - Login for SystemUsers/NodeRPC succeeded
[DEBUG] 2020-04-07T13:21:09,809Z [Network Map Updater Thread-1] pool.PoolBase. - HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection#6fa7296c
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - pollDirectory /opt/corda/additional-node-infos
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - Examining /opt/corda/additional-node-infos/nodeInfo-FEBE485DF04D12B91F70740AC3EDDDB1A0C5058B017C6DD6046A1AF37AB1687D
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - Read 0 NodeInfo files from /opt/corda/additional-node-infos
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - Number of removed NodeInfo files 0
[DEBUG] 2020-04-07T13:21:11,824Z [Network Map Updater Thread-1] pool.PoolBase. - HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection#6fa7296c
[DEBUG] 2020-04-07T13:21:13,844Z [Network Map Updater Thread-1] pool.PoolBase. - HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection#6fa7296c
[DEBUG] 2020-04-07T13:21:15,559Z [RxIoScheduler-2] network.NodeInfoWatcher. - pollDirectory /opt/corda/additional-node-infos
I could invoke the flow from the standalone shell. I was having some weird issues with the /cordapp folder holding my cordapp locally; I deleted it and recreated it and now it works.
Can you update your question with more stack trace? Are there any other errors in your node's log file? I'm asking because I just tried the example and it worked for me.
Here's what I did:
Built the Java version:
// Browse to Java files.
cd /heartbeat/contracts-java
// Build the nodes (Notary and PartyA).
./../gradlew deployNodes
Start the nodes (I don't like using the runnodes task, so I start each node individually):
// Terminal 1 (Notary).
cd /heartbeat/contracts-java/build/nodes/Notary
// Start the Notary.
java -jar corda.jar
// Terminal 2 (PartyA).
cd /heartbeat/contracts-java/build/nodes/PartyA
// Start PartyA.
java -jar corda.jar
Start the flow inside of PartyA's terminal. Notice that I use flow start instead of just start (like in your case); it's worth trying flow start, even though both should work:
flow start StartHeartbeatFlow
You will see that the flow completed (i.e. it created the SchedulableState that will start the flow again, which will lead to an endless loop until you shut down the node).
Now I can watch that flow being called again and again by typing the below in PartyA's terminal:
flow watch

artifactory 6.8.7 won't start as can't connect to access server

Since upgrading to 6.8.7 using the RPM on RHEL 7, systemctl start artifactory fails.
Looking in the log, it is failing at this point:
2019-03-16 09:50:28,952 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-16 09:50:29,379 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-16 09:50:30,625 [art-init] [WARN ] (o.j.a.c.AccessClientHttpException:41) - Unrecognized ErrorsModel by Access. Original message: Failed on executing /api/v1/system/ping, with response: Not Found
2019-03-16 09:50:30,634 [art-init] [ERROR] (o.a.s.a.AccessServiceImpl:364) - Could not ping access server: {}
org.jfrog.access.client.AccessClientHttpException: HTTP response status 404:Failed on executing /api/v1/system/ping, with response: Not Found
Previously we would get
2019-03-13 09:56:06,293 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-13 09:56:06,787 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-13 09:56:24,068 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:360) - Got response from Access server after 17280 ms, continuing.
Any suggestions on debugging whether this access server has started?
Further to this, I found logs showing that when it worked it used to start a jar file:
2019-03-08 09:19:11,609 [localhost-startStop-2] [INFO ] (o.j.a.AccessApplication:48) - Starting AccessApplication v4.1.48 on hostname.nexor.co.uk with PID 5913 (/opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib/access-application-4.1.48.jar started by artifactory in /)
Now when I look, I find /opt/jfrog/artifactory/tomcat/webapps/access/ is empty, so there is no jar file to run.
The RPM did deliver an access.war file and that is there:
$ ls -l /opt/jfrog/artifactory/webapps
total 104692
-rwxrwxr-x. 1 root root 51099759 Mar 14 12:14 access.war
-rwxrwxr-x. 1 root root 56099348 Mar 14 12:14 artifactory.war
Is there some manual step I can run to expand this war file to get the jar? (As you can guess, I am not up on my Java apps.)
Eventually I got it working by deleting the empty /opt/jfrog/artifactory/tomcat/webapps/access directory; a new one containing the required jar files got created.
Not sure why this happened, but that got it working for me.
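For reference, a sketch of that workaround as shell commands (paths as shown earlier in the question; the new access directory gets created when Artifactory starts again, as described above):
systemctl stop artifactory
rm -rf /opt/jfrog/artifactory/tomcat/webapps/access
systemctl start artifactory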
I had a similar problem on CentOS 7; the solution was to downgrade the newly updated Java packages by running this command:
yum downgrade java-1.8.0*
After that, restart Artifactory:
systemctl restart artifactory
Try changing the port number under your tomcat\conf\server.xml from 8081 to a different, unused port. Then, restart the Artifactory service to ensure the change takes effect.
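A hedged way to apply that suggestion from the command line (the server.xml path follows the install layout shown in the question, and 8082 is just an example of an unused port):
sed -i 's/port="8081"/port="8082"/' /opt/jfrog/artifactory/tomcat/conf/server.xml
systemctl restart artifactory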

Processing is hanging at notary

Currently, I am using Corda V3.1 and there is one issue whose root cause I could not figure out. The error occurs when the application processes a transaction; it hangs at the last step in the below logs:
>> Verifying contractCode constraints.
>> Signing transaction with our private key.
>> Collecting signatures from counterparties.
>> Done
>> Obtaining notary signature and recording transaction.
>> Requesting signature by notary service
>> Requesting signature by Notary service (hangs here)
I didn't make any changes, but it stopped working. From the log, I could see:
[INFO ] 2018-06-10T07:06:35,287Z [main] BasicInfo.printBasicNodeInfo - Node for "Notary" started up and registered in 42.91 sec {}
[INFO ] 2018-06-10T07:06:40,305Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005], legalIdentitiesAndCerts=[O=CompanyA, L=London, C=GB], platformVersion=3, serial=1528610763747) {}
[INFO ] 2018-06-10T07:06:40,336Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Previous node was identical to incoming one - doing nothing {}
[INFO ] 2018-06-10T07:06:40,336Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Done adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005], legalIdentitiesAndCerts=[O=CompanyA, L=London, C=GB], platformVersion=3, serial=1528610763747) {}
[INFO ] 2018-06-10T07:06:40,336Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10008], legalIdentitiesAndCerts=[O=CompanyB, L=New York, C=US], platformVersion=3, serial=1528610765829) {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Previous node was identical to incoming one - doing nothing {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Done adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10008], legalIdentitiesAndCerts=[O=CompanyB, L=New York, C=US], platformVersion=3, serial=1528610765829) {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10002], legalIdentitiesAndCerts=[O=Notary, L=London, C=GB], platformVersion=3, serial=1528610765215) {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Discarding older nodeInfo for O=Notary, L=London, C=GB {}
[INFO ] 2018-06-10T07:06:53,654Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:06:54,663Z [nioEventLoopGroup-2-2] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:15,687Z [nioEventLoopGroup-2-3] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:16,696Z [nioEventLoopGroup-2-4] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:37,720Z [nioEventLoopGroup-2-5] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:38,728Z [nioEventLoopGroup-2-6] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:59,747Z [nioEventLoopGroup-2-7] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:08:00,747Z [nioEventLoopGroup-2-8] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:08:21,768Z [nioEventLoopGroup-2-9] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:08:22,779Z [nioEventLoopGroup-2-10] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
The last two steps repeat again and again. The only approach that resolves it is to clean and re-deploy the nodes, but that surely is not the right fix. Is anyone able to help with this? Thanks a lot.
It's not clear, based on your description, exactly how you were running your Corda nodes.
The issue is that the Corda nodes are having trouble communicating with each other, but it's not clear why. If this was running on localhost, then that is really strange.
If you're running these in the cloud, then I'd try regenerating your node configuration, or take another look at the network map Corda node, as it has definitely gotten wonky.
It could also be that the CorDapp is making mistakes when trying to execute on the nodes or the notary.
You may have an easier time getting this to work with some of the newer developer samples, in order to determine whether Corda updates have solved this problem.
The most basic sample that almost always works is the Yo! CorDapp (https://github.com/corda/samples-java/tree/master/Basic/yo-cordapp). Try running it to see whether you can isolate the problem to the flows or to Corda.
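For example, a minimal way to try that sample locally (repository URL from the link above; the directory layout is assumed from the link, and the gradle tasks mirror the deployNodes/runnodes steps used elsewhere in this thread):
git clone https://github.com/corda/samples-java.git
cd samples-java/Basic/yo-cordapp
./gradlew deployNodes
./build/nodes/runnodes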

Artifactory JFrog Backup failing with error code 401

I would appreciate it if someone could guide me or provide pointers to debug backup issues with Artifactory. Whenever backups are performed, there is always a 401 error on /api/v1/system/backup/export in artifactory.log.
The backups exist in the backup location, but with an error message in the log. I am not sure how to debug this or what the implications of the error are. I can see in the stack trace that the REST call is failing; I have tried setting a password to unsupported and several other things, but the error persists. I also checked the Artifactory Jira to no avail. Any pointers will be greatly appreciated.
More details
artifactory.version=5.9.3
artifactory.timestamp=1521564024289
artifactory.revision=50903900
artifactory.buildNumber=820
The backups are failing with the following info in the log:
2018-04-24 11:59:24,620 [ajp-apr-8009-exec-9] [ERROR] (o.a.s.a.AccessServiceImpl:1070) - Error during access server backup
org.jfrog.access.client.AccessClientHttpException: HTTP response status 401:Failed on executing /api/v1/system/backup/export, with response: {"errors":[{"code":"UNAUTHORIZED","detail":"Bad credentials","message":"HTTP 401 Unauthorized"}]}
at org.jfrog.access.client.http.AccessHttpClient.createRestResponse(AccessHttpClient.java:154)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:113)
...
The following is the full stack trace as displayed in artifactory.log:
2018-04-24 11:59:24,620 [ajp-apr-8009-exec-9] [ERROR] (o.a.s.a.AccessServiceImpl:1070) - Error during access server backup
org.jfrog.access.client.AccessClientHttpException: HTTP response status 401:Failed on executing /api/v1/system/backup/export, with response: {"errors":[{"code":"UNAUTHORIZED","detail":"Bad credentials","message":"HTTP 401 Unauthorized"}]}
at org.jfrog.access.client.http.AccessHttpClient.createRestResponse(AccessHttpClient.java:154)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:113)
at org.jfrog.access.client.system.SystemClientImpl.exportAccessServer(SystemClientImpl.java:21)
at org.artifactory.security.access.AccessServiceImpl.exportTo(AccessServiceImpl.java:1060)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:201)
at com.sun.proxy.$Proxy144.exportTo(Unknown Source)
at org.artifactory.spring.ArtifactoryApplicationContext.exportTo(ArtifactoryApplicationContext.java:662)
at org.artifactory.ui.rest.service.admin.importexport.exportdata.ExportSystemService.execute(ExportSystemService.java:67)
at org.artifactory.rest.common.service.ServiceExecutor.process(ServiceExecutor.java:38)
at org.artifactory.rest.common.resource.BaseResource.runService(BaseResource.java:92)
at org.artifactory.ui.rest.resource.admin.importexport.ExportArtifactResource.exportSystem(ExportArtifactResource.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:184)
at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:93)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:403)
at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:212)
at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:166)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:67)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:128)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
at org.apache.coyote.ajp.AbstractAjpProcessor.process(AbstractAjpProcessor.java:877)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2527)
at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2516)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
2018-04-24 11:59:24,660 [ajp-apr-8009-exec-9] [INFO ] (o.a.s.ArtifactoryApplicationContext:819) - Note: the etc exported folder has excessive permissions. Be careful with the files.
Thanks for the feedback.
I suspect the reason for the behavior is that for the more common operations Artifactory uses the <adminToken> from the Config Descriptor; however, some other operations use credentials saved in $ARTIFACTORY_HOME/etc/security/access/access.creds.
These seem to be bad.
In order to restart these you should:
Create the following file: $ARTIFACTORY_HOME/access/etc/bootstrap.creds
The content of the file should be: access-admin@*=password
The permissions of the file must be 600.
Once that file is in place, restart Artifactory.
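A sketch of those steps as commands (ARTIFACTORY_HOME and the use of systemctl are assumptions; adjust for your install):
mkdir -p $ARTIFACTORY_HOME/access/etc
echo 'access-admin@*=password' > $ARTIFACTORY_HOME/access/etc/bootstrap.creds
chmod 600 $ARTIFACTORY_HOME/access/etc/bootstrap.creds
systemctl restart artifactory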
Since Artifactory version 5.4 there is a separate service in Artifactory named "Access", which manages permissions and security related configurations.
Artifactory authenticates with Access using an access token that is configured in the "Config Descriptor".
It is possible that this issue will be resolved by a simple restart, so that's what I would have started with.
If this doesn't work, or, as I suspect will happen, Artifactory won't be able to start, please do the following (while Artifactory is shut down):
Remove the <adminToken> tag from the $ARTIFACTORY_HOME/etc/artifactory.config.latest.xml
$ cp artifactory.config.latest.xml artifactory.config.import.xml
Restart Artifactory.
I added bootstrap.creds:
tomcat#XXXXXXXX:/home/bitnami/apps/artifactory/artifactory_home/etc$ ls -ltrh
total 136K
-rw-r--r-- 1 tomcat tomcat 5.5K Mar 20 16:41 mimetypes.xml
-rw-r--r-- 1 tomcat tomcat 12K Mar 20 16:41 logback.xml
-rw-r--r-- 1 tomcat tomcat 13K Mar 20 16:41 artifactory.system.properties
-rw-r--r-- 1 tomcat tomcat 11K Mar 20 16:41 artifactory.config.bootstrap.xml
-rw-rw-r-- 1 tomcat tomcat 145 Mar 21 13:36 db.properties
drwxrwxr-x 2 tomcat tomcat 4.0K Mar 21 13:37 ui
drwxrwxr-x 2 tomcat tomcat 4.0K Mar 21 13:37 plugins
drwx------ 3 tomcat tomcat 4.0K Mar 21 13:37 security
-rw-rw-r-- 1 tomcat tomcat 13K Mar 21 13:37 artifactory.config.latest.1521639475000.xml
-rw-r--r-- 1 tomcat tomcat 199 Apr 24 18:50 binarystore.xml
-rw-rw-r-- 1 tomcat tomcat 13K Apr 24 18:57 artifactory.config.latest.xml_bkp
-rw-rw-r-- 1 tomcat tomcat 12K Apr 24 18:58 artifactory.config.latest.1524596291000.xml
-rw-rw-r-- 1 tomcat tomcat 14K Apr 25 14:39 artifactory.config.latest.xml
-rw------- 1 tomcat tomcat 24 Apr 25 15:09 bootstrap.creds
-rw-rw-r-- 1 tomcat tomcat 864 Apr 25 15:12 artifactory.properties
The contents of bootstrap.creds are:
access-admin@*=password
I restarted Artifactory, but I am still seeing the same error message:
2018-04-25 15:14:29,607 [art-exec-1] [ERROR] (o.a.s.a.AccessServiceImpl:1070) - Error during access server backup: HTTP response status 401:Failed on executing /api/v1/system/backup/export, with response: {"errors":[{"code":"UNAUTHORIZED","detail":"Bad credentials","message":"HTTP 401 Unauthorized"}]}
I tried the backup using access-admin as well as a user from the GUI, with the same error (both are admin users).
The following is from the access log when Artifactory restarted:
2018-04-25 15:10:54,396 [localhost-startStop-1] [INFO ] (o.j.a.c.AccessApplicationContextInitializer:46) - Using JFrog Access home at '/opt/bitnami/apps/artifactory/artifactory_home/access'
2018-04-25 15:10:54,413 [localhost-startStop-1] [INFO ] (o.j.a.AccessApplication:48) - Starting AccessApplication v3.2.2 on ip-10-51-35-238 with PID 16169 (/opt/bitnami/apache-tomcat/webapps/access/WEB-INF/lib/access-application-3.2.2.jar started by tomcat in /opt/bitnami/apache-tomcat/bin)
2018-04-25 15:10:54,414 [localhost-startStop-1] [INFO ] (o.j.a.AccessApplication:597) - The following profiles are active: production,grpc
2018-04-25 15:11:11,402 [localhost-startStop-1] [INFO ] (o.j.a.s.d.u.AccessJdbcHelperImpl:129) - Database: Apache Derby 10.11.1.1 - (1616546). Driver: Apache Derby Embedded JDBC Driver 10.11.1.1 - (1616546)
2018-04-25 15:11:11,410 [localhost-startStop-1] [INFO ] (o.j.a.s.d.u.AccessJdbcHelperImpl:132) - Connection URL: jdbc:derby:/opt/bitnami/apps/artifactory/artifactory_home/data/derby
2018-04-25 15:11:16,127 [localhost-startStop-1] [INFO ] (o.j.a.s.s.c.InternalConfigurationServiceImpl:94) - Loading configuration from db finished successfully
2018-04-25 15:11:18,035 [localhost-startStop-1] [INFO ] (o.j.a.s.s.AccessStartupServiceImpl:79) - Found master.key file at /opt/bitnami/apps/artifactory/artifactory_home/etc/security/master.key, using it as master.key
2018-04-25 15:11:19,486 [localhost-startStop-1] [INFO ] (o.j.a.s.s.t.TokenServiceImpl:100) - Scheduling task for revoking expired tokens using cron expression: 0 0 0/1 * * ?
2018-04-25 15:11:19,651 [localhost-startStop-1] [INFO ] (o.j.a.s.r.c.RpcServiceInvoker:86) - Added service: sync
2018-04-25 15:11:20,764 [localhost-startStop-1] [INFO ] (o.j.a.s.AccessServerBootstrapImpl:93) - [ACCESS BOOTSTRAP] Starting JFrog Access bootstrap...
2018-04-25 15:11:23,381 [localhost-startStop-1] [INFO ] (o.j.a.s.AccessServerBootstrapImpl:146) - [ACCESS BOOTSTRAP] Updating server '509b68e8-48e5-4bba-8ea7-6564c65d5b37' private key finger print to: 63c2f42824ead3169fc13eab66d3d254d25c659ceef5cdeed21c3110f47ee0d3
2018-04-25 15:11:24,164 [localhost-startStop-1] [INFO ] (o.j.a.s.AccessServerBootstrapImpl:108) - [ACCESS BOOTSTRAP] JFrog Access bootstrap finished.
2018-04-25 15:11:30,624 [localhost-startStop-1] [INFO ] (o.j.a.s.s.s.RefreshableScheduledJob:56) - Scheduling heartbeat task to run every 5 seconds
2018-04-25 15:11:43,339 [localhost-startStop-1] [INFO ] (o.j.a.AccessApplication:57) - Started AccessApplication in 53.93 seconds (JVM running for 93.388)
2018-04-25 15:11:47,756 [localhost-startStop-1] [WARN ] (o.a.t.u.s.StandardJarScanner:311) - Failed to scan [file:/opt/bitnami/apache-tomcat/lib/derbyLocale_cs.jar] from classloader hierarchy
Additionally, I enabled trace on the logs. The following is what I am seeing in /home/bitnami/apps/artifactory/artifactory_home/access/logs/request.log. It seems the backup calls are tagged as coming from anonymous, versus |jfrt#01c94cf2cpdmx60xvtsf8j1jy4 for the other calls.
2018-04-25T15:49:03.906+0000|127.0.0.1|jfrt#01c94cf2cpdmx60xvtsf8j1jy4|GET|http://127.0.0.1/access/api/v1/groups/|200|0|198|JFrog Access Java Client/3.2.2
2018-04-25T15:50:17.553+0000|127.0.0.1|anonymous|POST|http://127.0.0.1/access/api/v1/system/backup/export|401|0|188|JFrog Access Java Client/3.2.2
2018-04-25T15:50:18.657+0000|127.0.0.1|jfrt#01c94cf2cpdmx60xvtsf8j1jy4|GET|http://127.0.0.1/access/api/v1/users/|200|0|84|JFrog Access Java Client/3.2.2
2018-04-25T15:50:18.733+0000|127.0.0.1|jfrt#01c94cf2cpdmx60xvtsf8j1jy4|GET|http://127.0.0.1/access/api/v1/groups/|200|0|73|JFrog Access Java Client/3.2.2
2018-04-25T15:50:42.823+0000|127.0.0.1|anonymous|POST|http://127.0.0.1/access/api/v1/system/backup/export|401|0|134|JFrog Access Java Client/3.2.2
2018-04-25T15:50:42.995+0000|127.0.0.1|jfrt#01c94cf2cpdmx60xvtsf8j1jy4|GET|http://127.0.0.1/access/api/v1/users/|200|0|114|JFrog Access Java Client/3.2.2
2018-04-25T15:50:43.086+0000|127.0.0.1|jfrt#01c94cf2cpdmx60xvtsf8j1jy4|GET|http://127.0.0.1/access/api/v1/groups/|200|0|64|JFrog Access Java Client/3.2.2
Please let me know if I can provide additional relevant information that can help you get more insight into my environment. Thank you again for your help.

Artifactory warns about "Path checksum calculation job" after upgrade to 5.8.3

I just migrated our OSS instance of Artifactory from 5.4.5 to 5.8.3 (standalone, using Derby)
I followed the recommendations at https://www.jfrog.com/confluence/display/RTF/Upgrading+Artifactory and basically did:
Actual upgrade to 5.8.3 (stop, replace some files, start)
SHA-256 job to calculate checksums on existing artifacts (stop, add property to configuration, start)
Both worked fine and the server is back up and running smoothly.
However, I now have a warning in logs/artifactory.log on instance startup:
2018-01-24 16:12:07,633 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJobDelegate:110) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 5348 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
I cannot find any more substantial explanation for this warning.
I am thinking that this is related to the artifact layout on the file system (cf. https://www.jfrog.com/confluence/display/RTF/Checksum-Based+Storage#Checksum-BasedStorage-Overview). The artifacts are still laid out according to their SHA-1 checksums rather than their SHA-256 checksums.
Is my assumption correct? How do I "fix" this warning...
EDIT: Some more tests requested by @Ariel:
Restarting the server doesn't help, the warning is still there
Reenabling the migration job and restarting the server doesn't help either
echo "artifactory.sha2.migration.job.enabled=true" >> etc/artifactory.system.properties
Looking at the logs related to this migration, it seems the SHA-256 migration job thinks that everything has been migrated, yet the startup check still finds about 2,000 artifacts that should be migrated.
$ARTIFACTORY_HOME/logs/sha256_migration.log
2018-01-24 14:39:53,982 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:284) - artifactory.Sha256MigrationJob#83253c93-33ec-4c52-bc61-d2d33942dc28: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-24 16:12:07,576 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:284) - artifactory.Sha256MigrationJob#6f5c6739-c365-4be2-80a4-d32063a75f8f: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-24 16:12:07,651 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:186) - 3319 artifacts and 3292 binary entries are missing SHA256 values - starting calculation job.
2018-01-24 16:12:07,756 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:121) - SHA256 migration state: 0/3319 artifacts were handled.
[...]
2018-01-24 16:13:58,226 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:121) - SHA256 migration state: 3318/3319 artifacts were handled.
2018-01-24 16:13:58,227 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:270) - SHA256 migration job now filling in for missing SHA256 values for binary entries. Found 1 such entries
2018-01-24 16:14:01,065 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:338) - SHA256 migration job has finished successfully. 3319 artifacts and 1 binary entry calculations were submitted (including retries)
2018-01-24 16:29:06,072 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:284) - artifactory.Sha256MigrationJob#7cdeab66-229d-43a5-a788-301f72c10cc5: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 06:04:57,405 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:284) - artifactory.Sha256MigrationJob#2c539185-b7f4-412f-b988-0688e8505649: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 10:27:33,655 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:284) - artifactory.Sha256MigrationJob#19afc5d2-c12d-4821-8c1a-808655e8746c: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 10:31:39,250 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:284) - artifactory.Sha256MigrationJob#c01bd20d-249a-4c6e-80ff-e26301db7e84: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 10:34:57,321 [art-exec-3] [INFO ] (o.a.s.j.m.s.Sha256MigrationJob:284) - artifactory.Sha256MigrationJob#32fff21f-04d5-45a4-84d2-58083aaf6593: all nodes reached minimal version '5.5.0-m001', continuing execution
$ARTIFACTORY_HOME/logs/path_checksum_migration.log
2018-01-24 14:39:53,982 [art-exec-4] [INFO ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:284) - artifactory.RepoPathChecksumMigrationJob#2f7f690c-dc78-4074-b35f-e5085d41a2f7: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-24 14:39:54,012 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:111) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 5348 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
2018-01-24 16:12:07,576 [art-exec-4] [INFO ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:284) - artifactory.RepoPathChecksumMigrationJob#4c3a71ed-f389-4bff-a7b9-62d20806b270: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-24 16:12:07,634 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:111) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 5348 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
2018-01-24 16:29:06,072 [art-exec-4] [INFO ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:284) - artifactory.RepoPathChecksumMigrationJob#8e2c1a01-c3d9-4848-b48a-70813ffd26d1: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-24 16:29:06,127 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:111) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 2029 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
2018-01-29 06:04:57,405 [art-exec-4] [INFO ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:284) - artifactory.RepoPathChecksumMigrationJob#b7ac1c4a-5dec-4065-a901-bb5a3d2a4b59: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 06:04:57,505 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:111) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 2029 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
2018-01-29 10:27:33,655 [art-exec-4] [INFO ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:284) - artifactory.RepoPathChecksumMigrationJob#74cf399a-1c3b-4b11-a687-cc11b19d2887: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 10:27:33,704 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:111) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 2029 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
2018-01-29 10:31:39,250 [art-exec-4] [INFO ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:284) - artifactory.RepoPathChecksumMigrationJob#ba1c5406-2f49-48f2-a9f2-a9e48c8d7807: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 10:31:39,308 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:111) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 2029 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
2018-01-29 10:34:57,321 [art-exec-4] [INFO ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:284) - artifactory.RepoPathChecksumMigrationJob#b62872c1-4c00-4503-8628-bc2dd38d8c17: all nodes reached minimal version '5.5.0-m001', continuing execution
2018-01-29 10:34:57,372 [art-exec-4] [WARN ] (o.a.s.j.m.p.RepoPathChecksumMigrationJob:111) - Path Checksum calculation job (for existing artifacts) has been disabled and will not run, there are still 2029 artifacts without path checksum values in the database. Future version of Artifactory may enforce this conversion as a prerequisite for upgrades.
This is poorly documented — I had to download the Artifactory OSS source code and grep for the answer — but apparently Path Checksum migration is a completely separate job from SHA-256 migration.
You want artifactory.pathChecksum.migration.job.enabled=true in your artifactory.system.properties, I think.
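A hedged sketch of enabling that, mirroring the SHA-256 step from the question (property name as stated above; ARTIFACTORY_HOME is an assumption, and Artifactory must then be restarted however this instance is normally restarted):
echo "artifactory.pathChecksum.migration.job.enabled=true" >> $ARTIFACTORY_HOME/etc/artifactory.system.properties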
The SHA-256 operation is not done automatically; you need to set it up manually so it will run. This was done in order not to overload users' environments.
If you wish to activate it, follow this link:
https://www.jfrog.com/confluence/display/RTF/Checksum-Based+Storage#Checksum-BasedStorage-MigratingtheDatabasetoIncludeSHA-256
