I updated Artifactory OSS from 7.15.4 to 7.24.3. Everything seems to be running, but in the console.log I get an entry like this one every 5 minutes:
2021-08-21T07:33:19.081Z [jfmd ] [ERROR] [672d2eb628a9855d] [compatibility_logger.go:28 ] [main ] - Project update error: rpc error: code = DeadlineExceeded desc = context deadline exceeded [access_client]
In the metadata-service.log I get these errors during or after a restart:
2021-08-25T15:02:37.582Z [jfmd ] [ERROR] [40fc5c5d4d36c69 ] [compatibility_logger.go:28 ] [main ] - Refreshing permissions cache invalidation gRPC stream - got an error (status code: 13) - resubscribe expected [access_client]
2021-08-25T15:02:37.582Z [jfmd ] [ERROR] [40fc5c5d4d36c69 ] [compatibility_logger.go:28 ] [main ] - Project update error: rpc error: code = Internal desc = server closed the stream without sending trailers [access_client]
2021-08-25T15:02:37.582Z [jfmd ] [ERROR] [40fc5c5d4d36c69 ] [compatibility_logger.go:28 ] [main ] - Refreshing project change events gRPC stream - got an error (status code: 13) - resubscribe expected [access_client]
2021-08-25T15:02:37.591Z [jfmd ] [ERROR] [40fc5c5d4d36c69 ] [compatibility_logger.go:28 ] [main ] - Project update error: rpc error: code = Unimplemented desc = Not Found: HTTP status code 404; transport: received the unexpected content-type "text/plain; charset=utf-8" [access_client]
2021-08-25T15:02:37.591Z [jfmd ] [ERROR] [40fc5c5d4d36c69 ] [compatibility_logger.go:28 ] [main ] - Refreshing project change events gRPC stream - got an error (status code: 12) - resubscribe expected [access_client]
I can't find anything about it: what it means and how I could resolve it. Does anybody have an idea what the problem could be?
Thanks
Michael
UPDATE:
Console.log before the error starts:
2021-08-21T07:28:20.529Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8046
2021-08-21T07:28:20.529Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2021-08-21T07:28:20.530Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2021-08-21T07:28:20.626Z [jfrt ] [INFO ] [70e937c2d18bcda8] [o.a.s.a.AccessServiceImpl:456 ] [art-init ] - Initialized access service successfully with client id 7779704, closing old client id [null]
2021-08-21T07:28:20.626Z [jfrt ] [INFO ] [70e937c2d18bcda8] [o.a.s.a.AccessServiceImpl:1360] [art-init ] - Updating access configuration with password expiration data
2021-08-21T07:28:20.790Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:21.017Z [jfrou] [WARN ] [1389b52b3df1ce7e] [local_topology.go:256 ] [main ] - Readiness test failed with the following error: "required node services are missing or unhealthy"
2021-08-21T07:28:21.175Z [jfmd ] [WARN ] [672d2eb628a9855d] [versions_package_id_ordinal_in] [main ] - Running DB indexes [md_version_pkgid_ordinal_idx,md_versions_package_id_idx,md_versions_ordinal_idx] maintenance in the background, it might cause some slowness in the Metadata service [BatchEndKey][1] [BatchKey][versionsPackageIdOrdinalIndexTask] [BatchStartKey][0] [versions_package_id_ordinal_index_task] [workerNumber][1]
2021-08-21T07:28:21.235Z [jfmd ] [WARN ] [672d2eb628a9855d] [versions_package_id_ordinal_in] [main ] - Finished with DB indexes [[md_version_pkgid_ordinal_idx md_versions_package_id_idx md_versions_ordinal_idx]] background maintenance, Metadata service is back to normal [BatchEndKey][1] [BatchKey][versionsPackageIdOrdinalIndexTask] [BatchStartKey][0] [versions_package_id_ordinal_index_task] [workerNumber][1]
2021-08-21T07:28:21.812Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:22.255Z [jfrt ] [INFO ] [70e937c2d18bcda8] [tegrationCleanupServiceImpl:75] [art-init ] - Using generated cron 0 12 3 ? * * for jobs table cleanup
2021-08-21T07:28:22.548Z [jfrt ] [INFO ] [70e937c2d18bcda8] [d.c.m.ConverterManagerImpl:212] [art-init ] - Triggering POST_INIT conversion, from 7.15.4 to 7.24.3
2021-08-21T07:28:22.550Z [jfrt ] [INFO ] [70e937c2d18bcda8] [d.c.m.ConverterManagerImpl:215] [art-init ] - Finished POST_INIT conversion, current version is: 7.24.3
2021-08-21T07:28:22.551Z [jfrt ] [INFO ] [70e937c2d18bcda8] [d.c.m.ConverterManagerImpl:249] [art-init ] - Updating database properties to running version CompoundVersionDetails{version=7.24.3, buildNumber='LOCAL', timestamp=1167040800000}
2021-08-21T07:28:22.608Z [jfrt ] [INFO ] [70e937c2d18bcda8] [ifactoryApplicationContext:560] [art-init ] - Artifactory application context set to READY by refresh
2021-08-21T07:28:22.641Z [jfrt ] [INFO ] [c70b31da2cdf896b] [adsFolderCleanupServiceImpl:52] [art-exec-4 ] - Starting docker temp folder cleanup
2021-08-21T07:28:22.643Z [jfrt ] [INFO ] [c70b31da2cdf896b] [adsFolderCleanupServiceImpl:54] [art-exec-4 ] - Docker temp folder cleanup finished, time took: 2 millis
2021-08-21T07:28:22.654Z [jfrt ] [INFO ] [70e937c2d18bcda8] [.w.NodeEventTaskManagerImpl:41] [art-init ] - Event management started on behalf of Event Operator with ID 'metadata-operator-events'
2021-08-21T07:28:22.678Z [jfrt ] [INFO ] [70e937c2d18bcda8] [o.a.s.s.StorageServiceImpl:529] [art-init ] - Scheduling CalculateReposStorageSummaryJob to run at '0 1 * ? * *'
2021-08-21T07:28:22.688Z [jfrt ] [INFO ] [70e937c2d18bcda8] [o.a.s.s.StorageServiceImpl:558] [art-init ] - LogStorageStatusJob disabled and not scheduled to run
2021-08-21T07:28:22.740Z [jfrt ] [INFO ] [70e937c2d18bcda8] [o.a.m.f.MetricsServiceImpl:135] [art-init ] - Metric Framework Service is enabled: false
2021-08-21T07:28:22.815Z [jfrt ] [INFO ] [70e937c2d18bcda8] [o.a.s.a.AccessServiceImpl:1681] [art-init ] - Successful register of Artifactory serviceId jf-artifactory@c0349e2a-f2d7-44ab-8a02-3459c2eabbb4 in Access Federation
2021-08-21T07:28:22.825Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:22.924Z [jfrt ] [INFO ] [70e937c2d18bcda8] [ctoryContextConfigListener:271] [art-init ] - Artifactory (jfrt) service initialization completed in 29.280 seconds. Listening on port: 8081
2021-08-21T07:28:22.934Z [jfrt ] [INFO ] [ ] [d.DatabaseConverterRunnable:37] [pool-84-thread-1 ] - Starting Async converter thread.
2021-08-21T07:28:22.935Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-84-thread-1 ] - Starting attempt #1 of async conversion for v225_change_nodes_node_name_idx
2021-08-21T07:28:22.936Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-84-thread-1 ] - Conversion of v225_change_nodes_node_name_idx finished successfully.
2021-08-21T07:28:22.939Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-84-thread-1 ] - Starting attempt #1 of async conversion for v225_change_nodes_node_path_idx
2021-08-21T07:28:22.940Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-84-thread-1 ] - Conversion of v225_change_nodes_node_path_idx finished successfully.
2021-08-21T07:28:22.941Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-84-thread-1 ] - Starting attempt #1 of async conversion for v225_change_nodes_node_repo_path_idx
2021-08-21T07:28:22.941Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-84-thread-1 ] - Conversion of v225_change_nodes_node_repo_path_idx finished successfully.
2021-08-21T07:28:22.943Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-84-thread-1 ] - Starting attempt #1 of async conversion for v225_add_bundle_files_node_id_index
2021-08-21T07:28:23.025Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-84-thread-1 ] - Conversion of v225_add_bundle_files_node_id_index finished successfully.
2021-08-21T07:28:23.035Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-84-thread-1 ] - Starting attempt #1 of async conversion for v229_node_events_tmp_event_id_idx
2021-08-21T07:28:23.047Z [jfrt ] [INFO ] [ ] [s.d.v.c.DbSqlConverterUtil:101] [pool-84-thread-1 ] - Starting schema conversion: /conversion/derby/derby_v229_node_events_tmp_event_id_idx.sql
2021-08-21T07:28:23.073Z [jfrt ] [INFO ] [ ] [s.d.v.c.DbSqlConverterUtil:103] [pool-84-thread-1 ] - Finished schema conversion: /conversion/derby/derby_v229_node_events_tmp_event_id_idx.sql
2021-08-21T07:28:23.074Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-84-thread-1 ] - Conversion of v229_node_events_tmp_event_id_idx finished successfully.
2021-08-21T07:28:23.252Z [jfmd ] [WARN ] [672d2eb628a9855d] [ver_repos_lead_file_path_index] [main ] - Running DB index [md_ver_repos_lead_file_pth_idx] maintenance in the background, it might cause some slowness in the Metadata service [BatchEndKey][1] [BatchKey][verReposLeadFilePathIndexIndexTask] [BatchStartKey][0] [ver_repos_lead_file_path_index_task] [workerNumber][1]
2021-08-21T07:28:23.496Z [jfmd ] [WARN ] [672d2eb628a9855d] [ver_repos_lead_file_path_index] [main ] - Finished with DB index [md_ver_repos_lead_file_pth_idx] background maintenance, Metadata service is back to normal [BatchEndKey][1] [BatchKey][verReposLeadFilePathIndexIndexTask] [BatchStartKey][0] [ver_repos_lead_file_path_index_task] [workerNumber][1]
2021-08-21T07:28:23.833Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:24.840Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:25.957Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:26.017Z [jfrou] [WARN ] [311a50c5e0280291] [local_topology.go:256 ] [main ] - Readiness test failed with the following error: "required node services are missing or unhealthy"
2021-08-21T07:28:26.958Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory, attempt number 50
2021-08-21T07:28:26.965Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:26.965Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory attempt number 50 failed with code : undefined
2021-08-21T07:28:28.021Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:29.101Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:30.109Z [jffe ] [ERROR] [ ] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/ping failed with 503 code
2021-08-21T07:28:31.583Z [jfrou] [WARN ] [7f23355140d7ce99] [local_topology.go:256 ] [main ] - Readiness test failed with the following error: "required node services are missing or unhealthy"
2021-08-21T07:28:31.602Z [jffe ] [INFO ] [ ] [ ] [main ] - artifactory was pinged successfully
2021-08-21T07:28:31.604Z [jffe ] [INFO ] [ ] [ ] [main ] - setting service id - jffe@000
2021-08-21T07:28:31.834Z [jfac ] [INFO ] [5fc9900945b9b96e] [s.r.NodeRegistryServiceImpl:68] [27.0.0.1-8040-exec-8] - Cluster join: Successfully joined jffe@000 with node id nodeX
2021-08-21T07:28:31.853Z [jffe ] [INFO ] [ ] [ ] [main ] - Cluster join: Successfully joined the cluster
2021-08-21T07:28:31.975Z [jfrou] [INFO ] [ ] [server_configuration.go:465 ] [main ] - Skipping same configuration for provider file
2021-08-21T07:28:31.993Z [jffe ] [INFO ] [ ] [ ] [main ] - UI service successfully registered on router, serviceId: jffe@000
2021-08-21T07:28:32.625Z [jffe ] [INFO ] [ ] [ ] [main ] - Recurring tasks started
Sat, 21 Aug 2021 07:28:33 GMT helmet deprecated helmet.featurePolicy is deprecated (along with the HTTP header) and will be removed in helmet@4. You can use the `feature-policy` module instead. at ../app/frontend/bin/server/dist/bundle.js:13834:24
2021-08-21T07:28:33.174Z [jffe ] [INFO ] [ ] [ ] [main ] - frontend (jffe) service initialization completed in 56.22 seconds. Listening on port: port 8070
2021-08-21T07:28:36.072Z [jfrou] [INFO ] [5e15997735a74857] [local_topology.go:270 ] [main ] -
###############################################################
### All services started successfully in 61.943 seconds ###
###############################################################
2021-08-21T07:28:36.098Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2021-08-21T07:28:36.098Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2021-08-21T07:28:36.099Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8046
2021-08-21T07:28:36.140Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:181] [c-default-executor-1] - Loading root certificate from database.
2021-08-21T07:28:36.217Z [jfac ] [INFO ] [2530a31ed464ef0c] [.s.b.AccessProjectBootstrap:89] [pool-66-thread-2 ] - Finished initializing Projects permissions in 81.5 millis
2021-08-21T07:28:36.299Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:328] [c-default-executor-1] - [ACCESS BOOTSTRAP] Saved new root certificate at: /opt/jfrog/artifactory/var/etc/access/keys/root.crt
2021-08-21T07:28:36.301Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:190] [c-default-executor-1] - Finished loading root certificate from database.
2021-08-21T07:28:36.301Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:181] [c-default-executor-1] - Loading ca certificate from database.
2021-08-21T07:28:36.422Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:328] [c-default-executor-1] - [ACCESS BOOTSTRAP] Saved new ca certificate at: /opt/jfrog/artifactory/var/etc/access/keys/ca.crt
2021-08-21T07:28:36.423Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:190] [c-default-executor-1] - Finished loading ca certificate from database.
2021-08-21T07:28:36.431Z [jfac ] [INFO ] [ ] [alConfigurationServiceBase:182] [c-default-executor-1] - Loading configuration from db finished successfully
2021-08-21T07:28:38.112Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8046
2021-08-21T07:28:38.112Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2021-08-21T07:28:38.112Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2021-08-21T07:28:39.537Z [jfrt ] [INFO ] [ ] [o.j.c.ConfigWrapperImpl:342 ] [pool-44-thread-1 ] - [Node ID: nodeX] detected local modify for config 'artifactory/config/security/access/access.admin.token'
2021-08-21T07:28:43.664Z [jffe ] [WARN ] [66cd9108b9cd26f9] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:43.673Z [jffe ] [WARN ] [66cd9108b9cd26f9] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:44.392Z [jffe ] [WARN ] [1062901fde61dfc6] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:44.392Z [jffe ] [WARN ] [1062901fde61dfc6] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:44.392Z [jffe ] [WARN ] [1062901fde61dfc6] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:45.482Z [jffe ] [WARN ] [4e1961812d24332a] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:45.483Z [jffe ] [WARN ] [4e1961812d24332a] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:45.683Z [jffe ] [WARN ] [a648f90bf2d7102 ] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:45.683Z [jffe ] [WARN ] [a648f90bf2d7102 ] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:45.683Z [jffe ] [WARN ] [a648f90bf2d7102 ] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:46.535Z [jffe ] [WARN ] [503666ed4b13889 ] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:46.536Z [jffe ] [WARN ] [503666ed4b13889 ] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:46.536Z [jffe ] [WARN ] [503666ed4b13889 ] [ ] [main ] - topology is missing, can't decide if service exists, will return false
2021-08-21T07:28:52.919Z [jfrt ] [INFO ] [6b8dfc5cbb3ed2a4] [a.e.EventsLogCleanUpService:69] [art-exec-3 ] - Starting cleanup of old events from event log
2021-08-21T07:28:53.016Z [jfrt ] [INFO ] [6b8dfc5cbb3ed2a4] [.e.EventsLogCleanUpService:105] [art-exec-3 ] - Cleanup of old events from event log finished
2021-08-21T07:30:52.536Z [jffe ] [ERROR] [139a17c81a46c969] [ ] [main ] - ArtifactoryClient::http [get] request to /api/system/nodes failed with 403 code
2021-08-21T07:30:52.541Z [jffe ] [ERROR] [139a17c81a46c969] [ ] [main ] - Error: Request failed with status code 403
at createError (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/createError.js:16:15)
at settle (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/adapters/http.js:260:11)
at IncomingMessage.emit (events.js:203:15)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
2021-08-21T07:31:12.829Z [jfrt ] [WARN ] [1b0e83537f9e0c72] [a.l.s.SumoLogicServiceImpl:227] [http-nio-8081-exec-4] - Unable to refresh Sumo Logic token, returning previously associated token
2021-08-21T07:33:02.254Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .replicator.enabled (true) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:02.881Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .artifactory.port (8081) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:02.996Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .artifactory.tomcat.connector.maxThreads (200) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:03.118Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .access.tomcat.connector.maxThreads (50) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:03.491Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .shared.extraJavaOpts (__sensitive_key_hidden___) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:03.589Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .shared.extraJavaOpts (__sensitive_key_hidden___) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:04.062Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .shared.database.type (derby) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:04.314Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .shared.database.url (__sensitive_key_hidden___) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:05.339Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .shared.extraJavaOpts (__sensitive_key_hidden___) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:06.253Z [shell] [INFO ] [] [systemYamlHelper.sh:522 ] [main] - Resolved .replicator.enabled (true) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-08-21T07:33:19.081Z [jfmd ] [ERROR] [672d2eb628a9855d] [compatibility_logger.go:28 ] [main ] - Project update error: rpc error: code = DeadlineExceeded desc = context deadline exceeded [access_client]
2021-08-21T07:38:19.082Z [jfmd ] [ERROR] [672d2eb628a9855d] [compatibility_logger.go:28 ] [main ] - Project update error: rpc error: code = DeadlineExceeded desc = context deadline exceeded [access_client]
This error should be a debug-level log and is safe to ignore:
Project update error: rpc error: code = DeadlineExceeded desc = context deadline exceeded [access_client]
(I created an internal ticket to hide this)
Error 12 usually means the Access server is down or at least not available to the Metadata service. I was not able to reproduce the issue. You can check if Access is available by a simple curl command:
curl localhost:8082/access/api/v1/system/ping
You should get:
ok
If not, you should check the reason. Usually opening localhost:8082 in the browser shows a UI error page with details.
FYI, if you're not working with permissions and projects, you shouldn't be impacted by this issue. (I suspect you're not, since you use OSS.)
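If the ping does not return ok, the Access and Metadata service logs are the first place to look. A minimal sketch (the paths assume the default Artifactory 7.x directory layout, with $JFROG_HOME pointing at your installation root):

tail -n 100 "$JFROG_HOME/artifactory/var/log/access-service.log"
tail -n 100 "$JFROG_HOME/artifactory/var/log/metadata-service.log"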
Related
I want to have all my logs stored as JSON, so I'm using monolog.formatter.json like this:
monolog:
    handlers:
        main:
            type: stream
            path: "php://stderr"
            level: info
            channels: ["!event"]
            formatter: monolog.formatter.json
However, I'm still seeing lines like this in the logs:
[2021-04-16T09:29:17.946886+00:00] security.DEBUG: Checking for authenticator support. {
"firewall_name": "endusers",
"authenticators": 2
} {
"request_id": "21b36d09-8cc0-466c-b835-1bc8fb05bf47"
}
(but I correctly have other logs formatted like this:
{
"message": "......",
"context": [
1401
],
"level": 100,
"level_name": "DEBUG",
"channel": "doctrine",
"datetime": "2021-04-16T09:29:18.047739+00:00",
"extra": {
"request_id": "21b36d09-8cc0-466c-b835-1bc8fb05bf47"
}
}
which makes me think that my configuration is correct, but somehow some logs are not going through my handler.
Is there something I'm missing?
(I'm using Symfony 5.2 and MonologBundle 3.7.0.)
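For reference, a minimal sketch of registering the JSON formatter as a service, in case monolog.formatter.json is not already defined in the container (as far as I know, MonologBundle does not register this formatter service by default; the file location below is just the conventional one):

# config/services.yaml (conventional location; adjust to your setup)
services:
    monolog.formatter.json:
        class: Monolog\Formatter\JsonFormatter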
Whenever coin selection fails, an "unrestorable checkpoint" exception is thrown.
I am testing out the scheduled event functionality: when an IOU is created, an event is scheduled to make a payment at a specific time. From the error logs (see below), whenever coin selection fails, an "unrestorable checkpoint" exception is thrown, but the process continues to run and retries until coin selection is successful.
The exception stack doesn't provide info on where the error could be. How can I debug this?
@Suspendable
public List<StateAndRef<Cash.State>> getFunds(Set<AbstractParty> issuers, Amount<Currency> amt) throws FlowException {
    try {
        // Pick the cash selection implementation matching the node's database.
        AbstractCashSelection db = AbstractCashSelection.Companion.getInstance(() -> {
            try {
                return this.serviceHub.jdbcSession().getMetaData();
            } catch (Exception e) {
                throw new RuntimeException("getFunds error", e);
            }
        });
        // Select and soft-lock unconsumed cash states covering the requested amount.
        List<StateAndRef<Cash.State>> funds = db.unconsumedCashStatesForSpending(
                this.serviceHub, amt, issuers, null, this.flowRunId, new HashSet<OpaqueBytes>());
        funds.forEach(s -> states.put(StateAndRefSerializer.getRef(s), s));
        return funds;
    } catch (Exception e) {
        throw new FlowException("getFunds error", e);
    }
}
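For context, a minimal sketch of a call site (bankParty and the amount are hypothetical, made up for illustration; only getFunds itself is from my code):

// Hypothetical call site inside the flow's call() method.
Set<AbstractParty> issuers = Collections.singleton(bankParty); // bankParty is an assumed issuer Party
Amount<Currency> amount = Amount.fromDecimal(new BigDecimal("10"), Currency.getInstance("USD"));
List<StateAndRef<Cash.State>> funds = getFunds(issuers, amount);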
Error from log file:
[INFO ] 2019-07-21T20:52:07,683Z [Node thread-1] IOU_nextScheduledActivity.info - com.example.iou.autopaymentImpl is scheduled at 2019-07-21T20:52:07.678Z, flowRef=com.example.iou.autopaymentImpl {actor_id=internalShell, actor_owning_identity=O=charlie, L=New York, C=US, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=0eebd34b-92be-4af5-a650-2d49675702c3, invocation_id=35bd0d5c-ea00-4a7d-972d-ff353bb9d2ec, invocation_timestamp=2019-07-21T20:51:57.700Z, origin=internalShell, session_id=13825f94-0f7c-4df7-80e2-c73511031bc7, session_timestamp=2019-07-21T20:51:57.304Z, thread-id=229, tx_id=D0EDC810EF00AFC9E99494F2DEF50AD040F439DAE3FC554BBC7698B9DCBA6073}
[INFO ] 2019-07-21T20:52:07,699Z [Node thread-1] corda.flow.notariseAndRecord - Recorded transaction locally successfully. {actor_id=internalShell, actor_owning_identity=O=charlie, L=New York, C=US, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=0eebd34b-92be-4af5-a650-2d49675702c3, invocation_id=35bd0d5c-ea00-4a7d-972d-ff353bb9d2ec, invocation_timestamp=2019-07-21T20:51:57.700Z, origin=internalShell, session_id=13825f94-0f7c-4df7-80e2-c73511031bc7, session_timestamp=2019-07-21T20:51:57.304Z, thread-id=229, tx_id=D0EDC810EF00AFC9E99494F2DEF50AD040F439DAE3FC554BBC7698B9DCBA6073}
[INFO ] 2019-07-21T20:52:07,840Z [pool-6-thread-1] IOU_nextScheduledActivity.info - com.example.iou.autopaymentImpl is scheduled at 2019-07-21T20:52:07.839Z, flowRef=com.example.iou.autopaymentImpl
[INFO ] 2019-07-21T20:52:07,891Z [Node thread-1] corda.flow.call - autopayment is triggered by a schedulable event {fiber-id=10000003, flow-id=c8d1be17-aa32-4a73-b031-fff2707ed877, invocation_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, invocation_timestamp=2019-07-21T20:52:07.842Z, origin=Scheduler, session_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, session_timestamp=2019-07-21T20:52:07.842Z, thread-id=229}
[INFO ] 2019-07-21T20:52:07,897Z [Node thread-1] corda.flow.info - flow autopayment is started {fiber-id=10000003, flow-id=c8d1be17-aa32-4a73-b031-fff2707ed877, invocation_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, invocation_timestamp=2019-07-21T20:52:07.842Z, origin=Scheduler, session_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, session_timestamp=2019-07-21T20:52:07.842Z, thread-id=229}
[INFO ] 2019-07-21T20:52:07,912Z [Node thread-1] corda.flow.info - autopayment is completed {fiber-id=10000003, flow-id=c8d1be17-aa32-4a73-b031-fff2707ed877, invocation_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, invocation_timestamp=2019-07-21T20:52:07.842Z, origin=Scheduler, session_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, session_timestamp=2019-07-21T20:52:07.842Z, thread-id=229}
[INFO ] 2019-07-21T20:52:07,915Z [Node thread-1] corda.flow.createSubflowObject - autopayment to invoke flow com.charlie.iou.flows.SettleIOUInitiatorImpl {fiber-id=10000003, flow-id=c8d1be17-aa32-4a73-b031-fff2707ed877, invocation_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, invocation_timestamp=2019-07-21T20:52:07.842Z, origin=Scheduler, session_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, session_timestamp=2019-07-21T20:52:07.842Z, thread-id=229}
[INFO ] 2019-07-21T20:52:07,922Z [Node thread-1] corda.flow.info - flow SettleIOU startred {fiber-id=10000003, flow-id=c8d1be17-aa32-4a73-b031-fff2707ed877, invocation_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, invocation_timestamp=2019-07-21T20:52:07.842Z, origin=Scheduler, session_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, session_timestamp=2019-07-21T20:52:07.842Z, thread-id=229}
[INFO ] 2019-07-21T20:52:08,062Z [Node thread-1] corda.flow.lambda$eval$3 - txbuilder::build commands, cmd=Move, keys=["GfHq2tTVk9z4eXgyGKLLbMNy9c7hZ3XmrUEpRayxezT7VanQYX7c61RPCedL","GfHq2tTVk9z4eXgyQbQ5fnon7qDegNzvJ4s71djeDtZVSfoA466yXun6CLcK","GfHq2tTVk9z4eXgyV2oK18JdGAUozHXddWjBCFoMMKhMz5taj1qUyVYWBBfi"] {fiber-id=10000003, flow-id=c8d1be17-aa32-4a73-b031-fff2707ed877, invocation_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, invocation_timestamp=2019-07-21T20:52:07.842Z, origin=Scheduler, session_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, session_timestamp=2019-07-21T20:52:07.842Z, thread-id=229}
[INFO ] 2019-07-21T20:52:08,063Z [Node thread-1] corda.flow.eval - txbuilder::build commands, cmd=com.example.iou.SettleIOU, keys=["GfHq2tTVk9z4eXgyQbQ5fnon7qDegNzvJ4s71djeDtZVSfoA466yXun6CLcK","GfHq2tTVk9z4eXgySdF6Vz2jjYaaUP61nQVBgMJUAvmnhhWGA34bUwC9CaVn","GfHq2tTVk9z4eXgyV2oK18JdGAUozHXddWjBCFoMMKhMz5taj1qUyVYWBBfi"] {fiber-id=10000003, flow-id=c8d1be17-aa32-4a73-b031-fff2707ed877, invocation_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, invocation_timestamp=2019-07-21T20:52:07.842Z, origin=Scheduler, session_id=09dbdf92-55cb-4bcc-8ce9-5eae3103a740, session_timestamp=2019-07-21T20:52:07.842Z, thread-id=229}
[INFO ] 2019-07-21T20:52:08,337Z [Node thread-1] corda.flow.call - Party O=bob, L=New York, C=US received the transaction. {actor_id=internalShell, actor_owning_identity=O=charlie, L=New York, C=US, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=0eebd34b-92be-4af5-a650-2d49675702c3, invocation_id=35bd0d5c-ea00-4a7d-972d-ff353bb9d2ec, invocation_timestamp=2019-07-21T20:51:57.700Z, origin=internalShell, session_id=13825f94-0f7c-4df7-80e2-c73511031bc7, session_timestamp=2019-07-21T20:51:57.304Z, thread-id=229, tx_id=D0EDC810EF00AFC9E99494F2DEF50AD040F439DAE3FC554BBC7698B9DCBA6073}
[INFO ] 2019-07-21T20:52:08,338Z [Node thread-1] corda.flow.call - All parties received the transaction successfully.
In our recent release of TokenSDK 1.1, we have officially introduced the token-selection package.
Here is the link for the binary file: https://ci-artifactory.corda.r3cev.com/artifactory/webapp/#/artifacts/browse/tree/General/corda-lib/com/r3/corda/lib/tokens/tokens-selection
And here is an example of how to import the dependencies to your project: https://github.com/corda/samples-java/blob/master/Accounts/worldcupticketbooking/build.gradle#L120
And here is how you use it:
Pair<List<StateAndRef<FungibleToken>>, List<FungibleToken>> inputsAndOutputs =
        tokenSelection.generateMove(Arrays.asList(partyAndAmount), buyerAccount,
                new TokenQueryBy(), getRunId().getUuid());
I am trying to deploy a node with the official Docker image using the following command:
docker run -ti \
--memory=2048m \
--cpus=2 \
-v /Users/aliceguo/IdeaProjects/car-cordapp/build/nodes/PartyC/config:/etc/corda \
-v /Users/aliceguo/IdeaProjects/car-cordapp/build/nodes/PartyC/certificates:/opt/corda/certificates \
-v /Users/aliceguo/IdeaProjects/car-cordapp/build/nodes/PartyC/persistence:/opt/corda/persistence \
-v /Users/aliceguo/IdeaProjects/car-cordapp/build/nodes/PartyC/logs:/opt/corda/logs \
-v /Users/aliceguo/IdeaProjects/car-cordapp/build/nodes/PartyC/cordapps:/opt/corda/cordapps \
-v /Users/aliceguo/IdeaProjects/car-cordapp/build/nodes/PartyC/additional-node-infos:/opt/corda/additional-node-infos \
-v /Users/aliceguo/IdeaProjects/car-cordapp/build/nodes/PartyC/network-parameters:/opt/corda/network-parameters \
-p 10011:10011 \
-p 10012:10012 \
corda/corda-corretto-5.0-snapshot
And the node seems to start successfully, but I cannot connect to it via RPC from my laptop (the Docker container is on my laptop as well). I will attach some logs and a screenshot below. Any help would be appreciated!
Node Log:
[INFO ] 2019-07-19T03:21:23,163Z [main] cliutils.CordaCliWrapper.call - Application Args: --base-directory /opt/corda --config-file /etc/corda/node.conf
[INFO ] 2019-07-19T03:21:24,146Z [main] manifests.Manifests.info - 115 attributes loaded from 152 stream(s) in 61ms, 115 saved, 2353 ignored: ["ActiveMQ-Version", "Agent-Class", "Ant-Version", "Application-Class", "Application-ID", "Application-Library-Allowable-Codebase", "Application-Name", "Application-Version", "Archiver-Version", "Automatic-Module-Name", "Bnd-LastModified", "Branch", "Build-Date", "Build-Host", "Build-Id", "Build-Java-Version", "Build-Jdk", "Build-Job", "Build-Number", "Build-Timestamp", "Built-By", "Built-OS", "Built-Status", "Bundle-Activator", "Bundle-Category", "Bundle-ClassPath", "Bundle-Copyright", "Bundle-Description", "Bundle-DocURL", "Bundle-License", "Bundle-ManifestVersion", "Bundle-Name", "Bundle-NativeCode", "Bundle-RequiredExecutionEnvironment", "Bundle-SymbolicName", "Bundle-Vendor", "Bundle-Version", "Caller-Allowable-Codebase", "Can-Redefine-Classes", "Can-Retransform-Classes", "Can-Set-Native-Method-Prefix", "Caplets", "Change", "Class-Path", "Codebase", "Corda-Platform-Version", "Corda-Release-Version", "Corda-Revision", "Corda-Vendor", "Created-By", "DynamicImport-Package", "Eclipse-BuddyPolicy", "Eclipse-LazyStart", "Export-Package", "Extension-Name", "Fragment-Host", "Gradle-Version", "Hibernate-JpaVersion", "Hibernate-VersionFamily", "Implementation-Build", "Implementation-Build-Date", "Implementation-Title", "Implementation-URL", "Implementation-Url", "Implementation-Vendor", "Implementation-Vendor-Id", "Implementation-Version", "Import-Package", "Include-Resource", "JCabi-Build", "JCabi-Date", "JCabi-Version", "JVM-Args", "Java-Agents", "Java-Vendor", "Java-Version", "Kotlin-Runtime-Component", "Kotlin-Version", "Liquibase-Package", "Log4jReleaseKey", "Log4jReleaseManager", "Log4jReleaseVersion", "Main-Class", "Main-class", "Manifest-Version", "Min-Java-Version", "Min-Update-Version", "Module-Email", "Module-Origin", "Module-Owner", "Module-Source", "Multi-Release", "Originally-Created-By", "Os-Arch", "Os-Name", "Os-Version", "Permissions", "Premain-Class", "Private-Package", "Provide-Capability", "Require-Capability", "SCM-Revision", "SCM-url", "Scm-Connection", "Scm-Revision", "Scm-Url", "Service-Component", "Specification-Title", "Specification-Vendor", "Specification-Version", "System-Properties", "Tool", "Trusted-Library", "X-Compile-Source-JDK", "X-Compile-Target-JDK"]
[INFO ] 2019-07-19T03:21:24,188Z [main] BasicInfo.printBasicNodeInfo - Logs can be found in : /opt/corda/logs
[INFO ] 2019-07-19T03:21:25,096Z [main] subcommands.ValidateConfigurationCli.logRawConfig$node - Actual configuration:
{
    "additionalNodeInfoPollingFrequencyMsec" : 5000,
    "additionalP2PAddresses" : [],
    "attachmentCacheBound" : 1024,
    "baseDirectory" : "/opt/corda",
    "certificateChainCheckPolicies" : [],
    "cordappSignerKeyFingerprintBlacklist" : [
        "56CA54E803CB87C8472EBD3FBC6A2F1876E814CEEBF74860BD46997F40729367",
        "83088052AF16700457AE2C978A7D8AC38DD6A7C713539D00B897CD03A5E5D31D",
        "6F6696296C3F58B55FB6CA865A025A3A6CC27AD17C4AFABA1E8EF062E0A82739"
    ],
    "crlCheckSoftFail" : true,
    "dataSourceProperties" : "*****",
    "database" : {
        "exportHibernateJMXStatistics" : false,
        "initialiseAppSchema" : "UPDATE",
        "initialiseSchema" : true,
        "mappedSchemaCacheSize" : 100,
        "transactionIsolationLevel" : "REPEATABLE_READ"
    },
    "detectPublicIp" : false,
    "devMode" : true,
    "emailAddress" : "admin@company.com",
    "extraNetworkMapKeys" : [],
    "flowMonitorPeriodMillis" : {
        "nanos" : 0,
        "seconds" : 60
    },
    "flowMonitorSuspensionLoggingThresholdMillis" : {
        "nanos" : 0,
        "seconds" : 60
    },
    "flowTimeout" : {
        "backoffBase" : 1.8,
        "maxRestartCount" : 6,
        "timeout" : {
            "nanos" : 0,
            "seconds" : 30
        }
    },
    "jarDirs" : [],
    "jmxReporterType" : "JOLOKIA",
    "keyStorePassword" : "*****",
    "lazyBridgeStart" : true,
    "myLegalName" : "O=PartyC,L=New York,C=US",
    "noLocalShell" : false,
    "p2pAddress" : "localhost:10011",
    "rpcSettings" : {
        "address" : "localhost:10012",
        "adminAddress" : "localhost:10052",
        "standAloneBroker" : false,
        "useSsl" : false
    },
    "rpcUsers" : [],
    "security" : {
        "authService" : {
            "dataSource" : {
                "passwordEncryption" : "NONE",
                "type" : "INMEMORY",
                "users" : [
                    {
                        "ignoresFallbacks" : false,
                        "resolved" : true,
                        "value" : {
                            "loadFactor" : 0.75,
                            "modCount" : 3,
                            "size" : 3,
                            "table" : {},
                            "threshold" : 3
                        }
                    }
                ]
            }
        }
    },
    "trustStorePassword" : "*****",
    "useTestClock" : false,
    "verifierType" : "InMemory"
}
[INFO ] 2019-07-19T03:21:25,119Z [main] internal.Node.logStartupInfo - Vendor: Corda Open Source
[INFO ] 2019-07-19T03:21:25,119Z [main] internal.Node.logStartupInfo - Release: 5.0-SNAPSHOT
[INFO ] 2019-07-19T03:21:25,119Z [main] internal.Node.logStartupInfo - Platform Version: 5
[INFO ] 2019-07-19T03:21:25,119Z [main] internal.Node.logStartupInfo - Revision: df19b444ddd32d3afd10ed0b76c1b2f68d985968
[INFO ] 2019-07-19T03:21:25,119Z [main] internal.Node.logStartupInfo - PID: 19
[INFO ] 2019-07-19T03:21:25,120Z [main] internal.Node.logStartupInfo - Main class: /opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-node-5.0-SNAPSHOT.jar
[INFO ] 2019-07-19T03:21:25,120Z [main] internal.Node.logStartupInfo - CommandLine Args: -Xmx512m -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -javaagent:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/quasar-core-0.7.10-jdk8.jar=x(antlr**;bftsmart**;co.paralleluniverse**;com.codahale**;com.esotericsoftware**;com.fasterxml**;com.google**;com.ibm**;com.intellij**;com.jcabi**;com.nhaarman**;com.opengamma**;com.typesafe**;com.zaxxer**;de.javakaffee**;groovy**;groovyjarjarantlr**;groovyjarjarasm**;io.atomix**;io.github**;io.netty**;jdk**;junit**;kotlin**;net.bytebuddy**;net.i2p**;org.apache**;org.assertj**;org.bouncycastle**;org.codehaus**;org.crsh**;org.dom4j**;org.fusesource**;org.h2**;org.hamcrest**;org.hibernate**;org.jboss**;org.jcp**;org.joda**;org.junit**;org.mockito**;org.objectweb**;org.objenesis**;org.slf4j**;org.w3c**;org.xml**;org.yaml**;reflectasm**;rx**;org.jolokia**;com.lmax**;picocli**;liquibase**;com.github.benmanes**;org.json**;org.postgresql**;nonapi.io.github.classgraph**) -Dcorda.dataSourceProperties.dataSource.url=jdbc:h2:file:/opt/corda/persistence/persistence;DB_CLOSE_ON_EXIT=FALSE;WRITE_DELAY=0;LOCK_TIMEOUT=10000 -Dvisualvm.display.name=Corda -Djava.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT -Dcapsule.app=net.corda.node.Corda_5.0-SNAPSHOT -Dcapsule.dir=/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT -Dcapsule.jar=/opt/corda/bin/corda.jar -Djava.security.egd=file:/dev/./urandom
[INFO ] 2019-07-19T03:21:25,120Z [main] internal.Node.logStartupInfo - bootclasspath: /usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/resources.jar:/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/rt.jar:/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/jsse.jar:/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/jce.jar:/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/charsets.jar:/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/jfr.jar:/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/classes
[INFO ] 2019-07-19T03:21:25,120Z [main] internal.Node.logStartupInfo - classpath: /opt/corda/bin/corda.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-shell-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-rpc-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-node-api-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-tools-cliutils-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-common-configuration-parsing-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-common-validation-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-common-logging-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-confidential-identities-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/log4j-slf4j-impl-2.9.1.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/log4j-web-2.9.1.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/jul-to-slf4j-1.7.25.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-jackson-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-serialization-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/corda-core-5.0-SNAPSHOT.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/kotlin-stdlib-jdk8-1.2.71.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/jackson-module-kotlin-2.9.5.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/kotlin-reflect-1.2.71.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/quasar-core-0.7.10-jdk8.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/kryo-serializers-0.42.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/kryo-4.0.0.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/jimfs-1.1.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/metrics-new-relic-1.1.1.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/guava-25.1-jre.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/caffeine-2.6.2.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/disruptor-3.4.2.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/commons-collections4-4.1.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/artemis-amqp-protocol-2.6.2.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/artemis-server-2.6.2.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/artemis-jdbc-store-2.6.2.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/artemis-journal-2.6.2.jar:/opt/corda/.capsule/apps/net.corda.node.Corda_5.0-SNAPSHOT/art...
In order to solve this, you need to bind the ports to 0.0.0.0:xxxx instead of localhost:xxxx in the node.conf, so the relevant section becomes:
"p2pAddress" : "0.0.0.0:10011",
"rpcSettings" : {
    "address" : "0.0.0.0:10012",
    "adminAddress" : "0.0.0.0:10052",
    "standAloneBroker" : false,
    "useSsl" : false
},
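With the node listening on 0.0.0.0, the -p 10011:10011 and -p 10012:10012 mappings in your docker run command should then let an RPC client on your laptop connect via localhost:10012.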
I am migrating from v3.0 to v3.2, and a lot of my unit tests use Try.on to assert some expected result. It was working in v3.0, but after v3.2 the unit test gets stuck in an infinite wait on the mock network.
@Test
fun `Issue non-anonymous obligation successfully with non null remark`() {
    // Throw null pointer
    val result = Try.on {
        issueObligation(a, b, 1000.POUNDS, anonymous = false, remark = null)
        network.waitQuiescent()
    }
    assert(result.isFailure)
    // If I don't run this subsequent transaction, the test runs successfully.
    // Somehow, any transaction after a Try.on will get stuck.
    // Should issue successfully.
    issueObligation(a, b, 1000.POUNDS, anonymous = false, remark = "Valid")
    network.waitQuiescent()
}
Stack trace, after which the console is stuck in an infinite wait:
[WARN ] 11:15:45,055 [Mock node 1 thread] (FlowStateMachineImpl.kt:111) flow.[2c41b166-041a-4ec8-87dd-896e3712415e].run - Terminated by unexpected exception {}
java.lang.IllegalArgumentException: Invalid string.
at net.corda.examples.obligation.flows.IssueObligation$Initiator.call(IssueObligation.kt:44) ~[classes/:?]
at net.corda.examples.obligation.flows.IssueObligation$Initiator.call(IssueObligation.kt:21) ~[classes/:?]
at net.corda.node.services.statemachine.FlowStateMachineImpl.run(FlowStateMachineImpl.kt:96) [corda-node-3.2-corda.jar:?]
at net.corda.node.services.statemachine.FlowStateMachineImpl.run(FlowStateMachineImpl.kt:44) [corda-node-3.2-corda.jar:?]
at co.paralleluniverse.fibers.Fiber.run1(Fiber.java:1092) [quasar-core-0.7.9-jdk8.jar:0.7.9]
at co.paralleluniverse.fibers.Fiber.exec(Fiber.java:788) [quasar-core-0.7.9-jdk8.jar:0.7.9]
at co.paralleluniverse.fibers.RunnableFiberTask.doExec(RunnableFiberTask.java:100) [quasar-core-0.7.9-jdk8.jar:0.7.9]
at co.paralleluniverse.fibers.RunnableFiberTask.run(RunnableFiberTask.java:91) [quasar-core-0.7.9-jdk8.jar:0.7.9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_151]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_151]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
at net.corda.node.utilities.AffinityExecutor$ServiceAffinityExecutor$1$thread$1.run(AffinityExecutor.kt:62) [corda-node-3.2-corda.jar:?]
[INFO ] 11:15:45,474 [Mock node 1 thread] (FlowStateMachineImpl.kt:432) flow.[77bea7fa-402f-4527-8388-d910faea6342].initiateSession - Initiating flow session with party O=Mock Company 2, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=7175264157685997907). {}
[INFO ] 11:15:45,570 [Mock node 2 thread] (StateMachineManagerImpl.kt:367) statemachine.StateMachineManagerImpl.onSessionInit - Accepting flow session from party O=Mock Company 1, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=7175264157685997907). {invocation_id=18195f36-850b-4fec-9794-9e57c16f9155, invocation_timestamp=2018-08-03T04:15:45.557Z, session_id=18195f36-850b-4fec-9794-9e57c16f9155, session_timestamp=2018-08-03T04:15:45.557Z}
[INFO ] 11:15:45,747 [Mock node 1 thread] (FlowStateMachineImpl.kt:432) flow.[77bea7fa-402f-4527-8388-d910faea6342].initiateSession - Initiating flow session with party O=Notary Service, L=Zurich, C=CH. Session id for tracing purposes is SessionId(toLong=-6165721234712091944). {}
[INFO ] 11:15:45,772 [Mock node 0 thread] (StateMachineManagerImpl.kt:367) statemachine.StateMachineManagerImpl.onSessionInit - Accepting flow session from party O=Mock Company 1, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=-6165721234712091944). {invocation_id=c7c9d70d-e73a-4724-912a-e7f3d5080e57, invocation_timestamp=2018-08-03T04:15:45.770Z, session_id=c7c9d70d-e73a-4724-912a-e7f3d5080e57, session_timestamp=2018-08-03T04:15:45.770Z}
[INFO ] 11:15:45,868 [Mock node 1 thread] (HibernateConfiguration.kt:47) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Creating session factory for schemas: [ObligationSchemaV1(name=net.corda.examples.obligation.ObligationSchema, version=1)] {}
[INFO ] 11:15:45,875 [Mock node 1 thread] (ConnectionProviderInitiator.java:122) internal.ConnectionProviderInitiator.initiateService - HHH000130: Instantiating explicit connection provider: net.corda.nodeapi.internal.persistence.HibernateConfiguration$NodeDatabaseConnectionProvider {}
[INFO ] 11:15:45,876 [Mock node 1 thread] (Dialect.java:157) dialect.Dialect. - HHH000400: Using dialect: org.hibernate.dialect.H2Dialect {}
[INFO ] 11:15:45,881 [Mock node 1 thread] (BasicTypeRegistry.java:148) type.BasicTypeRegistry.register - HHH000270: Type registration [materialized_blob] overrides previous : org.hibernate.type.MaterializedBlobType@7a7698f4 {}
[INFO ] 11:15:45,894 [Mock node 1 thread] (DdlTransactionIsolatorNonJtaImpl.java:47) connections.access.getIsolatedConnection - HHH10001501: Connection obtained from JdbcConnectionAccess [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess@736e95ad] for (non-JTA) DDL execution was not in auto-commit mode; the Connection 'local transaction' will be committed and the Connection will be set into auto-commit mode. {}
[INFO ] 11:15:45,909 [Mock node 1 thread] (HibernateConfiguration.kt:64) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Created session factory for schemas: [ObligationSchemaV1(name=net.corda.examples.obligation.ObligationSchema, version=1)] {}
[INFO ] 11:15:45,916 [Mock node 1 thread] (HibernateConfiguration.kt:47) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Creating session factory for schemas: [VaultSchemaV1(name=net.corda.node.services.vault.VaultSchema, version=1)] {}
[INFO ] 11:15:45,923 [Mock node 1 thread] (ConnectionProviderInitiator.java:122) internal.ConnectionProviderInitiator.initiateService - HHH000130: Instantiating explicit connection provider: net.corda.nodeapi.internal.persistence.HibernateConfiguration$NodeDatabaseConnectionProvider {}
[INFO ] 11:15:45,924 [Mock node 1 thread] (Dialect.java:157) dialect.Dialect. - HHH000400: Using dialect: org.hibernate.dialect.H2Dialect {}
[INFO ] 11:15:45,927 [Mock node 1 thread] (BasicTypeRegistry.java:148) type.BasicTypeRegistry.register - HHH000270: Type registration [materialized_blob] overrides previous : org.hibernate.type.MaterializedBlobType@7a7698f4 {}
[INFO ] 11:15:45,953 [Mock node 1 thread] (DdlTransactionIsolatorNonJtaImpl.java:47) connections.access.getIsolatedConnection - HHH10001501: Connection obtained from JdbcConnectionAccess [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess@318fb15] for (non-JTA) DDL execution was not in auto-commit mode; the Connection 'local transaction' will be committed and the Connection will be set into auto-commit mode. {}
[INFO ] 11:15:45,994 [Mock node 1 thread] (HibernateConfiguration.kt:64) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Created session factory for schemas: [VaultSchemaV1(name=net.corda.node.services.vault.VaultSchema, version=1)] {}
[INFO ] 11:15:46,003 [Mock node 1 thread] (FlowStateMachineImpl.kt:432) flow.[77bea7fa-402f-4527-8388-d910faea6342].initiateSession - Initiating flow session with party O=Mock Company 2, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=-2453204714770107984). {}
[INFO ] 11:15:46,027 [Mock node 2 thread] (StateMachineManagerImpl.kt:367) statemachine.StateMachineManagerImpl.onSessionInit - Accepting flow session from party O=Mock Company 1, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=-2453204714770107984). {invocation_id=ec8f9d18-73ef-4739-a0cd-84202e590df9, invocation_timestamp=2018-08-03T04:15:46.026Z, session_id=ec8f9d18-73ef-4739-a0cd-84202e590df9, session_timestamp=2018-08-03T04:15:46.026Z}
[INFO ] 11:15:46,084 [Mock node 2 thread] (HibernateConfiguration.kt:47) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Creating session factory for schemas: [ObligationSchemaV1(name=net.corda.examples.obligation.ObligationSchema, version=1)] {}
[INFO ] 11:15:46,099 [Mock node 2 thread] (ConnectionProviderInitiator.java:122) internal.ConnectionProviderInitiator.initiateService - HHH000130: Instantiating explicit connection provider: net.corda.nodeapi.internal.persistence.HibernateConfiguration$NodeDatabaseConnectionProvider {}
[INFO ] 11:15:46,100 [Mock node 2 thread] (Dialect.java:157) dialect.Dialect. - HHH000400: Using dialect: org.hibernate.dialect.H2Dialect {}
[INFO ] 11:15:46,107 [Mock node 2 thread] (BasicTypeRegistry.java:148) type.BasicTypeRegistry.register - HHH000270: Type registration [materialized_blob] overrides previous : org.hibernate.type.MaterializedBlobType@7a7698f4 {}
[INFO ] 11:15:46,115 [Mock node 2 thread] (DdlTransactionIsolatorNonJtaImpl.java:47) connections.access.getIsolatedConnection - HHH10001501: Connection obtained from JdbcConnectionAccess [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess@606d0355] for (non-JTA) DDL execution was not in auto-commit mode; the Connection 'local transaction' will be committed and the Connection will be set into auto-commit mode. {}
[INFO ] 11:15:46,129 [Mock node 2 thread] (HibernateConfiguration.kt:64) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Created session factory for schemas: [ObligationSchemaV1(name=net.corda.examples.obligation.ObligationSchema, version=1)] {}
[INFO ] 11:15:46,130 [Mock node 2 thread] (HibernateConfiguration.kt:47) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Creating session factory for schemas: [VaultSchemaV1(name=net.corda.node.services.vault.VaultSchema, version=1)] {}
[INFO ] 11:15:46,134 [Mock node 2 thread] (ConnectionProviderInitiator.java:122) internal.ConnectionProviderInitiator.initiateService - HHH000130: Instantiating explicit connection provider: net.corda.nodeapi.internal.persistence.HibernateConfiguration$NodeDatabaseConnectionProvider {}
[INFO ] 11:15:46,135 [Mock node 2 thread] (Dialect.java:157) dialect.Dialect. - HHH000400: Using dialect: org.hibernate.dialect.H2Dialect {}
[INFO ] 11:15:46,136 [Mock node 2 thread] (BasicTypeRegistry.java:148) type.BasicTypeRegistry.register - HHH000270: Type registration [materialized_blob] overrides previous : org.hibernate.type.MaterializedBlobType@7a7698f4 {}
[INFO ] 11:15:46,157 [Mock node 2 thread] (DdlTransactionIsolatorNonJtaImpl.java:47) connections.access.getIsolatedConnection - HHH10001501: Connection obtained from JdbcConnectionAccess [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess@6b0f595c] for (non-JTA) DDL execution was not in auto-commit mode; the Connection 'local transaction' will be committed and the Connection will be set into auto-commit mode. {}
[INFO ] 11:15:46,175 [Mock node 2 thread] (HibernateConfiguration.kt:64) persistence.HibernateConfiguration.makeSessionFactoryForSchemas - Created session factory for schemas: [VaultSchemaV1(name=net.corda.node.services.vault.VaultSchema, version=1)] {}
This appears to be a bug in Corda 3.2. It is being tracked here: https://github.com/corda/corda/issues/3741.
I have the following adapter and service configured:
knp_gaufrette:
    adapters:
        sc_documents:
            aws_s3:
                service_id: sc.aws_s3.client
                bucket_name: HIDDEN
                options:
                    directory: documents
                    create: true
    filesystems:
        sc_documents_fs:
            adapter: sc_documents
            alias: sc_document_storage

services:
    sc.aws_s3.client:
        class: Aws\S3\S3Client
        factory_class: Aws\S3\S3Client
        factory_method: factory
        arguments:
            -
                key: 'HIDDEN'
                secret: 'HIDDEN'
                region: 'eu-central-1'
                version: '2006-03-01'
I keep getting the following error for both read and write:
Error retrieving credentials from the instance profile metadata server. (cURL error 7: Failed to connect to 169.254.169.254 port 80: Network unreachable (see http://curl.haxx.se/libcurl/c/libcurl-errors.html))
The bucket policy is:
"Id": "Policy1445959171046",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "STMT",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::HIDDEN/*",
"Principal": {
"AWS": "*"
}
}
]
I tried with different accounts but none of them worked. I wonder what I am doing wrong. Do I need something else enabled?
Thanks!
It's a problem with the change in how credentials are managed in the latest version of the PHP SDK. Now they use a Credentials object.
If you are using the aws/aws-sdk-php-symfony bundle, there's no need to create the S3 client as a service yourself.
Initialize the SDK in your config with:
aws:
    version: latest
    region: us-east-1
    credentials:
        key: not-a-real-key
        secret: "##not-a-real-secret"
    S3:
        version: '2006-03-01'
And just pass the aws.s3 client to Gaufrette instead of your declared client:
knp_gaufrette:
    adapters:
        sc_documents:
            aws_s3:
                service_id: aws.s3
                bucket_name: HIDDEN
                options:
                    directory: documents
                    create: true
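(The aws.s3 service id is, if I recall the bundle's conventions correctly, the S3 client that aws/aws-sdk-php-symfony registers automatically from the S3 section of the configuration above.)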