ERROR: ws_common: websphereHandleRequest: Failed to handle request rc=2
ws_common: websphereShouldHandleRequest: Config was successfully reloaded
025400da 00000809 - ERROR: ws_common: websphereGetStream: Failed to connect to app server on host
ERROR: ws_common: websphereExecute: Failed to create the stream
025400da 00000809 - ERROR: ws_common: websphereHandleRequest: Failed to execute the transaction to
025400da 00000809 - ERROR: ws_common: websphereWriteRequestReadResponse: Failed to find an app server to handle this request
025400da 00000809 - ERROR: ws_common: websphereRequestHandler: Failed to find an app server to handle this request
025400da 00000809 - ERROR: ws_common: websphereHandleRequest: Failed to handle request rc=2
I am developing a Fabric application, and I am facing an issue where the gateway is not able to get the network.
366 | await gateway.connect(ccp, gatewayOpts);
367 | const network = await gateway.getNetwork(channelName);
| ^
368 | const contract = network.getContract(chaincodeName);
On line 367 I am getting an error.
The error is the following:
2021-07-24T13:39:03.866Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: peer0.org1.example.com, url:grpcs://localhost:7051, connected:false, connectAttempted:true
2021-07-24T13:39:03.867Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer0.org1.example.com url:grpcs://localhost:7051 timeout:3000
2021-07-24T13:39:03.893Z - info: [NetworkConfig]: buildPeer - Unable to connect to the endorser peer0.org1.example.com due to Error: Failed to connect before the deadline on Endorser- name: peer0.org1.example.com, url:grpcs://localhost:7051, connected:false, connectAttempted:true
at checkState (/home/user/Documents/Learnings/aries-learning/aries-javascript/aries-framework-javascript/node_modules/@grpc/grpc-js/src/client.ts:169:18)
at Timeout._onTimeout (/home/user/Documents/Learnings/aries-learning/aries-javascript/aries-framework-javascript/node_modules/@grpc/grpc-js/src/channel.ts:579:9)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7) {
connectFailed: true
}
2021-07-24T13:39:07.452Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Discoverer- name: peer0.org1.example.com, url:grpcs://localhost:7051, connected:false, connectAttempted:true
2021-07-24T13:39:07.452Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer0.org1.example.com url:grpcs://localhost:7051 timeout:3000
2021-07-24T13:39:07.453Z - error: [ServiceEndpoint]: ServiceEndpoint grpcs://localhost:7051 reset connection failed :: Error: Failed to connect before the deadline on Discoverer- name: peer0.org1.example.com, url:grpcs://localhost:7051, connected:false, connectAttempted:true
2021-07-24T13:39:07.453Z - error: [DiscoveryService]: send[mychannel] - no discovery results
Can anyone help me resolve this?
Thanks
That can be many things, but it is quite often a TLS issue. Check the peer logs: you may see a bad TLS handshake message or similar. Alternatively, you may notice that the peer log shows no connection attempt at all, in which case it is more likely DNS or a more general connectivity problem.
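If TLS and basic connectivity check out, it is also worth confirming that the gateway's discovery options match how the peers are actually reachable. Below is a minimal sketch of the kind of gatewayOpts used at the connect call above, assuming the fabric-network v2.x SDK; the wallet path and the identity label "appUser" are illustrative placeholders, not taken from the original code.

const { Gateway, Wallets } = require('fabric-network');

async function connectGateway(ccp, channelName, chaincodeName) {
    // Illustrative wallet location and identity label.
    const wallet = await Wallets.newFileSystemWallet('./wallet');
    const gatewayOpts = {
        wallet,
        identity: 'appUser',
        // If the connection profile names peers by container hostname
        // (peer0.org1.example.com) but their ports are published on localhost,
        // asLocalhost: true makes discovery rewrite the addresses to localhost.
        discovery: { enabled: true, asLocalhost: true },
    };
    const gateway = new Gateway();
    await gateway.connect(ccp, gatewayOpts);
    const network = await gateway.getNetwork(channelName);
    const contract = network.getContract(chaincodeName);
    return { gateway, contract };
}

If the peers run in Docker (for example the Fabric test network), you can check the peer container logs with docker logs peer0.org1.example.com and watch for a TLS handshake error appearing at the moment the client tries to connect.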
I have compiled this code:
program mpisimple
  implicit none
  include 'mpif.h'          ! MPI constants and interfaces
  integer ierr
  call mpi_init(ierr)       ! initialise the MPI environment
  write(6,*) 'Hello World!'
  call mpi_finalize(ierr)   ! shut MPI down cleanly
end program mpisimple
using the command: mpif90 -o helloworld simplempi.f90
When I run with this command:
$ mpiexec -np 1 ./helloworld
Hello World!
it works fine, as you can see. But when I run with any other number of processes (here 4), I get the errors below and basically have to press Ctrl+C to kill it.
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(805).....: fail failed
MPID_Init(1859)...........: channel initialization failed
MPIDI_CH3_Init(126).......: fail failed
MPID_nem_init_ckpt(858)...: fail failed
MPIDI_CH3I_Seg_commit(427): PMI_KVS_Get returned 4
In: PMI_Abort(69777679, Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(805).....: fail failed
MPID_Init(1859)...........: channel initialization failed
MPIDI_CH3_Init(126).......: fail failed
MPID_nem_init_ckpt(858)...: fail failed
MPIDI_CH3I_Seg_commit(427): PMI_KVS_Get returned 4)
forrtl: severe (174): SIGSEGV, segmentation fault occurred
What could be the problem? I am doing this on a Linux HPC system.
I figured out why this happened. The system I am using does not require users to submit single-core jobs through the scheduler, but it does require it for multi-core jobs. Once the mpiexec command was submitted through a PBS batch script, the errors went away and the output was as expected.
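For anyone hitting the same errors, here is a minimal sketch of that kind of PBS batch script, assuming Torque-style directives; the job name, resource line, and walltime are placeholders, and the exact resource syntax (nodes/ppn versus select/ncpus) depends on the scheduler version at your site.

#!/bin/bash
#PBS -N helloworld            # job name (placeholder)
#PBS -l nodes=1:ppn=4         # one node, four cores (Torque-style syntax)
#PBS -l walltime=00:05:00     # placeholder time limit
cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
mpiexec -np 4 ./helloworld

The script would then be submitted with qsub (for example, qsub helloworld.pbs), so that mpiexec runs inside an allocation where PMI can set up the four processes.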
I'm seeing an IO error on the Riak console. I'm not sure what the cause is, since the directory is owned by the riak user. Here's what the error looks like.
2018-01-25 23:18:06.922 [info] <0.2301.0>#riak_kv_vnode:maybe_create_hashtrees:234 riak_kv/730750818665451459101842416358141509827966271488: unable to start index_hashtree: {error,{{badmatch,{error,{db_open,"IO error: lock /var/lib/riak/anti_entropy/v0/730750818665451459101842416358141509827966271488/LOCK: already held by process"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,725}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,246}]},{riak_kv_index_hashtree,do_new_tree,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,712}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,565}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,308}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
2018-01-25 23:18:06.927 [info] <0.2315.0>#riak_kv_vnode:maybe_create_hashtrees:234 riak_kv/890602560248518965780370444936484965102833893376: unable to start index_hashtree: {error,{{badmatch,{error,{db_open,"IO error: lock /var/lib/riak/anti_entropy/v0/890602560248518965780370444936484965102833893376/LOCK: already held by process"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,725}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,246}]},{riak_kv_index_hashtree,do_new_tree,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,712}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,565}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,308}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
2018-01-25 23:18:06.928 [error] <0.27284.0> CRASH REPORT Process <0.27284.0> with 0 neighbours exited with reason: no match of right hand value {error,{db_open,"IO error: lock /var/lib/riak/anti_entropy/v0/890602560248518965780370444936484965102833893376/LOCK: already held by process"}} in hashtree:new_segment_store/2 line 725 in gen_server:init_it/6 line 328
Any ideas on what the problem could be?
My WSO2 API Manager is continuously logging the messages below. How can I resolve this?
[2016-06-10 02:12:47,630] ERROR - AsyncDataPublisher Reconnection failed for for tcp://localhost:7612
[2016-06-10 02:13:17,852] ERROR - AsyncDataPublisher Reconnection failed for for tcp://localhost:7612
[2016-06-10 02:13:47,798] ERROR - AsyncDataPublisher Reconnection failed for for tcp://localhost:7612