I have configured my WSO2 server with a custom hostname by setting:
<HostName>secu.helomyl.in</HostName>
<!--
Host name to be used for the Carbon management console
-->
<MgtHostName>secu.helomyl.in</MgtHostName>
It starts and I can access the URL and reach WSO2, but the error below appears in the logs. Can you please help?
[2017-02-17 14:46:32,513] INFO - QpidServiceComponent Successfully connected to AMQP server on port 5673
[2017-02-17 14:46:32,514] WARN - QpidServiceComponent MQTT Transport is disabled as per configuration.
[2017-02-17 14:46:32,514] INFO - QpidServiceComponent WSO2 Message Broker is started.
[2017-02-17 14:46:32,533] WARN - PropertiesFileInitialContextFactory Unable to create factory:Illegal character in query between indicies 66 and 1
amqp://admin:admin#clientid/carbon?brokerlist='tcp://15.100.133.77 :5673'
^
[2017-02-17 14:46:33,044] INFO - PassThroughHttpSSLListener Starting Pass-through HTTPS Listener...
[2017-02-17 14:46:33,047] INFO - PassThroughListeningIOReactorManager Pass-through HTTPS Listener started on 0.0.0.0
Check api-manager.xml in the wso2am-2.0.0/repository/conf location. There is a space in the configuration below, and that is what causes the issue:
tcp://15.100.133.77 :5673
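After removing the space, the broker list entry from the log above should read as follows (only the space before the port is removed; everything else stays as in your configuration):
amqp://admin:admin#clientid/carbon?brokerlist='tcp://15.100.133.77:5673'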
After running telegraf -debug with this Jolokia config:
[[inputs.jolokia2_agent]]
urls = ["http://<other ip>:8080/jolokia-war-unsecured-1.6.2/"]
[[inputs.jolokia2_agent.metric]]
name = "jr"
mbean = "java.lang:type=Runtime"
paths = ["Uptime"]
I get these errors:
[agent] Initializing plugins
2022-07-02T12:51:57Z D! [agent] Connecting outputs
2022-07-02T12:51:57Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-07-02T12:51:57Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-07-02T12:51:57Z D! [agent] Starting service inputs
2022-07-02T12:52:07Z E! [outputs.influxdb_v2] When writing to [https://MYIP:8086]: Post "https://MYIP:8086/api/v2/write?bucket=monitoringdb&org=myorg": http: server gave HTTP response to HTTPS client
2022-07-02T12:52:07Z D! [outputs.influxdb_v2] Buffer fullness: 81 / 10000 metrics
2022-07-02T12:52:07Z E! [agent] Error writing to outputs.influxdb_v2: failed to send metrics to any configured server(s)
2022-07-02T12:52:07Z E! [outputs.influxdb_v2] When writing to [https://MYIP:8086]: Post "https://MYIP:8086/api/v2/write?bucket=monitoringdb&org=myorg": http: server gave HTTP response to HTTPS client
This error is coming from your InfluxDB output. It says your client is using HTTPS, but the server responded over plain HTTP. In your config you probably specified a URL with https://, while the server is only serving http://.
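A minimal sketch of the corrected output section in telegraf.conf, assuming your InfluxDB instance does not have TLS enabled (the token is a placeholder):
[[outputs.influxdb_v2]]
  ## Match the scheme to the server: plain HTTP here.
  urls = ["http://MYIP:8086"]
  token = "$INFLUX_TOKEN"
  organization = "myorg"
  bucket = "monitoringdb"
Alternatively, enable TLS on the InfluxDB server itself and keep the https:// URL.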
When trying to start Yagna I receive the error below. What can I do? I can get some DEBUG logs if needed.
[2021-05-06T08:45:08Z INFO yagna] Starting yagna service! Version: 0.6.4 (4fc72117 2021-04-15 build #135).
Log is written to /home/user/.local/share/yagna/yagna_rCURRENT.log
[2021-05-06T08:45:08Z INFO yagna] Data directory: /home/user/.local/share/yagna
[2021-05-06T08:45:08Z INFO ya_sb_router::unix] Router listening on: "/tmp/yagna.sock"
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/yagna.db
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/market.db
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/activity.db
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/payment.db
[2021-05-06T08:45:08Z INFO ya_identity::service::identity] using default identity: 0xf5ecffecf053508fe97255e046a04ce21c8ee525
[2021-05-06T08:45:08Z INFO yagna] Identity GSB service successfully activated
[2021-05-06T08:45:08Z INFO ya_metrics::pusher] Metrics pusher started
[2021-05-06T08:45:08Z INFO yagna] Metrics GSB service successfully activated
[2021-05-06T08:45:08Z INFO ya_service_bus::remote_router] trying to connect to: /tmp/yagna.sock
[2021-05-06T08:45:08Z INFO ya_service_bus::connection] started connection to gsb
[2021-05-06T08:45:08Z INFO ya_metrics::pusher] Starting metrics pusher
[2021-05-06T08:45:10Z INFO yagna] Version GSB service successfully activated
[2021-05-06T08:45:10Z INFO ya_net::service] using default identity as network id: 0xf5ecffecf053508fe97255e046a04ce21c8ee525
[2021-05-06T08:45:10Z WARN ya_net::handler] Failed to bind handlers: DNS Error: Not Implemented; retrying in 2 s
[2021-05-06T08:45:12Z WARN ya_net::handler] Failed to bind handlers: DNS Error: Not Implemented; retrying in 4 s
[2021-05-06T08:45:16Z WARN ya_net::handler] Failed to bind handlers: DNS Error: Not Implemented; retrying in 8 s
EDIT: nslookup output:
Server: 10.139.1.1
Address: 10.139.1.1#53
** server can't find _net._tcp.dev.golem.network: NOTIMP
I'm not sure what the reason is here, but it seems your DNS is unable to resolve the _net._tcp.dev.golem.network SRV record, yielding 'Not Implemented'. That is very odd, since Yagna uses Google's DNS servers by default.
When you face this again, please check the output of the following command:
nslookup -q=SRV _net._tcp.dev.golem.network 8.8.8.8
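If nslookup is inconclusive, dig can issue the same query (assuming dig is installed); a healthy resolver returns SRV records instead of NOTIMP:
dig srv _net._tcp.dev.golem.network @8.8.8.8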
The user has trouble reaching Google's DNS with nslookup, so it appears to be something on their end. They are also using a proxy for their connection, so the failure most likely happens there. Closing thread.
I have an API that has been published in the WSO2 API Gateway.
When I test the API, I get this error message in the console:
Exception in thread "pool-65-thread-1"
java.lang.NumberFormatException: For input string: "0:0:0:0:0:0:0:1"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.ipToLong_aroundBody512(APIUtil.java:7851)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.ipToLong(APIUtil.java:7847)
at org.wso2.carbon.apimgt.gateway.throttling.publisher.DataProcessAndPublishingAgent.run_aroundBody4(DataProcessAndPublishingAgent.java:155)
at org.wso2.carbon.apimgt.gateway.throttling.publisher.DataProcessAndPublishingAgent.run(DataProcessAndPublishingAgent.java:141)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
According to the blog, IPv6 needs to be disabled. I disabled IPv6 through the registry and added the IPv6-related option to JAVA_OPTS in the wso2server.bat file.
After restarting, the server still shows IPv6 addresses:
[2020-01-15 14:40:21,171] INFO - PassThroughListeningIOReactorManager Pass-through HTTP Listener started on 0:0:0:0:0:0:0:0:8280
[2020-01-15 14:40:21,172] INFO - PassThroughHttpMultiSSLListener Starting Pass-through HTTPS Listener...
[2020-01-15 14:40:21,192] INFO - PassThroughListeningIOReactorManager Pass-through HTTPS Listener started on 0:0:0:0:0:0:0:0:8243
[2020-01-15 14:40:21,449] INFO - TaskServiceImpl Task service starting in STANDALONE mode...
[2020-01-15 14:40:21,509] INFO - RegistryEventingServiceComponent Successfully Initialized Eventing on Registry
[2020-01-15 14:40:21,652] INFO - JMXServerManager JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi
When I invoke the API again, I get the same error. Can anyone help?
In the latest API Manager versions, we have added IPv6 support for throttling use cases. If you take the latest pack, or a WUM-updated pack of your current version, you should not get this issue.
As a workaround, you can set an IPv4 address in the X-Forwarded-For header when invoking the API.
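For example, with curl (a sketch; the gateway URL, resource path, and token are placeholders for your own values):
curl -k -H "X-Forwarded-For: 10.0.0.1" -H "Authorization: Bearer <access-token>" https://localhost:8243/myapi/1.0/resource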
I have integrated the WSO2 Identity Server and the WSO2 API Manager.
While starting the WSO2 Identity Server, I am getting the error below in the console:
ERROR {org.wso2.carbon.core.init.JMXServerManager} - Could not create the RMI local registry
java.rmi.server.ExportException: Port already in use: 9999; nested exception is:
java.net.BindException: Address already in use: JVM_Bind
at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:341)
at sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:249)
at sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:411)
at sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
at sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:208)
at sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:152)
at sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:137)
Can anyone help?
But it starts successfully, with the following messages:
INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Server : WSO2 Identity Server-5.2.0
[2016-12-27 15:31:13,744] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 57 sec
[2016-12-27 15:31:14,909] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL : https://localhost:9443/carbon/
[2016-12-27 15:31:14,948] INFO {org.wso2.carbon.identity.authenticator.x509Certificate.internal.X509CertificateServiceComponent} - X509 Certificate Servlet activated successfully..
Before I started wso2server.bat, nothing was listening on that port.
My machine's JAVA_OPTS was set to debug mode, which caused WSO2 to start in debug mode and listen on port 9999.
I removed the JAVA_OPTS setting and am now able to start it properly.
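To check for and clear a leftover setting before launching wso2server.bat (the -Xrunjdwp line is a hypothetical example of the kind of debug option that binds a port at startup):
echo %JAVA_OPTS%
REM e.g. -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=9999
set JAVA_OPTS=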
Related issue: WSO2 Identity Server listening to port 9999
I'm using Infinispan to create a distributed cache between two servers and to leverage its failover feature.
I initially tested my webservice on two local instances of Tomcat, using the pre-configured JGroups configuration file provided by infinispan-core-7.0.0.Final.jar. I was able to get the distributed cache working between the two Tomcat instances, since the pre-configured XML files use the loopback IP address.
I then moved the webservice onto two separate servers and have been unable to get them to join the same group. I created my own custom JGroups TCP configuration XML, because using the loopback IP in the pre-configured one was causing some issues.
I don't have much experience in setting up TCP or UDP channels, so I think the problem may lie with my JGroups configuration file (I based it on the pre-configured one).
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
<!-- bind_addr="${jgroups.tcp.address:127.0.0.1}"-->
<TCP
bind_addr="GLOBAL"
bind_port="${jgroups.tcp.port:7800}"
port_range="30"
recv_buf_size="20m"
send_buf_size="640k"
max_bundle_size="31k"
use_send_queues="true"
enable_diagnostics="false"
bundler_type="sender-sends-with-timer"
thread_naming_pattern="pl"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="2"
internal_thread_pool.max_threads="4"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="100"
internal_thread_pool.rejection_policy="Discard"
/>
<!-- Ergonomics, new in JGroups 2.11, are disabled by default in TCPPING until JGRP-1253 is resolved -->
<!--
<TCPPING timeout="3000"
initial_hosts="localhost[7800],localhost[7801]"
port_range="5"
num_initial_members="3"
ergonomics="false"
/>
-->
<!-- bind_addr="${jgroups.bind_addr:127.0.0.1}" -->
<!-- ip_ttl="${jgroups.udp.ip_ttl:2}"-->
<MPING bind_addr="GLOBAL" break_on_coord_rsp="true"
mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
mcast_port="${jgroups.mping.mcast_port:43366}"
num_initial_members="3"/>
<MERGE3/>
<FD_SOCK/>
<FD timeout="3000" max_tries="5"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="20"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"
conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
<pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
<tom.TOA/> <!-- the TOA is only needed for total order transactions-->
<MFC max_credits="2m" min_threshold="0.40"/>
<FRAG2 frag_size="30k"/>
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
</config>
My initial thought is that the problem may be with the bind_addr in the TCP and MPING elements. The two servers are on the same network and can ping each other. Does anyone have any tips or insights on the configuration file above?
If it helps, I've posted the Infinispan/JGroups startup log entries below:
SERVER 1:
INFO JGroupsTransport - ISPN000078: Starting JGroups channel esrs
Nov 20, 2014 3:22:43 AM org.jgroups.logging.JDKLogImpl warn
WARNING: JGRP000014: Discovery.num_initial_members has been deprecated: will be ignored
INFO JGroupsTransport - ISPN000094: Received new cluster view for channel esrs: [udmesrs02-61057|0] (1) [udmesrs02-61057]
INFO JGroupsTransport - ISPN000079: Channel esrs local address is udmesrs02-61057
INFO GlobalComponentRegistry - ISPN000128: Infinispan version: Infinispan 'Guinness' 7.0.0.Final
SERVER 2:
INFO JGroupsTransport - ISPN000078: Starting JGroups channel esrs
Nov 20, 2014 3:20:28 AM org.jgroups.logging.JDKLogImpl warn
WARNING: JGRP000014: Discovery.num_initial_members has been deprecated: will be ignored
INFO JGroupsTransport - ISPN000094: Received new cluster view for channel esrs: [udmesrs01-16389|0] (1) [udmesrs01-16389]
INFO JGroupsTransport - ISPN000079: Channel esrs local address is udmesrs01-16389
INFO GlobalComponentRegistry - ISPN000128: Infinispan version: Infinispan 'Guinness' 7.0.0.Final
There are two possible issues: IPv4/IPv6 problems and UDP routing.
First, try setting -Djava.net.preferIPv4Stack=true on both machines.
If that does not help, check your UDP firewall and routing settings.
If you don't find anything strange there, you'll have to run tcpdump on UDP port 43366 and TCP port 7800 and see whether there is any activity; there should be a multicast packet going from each node at least every 15 s.
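A sketch of both checks, assuming the webservices run under Tomcat (CATALINA_OPTS is one common place for JVM flags; adjust for your setup):
# Force IPv4 on both machines before starting Tomcat
export CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"
# Watch for MPING multicast (UDP 43366) and TCP transport (port 7800) traffic
tcpdump -i any 'udp port 43366 or tcp port 7800'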