AllJoyn Version: Thin core 16.04
aj\allseen\core\ajtcl\test\doorsvc.cc
I am trying to study the security flow of the doorsvc example code.
…
AJ_SetIdleTimeouts(&bus, 10, 4);
What is the purpose of calling "AJ_SetIdleTimeouts(&bus, 10, 4);"?
And is there any detailed information about AJ_SetIdleTimeouts?
Thank you very much
AJ_SetIdleTimeouts() is a method declared in the aj_link_timeout.h file. Its purpose is to adjust the idle-timeout values on the link between the routing node and the thin client. In the AllJoyn Thin Core Library, a thin client connects to a standard client (also known as a routing node) and uses it to route its messages. Over this connection, the routing node sends ping messages to the thin client to make sure it is still up and running.
You can find the method declaration below:
/**
* Set the idle timeouts from the Routing node to the TCL.
*
* @param bus The bus attachment to which the app is connected to
* @param idleTo Requested Idle Timeout for the link. i.e. time after which the Routing node
* must send a DBus ping to Leaf node in case of inactivity.
* Use 0 to leave unchanged.
* @param probeTo Requested Probe timeout. The time from the Routing node sending the DBus
* ping to the expected response.
* Use 0 to leave unchanged.
* @return Return AJ_Status
* - AJ_OK if the request was sent
* - An error status otherwise
*/
AJ_EXPORT
AJ_Status AJ_SetIdleTimeouts(AJ_BusAttachment* bus, uint32_t idleTo, uint32_t probeTo);
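For context, here is a minimal sketch of how the call from doorsvc might be wrapped. It assumes the bus attachment is already connected to a routing node, and the wrapper function name is just illustrative:

/*
 * 'bus' is assumed to be an AJ_BusAttachment that is already connected to a
 * routing node, as in the doorsvc sample. The first argument is the requested
 * idle timeout and the second the probe timeout (see the declaration above);
 * passing 0 leaves a value unchanged.
 */
static void ConfigureLinkTimeouts(AJ_BusAttachment* bus)
{
    AJ_Status status = AJ_SetIdleTimeouts(bus, 10, 4);
    if (status != AJ_OK) {
        /* The request was not sent; the link keeps its previous timeout settings. */
    }
}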
I noticed that a maintainer of gRPC suggested using MAX_CONNECTION_AGE to load balance long-lived gRPC streams in their video, Using gRPC for Long-lived and Streaming RPCs - Eric Anderson, Google. I also found an implemented proposal, which only lists C, Java and Go:
Status: Implemented
Implemented in: C, Java, Go
In practice, how do I use MAX_CONNECTION_AGE in a Python server?
You can use grpc.server()'s options argument:
options – An optional list of key-value pairs (channel_arguments in
gRPC runtime) to configure the channel.
You would use GRPC_ARG_MAX_CONNECTION_AGE_MS, defined in grpc_types.h:
/** Maximum time that a channel may exist. Int valued, milliseconds.
* INT_MAX means unlimited. */
#define GRPC_ARG_MAX_CONNECTION_AGE_MS "grpc.max_connection_age_ms"
For 30 days, as suggested in Using gRPC for Long-lived and Streaming RPCs - Eric Anderson, Google, GRPC_ARG_MAX_CONNECTION_AGE_MS should be 2592000000 (30 * 24 * 60 * 60 * 1000).
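A minimal sketch of what this could look like on the Python side (the executor sizing, port, and the commented-out servicer registration are placeholders, not from the original question):

from concurrent import futures

import grpc

# Channel arguments are passed to grpc.server() as (key, value) tuples; the key
# string matches the GRPC_ARG_MAX_CONNECTION_AGE_MS definition shown above.
THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000  # 2592000000

server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=10),
    options=[
        ("grpc.max_connection_age_ms", THIRTY_DAYS_MS),
        # Optional grace period for RPCs still in flight when the age is reached.
        ("grpc.max_connection_age_grace_ms", 5 * 60 * 1000),
    ],
)
# add_FooServicer_to_server(FooServicer(), server)  # hypothetical generated helper
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()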
We have a public gRPC API. We have a client that is consuming our API based on the REST paradigm of creating a connection (channel) for every request. We suspect that they are not closing this channel once the request has been made.
On the server side, everything functions OK for a while, then something appears to be exhausted. Requests back up on the servers and are not processed - this results in our proxy timing out and sending an unavailable response. Restarting the server fixes the issue, and I can see backed-up requests being flushed in the logs as the servers shut down.
Unfortunately, it seems that there is no way to monitor what is happening on the server side and prune these connections. We have the following keepalive settings, but they don't appear to have an impact:
grpc.KeepaliveParams(keepalive.ServerParameters{
    MaxConnectionIdle:     time.Minute * 5,
    MaxConnectionAge:      time.Minute * 15,
    MaxConnectionAgeGrace: time.Minute * 1,
    Time:                  time.Second * 60,
    Timeout:               time.Second * 10,
})
We also have tried upping MaxConcurrentStreams from the default 250 to 1000, but the pod
Is there any way that we can monitor channel creation, usage and destruction on the server side - if only to prove or disprove that the client's method of consumption is causing the problems?
Verbose logging has not been helpful, as it seems to only log the client activity on the server (i.e. the server consuming pub/sub and logging as a client). I have also looked at channelz, but we have mutual TLS auth and I have been unsuccessful in getting it to work on our production pods.
We have instructed our client to use a single channel, and if that is not possible, to close the channels that they are creating, but they are a corporation and move very slowly. We've also not been able to examine their code. We only know that they are developing with dotnet. We're also unable to replicate the behaviour running our own Go client at similar volumes.
The culprit is MaxConnectionIdle: it will always create a new http2server after the specified amount of time, and eventually your service will crash due to a goroutine leak.
Remove MaxConnectionIdle and MaxConnectionAge, then (preferably) make sure both ServerParameters and ClientParameters are using the same Time and Timeout.
const (
    Time    = 5 * time.Second // wait X seconds, then send ping if there is no activity
    Timeout = 5 * time.Second // wait for ping back
)
// server code...
grpc.KeepaliveParams(keepalive.ServerParameters{
    Time:                  Time,
    Timeout:               Timeout,
    MaxConnectionAgeGrace: 10 * time.Second,
})
// client code...
grpc.WithKeepaliveParams(keepalive.ClientParameters{
    Time:                Time,
    Timeout:             Timeout,
    PermitWithoutStream: true,
}),
I have a question regarding Corda 3.1 and using the network map to see whether a node is up - is it generally a good idea to use it for that?
Based on these notes, https://docs.corda.net/network-map.html#http-network-map-protocol, since the network map participant data is polled (in case our cached data has expired), it should be technically possible to do that.
Do you see any drawbacks to implementing it that way?
If the node is configured with the compatibilityZoneURL config then it first uploads its own signed NodeInfo to the server (and each time it changes on startup) and then proceeds to download the entire network map. The network map consists of a list of NodeInfo hashes. The node periodically polls for the network map (based on the HTTP cache expiry header) and any new entries are downloaded and cached. Entries which no longer exist are deleted from the node’s cache.
It is not a good idea to use the network map as a liveness service.
The network does have an event horizon parameter. If the node is offline for longer than the length of time specified by the event horizon parameter, it is ejected from the network map. However, the event horizon would usually be days (e.g. 30 days).
Instead, you can just ping the node's P2P port using a tool like Telnet. If you run telnet <node host> <P2P port> and the node is up, you'll see something like:
Trying ::1...
Connected to localhost.
Escape character is '^]'.
If the node is down, you'll see something like:
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
Alternatively, if you want to check liveness automatically from within a flow, you can define a subflow like the one below. This flow returns a boolean indicating whether a given party on the network is currently reachable.
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.CordaX500Name
import java.io.IOException
import java.net.InetSocketAddress
import java.net.Socket

@InitiatingFlow
class IsLiveFlow(val otherPartyName: CordaX500Name) : FlowLogic<Boolean>() {
    @Suspendable
    override fun call(): Boolean {
        val otherPartyInfo = serviceHub.networkMapCache.getNodeByLegalName(otherPartyName)!!
        val otherPartyP2PAddress = otherPartyInfo.addresses.single()
        return try {
            Socket().use { socket ->
                socket.connect(InetSocketAddress(otherPartyP2PAddress.host, otherPartyP2PAddress.port), 1000)
                true
            }
        } catch (e: IOException) {
            false
        }
    }
}
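From within another flow's call() body, that subflow could then be invoked like this (variable names are illustrative):

// inside another FlowLogic's @Suspendable call()
val counterpartyIsUp: Boolean = subFlow(IsLiveFlow(otherPartyName))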
I want to get the contact URI of a specific peer in Asterisk from the dialplan. I have two peers, 2001 and 2001, and I want the contact URI of both peers. How can I get this?
You can get any SIP header (including Contact, of course) by using the SIP_HEADER function.
h2*CLI> core show function SIP_HEADER
-= Info about function 'SIP_HEADER' =-
[Synopsis]
Gets the specified SIP header from an incoming INVITE message.
[Description]
Since there are several headers (such as Via) which can occur multiple times,
SIP_HEADER takes an optional second argument to specify which header with
that name to retrieve. Headers start at offset '1'.
[Syntax]
SIP_HEADER(name[,number])
[Arguments]
number
If not specified, defaults to '1'.
[See Also]
Not available
However, it will give the Contact header of the INVITE, not the registration info. You can use the SIPPEER function to get peer info.
h2*CLI> core show function SIPPEER
-= Info about function 'SIPPEER' =-
[Synopsis]
Gets SIP peer information.
[Description]
Not available
[Syntax]
SIPPEER(peername[,item])
[Arguments]
item
ip - (default) The ip address.
port - The port number.
mailbox - The configured mailbox.
context - The configured context.
expire - The epoch time of the next expire.
dynamic - Is it dynamic? (yes/no).
callerid_name - The configured Caller ID name.
callerid_num - The configured Caller ID number.
callgroup - The configured Callgroup.
pickupgroup - The configured Pickupgroup.
codecs - The configured codecs.
status - Status (if qualify=yes).
regexten - Registration extension.
limit - Call limit (call-limit).
busylevel - Configured call level for signalling busy.
curcalls - Current amount of calls. Only available if call-limit
is set.
language - Default language for peer.
accountcode - Account code for this peer.
useragent - Current user agent id for peer.
maxforwards - The value used for SIP loop prevention in outbound
requests
chanvar[name] - A channel variable configured with setvar for this
peer.
codec[x] - Preferred codec index number <x> (beginning with
zero).
[See Also]
Not available
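For example, a dialplan snippet along these lines would log a peer's registered address (the extension number is illustrative, and this prints the registered IP and port rather than a full URI):

; Log the address that peer 2001 registered from
exten => 100,1,NoOp(Peer 2001 is registered at ${SIPPEER(2001,ip)}:${SIPPEER(2001,port)})
 same => n,Hangup()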
I have a client which is subscribed to several multicast feeds via a third-party library with a callback. The third-party library does something like this:
sockaddr_in senderAddr;
socklen_t len = sizeof(struct sockaddr_in);
for (int fd : allMulticastFeeds)
{
    ssize_t rc = recvfrom(fd, buf, sizeof(buf), 0, (struct sockaddr*)&senderAddr, &len);
    (*func)(buf, rc, &senderAddr, 0, stuff);
}
Several listeners subscribe to me, and my job is to parse the messages contained in buf and call back the appropriate clients subscribed to me. One of the options clients give me when they subscribe is whether or not they want to receive messages originating from the local host. Because different subscribers might want different behaviour, I simply cannot translate this into disabling multicast; for each client I must check whether it requested local messages, and if not, hand a message over only if I can verify from the sockaddr_in that it is not local.
My approach was to do the following in the constructor of my service:
sockaddr_in self;
hostent *he;
char local[HOST_NAME_MAX];
gethostname(local, sizeof(local));
he = gethostbyname(local);
if (he)
{
    memcpy(&self.sin_addr, he->h_addr_list[0], he->h_length);
    selfAddr_ = self.sin_addr.s_addr;
}
where selfAddr_ is a member variable of type long. Then, when I get the callback described above from the network listener, I do something like this:
//code to get the correct listener for this type of packet after parsing it
if (listener->AskedForLocal || selfAddr_ != s_addr)
listener->onFoo(bla,bla);
where s_addr is the sin_addr.s_addr contained in the sockaddr_in* I get from the third-party library's recvfrom callback. This seemed to work when I tested it, but I have a feeling it fails sometimes, and that might have to do with the ad hoc use of the first element of he->h_addr_list. Is this method unreliable, and if so, is the only reliable approach to check every element of he->h_addr_list against the incoming s_addr?
Looking at the constructor for your service and trying it on Ubuntu 10.04 LTS running as a VirtualBox image, I get selfAddr_ = 0x10107f.
This corresponds to the loopback address 127.0.0.1.
This might not be what you're looking for - I suspect you're probably after a specific network interface's IP address; if so, try this answer.
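If the concern is the ad hoc use of h_addr_list[0], another option is to collect the address of every local interface once (for example with getifaddrs()) and compare the incoming s_addr against that whole set. A minimal sketch, not taken from the linked answer (the helper name is made up):

#include <ifaddrs.h>
#include <netinet/in.h>
#include <cstdint>
#include <set>

// Gather the IPv4 address of every local interface, in network byte order,
// so an incoming senderAddr->sin_addr.s_addr can be checked against all of them.
static std::set<uint32_t> collectLocalAddrs()
{
    std::set<uint32_t> addrs;
    ifaddrs* ifap = nullptr;
    if (getifaddrs(&ifap) == 0) {
        for (ifaddrs* ifa = ifap; ifa != nullptr; ifa = ifa->ifa_next) {
            if (ifa->ifa_addr != nullptr && ifa->ifa_addr->sa_family == AF_INET) {
                const sockaddr_in* sin = reinterpret_cast<const sockaddr_in*>(ifa->ifa_addr);
                addrs.insert(sin->sin_addr.s_addr);
            }
        }
        freeifaddrs(ifap);
    }
    return addrs;
}

// In the callback, a packet would then be local if localAddrs_.count(s_addr) != 0.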
Hope this helps.
Sorry, but somehow I don't see 0x10107f corresponding to 127.0.0.1... Shouldn't that be 0x0100007F?