I'm writing my own SignalR client in Java and I'm running into some trouble.
To start, I want to implement the PersistentConnection logic. My server code is taken from an example:
public class Battle : PersistentConnection
{
    protected override Task OnConnectedAsync(IRequest request, string connectionId)
    {
        return Connection.Broadcast("Connection " + connectionId + " connected");
    }

    protected override Task OnReconnectedAsync(IRequest request, IEnumerable<string> groups, string clientId)
    {
        return Connection.Broadcast("Client " + clientId + " re-connected");
    }

    protected override Task OnReceivedAsync(IRequest request, string connectionId, string data)
    {
        // return Connection.Broadcast("Connection " + connectionId + " sent ");
        return Connection.Send(connectionId, "Connection " + connectionId + " sent ");
    }

    protected override Task OnDisconnectAsync(string connectionId)
    {
        return Connection.Broadcast("Connection " + connectionId + " disconnected");
    }

    protected override Task OnErrorAsync(Exception error)
    {
        return Connection.Broadcast("Error occurred " + error);
    }
}
Judging by the .NET client code, I understood that in order to connect to the server the client should:
1) Send request to http://myserver/battle/negotiate and get ConnectionId from response
2) Send request to http://myserver/battle/connect?transport=longPolling&connectionId=<received_connection_id>
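In Java, those two steps might look roughly like the sketch below (the ConnectionId field name in the negotiate JSON response is an assumption on my part, based on older SignalR versions):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SignalRConnectSketch {
    static String httpGet(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) body.append(line);
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // Step 1: negotiate and pull the ConnectionId out of the JSON response
        // (use a real JSON parser; the regex is only to keep the sketch short).
        String negotiate = httpGet("http://myserver/battle/negotiate");
        String connectionId = negotiate.replaceAll(".*\"ConnectionId\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // Step 2: open the long-polling connection.
        String connect = httpGet("http://myserver/battle/connect?transport=longPolling&connectionId=" + connectionId);
        System.out.println(connect);
    }
}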
My question is: what should the client do to maintain the connection? How should it listen for the messages the server broadcasts?
Another issue is that I receive no response when I try to send a message from the client to the server after the connection has been established. I send a request to http://myserver/battle/send?transport=longPolling&connectionId=<received_connection_id>. The OnReceivedAsync method is always called, but I get no response (regardless of the data sent).
I'd be grateful for any explanations on my questions and on internal principles of SignalR work.
Thanks in advance.
I've tried to do the same thing you are doing: I've implemented a SignalR client for Android called SignalA. :) Have a look at it on GitHub.
There are several methods of communication used in SignalR. My understanding is that SignalR will use the best one it determines will work with the given connection.
The general idea behind long polling is this: the client sends a request to the server with a long timeout period, say 2 or 5 minutes. If the server has a message to send to the client, it responds to the pending request with that message; otherwise the request eventually times out, at which point the client immediately initiates a new one. So, basically, the client is nearly always sitting in an open call to the server, and the server only ever answers when it has a message for the client. For example, the client could send the request and, say 90 seconds later, the server gets a message for the client and responds right then.
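To make that concrete, a receive loop in Java might look roughly like this (a sketch only: the /poll endpoint and the messageId parameter are assumptions based on the connect URL in the question, not a verified description of the SignalR wire protocol):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class LongPollLoop {
    public static void pollLoop(String baseUrl, String connectionId) throws Exception {
        String messageId = "0";
        while (true) {
            URL url = new URL(baseUrl + "/poll?transport=longPolling"
                    + "&connectionId=" + connectionId + "&messageId=" + messageId);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setReadTimeout(120_000); // hold the request open for up to 2 minutes
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) body.append(line);
                // The JSON body carries the broadcast messages plus a cursor to use as the
                // next messageId; parse it with a real JSON library and update messageId here.
                System.out.println(body);
            } catch (SocketTimeoutException timedOut) {
                // Nothing arrived within the window: just issue the next poll.
            }
        }
    }
}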
For more information, read the Long Polling section of this Wikipedia article: http://en.wikipedia.org/wiki/Push_technology
But for the specifics, you really need to examine the .NET code closely. Hopefully this overview will give you enough to understand what's going on there, though.
I use AWS Neptune.
I'm trying to make my queries transactional using a session client (as described in https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-sessions.html). But when I implement it, closing the client throws an exception. There is a similar issue to mine here: https://groups.google.com/g/janusgraph-users/c/N1TPbUU7Szw
My code looks like:
@Bean
public Cluster gremlinCluster()
{
    return Cluster.build()
        .addContactPoint(GREMLIN_ENDPOINT)
        .port(GREMLIN_PORT)
        .enableSsl(GREMLIN_SSL_ENABLED)
        .keyCertChainFile("classpath:SFSRootCAG2.pem")
        .create();
}

private void runInTransaction()
{
    String sessionId = UUID.randomUUID().toString();
    Client.SessionedClient client = cluster.connect(sessionId);
    try
    {
        client.submit("query...");
    }
    finally
    {
        if (client != null)
        {
            client.close();
        }
    }
}
And exception is:
INFO (ConnectionPool.java:225) - Signalled closing of connection pool on Host{address=...} with core size of 1
WARN (Connection.java:322) - Timeout while trying to close connection on ... - force closing - server will close session on shutdown or expiration.
java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
Are there any suggestions?
This might be a connectivity problem with the server that you aren't able to observe when sending the query, because you are not waiting for the future to complete.
When you do a client.submit("query...");, you receive a future. You need to wait for that future to complete to observe any exceptions (or success).
I would suggest the following:
Try hitting the server's health status endpoint with curl to verify connectivity.
Replace the client.submit("query..."); with client.submit("query...").all().join(); so that any error talking to the server actually surfaces.
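For example, a minimal sketch of the second suggestion, reusing the cluster bean from the question (the traversal string is just a placeholder):

import java.util.List;
import java.util.UUID;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

private void runInTransaction()
{
    String sessionId = UUID.randomUUID().toString();
    Client client = cluster.connect(sessionId);
    try
    {
        // Block until the server has fully processed the query so that any
        // connectivity or query error is raised here instead of being lost.
        List<Result> results = client.submit("g.V().limit(1)").all().join();
        results.forEach(r -> System.out.println(r.getString()));
    }
    finally
    {
        client.close();
    }
}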
I am trying to play Widevine-encrypted content in an Android TV application using ExoPlayer. I have my video URL, which is served from a CDN and acquired with a ticket. I have my Widevine license URL, a ticket, and an auth token for the license server.
I am creating a drmSessionManager and setting the headers needed by the license server as follows:
UUID drmSchemeUuid = C.WIDEVINE_UUID;
mediaDrm = FrameworkMediaDrm.newInstance(drmSchemeUuid);

static final String USER_AGENT = "user-agent";
HttpMediaDrmCallback drmCallback = new HttpMediaDrmCallback("my-license-server", new DefaultHttpDataSourceFactory(USER_AGENT));

HashMap<String, String> keyRequestProperties = new HashMap<>();
keyRequestProperties.put("ticket-header", ticket);
keyRequestProperties.put("token-header", token);
drmCallback.setKeyRequestProperty("ticket-header", ticket);
drmCallback.setKeyRequestProperty("token-header", token);

new DefaultDrmSessionManager(drmSchemeUuid, mediaDrm, drmCallback, keyRequestProperties)
After this, ExoPlayer handles most of the work, and the following breakpoints are hit:
response = callback.executeKeyRequest(uuid, (KeyRequest) request);
in class DefaultDrmSession
return executePost(dataSourceFactory, url, request.getData(), requestProperties) in HttpMediaDrmCallback
I can observe that everything is fine up to this point: the URL is correct and the headers are set properly.
In the following piece of code I can see that the dataSpec is fine and a POST request is about to be made to the license server with the correct data, but when the connection is made the response code comes back as 405.
in class : DefaultHttpDataSource
in method : public long open(DataSpec dataSpec)
this.dataSpec = dataSpec;
this.bytesRead = 0;
this.bytesSkipped = 0;
transferInitializing(dataSpec);
try {
    connection = makeConnection(dataSpec);
} catch (IOException e) {
    throw new HttpDataSourceException("Unable to connect to " + dataSpec.uri.toString(), e,
        dataSpec, HttpDataSourceException.TYPE_OPEN);
}
try {
    responseCode = connection.getResponseCode();
    responseMessage = connection.getResponseMessage();
} catch (IOException e) {
    closeConnectionQuietly();
    throw new HttpDataSourceException("Unable to connect to " + dataSpec.uri.toString(), e,
        dataSpec, HttpDataSourceException.TYPE_OPEN);
}
When using Postman to make a request to the URL, a GET request returns the following body with a response code of 405:
{
    "Message": "The requested resource does not support http method 'GET'."
}
A POST request also returns response code 405, but with an empty body.
In both cases the following header is also returned, which suggests the endpoint should accept both GET and POST requests:
Access-Control-Allow-Methods: GET, POST
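To rule out anything Postman-specific, the license POST can also be replayed from plain Java; a rough sketch (the URL and header names are copied from the code above, and the empty payload is only a placeholder for the real key request bytes):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class LicenseProbe {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://my-license-server").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("ticket-header", "<ticket>");
        conn.setRequestProperty("token-header", "<token>");
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        try (OutputStream out = conn.getOutputStream()) {
            // A real probe would write the key request bytes produced by ExoPlayer here.
            out.write(new byte[0]);
        }
        System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
    }
}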
I have no access to the configuration of the DRM server, and my contacts who are responsible for the DRM server tell me that POST requests must be working fine, since there are clients that have managed to get content from the same DRM server to play.
I am quite confused at the moment and think maybe I am missing some sort of configuration in ExoPlayer, since I am quite new to the concept of DRM.
Any help would be greatly appreciated.
We figured out the solution: the ticket supplied to the DRM license server was wrong. Everything works as it is supposed to now and the content plays. In case anyone runs into the same problem or needs basic Widevine playback code, the setup in the question works fine at the moment.
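For completeness, this is roughly how that drmSessionManager gets wired into a player (a sketch against an older ExoPlayer 2.x API; constructor overloads differ between releases, and context and videoUrl are placeholders from your app):

// Classes come from the com.google.android.exoplayer2.* packages;
// drmSessionManager, context, USER_AGENT and videoUrl are assumed from the question/your app.
DefaultTrackSelector trackSelector = new DefaultTrackSelector();
SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(
        context, trackSelector, new DefaultLoadControl(), drmSessionManager);

DataSource.Factory dataSourceFactory = new DefaultDataSourceFactory(context, USER_AGENT);
MediaSource mediaSource = new DashMediaSource(
        Uri.parse(videoUrl),
        dataSourceFactory,
        new DefaultDashChunkSource.Factory(dataSourceFactory),
        null,  // eventHandler
        null); // eventListener

player.prepare(mediaSource);
player.setPlayWhenReady(true);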
Best regards.
I'm working my way through boost's asio tutorial, looking into their chat example. More specifically, I'm trying to split their chat client from a combined sender and receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::getline(std::cin, str))
    io_service.post(boost::bind(do_write, str));
and
void do_write(std::string str)
{
    boost::asio::async_write(socket, boost::asio::buffer(str), bind(handle_write, ...));
}
The receive section consists of:
void handle_connect(...)
{
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}

void handle_read(...)
{
    std::cout << read_msg_;
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.
The problem here is that your io_service runs out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work the io_service had was to dispatch the handle_connect handler and run it once the connection was established.
std::size_t scheduler::run(asio::error_code& ec)
{
  .....
  mutex::scoped_lock lock(mutex_);

  std::size_t n = 0;
  for (; do_run_one(lock, this_thread, ec); lock.lock())
    if (n != (std::numeric_limits<std::size_t>::max)())
      ++n;
  return n;
}
So you have to provide it with something in its operation queue. This was done with the handle_read_header handler in the original code, as that handler would always be pending until the client receives something from the server.
You can do what you want to do by providing work to the io_service.
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));
I have a problem with the dynamic TCP connection approach (Spring-IP Dynamic FTP sample). When a message is received, I want to get the TCP connection details for that message; this way I can keep track in my application of which sender sent it. But in the service activator I am not able to get this detail.
I also need the connection details when my TCP client is connected to the server, so that if the server wants to initiate the communication, it will have the connection details.
For info, my application has more than one TCP client and server.
I got an answer in another post from Mr. Gary Russell.
Answer
For normal request/reply processing, using an inbound gateway, the framework will take care of routing the service activator reply to the correct socket. It does this by using the connection id header.
If you need to provide arbitrary replies (e.g. more than one reply for a message), you have to use inbound and outbound channel adapters, and your application is responsible for setting up the connection id header.
There are two ways to access the required header in a POJO invoked by a service activator:
public void foo(byte[] payload, @Header(IpHeaders.CONNECTION_ID) String connectionId) {
    ...
}

public void foo(Message<byte[]> message) {
    String connectionId = message.getHeaders().get(...);
}
Then, when you send your replies, you need to set that header somehow.
EDIT
Below Is My Implementation
To get all the connected clients, simply get the ServerConnectionFactory from the context and call getOpenConnectionIds(); it returns the connectionId of each open client connection.
AbstractServerConnectionFactory connFactory = (AbstractServerConnectionFactory) appContext.getBean("server");
List<String> openConns = connFactory.getOpenConnectionIds();
As mentioned above in Gary's response, use this connectionId and set it in the connection header when sending the message to a client. Sample code follows.
MessageChannel serverOutAdapter = null;
try {
    serverOutAdapter = (MessageChannel) appContext.getBean("toObAdapter");
} catch (Exception ex) {
    LOGGER.error(ex.getMessage());
    throw ex;
}

if (null == serverOutAdapter) {
    throw new Exception("output channel not available");
}

AbstractServerConnectionFactory connFactory =
        (AbstractServerConnectionFactory) appContext.getBean("serverConnFactoryBeanId");
List<String> openConns = connFactory.getOpenConnectionIds();
if (null == openConns || openConns.size() == 0) {
    throw new Exception("No Client connection registered");
}

for (String connId : openConns) {
    MessageBuilder<String> mb =
            MessageBuilder.withPayload(message).setHeader(IpHeaders.CONNECTION_ID, connId);
    serverOutAdapter.send(mb.build());
}
Note 1: If you want to send messages from the server, be careful to configure the server and client connection factories so that they do not time out, i.e. put so-keep-alive="true" on the client connection factory.
Note 2: If the server has to communicate with the client, make sure the client connects to the server as soon as the context is loaded, because the Spring-IP client connection factory connects only when the first message is sent out. To connect the client right after context load, put client-mode="true" on the tcp-outbound-channel-adapter in the TCP client context.
I am trying to write a simple Burp extension to capture an HTTP packet, modify it, and forward it to the server. I need to do this for some security testing. I started with code that just prints the received packet; it's attached below and was put together from various Burp tutorials. I configured Eclipse, set my proxy to localhost, and then ran this code. The code runs fine, Burp opens up correctly and also intercepts the packet, but I can't see anything in my IDE console. Please help me out, as I am pretty new to Burp and couldn't understand much from the help available online.
package burp;

public class BurpExtender2 implements IBurpExtender, IHttpListener, IProxyListener
{
    private IBurpExtenderCallbacks callbacks;
    private IExtensionHelpers helpers;

    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
    {
        // keep references for use in the listener callbacks below
        this.callbacks = callbacks;
        this.helpers = callbacks.getHelpers();
        callbacks.registerHttpListener(this);
        callbacks.registerProxyListener(this); // required for processProxyMessage to be called
    }

    public void processHttpMessage(int toolFlag, boolean messageIsRequest, IHttpRequestResponse messageInfo)
    {
        System.out.println(
            (messageIsRequest ? "HTTP request to " : "HTTP response from ") +
            messageInfo.getHttpService() +
            " [" + callbacks.getToolName(toolFlag) + "]");
    }

    public void processProxyMessage(boolean messageIsRequest, IInterceptedProxyMessage message)
    {
        System.out.println(message);
    }
}
I just want to know how I can get the intercepted packet in my code and then forward it.
I wonder why you're trying to write a Burp extension to capture packets. Wouldn't you use a sniffer for that?
Anyway, thanks to your code, I was able to get myself started writing extensions. Here's what I found with your method processHttpMessage
" [" + callbacks.getToolName(toolFlag) + "]"
Something is buggy here, possibly in the Burp Suite (1.5.16); I couldn't manage to get getToolName to print anything even with a hardcoded int, and it's not possible to put a debugger on Burp, so I gave up.
System.out.println(
    (messageIsRequest ? "HTTP request to " : "HTTP response from ") +
    messageInfo.getHttpService());
System.out.println(" [callbacks.getToolName(toolFlag)] = ");
System.out.println(callbacks.getToolName(toolFlag));
Anyway, this will print to the console, but you won't see any value for toolFlag.
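As for the original question of getting hold of the intercepted request and changing it before it is forwarded, here is a minimal sketch using the legacy Burp Extender API. Note that Burp only loads a class named BurpExtender in the burp package, and the actual modification is left as a placeholder:

package burp;

import java.io.PrintWriter;

public class BurpExtender implements IBurpExtender, IProxyListener
{
    private IBurpExtenderCallbacks callbacks;
    private IExtensionHelpers helpers;
    private PrintWriter stdout;

    @Override
    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
    {
        this.callbacks = callbacks;
        this.helpers = callbacks.getHelpers();
        // Burp routes this writer to the extension's output tab, which is more
        // reliable than System.out when Burp is not launched from an IDE.
        this.stdout = new PrintWriter(callbacks.getStdout(), true);
        callbacks.registerProxyListener(this);
    }

    @Override
    public void processProxyMessage(boolean messageIsRequest, IInterceptedProxyMessage message)
    {
        if (!messageIsRequest)
            return;
        IHttpRequestResponse messageInfo = message.getMessageInfo();
        byte[] request = messageInfo.getRequest();
        stdout.println(helpers.bytesToString(request));

        // Modify the raw request bytes here as needed, then hand them back;
        // Burp forwards whatever is set on messageInfo.
        messageInfo.setRequest(request);
    }
}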