How to forward incoming data via REST to an SSE stream in Quarkus

In my setting I want to forward certain status changes via an SSE channel (Server-Sent Events). The status changes are initiated by calling a REST endpoint, so I need to forward the incoming status change to the SSE stream.
What is the best/simplest way to accomplish this in Quarkus?
One solution I can think of is to use an EventBus (https://quarkus.io/guides/reactive-messaging). The SSE endpoint would subscribe to the status changes and push them through the SSE channel, while the status-change endpoint publishes the corresponding events.
Is this a viable solution? Are there other (simpler) solutions? Do I need to use the reactive stack at all to accomplish this?
Any help is much appreciated!

The easiest way is to use RxJava as the stream provider. First you need to add the RxJava dependency. It may already come in transitively through one of the reactive Quarkus extensions such as Kafka, or you can add it directly (if you don't need any streaming libraries):
<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.2.19</version>
</dependency>
Here's an example of how to send a random double value every second:
@GET
@Path("/stream")
@Produces(MediaType.SERVER_SENT_EVENTS)
@SseElementType("text/plain")
public Publisher<Double> stream() {
    return Flowable.interval(1, TimeUnit.SECONDS)
                   .map(tick -> new Random().nextDouble());
}
We create a new Flowable which fires every second, and on each tick we generate the next random double. Look into the other ways of creating a Flowable, such as Flowable.fromFuture(), to adapt this to your specific logic.
P.S. The code above creates a new Flowable each time you query the endpoint; I wrote it this way to save space. In your case I assume you'll have a single source of events that you build once and reuse for every request to the endpoint.
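To illustrate that last point, here is a minimal Java sketch of such a shared source: a single PublishProcessor instance that the REST endpoint feeds and every SSE request subscribes to (the class and method names are hypothetical):

@ApplicationScoped
public class StatusEvents {

    // One shared processor: built once, reused by every SSE request.
    private final PublishProcessor<String> processor = PublishProcessor.create();

    // Called by the REST endpoint that receives status changes.
    public void publish(String status) {
        processor.onNext(status);
    }

    // Returned by the SSE endpoint; all clients share this stream.
    public Flowable<String> stream() {
        return processor.onBackpressureBuffer();
    }
}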

Dmytro, thanks for pointing me in the right direction.
I have opted for Mutiny in combination with Kotlin. My code now looks like this:
data class DeviceStatus(var status: Status = Status.OFFLINE) {
    enum class Status { OFFLINE, CONNECTED, ANALYZING, MAINTENANCE }
}

@ApplicationScoped
class DeviceStatusService {
    var deviceStatusProcessor: PublishProcessor<DeviceStatus> = PublishProcessor.create()
    var deviceStatusQueue: Flowable<DeviceStatus> = Flowable.fromPublisher(deviceStatusProcessor)

    fun pushDeviceStatus(deviceStatus: DeviceStatus) {
        deviceStatusProcessor.onNext(deviceStatus)
    }

    fun getStream(): Multi<DeviceStatus> {
        return Multi.createFrom().publisher(deviceStatusQueue)
    }
}

@Path("/deviceStatus")
class DeviceStatusResource {
    private val LOGGER: Logger = Logger.getLogger("DeviceStatusResource")

    @Inject
    @field:Default
    lateinit var deviceStatusService: DeviceStatusService

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    fun status(status: DeviceStatus): Response {
        LOGGER.info("POST /deviceStatus " + status.status)
        deviceStatusService.pushDeviceStatus(status)
        return Response.ok().build()
    }

    @GET
    @Path("/eventStream")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType(MediaType.APPLICATION_JSON)
    fun stream(): Multi<DeviceStatus> {
        return deviceStatusService.getStream()
    }
}
As a minimal setup, the service could use the deviceStatusProcessor directly as the publisher. However, the Flowable adds buffering.
Comments on the implementation are welcome.
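For anyone wanting to verify the stream end to end, here is a small client sketch using the standard JAX-RS SseEventSource API; the URL assumes the default Quarkus port and the paths above:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.sse.SseEventSource;

public class DeviceStatusClient {
    public static void main(String[] args) throws InterruptedException {
        Client client = ClientBuilder.newClient();
        WebTarget target = client.target("http://localhost:8080/deviceStatus/eventStream");
        try (SseEventSource source = SseEventSource.target(target).build()) {
            // Print each JSON-encoded DeviceStatus as it arrives.
            source.register(event -> System.out.println(event.readData()));
            source.open();
            Thread.sleep(60_000); // listen for one minute
        }
        client.close();
    }
}

POSTing a status to /deviceStatus while this client is running should print it on the console.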

Related

Ethereum Chainlink HTTP Get not pinging my HTTP endpoint

I am attempting to have my Ethereum smart contract connect to an external HTTP endpoint using Chainlink. Following along with Chainlink's documentation (https://docs.chain.link/docs/advanced-tutorial/), I deployed this contract onto the Rinkeby testnet.
pragma solidity ^0.8.7;

import "github.com/smartcontractkit/chainlink/blob/develop/contracts/src/v0.8/ChainlinkClient.sol";

// MyContract inherits the ChainlinkClient contract to gain the
// functionality of creating Chainlink requests
contract getHTTP is ChainlinkClient {
    using Chainlink for Chainlink.Request;

    bytes32 private thisDoesNotWork;
    address private owner;
    address private ORACLE_ADDRESS = 0x718Cc73722a2621De5F2f0Cb47A5180875f62D60;
    bytes32 private JOBID = stringToBytes32("86b489ec4d84439c96181a8df7b22223");
    string private url = "<myHTTPAddressAsString>";
    // This endpoint URL is hard coded in my contract, and stored as a string (as in the example code).
    // I control it and can have it reply with whatever I want, which might be an issue,
    // returning data in a format that the oracle rejects.
    uint256 constant private ORACLE_PAYMENT = 100000000000000000;

    constructor() {
        // Set the address for the LINK token for the network
        setPublicChainlinkToken();
        owner = msg.sender;
    }

    function requestBytes() public onlyOwner {
        Chainlink.Request memory req = buildChainlinkRequest(JOBID, address(this), this.fulfill.selector);
        req.add("get", url);
        sendChainlinkRequestTo(ORACLE_ADDRESS, req, ORACLE_PAYMENT);
    }

    function fulfill(bytes32 _requestId, bytes32 recVal)
        public
        recordChainlinkFulfillment(_requestId)
    {
        thisDoesNotWork = recVal;
    }

    function cancelRequest(
        bytes32 _requestId,
        uint256 _payment,
        bytes4 _callbackFunctionId,
        uint256 _expiration
    )
        public
        onlyOwner
    {
        cancelChainlinkRequest(_requestId, _payment, _callbackFunctionId, _expiration);
    }

    // withdrawLink allows the owner to withdraw any extra LINK on the contract
    function withdrawLink() public onlyOwner {
        LinkTokenInterface link = LinkTokenInterface(chainlinkTokenAddress());
        require(link.transfer(msg.sender, link.balanceOf(address(this))), "Unable to transfer");
    }

    modifier onlyOwner() {
        require(msg.sender == owner);
        _;
    }

    // A helper function to make the string a bytes32
    function stringToBytes32(string memory source) private pure returns (bytes32 result) {
        bytes memory tempEmptyStringTest = bytes(source);
        if (tempEmptyStringTest.length == 0) {
            return 0x0;
        }
        assembly { // solhint-disable-line no-inline-assembly
            result := mload(add(source, 32))
        }
    }
}
I found a node on the Chainlink market (https://market.link/jobs/529c7194-c665-4b30-8d25-5321ea49d9cc) that is currently active on Rinkeby (according to Etherscan it has been active within the past 3 days and is presumably still working).
I deploy the contract and fund it with LINK. I call the requestBytes() function through Remix and everything works as expected: MetaMask pays the gas, the LINK is removed from my contract, I get a transaction hash, and no errors.
However, my endpoint never logs a request attempt, the oracle never lists a transaction on its Etherscan page, and my data is not present.
I have attempted to use other jobs from the Chainlink market with similar outcomes.
I have also attempted to use other HTTP endpoints, like the ones from the Chainlink examples, with similar outcomes. I doubt this is the issue, though, since it appears the HTTP request is never even made (my HTTP endpoint does not log any request).
Without an error message, and being new to Web3 dev, I am not sure where to start debugging. I found this comment on GitHub: https://github.com/smartcontractkit/documentation/issues/513 and implemented the suggestion there without luck.
I also found this: Chainlink - Job not being fulfilled but this was not helpful either.
My current considerations for where the error might be:
The oracles are whitelisted and reject my request outright. I have considered creating my own node but want to avoid that if possible at this stage.
I have a type error in how I am formatting the request in my contract, like the example in the GitHub exchange I found and referenced above.
EDIT: I am also open to other options beyond Chainlink to connect my contract to an HTTP GET endpoint, if anyone has any suggestions. Thanks!
I've been working on something similar recently and would suggest you try using the Kovan network and the oracle Chainlink runs there. More specifically, I think it would be a good idea to confirm you can get things working using the API, oracle, and job ID listed in the example on the page you are following:
https://docs.chain.link/docs/advanced-tutorial/#contract-example
Once you get that example working, you can modify it for your usage. The job ID in that tutorial is for returning a (multiplied) uint256, which I think is not what you want for your API, since it sounds like you want bytes32. So when you try it with your API that returns bytes32, the job ID would be 7401f318127148a894c00c292e486ffd, as seen here:
https://docs.chain.link/docs/decentralized-oracles-ethereum-mainnet/
Another thing that might be your issue is your API. You say you control what it returns; I think it might have to return a response in bytes format, like Patrick says in his response (and the comments on it) here:
Get a string from any API using Chainlink Large Response Example
Hope this is helpful. If you cannot get the example in the Chainlink docs to work, let me know.

gRPC: How can I distinguish bi-streaming clients at server side?

In this tutorial and example code, a server can call the onNext() method on every stream observer, which broadcasts messages to all clients bi-streaming with the server. But there is no method to identify which observer corresponds to which client. How can a server push a message to a specific client instead of broadcasting?
According to this answer it is possible to map each observer if a client id is provided via metadata. It seems the const auto clientMetadata = context->client_metadata(); part does the trick, but I'm working with Java, not C++. Is there a Java equivalent for getting the metadata on the server side?
The answer depends a bit on how the clients will be identified. If the initial request provided a handle (like a username, but not registered ahead-of-time), then you could just wait for the first onNext():
public StreamObserver<Chat.ChatMessage> chat(StreamObserver<Chat.ChatMessageFromServer> responseObserver) {
    return new StreamObserver<Chat.ChatMessage>() {
        @Override
        public void onNext(Chat.ChatMessage value) {
            String userHandle = value.getHandle();
            // observers would now be a map, not a set
            observers.put(userHandle, responseObserver);
            ...
Let's say instead that all users are logged in, and provide a token in the headers, like OAuth. Then you would use an interceptor to authenticate the user and Context to propagate it to the application, as in https://stackoverflow.com/a/40113309/4690866.
public StreamObserver<Chat.ChatMessage> chat(StreamObserver<Chat.ChatMessageFromServer> responseObserver) {
    // USER_IDENTITY is a Context.Key, also used by the interceptor
    User user = USER_IDENTITY.get();
    observers.put(user.getName(), responseObserver);
    return new StreamObserver<Chat.ChatMessage>() {
        ...
The first one is easier/nicer when the identification only applies to this one RPC. The second one is easier/nicer when the identification applies to many RPCs.
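For reference, a minimal sketch of what such an interceptor could look like in Java, assuming a hypothetical AuthService that turns a token into a User (the header name and key names are assumptions):

import io.grpc.*;

public class AuthInterceptor implements ServerInterceptor {

    // Same key the service implementation reads with USER_IDENTITY.get()
    public static final Context.Key<User> USER_IDENTITY = Context.key("user-identity");

    private static final Metadata.Key<String> AUTH_HEADER =
            Metadata.Key.of("authorization", Metadata.ASCII_STRING_MARSHALLER);

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
        String token = headers.get(AUTH_HEADER);
        User user = AuthService.verify(token); // hypothetical token validation
        if (user == null) {
            call.close(Status.UNAUTHENTICATED.withDescription("invalid token"), new Metadata());
            return new ServerCall.Listener<ReqT>() {};
        }
        // Propagate the authenticated user to the handler via the Context
        Context ctx = Context.current().withValue(USER_IDENTITY, user);
        return Contexts.interceptCall(ctx, call, headers, next);
    }
}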

Axon Sagas duplicates events in event store when replaying events to new DB

We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it schedules a couple of tasks. The tasks are triggered and processed by yet another saga (TaskSaga - out of scope of this question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands each time.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
The event store is running on Axon Server.
The Spring Boot application autoconfigures the Axon components.
The projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the DB with the projection tables on each application node?
I thought that the event store could be replayed onto a new application node's DB at any time without changing the event history, so I could run as many nodes as I wish. Or that I could remove the projection DB at any time and run the application, causing the events to be projected to the fresh DB again. Or is this not true?
In general, my problem is that one event produces a command leading to new (duplicated) events being produced. Should I avoid this "chaining" of events to avoid duplication?
THANKS!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder()
                .entityManagerProvider(new SimpleEntityManagerProvider(entityManager))
                .build();
    }
}
The CreateOrderCommand is received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event):
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
        .andThenApply(() -> OrderStateChangedEvent.builder()
            .applicationId(cmd.getOrderId())
            .newState(OrderState.NEW)
            .build());
}
The Order aggregate sets the properties:
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent event) {
    this.state = event.getNewState();
}
The OrderStateChangedEvent is handled by a saga that schedules a couple of tasks for an order in the particular state:
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);
    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }
    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}
I think I can help you in this situation.
This happens because the TrackingToken used by the TrackingEventProcessor, which supplies all the events to your saga instances, is initialized to the beginning of the event stream. Because of this, the TrackingEventProcessor starts from the beginning of time, thus dispatching all your commands a second time.
There are a couple of things you could do to resolve this:
You could, instead of wiping the entire database, wipe only the projection tables and leave the token table intact.
You could configure the initialTrackingToken of a TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but requires some care from an operations perspective. Option 2 leaves it in the hands of a developer, which is potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;
TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your saga, calling the andInitialTrackingToken() function to ensure a head token is created if no token is present.
I hope this helps you out Tomáš!
Steven's solution works like a charm, but only for sagas. For those who want to achieve the same effect in a classic @EventHandler (to skip execution on replay), there is a way. First you have to find out how your tracking event processor is named - I found it in the AxonDashboard (port 8024 on a running AxonServer) - usually it is the location of the component with the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying some events in @EventHandler
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}

Send to certain connections only Spring Websockets

I am using Grails/Groovy, so excuse the odd syntax. I am also new to using websockets, so please let me know if I am going about this in the wrong way:
Using Spring websockets I am able to send messages to certain subscribed users via
SimpMessagingTemplate brokerMessagingTemplate

users.each {
    brokerMessagingTemplate.convertAndSendToUser(it.id, "/topic/path", data)
}
However, I want to send messages only to subscribed users that have passed a certain value/id to the server, over and above their user id. A connection is initialised on page load, so I figured I could add a STOMP header value which passes this information to the server, and the server would only send messages to connections that match it.
var socket = new SockJS("/url/stomp");
var client = Stomp.over(socket);
var headers = {'additionalId': additionalId};
client.connect({}, function() {
    client.subscribe("/user/topic/path", function (data) {
    }, headers);
});
Firstly, I don't know whether adding a header value is the right way to do this, and secondly I'm not sure how to make the SimpMessagingTemplate send only to those that have provided the additional id in the header.
Instead of using a header, you can use a DestinationVariable, like so:
brokerMessagingTemplate.convertAndSend("/topic/something.${additionalId}".toString(), data)
and use
@MessageMapping("/something.{additionalId}")
protected String chatMessage(@DestinationVariable String additionalId, Principal principal, String data) { ... }
Additionally, you may want to limit who can subscribe to a specific /something.{additionalId} destination by implementing a topic subscription interceptor where you can validate the Principal; see the sketch below.
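Spring has no ready-made TopicSubscriptionInterceptor class; a common approach is a ChannelInterceptor on the inbound channel that rejects unauthorized SUBSCRIBE frames. A minimal Java sketch, assuming a hypothetical isAllowed() check:

import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.stomp.StompCommand;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.messaging.support.ChannelInterceptor;

public class TopicSubscriptionInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(message);
        if (StompCommand.SUBSCRIBE.equals(accessor.getCommand())) {
            // Reject the subscription unless this principal may read this destination
            if (!isAllowed(accessor.getUser(), accessor.getDestination())) {
                return null; // dropping the message blocks the SUBSCRIBE
            }
        }
        return message;
    }

    private boolean isAllowed(java.security.Principal principal, String destination) {
        // hypothetical check, e.g. match the additionalId segment against the user
        return principal != null /* && destination check */;
    }
}

You would register it in your WebSocketMessageBrokerConfigurer via configureClientInboundChannel(ChannelRegistration registration).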

Suggestions for simple ways to do Asynchronous processing in Grails

Let's say I have a simple controller like this:
class FooController {
    def index = {
        someVeryLongComputation() //e.g. crawl a set of web pages
        render "Long computation was launched."
    }
}
When the index action is invoked, I want the method to return to the user immediately while the long computation runs asynchronously.
I understand the most robust way to do this would be to use a message broker in the architecture, but I was wondering if there is a simpler way.
I tried the Executor plugin, but it blocks the HTTP request from returning until the long computation is done.
I tried the Quartz plugin, but that seems to be geared towards periodic tasks (unless there is a way to run a job just once?).
How are you guys handling such requests in Grails?
Where do you want to process someVeryLongComputation(), on the same Grails server or a different server?
If the same server, you don't need JMS; another option is to just create a new thread and process the computation asynchronously:
def index = {
    def asyncProcess = new Thread({
        someVeryLongComputation()
    } as Runnable)
    asyncProcess.start()
    render "Long computation was launched."
}
If you use a simple trigger in Grails Quartz and set the repeatCount to 0, the job will only run once. It runs separately from user requests, however, so you'd need to figure out some way to communicate to the user when it completes.
I know this is a very old question; I just wanted to give an updated answer.
Since Grails 2.3, the framework supports async calls using Servlet 3.0 async request handling (of course, a Servlet 3.0 container must be used, and the servlet version must be 3.0 in the configuration, which it is by default).
It is documented here: http://grails.org/doc/latest/guide/async.html
In general, there are two ways of achieving what you asked for:
import static grails.async.Promises.*

def index() {
    tasks books: Book.async.list(),
          totalBooks: Book.async.count(),
          otherValue: {
              // do hard work
          }
}
or the Servlet Async way:
def index() {
    def ctx = startAsync()
    ctx.start {
        new Book(title: "The Stand").save()
        render template: "books", model: [books: Book.list()]
        ctx.complete()
    }
}
A small note: the Grails method uses promises, which is a major (async) leap forward. Any Promise can be chained to further promises, given callbacks on success and failure, etc.
Have you tried the Grails Promises API? It should be as simple as:
import grails.async.Promise
import grails.async.Promises

class FooController {
    def index = {
        Promise p = Promises.task {
            someVeryLongComputation() //e.g. crawl a set of web pages
        }
        render "Long computation was launched."
    }
}
Try the Spring Events plugin - it supports asynchronous event listeners.
If you'd like to use the Quartz plugin (as we always do), you can do it like this. It works well for us:
DEFINE A JOB (with no timed triggers)
static triggers = {
    simple name: 'simpleTrigger', startDelay: 0, repeatInterval: 0, repeatCount: 0
}
CALL <job>.triggerNow() to manually execute the job in asynchronous mode.
MyJob.triggerNow([key1:value1, key2: value2]);
Pro Tip #1
To get the named parameters out the other side...
def execute(context) {
    def value1 = context.mergedJobDataMap.get('key1');
    def value2 = context.mergedJobDataMap.get('key2');
    ...
    if (value1 && value2) {
        // This was called by triggerNow(). We know this because we received the parameters.
    } else {
        // This was called when the application started up. There are no parameters.
    }
}
Pro Tip #2
The execute method always gets called just as the application starts up, but the named parameters come through as null.
Documentation: http://grails.org/version/Quartz%20plugin/24#Dynamic%20Jobs%20Scheduling
The best solution for this kind of problem is to use JMS, via the JMS plugin.
For a simpler implementation that doesn't require an external server/service, you can try the Spring Events plugin.
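For a rough idea of what the JMS route involves, here is a minimal Java sketch using the JMS 2.0 API; the queue and connection factory are assumptions, and in Grails the JMS plugin would wire most of this up for you:

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class CrawlRequestSender {

    private final ConnectionFactory connectionFactory; // provided by your broker, e.g. ActiveMQ
    private final Queue crawlQueue;                    // hypothetical queue for crawl jobs

    public CrawlRequestSender(ConnectionFactory connectionFactory, Queue crawlQueue) {
        this.connectionFactory = connectionFactory;
        this.crawlQueue = crawlQueue;
    }

    // The controller calls this and returns immediately;
    // a separate listener consumes the queue and does the long computation.
    public void requestCrawl(String url) {
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(crawlQueue, url);
        }
    }
}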
Grails 2.2.1 solution: I had an additional requirement that the report had to auto-pop a window when it was complete, so I chose the servlet way from above, with a twist: instead of rendering a view, I return a JSON-formatted string. It looks like this.
Additionally, my client side is not a GSP view; it is ExtJS 4.1.1 (an HTML5 product).
def index() {
    def ctx = startAsync()
    ctx.start({
        Map retVar = [reportId: reportId, success: success];
        String jsonString = retVar as JSON;
        log.info("generateTwoDateParamReport before the render out String is: " + jsonString);
        ctx.getOriginalWebRequest().getCurrentResponse().setContentType("text/html");
        ctx.getOriginalWebRequest().getCurrentResponse().setCharacterEncoding("UTF-8");
        log.info("current contentType is: " + ctx.getOriginalWebRequest().getCurrentResponse().contentType);
        try {
            ctx.getOriginalWebRequest().getCurrentResponse().getWriter().write(jsonString);
            ctx.getOriginalWebRequest().getCurrentResponse().getWriter().flush();
            ctx.getOriginalWebRequest().getCurrentResponse().setStatus(HttpServletResponse.SC_OK);
        }
        catch (IOException ioe) {
            log.error("generateTwoDateParamReport flush data to client failed.");
        }
        ctx.complete();
        log.info("generateNoUserParamsReport after complete");
    });
}
