WebFlux returning HTTP OK instead of HTTP Not Found - functional-programming

New to WebFlux, reactive, and handlers. I've got things "working", but I don't understand why the following code returns "OK" with an empty body instead of "Not Found".
Clarification: the issue of concern is in the final return statement of DemoPOJOHandler.getById(). The "short-circuit" code works as expected (i.e., it returns a "Bad Request" status), but the "switchIfEmpty" path of the final return statement does not appear to get exercised when DemoPOJORepo.getById(int) returns Mono.empty().
(Note: I've hacked up a list-based "repo" to avoid dealing with a database while figuring out handlers and HTTP return types.)
Router implementation ("/v1" is a set of annotation-based RESTful endpoints)...
@Configuration
public class DemoPOJORouter {

    @Bean
    public RouterFunction<ServerResponse> route(DemoPOJOHandler requestHandler) {
        return nest(path("/v2"),
                nest(accept(APPLICATION_JSON),
                        RouterFunctions.route(RequestPredicates.GET("/DemoPOJO"), requestHandler::getAll)
                                .andRoute(RequestPredicates.GET("/DemoPOJO/{id}"), requestHandler::getById)
                                .andRoute(RequestPredicates.POST("/DemoPOJO"), requestHandler::add)));
    }
}
The handler implementation has been "stripped down" to only the code in question. I have a feeling that much of the style is "still imperative", but I've attempted to put the reactive pieces where they make the most sense.
If I supply a bad value on the URI (e.g., "foo"), then I get the HTTP "bad request" returned. But I never seem to get the "not found" that should be generated by "switchIfEmpty" when a validly formatted int value is supplied but does not map to an entry in the repo.
@Component
public class DemoPOJOHandler {

    public static final String PATH_VAR_ID = "id";

    private DemoPOJORepo repo = null;

    public Mono<ServerResponse> getById(ServerRequest request) {
        Mono<DemoPOJO> monoDemoPOJO = null;
        Map<String, String> pathVariables = request.pathVariables();
        int id = -1;

        checkRepoRef(); // part of the list hack

        // short-circuit if request doesn't contain id (should never happen)
        if ((pathVariables == null)
                || (!pathVariables.containsKey(PATH_VAR_ID))) {
            return ServerResponse.badRequest().build();
        }

        // short-circuit if bad id value
        try {
            id = Integer.parseInt(pathVariables.get(PATH_VAR_ID));
        } catch (NumberFormatException e) {
            return ServerResponse.badRequest().build();
        }

        // get entity by keyValue
        monoDemoPOJO = repo.getById(id);

        return monoDemoPOJO
                .flatMap(demoPOJO -> ServerResponse.ok()
                        .contentType(MediaType.APPLICATION_JSON)
                        .syncBody(demoPOJO)
                        .switchIfEmpty(ServerResponse.notFound().build()));
    }
}
A hacked-up, list-based repo to avoid dealing with data/APIs while working on handlers and HTTP return types.
// local hack to avoid a database for testing
public class DemoPOJORepo {

    private static DemoPOJORepo fpRepo = null;
    private static int NUM_ROWS = 100;

    private Map<Integer, DemoPOJO> fooPOJOMap;

    private DemoPOJORepo() {
        initMap();
    }

    public static DemoPOJORepo getInstance() {
        if (fpRepo == null) {
            fpRepo = new DemoPOJORepo();
        }
        return fpRepo;
    }

    public Mono<DemoPOJO> getById(int id) {
        Mono<DemoPOJO> monoDP;
        if (fooPOJOMap.containsKey(id)) {
            monoDP = Mono.just(fooPOJOMap.get(id));
        } else {
            monoDP = Mono.empty();
        }
        return monoDP;
    }

    private Mono<Void> initMap() {
        fooPOJOMap = new TreeMap<Integer, DemoPOJO>();
        int offset = -1;
        for (int ndx = 0; ndx < NUM_ROWS; ndx++) {
            offset = ndx + 1;
            fooPOJOMap.put(offset, new DemoPOJO(offset, "foo_" + offset, offset + 100));
        }
        return Mono.empty();
    }
}

Your brackets are in the wrong place, causing the switchIfEmpty to apply to the ServerResponse.ok() publisher rather than to monoDemoPOJO. Replace the return statement with this and it should work:
return monoDemoPOJO
        .flatMap(demoPOJO -> ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).syncBody(demoPOJO))
        .switchIfEmpty(ServerResponse.notFound().build());
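To make the reason concrete, here is a minimal standalone Reactor sketch (not from the original answer): flatMap is never invoked for an empty Mono, so a switchIfEmpty nested inside the flatMap lambda can never fire; it has to sit on the outer chain.
import reactor.core.publisher.Mono;

public class SwitchIfEmptyDemo {
    public static void main(String[] args) {
        Mono<String> empty = Mono.empty();

        empty.flatMap(value -> Mono.just("mapped " + value)) // never called: the source is empty
             .switchIfEmpty(Mono.just("fallback"))           // this is what actually fires
             .subscribe(System.out::println);                // prints "fallback"
    }
}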

As far as I can see, the code is right. The response code is Bad Request because you are trying to convert "foo" to an Integer, and when that throws an exception you return a Bad Request response, so that part works exactly as intended.
If you use an integer id that is not present in your database, then the answer should indeed be a Not Found response.

Related

spring-kafka kafkaStreamsBuilder.getKafkaStreams() is null

Here is my code
The first bean watches the messages on Topic.TRANSACTION_RAW, splits each message into two, and sends them to Topic.TRANSACTION_INTERNAL.
The second bean groups and reduces the stream and materializes it to the state store StateStore.BALANCE.
The last one obtains the ReadOnlyKeyValueStore to read state from that store.
@Configuration(proxyBeanMethods = false)
@EnableKafkaStreams
public class MyKafkaStreamsConfiguration {

    @Bean
    public KStream<String, BankTransaction> alphaBankKStream(StreamsBuilder streamsBuilder) {
        JsonSerde<BankTransaction> valueSerde = new JsonSerde<>(BankTransaction.class);
        KStream<String, BankTransaction> stream = streamsBuilder.stream(Topic.TRANSACTION_RAW,
                Consumed.with(Serdes.String(), valueSerde));
        stream.flatMap((k, v) -> {
                    List<BankTransactionInternal> txInternals = BankTransactionInternal.splitBankTransaction(v);
                    List<KeyValue<String, BankTransactionInternal>> result = new LinkedList<>();
                    result.add(KeyValue.pair(v.getFromAccount(), txInternals.get(0)));
                    result.add(KeyValue.pair(v.getToAccount(), txInternals.get(1)));
                    return result;
                }).filter((k, v) -> !Constants.EXTERNAL_ACCOUNT.equalsIgnoreCase(k))
                .to(Topic.TRANSACTION_INTERNAL, Produced.with(Serdes.String(), new JsonSerde<>()));
        return stream;
    }

    @Bean
    public KStream<String, BankTransactionInternal> alphaBankInternalKStream(StreamsBuilder streamsBuilder) {
        JsonSerde<BankTransactionInternal> valueSerde = new JsonSerde<>(BankTransactionInternal.class);
        KStream<String, BankTransactionInternal> stream = streamsBuilder.stream(Topic.TRANSACTION_INTERNAL,
                Consumed.with(Serdes.String(), valueSerde));
        KGroupedStream<String, Double> groupedByAccount = stream
                .map((k, v) -> KeyValue.pair(k, v.getAmount()))
                .groupBy((account, amount) -> account, Grouped.with(Serdes.String(), Serdes.Double()));
        groupedByAccount.reduce(Double::sum,
                Materialized.<String, Double, KeyValueStore<Bytes, byte[]>>as(StateStore.BALANCE)
                        .withValueSerde(Serdes.Double()));
        return stream;
    }

    @Bean
    public ReadOnlyKeyValueStore<String, Double> balanceStateStore(StreamsBuilderFactoryBean defaultKafkaStreamsBuilder) {
        if (defaultKafkaStreamsBuilder == null) {
            System.out.println("... defaultKafkaStreamsBuilder is null ...");
        }
        if (defaultKafkaStreamsBuilder.getKafkaStreams() == null) {
            System.out.println("... defaultKafkaStreamsBuilder.getKafkaStreams() is null ...");
            // this one got printed
        }
        ReadOnlyKeyValueStore<String, Double> store = defaultKafkaStreamsBuilder.getKafkaStreams().store(
                StateStore.BALANCE,
                QueryableStoreTypes.keyValueStore());
        return store;
    }
}
I always get a NullPointerException on defaultKafkaStreamsBuilder.getKafkaStreams().
Any idea what is wrong here? Thanks!
if (defaultKafkaStreamsBuilder.getKafkaStreams() == null) {
    System.out.println("... defaultKafkaStreamsBuilder.getKafkaStreams() is null ...");
    // this one got printed
}
This operation should not be done during the bean definition phase.
See its JavaDocs:
/**
 * Get a managed by this {@link StreamsBuilderFactoryBean} {@link KafkaStreams} instance.
 * @return KafkaStreams managed instance;
 * may be null if this {@link StreamsBuilderFactoryBean} hasn't been started.
 * @since 1.1.4
 */
public synchronized KafkaStreams getKafkaStreams() {
Since you call this method far too early, before the lifecycle start phase, you end up with that error.
You should reconsider your logic in favor of SmartLifecycle.start() in the target service where you'd like to use that ReadOnlyKeyValueStore: autowire the StreamsBuilderFactoryBean there and call its getKafkaStreams() from the start() implementation, for example as sketched below.
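A minimal sketch of that approach, assuming the StateStore.BALANCE constant and store types from the question (bean wiring and the lifecycle phase may need adjusting for your setup):
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.stereotype.Component;

@Component
public class BalanceStoreService implements SmartLifecycle {

    private final StreamsBuilderFactoryBean factoryBean;
    private volatile ReadOnlyKeyValueStore<String, Double> balanceStore;
    private volatile boolean running;

    public BalanceStoreService(StreamsBuilderFactoryBean factoryBean) {
        this.factoryBean = factoryBean;
    }

    @Override
    public void start() {
        // By now the factory bean has been started, so getKafkaStreams() is no longer null.
        KafkaStreams kafkaStreams = factoryBean.getKafkaStreams();
        // Same store lookup as in the question; newer Kafka clients use StoreQueryParameters instead.
        balanceStore = kafkaStreams.store(StateStore.BALANCE, QueryableStoreTypes.keyValueStore());
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    public Double balanceFor(String account) {
        return balanceStore.get(account);
    }
}
The store lookup now happens in start(), after the Kafka Streams instance has been started, instead of at bean definition time.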

Haxe: Binding pattern with abstract fields access methods

I'd like to make a wrapper to implement a simple data binding pattern: when some data is modified, all registered handlers get notified. I have started with this (for the js target):
class Main {
    public static function main() {
        var target = new Some();
        var binding = new Bindable(target);
        binding.one = 5;
        // binding.two = 0.12; // intentionally unset field
        binding.three = []; // wrong type
        binding.four = 'str'; // no such field in wrapped class
        trace(binding.one, binding.two, binding.three, binding.four, binding.five);
        // outputs: 5, null, [], str, null
        trace(target.one, target.two, target.three);
        // outputs: 5, null, []
    }
}

class Some {
    public var one:Int;
    public var two:Float;
    public var three:Bool;
    public function new() {}
}

abstract Bindable<TClass>(TClass) {
    public inline function new(source) { this = source; }

    @:op(a.b) public function setField<T>(name:String, value:T) {
        Reflect.setField(this, name, value);
        // TODO notify handlers
        return value;
    }

    @:op(a.b) public function getField<T>(name:String):T {
        return cast Reflect.field(this, name);
    }
}
So I have some frustrating issues: the interface of the wrapped object isn't exposed to the wrapper, so there's no auto-completion or strict type checking, and some necessary attributes can easily be omitted or even misspelled.
Is it possible to fix my solution, or should I move to macros instead?
I almost suggested opening an issue regarding this problem, because some time ago there was a @:followWithAbstracts meta available for abstracts, which could be (or maybe was?) used to forward fields and call @:op(a.b) at the same time. But that's not really necessary; Haxe is powerful enough already.
abstract Binding<TClass>(TClass) {
    public function new(source:TClass) { this = source; }

    @:op(a.b) public function setField<T>(name:String, value:T) {
        Reflect.setField(this, name, value);
        // TODO notify handlers
        trace('set: $name -> $value'); // single quotes so $name/$value are interpolated
        return value;
    }

    @:op(a.b) public function getField<T>(name:String):T {
        trace('get: $name');
        return cast Reflect.field(this, name);
    }
}

@:forward
@:multiType
abstract Bindable<TClass>(TClass) {
    public function new(source:TClass);
    @:to function to(t:TClass) return new Binding(t);
}
We use a multi-type abstract here to forward the fields, but the resolved type is actually a regular abstract. In effect, you get completion working and @:op(a.b) called at the same time.
You need the @:forward meta on your abstract. However, this will not make auto-completion work unless you remove @:op(a.b), because it shadows the forwarded fields.
EDIT: it seems the shadowing only happened the first time I added @:forward to your abstract; afterwards auto-completion worked just fine.

Constraints on parameters in api interface

I've declared an API call in an interface and was wondering if it is possible to put constraints on some of the parameters. The API I'm accessing has these constraints as well, and I would like to enforce them in my program.
#GET("/recipes/search")
Call<RecipeResponse> getRecipes(
#Query("cuisine") String cuisine,
#Query("diet") String diet,
#Query("excludeIngredients") String excludeIngredients,
#Query("intolerances") String intolerances,
#Query("number") Integer number,
#Query("offset") Integer offset,
#Query("query") String query,
#Query("type") String type
);
How can I do this?
I know that it is possible to do this with a POST request by passing along an object as the request body through the @Body annotation. Can I do this with a GET request too, where the information is passed via the query string?
Thanks!
I think I ended up finding a solution. I made a class SearchRecipeRequest in which I declare all possible parameters as class variables. In the setters I do the data validation, such as checking for null on parameters that are required, or min/max value constraints on integers as specified by the endpoint. I then made a SearchRecipeRequestBuilder class to build such an object, to make it easier to deal with all those possible parameters:
public class SearchRecipeRequestBuilder {

    private String _cuisine = null,
            _diet = null,
            _excludeIngredients = null,
            _intolerances = null,
            _query = null,
            _type = null;

    private Integer _number = null,
            _offset = null;

    public SearchRecipeRequestBuilder() {}

    public SearchRecipeRequest buildRequest() {
        return new SearchRecipeRequest(_cuisine, _diet, _excludeIngredients, _intolerances, _number, _offset, _query, _type);
    }

    public SearchRecipeRequestBuilder cuisine(String cuisine) {
        _cuisine = cuisine;
        return this;
    }

    public SearchRecipeRequestBuilder diet(String diet) {
        _diet = diet;
        return this;
    }

    public SearchRecipeRequestBuilder excludeIngredients(String excludeIngredients) {
        _excludeIngredients = excludeIngredients;
        return this;
    }

    public SearchRecipeRequestBuilder intolerances(String intolerances) {
        _intolerances = intolerances;
        return this;
    }

    public SearchRecipeRequestBuilder query(String query) {
        _query = query;
        return this;
    }

    public SearchRecipeRequestBuilder type(String type) {
        _type = type;
        return this;
    }

    public SearchRecipeRequestBuilder number(Integer number) {
        _number = number;
        return this;
    }

    public SearchRecipeRequestBuilder offset(Integer offset) {
        _offset = offset;
        return this;
    }
}
Which allows me to build the request like so:
SearchRecipeRequest request = new SearchRecipeRequestBuilder()
        .query("burger")
        .buildRequest();
I then pass along that object to a different function that knows how to use the request object to pass it along to the API.
That's how I'm doing it right now, if someone has a better way I'd love to hear it. :)
I got the idea to use the Builder pattern from a different StackOverflow question: Managing constructors with many parameters in Java.
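For completeness, here is a hypothetical sketch (not the actual SearchRecipeRequest from the answer above) of what the validation it describes could look like; the required query field and the 1-100 range on number are made-up constraints for illustration only:
// Illustrative only -- the real class and the endpoint's actual constraints may differ.
public class SearchRecipeRequest {

    private final String cuisine, diet, excludeIngredients, intolerances, query, type;
    private final Integer number, offset;

    public SearchRecipeRequest(String cuisine, String diet, String excludeIngredients,
                               String intolerances, Integer number, Integer offset,
                               String query, String type) {
        // Validation on construction; the same checks could live in setters instead.
        if (query == null || query.trim().isEmpty()) {
            throw new IllegalArgumentException("query is required");
        }
        if (number != null && (number < 1 || number > 100)) {
            throw new IllegalArgumentException("number must be between 1 and 100");
        }
        this.cuisine = cuisine;
        this.diet = diet;
        this.excludeIngredients = excludeIngredients;
        this.intolerances = intolerances;
        this.number = number;
        this.offset = offset;
        this.query = query;
        this.type = type;
    }

    public String getQuery() { return query; }
    // ... remaining getters omitted for brevity
}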

rxjava - Combine onError and timeout handling

I will start with what I want to achieve.
I want to call a method that returns an Observable.
I do not know whether the called method handles exceptions and timeouts.
I want to combine observables in my call (merge/zip etc.).
If one method fails, I want the answers from the methods that succeeded -
I don't want to break the flow.
In case of an exception I am capable of handling it and continuing with the flow,
but when I try to add timeout management I fail.
Here is my code
public static void main(String[] args) {
    createObservables(false, true); // stalls for timeout
    zip();
}

private static void createObservables(final boolean throwException,
                                      final boolean stall) {
    obs1 = Observable.just(1);
    obs1 = obs1.map(new Func1<Integer, Integer>() {
        @Override public Integer call(Integer integer) {
            int i = 0;
            if (throwException)
                getObj().equals("");
            if (stall)
                zzz(10);
            return ++integer;
        }
    });
    obs2 = Observable.just(111);
}

private static void zip() {
    System.out.println("**Zip**");
    obs1 = obs1.onErrorReturn(new Func1<Throwable, Integer>() {
        @Override public Integer call(Throwable throwable) {
            return 999;
        }
    });
    obs1 = obs1.timeout(5, TimeUnit.SECONDS);
    Observable.zip(obs1, obs2, new Func2<Integer, Integer, ArrayList<Integer>>() {
        @Override
        public ArrayList<Integer> call(Integer integer1, Integer integer2) {
            ArrayList<Integer> integers = new ArrayList<Integer>();
            integers.add(integer1);
            integers.add(integer2);
            return integers;
        }
    }).subscribe(new Observer<Object>() { .... }
    );
}
Now, when I call
createObservables(false, false); // no exceptions, no timeouts
I get onNext - [2, 111].
Then I call
createObservables(true, false); // throw an exception in one method only
and I get onNext - [999, 111] - which is what I want: the exception fallback and the result from the second method.
But when I call
createObservables(false, true); // stall to trigger the timeout
I get only onError.
But I want to get the other method's answer as well.
Thanks.
Try creating an observable for your timeout value; in this case you want the same value as in your error case:
Observable<Integer> obs1Timeout = Observable.just(999);
Then in your timeout policy give it this observable as the fallback to use in the case of a timeout:
obs1 = obs1.timeout(5, TimeUnit.SECONDS, obs1Timeout);
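Putting the two pieces together, a minimal sketch (RxJava 1.x with lambdas, reusing the obs1/obs2 fields from the question; imports omitted as in the original snippets) of how the error and timeout fallbacks combine so that zip still emits a pair:
// Error and timeout both fall back to 999, so zip still emits a pair.
Observable<Integer> obs1Timeout = Observable.just(999);

obs1 = obs1
        .onErrorReturn(throwable -> 999)              // exception -> 999
        .timeout(5, TimeUnit.SECONDS, obs1Timeout);   // timeout   -> 999

Observable.zip(obs1, obs2, (first, second) -> Arrays.asList(first, second))
        .subscribe(
                result -> System.out.println("onNext - " + result),  // e.g. [999, 111]
                error -> System.out.println("onError - " + error));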

Is this common practice for a read-only property that accesses the database

If I have a read-only property on an object that fills itself via the DB, is this what I should be doing, or is there a better way to make sure it's already been evaluated?
private List<Variable> _selectedVariables;

public new List<Variable> SelectedVariables
{
    get
    {
        if (_selectedVariables == null)
        {
            _selectedVariables = SomeFunctionThatCallsDB();
        }
        return _selectedVariables;
    }
}
That's fine for a single thread, but you will have problems if the property can be read from multiple threads.
EDIT: a simple thread-safe pattern:
private readonly object _objectLock = new object();
private List<T> _someList = null;

public List<T> MyStuff
{
    get
    {
        if (_someList == null)
        {
            lock (_objectLock)
            {
                if (_someList == null)
                    _someList = LoadFromDB();
            }
        }
        return _someList;
    }
}
You check to see if it's set, then lock, then check again to make sure you've covered the race condition (double-checked locking).
