For a Spring Boot app I am using a RedisTemplate, injected into a service bean, to do simple gets/sets against an AWS ElastiCache cluster (cluster mode enabled). I started the app, sent a few requests through, and performance is slow: roughly 100 ms per call. The ElastiCache metrics also show new connections equal to the number of requests. The long latency and the new-connection metric suggest the native connection inside the LettuceConnection is not being retained.
I am only using Spring Data to manage setting up the connection. Specifically, I don't want to connect eagerly in a @Configuration class and have the app fail on startup if there is an issue connecting; it's a critical app that needs to start even if the cache is unavailable at that time. I also don't want to write my own code to synchronize access to a single native connection across threads.
Any ideas why the native connection would not be saved? Here's my config:
private ClusterClientOptions clusterClientOptions() {
    // @formatter:off
    return ClusterClientOptions.builder()
            .socketOptions(SocketOptions
                    .builder()
                    .connectTimeout(Duration.ofMillis(properties.getConnectionTimeoutMs()))
                    .build())
            .requestQueueSize(properties.getRequestQueueSize())
            .topologyRefreshOptions(ClusterTopologyRefreshOptions
                    .builder()
                    .enablePeriodicRefresh(properties.isPeriodicRefresh())
                    .build())
            .build();
    // @formatter:on
}
private LettuceClientConfiguration lettuceClientConfiguration() {
    // @formatter:off
    return LettuceClientConfiguration
            .builder()
            .clientOptions(clusterClientOptions())
            .commandTimeout(Duration.ofMillis(properties.getCommandTimeoutMs()))
            .useSsl()
            .build();
    // @formatter:on
}
private LettuceConnectionFactory serviceContextLettuceConnectionFactory() {
    RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration();
    clusterConfig.clusterNode(properties.getCacheEndpoint(), properties.getCachePort());
    clusterConfig.setPassword(RedisPassword.of(properties.getCachePassword()));
    LettuceConnectionFactory lettuceConnectionFactory =
            new LettuceConnectionFactory(clusterConfig, lettuceClientConfiguration());
    lettuceConnectionFactory.setShareNativeConnection(true);
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory;
}
private RedisTemplate<String, String> redisTemplate() {
    RedisTemplate<String, String> template = new RedisTemplate<>();
    template.setConnectionFactory(serviceContextLettuceConnectionFactory());
    template.afterPropertiesSet();
    return template;
}
The template is injected into a singleton service class which calls template.opsForValue().get(key), etc. It works, but it's slow and always creates new connections.
Solved: there was a @RefreshScope annotation on this @Configuration class (not shown) which caused different behavior than I expected. I'm still not 100% sure of the details, but it appeared to recreate the factory and template on each call, which caused the new connections. I removed the annotation and the native connection is now reused as expected.
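For anyone hitting the same thing, here is a minimal sketch of roughly what the corrected configuration looks like without @RefreshScope. CacheConfig and CacheProperties are placeholder names for my own classes, and the client-configuration methods are the ones shown above:

@Configuration
public class CacheConfig {

    private final CacheProperties properties;

    public CacheConfig(CacheProperties properties) {
        this.properties = properties;
    }

    @Bean
    public LettuceConnectionFactory serviceContextLettuceConnectionFactory() {
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration();
        clusterConfig.clusterNode(properties.getCacheEndpoint(), properties.getCachePort());
        clusterConfig.setPassword(RedisPassword.of(properties.getCachePassword()));
        LettuceConnectionFactory factory =
                new LettuceConnectionFactory(clusterConfig, lettuceClientConfiguration());
        // No afterPropertiesSet() call needed here: Spring calls it on the @Bean, and the
        // factory (with its shared native connection) lives for the whole application context.
        factory.setShareNativeConnection(true);
        return factory;
    }

    @Bean
    public RedisTemplate<String, String> redisTemplate(LettuceConnectionFactory factory) {
        RedisTemplate<String, String> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        return template;
    }
}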
I'm writing a Kafka consumer using the org.springframework.kafka.annotation.KafkaListener (@KafkaListener) annotation. This annotation expects the topic to already exist at the time of subscribing, and the topic gets created with default configuration if it is not present.
In my case, I don't want the consumer to create a topic with default configuration; it should create the topic with custom configuration (the number of partitions, the cleanup policy, etc.). Is there any option for this in spring-kafka?
See the documentation on configuring topics.
If you define a KafkaAdmin bean in your application context, it can automatically add topics to the broker. To do so, you can add a NewTopic @Bean for each topic to the application context. The following example shows how to do so:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(embeddedKafka().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("thing1", 10, (short) 2);
}

@Bean
public NewTopic topic2() {
    return new NewTopic("thing2", 10, (short) 2);
}
By default, if the broker is not available, a message is logged, but the context continues to load. You can programmatically invoke the admin’s initialize() method to try again later. If you wish this condition to be considered fatal, set the admin’s fatalIfBrokerNotAvailable property to true. The context then fails to initialize.
If the broker supports it (1.0.0 or higher), the admin increases the number of partitions if it is found that an existing topic has fewer partitions than the NewTopic.numPartitions.
If you are using Spring Boot, you don't need an admin bean because Boot will automatically configure one for you.
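For the custom settings asked about (number of partitions, cleanup policy, and so on), a NewTopic bean can also carry topic-level configs. A hedged sketch, using the kafka-clients NewTopic.configs(Map) API; the topic name and the config values below are placeholders:

@Bean
public NewTopic topic3() {
    // Topic-level configuration keys come from org.apache.kafka.common.config.TopicConfig.
    Map<String, String> configs = new HashMap<>();
    configs.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT);
    configs.put(TopicConfig.RETENTION_MS_CONFIG, "604800000"); // 7 days, as an example
    return new NewTopic("thing3", 10, (short) 2).configs(configs);
}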
This is a question about the project https://github.com/DiUS/pact-jvm.
Problem
When I am validating pacts I need to be able to use client-side authentication, as the providers actually require it. I'll prefix what I am saying with a declaration that I am not very familiar with Groovy: I mostly program in Scala, Java, or JavaScript. Having looked at the code, I think that client-side authentication is not currently supported, so I'd like to make a pull request adding that support.
What I've done so far
I have managed to get HTTPS working with a truststore: I copied the HttpTarget and created an HttpsTarget, and in the HttpsTarget specified the truststore in the ProviderInfo. Unfortunately, looking at the code there doesn't seem to be a way of specifying the client certificate, so I need to change the ProviderInfo class to be able to specify where it is (in the same way that the truststore is provided).
My problem is that I've got the code compiling using the advice in the 'for contributors' section, but when I publish locally I am only publishing for Scala 2.12. Because of version issues and binary incompatibilities between Scala versions, I need to publish for Scala 2.11. My skills with Gradle are even less than my skills with Groovy. I've searched for all the references to scalaVersion and found that there is quite a lot of logic around it, but I've not managed to track down where it is specified.
Question
Could you let me know if I can use client-side authentication with the current pact validator? If not, could you tell me how to publish the project with support for Scala 2.11?
Thanks
In the end I made my own HTTPS target. My need is to run from JUnit, not the general case, and this is good enough:
public class HttpsTarget extends HttpTarget {

    public HttpsTarget(final int port) {
        super("https", "localhost", port, "/", false);
    }

    static class HttpsClientFactory implements IHttpClientFactory {
        @NotNull
        @Override
        public CloseableHttpClient newClient(Object o) {
            SSLContext sslContext = ...; // put here code to make the SSL context
            CloseableHttpClient httpClient = HttpClients
                    .custom()
                    .setSSLContext(sslContext)
                    .build();
            return httpClient;
        }
    }

    @Override
    public void testInteraction(final String consumerName, final Interaction interaction, PactSource source) {
        ProviderInfo provider = getProviderInfo(source);
        ConsumerInfo consumer = new ConsumerInfo(consumerName);
        ProviderVerifier verifier = setupVerifier(interaction, provider, consumer);
        Map<String, Object> failures = new HashMap<>();
        ProviderClient client = new ProviderClient(provider, new HttpsClientFactory());
        verifier.verifyResponseFromProvider(provider, interaction, interaction.getDescription(), failures, client);
        reportTestResult(failures.isEmpty(), verifier);
        try {
            if (!failures.isEmpty()) {
                verifier.displayFailures(failures);
                throw getAssertionError(failures);
            }
        } finally {
            verifier.finialiseReports();
        }
    }
}
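For completeness, this is roughly how the custom target gets wired into a provider test. A hedged sketch assuming the pact-jvm JUnit provider runner; the annotations and the Target interface come from the pact-jvm-provider-junit module, and names may differ between versions:

// Hedged usage sketch: the runner verifies each interaction against this target over HTTPS.
@RunWith(PactRunner.class)
@Provider("my-provider")   // placeholder provider name
@PactFolder("pacts")       // wherever the pact files live
public class MyProviderPactTest {

    @TestTarget
    public final Target target = new HttpsTarget(8443);
}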
I'm quite new to the microservice world and particularly Vert.x. I want my verticle to start anyway, even if there is no database connection available (e.g. the database URL is missing from the configuration). I have already managed to do this and my verticle starts.
The issue now is that I want my verticle to notice when the database connection becomes available again and connect to it. How can I do this?
I thought about creating another verticle, DatabaseVerticle.java, which would send the current DB config on the event bus; my initial verticle would consume this message and check whether the config info is consistent (reply with success) or still missing some data (reply with failure and make the DatabaseVerticle check again).
This might work (and might not) but does not seem to be the optimal solution for me.
I'd be very glad if someone could suggest a better solution. Thank you !
For your use case, I'd recommend using vertx-config. In particular, have a look at the "Listening to configuration changes" section of the Vert.x Config documentation.
You could create a config retriever and set a handler for changes:
ConfigRetrieverOptions options = new ConfigRetrieverOptions()
        .setScanPeriod(2000)
        .addStore(myConfigStore);

ConfigRetriever retriever = ConfigRetriever.create(vertx, options);

retriever.getConfig(json -> {
    // If DB config available, start the DB client
    // Otherwise set a "dbStarted" variable to false
});

retriever.listen(change -> {
    // If "dbStarted" is still set to false
    // Check the config and start the DB client if possible
    // Set "dbStarted" to true when done
});
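To make the comments above concrete, here is a hedged sketch of what both handlers might delegate to, assuming the Vert.x Mongo client; the "db_uri" key, the dbStarted flag, and the helper name are placeholders:

private volatile boolean dbStarted = false;
private MongoClient mongo;

private void tryStartDb(JsonObject config) {
    // Do nothing until a usable DB configuration shows up
    if (dbStarted || config == null || config.getString("db_uri") == null) {
        return;
    }
    mongo = MongoClient.createShared(vertx, config);
    dbStarted = true;
}

// Wiring into the retriever:
// retriever.getConfig(ar -> { if (ar.succeeded()) tryStartDb(ar.result()); });
// retriever.listen(change -> tryStartDb(change.getNewConfiguration()));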
The ideal way would be to have some other service tell your service about the database connection, either through the event bus or over HTTP. What you can do is: when someone tries to access the database while no connection has been made, attempt a DB call, handle the exception, and return false. When you later get a message on the event bus, consume it and save it in some config POJO. Then, when someone tries to access the database, look at that config and, if it is available, make the connection.
Your consumer:

public void start() {
    EventBus eb = vertx.eventBus();
    eb.consumer("database", message -> {
        config.setConfig(message.body());
    });
}
Your DB client (Mongo for this example):

public class MongoService {

    private MongoClient client;
    public boolean isAvailable = false;

    MongoService(Vertx vertx) {
        // Only create the client once a connection string has actually been configured
        if (config().getString("connection") != null) {
            client = MongoClient.createShared(vertx,
                    new JsonObject().put("connection_string", config().getString("connection")));
            isAvailable = true;
        }
    }
}
Not everything in Vert.x should be solved by another verticle.
In this case, you can use vertx.setPeriodic():
http://vertx.io/docs/vertx-core/java/#_don_t_call_us_we_ll_call_you
I assume you have some function that checks the DB for the first time; let's call it checkDB().
class PeriodicVerticle extends AbstractVerticle {

    private Long timerId;

    @Override
    public void start() {
        System.out.println("Started");
        // Should be called each time DB goes offline
        final Long timerId = this.vertx.setPeriodic(1000, (l) -> {
            final boolean result = checkDB();
            // Set some variable telling verticle that DB is back online
            if (result) {
                cancelTimer();
            }
        });
        setTimerId(timerId);
    }

    private void cancelTimer() {
        System.out.println("Cancelling");
        getVertx().cancelTimer(this.timerId);
    }

    private void setTimerId(final Long timerId) {
        this.timerId = timerId;
    }
}
Here I play a bit with timerId, since we cannot pass it to cancelTimer() right away. But otherwise, it's quite simple.
I have a web application. I found that a performance bottleneck could be that I am creating an HttpClient again and again for every request.
public static class DemoHttpClient
{
    public static HttpClient GetClient()
    {
        HttpClient client = new HttpClient();
        client.BaseAddress = new Uri(DemoConstants.DemoAPI);
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(
            new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
        return client;
    }
}

public class DemoConstants
{
    public const string DemoAPI = "http://localhost/";
}
I am planning to implement a singleton for this, and I found this very helpful article:
http://csharpindepth.com/Articles/General/Singleton.aspx
I am confused about how exactly the ASP.NET MVC web application lifecycle works once it is deployed on the server, given that there will be multiple threads calling the same resource, and that resource in turn creating new HTTP clients again and again.
What should we do here?
1) Lazily load the HttpClient?
2) Not lazily load it?
Which approach should we use?
This doesn't sound like a good idea. In particular, take a peek into the docs of the HttpClient class:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
https://msdn.microsoft.com/en-us/library/system.net.http.httpclient%28v=vs.118%29.aspx
This means that accessing the very same singleton instance from multiple threads could lead to undefined behavior.
What you could do, however, is reuse the same instance across a single request. This can be done by storing an instance in the Items container:
private static string ITEMSKEY = "____hclient";

public static HttpClient GetClient()
{
    if (HttpContext.Current.Items[ITEMSKEY] == null)
    {
        HttpClient client = new HttpClient();
        client.BaseAddress = new Uri(DemoConstants.DemoAPI);
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(
            new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
        HttpContext.Current.Items.Add(ITEMSKEY, client);
    }

    return (HttpClient)HttpContext.Current.Items[ITEMSKEY];
}
Note that since HttpClient implements IDisposable, it could still be a good idea to dispose of such an instance somewhere in the pipeline, for example in the EndRequest event of the application pipeline.
Update: as noted in a comment by @LukeH, the updated version of the docs for .NET 4.5 and 4.6 states that some of the methods of the HttpClient class are thread safe:
https://msdn.microsoft.com/en-us/library/system.net.http.httpclient%28v=vs.110%29.aspx
The updated remarks section states that a single instance is basically a collection of shared settings applied to all requests executed by that instance. The docs then say:
In addition, every HttpClient instance uses its own connection pool, isolating its requests from requests executed by other HttpClient instances.
This means that isolating requests into different pools could still make sense. My personal recommendation would still be not to have a singleton, as you may still need to change some settings between consecutive requests.
I'm using HttpClient to execute a PostMethod against a remote servlet, and for some reason a lot of my connections are hanging open and hogging up all of my server's connections.
Here's more info about the architecture:
The GWT client calls into a GWT service.
The GWT service instantiates an HttpClient, creates a PostMethod, and has the client execute the method.
It then gets the input stream by calling method.getResponseBodyAsStream() and writes it out to a byte array.
It then closes the input stream, flushes the byte array output stream, runs a few more lines of code, and then calls method.releaseConnection().
There has to be something obvious I'm overlooking that's causing this. If I perform a GET in a browser against the same service, the connections close immediately, but something about HttpClient is causing them to hang open.
You need to call HttpMethodBase#releaseConnection(). If you return an InputStream to be used later, a simple way is to wrap it in an anonymous FilterInputStream overriding close():
final HttpMethodBase method = ...;
return new FilterInputStream(method.getResponseBodyAsStream())
{
    public void close() throws IOException
    {
        try {
            super.close();
        } finally {
            method.releaseConnection();
        }
    }
};
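As a usage note, the caller then only has to remember to close the returned stream; the pooled connection is released in the finally block above. A hedged sketch, where fetchAsStream() is a hypothetical helper returning the wrapped stream:

// Hypothetical caller: closing the stream also calls method.releaseConnection()
InputStream in = fetchAsStream();
try {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);
    }
    byte[] body = out.toByteArray();
} finally {
    in.close(); // returns the connection to the HttpClient pool
}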