HealthCheck probe interval in .NET Core 2.2

Can one configure the probe interval when using the HealthCheck framework in .NET Core 2.2? Is there a way?
Timing it, the default refresh seems to be 30 seconds, or at least that's how often mine fires.

You can set the Period property of HealthCheckPublisherOptions, which controls how often registered IHealthCheckPublisher instances execute. The default value is 30 seconds.
```
services.Configure<HealthCheckPublisherOptions>(options =>
{
    options.Period = TimeSpan.FromSeconds(10);
});
services.AddSingleton<IHealthCheckPublisher, SampleHealthCheckPublisher>();
services.AddHealthChecks()
    .AddCheck<SampleHealthCheck>("Sample");
```
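For completeness, here is a minimal sketch of what the SampleHealthCheckPublisher registered above could look like; logging the report is just an illustrative assumption, you'd publish to wherever your monitoring lives:
```
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Microsoft.Extensions.Logging;

public class SampleHealthCheckPublisher : IHealthCheckPublisher
{
    private readonly ILogger<SampleHealthCheckPublisher> _logger;

    public SampleHealthCheckPublisher(ILogger<SampleHealthCheckPublisher> logger)
    {
        _logger = logger;
    }

    public Task PublishAsync(HealthReport report, CancellationToken cancellationToken)
    {
        // Called once per Period configured in HealthCheckPublisherOptions.
        _logger.LogInformation("Health check status: {Status}", report.Status);
        return Task.CompletedTask;
    }
}
```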

You can use AspNetCore.Diagnostics.HealthChecks and related packages provided by Xabaril (formerly the BeatPulse project).
Here's the link to the repository: https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks
These packages provide health check implementations for a variety of services, such as:
Sql Server
MySql
Oracle
Sqlite
RavenDB
Postgres
EventStore
RabbitMQ
Elasticsearch
Redis
System: Disk Storage, Private Memory, Virtual Memory
Azure Service Bus: EventHub, Queue and Topics
Azure Storage: Blob, Queue and Table
Azure Key Vault
Azure DocumentDb
Amazon DynamoDb
Amazon S3
Network: Ftp, SFtp, Dns, Tcp port, Smtp, Imap
MongoDB
Kafka
Identity Server
Uri: single uri and uri groups
Consul
Hangfire
This package also makes the probe refresh interval configurable via settings in the appsettings.json file; see the sketch below.
You can find more details on the liveness probe implementation here: https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks/blob/master/doc/kubernetes-liveness.md
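For example, with the companion HealthChecks UI package the polling interval lives in appsettings.json. This is a rough sketch from memory, so treat the section and key names ("HealthChecks-UI", "EvaluationTimeOnSeconds") as assumptions; they vary between package versions, so check the repository README for yours:
```
{
  "HealthChecks-UI": {
    "HealthChecks": [
      {
        "Name": "Sample",
        "Uri": "http://localhost:5000/health"
      }
    ],
    "EvaluationTimeOnSeconds": 10,
    "MinimumSecondsBetweenFailureNotifications": 60
  }
}
```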

Related

Need help: Confluent Kafka .NET Core client trying to produce/consume messages to and from a Kafka cluster (Linux server) using SASL Kerberos authentication

I am trying to work through a Kafka connectivity issue.
Basically, I am trying to produce to and consume from a Kafka cluster (on a Linux server) using the .NET Core Confluent Kafka client, and I am facing various issues.
I am using Confluent.Kafka (.NET) version 1.7.0 to connect to a Kafka cluster, version 2.7.2.
My Confluent Kafka .NET client runs inside a Docker container.
When I run my .NET Core C# API in the container, I get the following runtime errors:
```
No provider for SASL mechanism GSSAPI: recompile librdkafka with libsasl2 or openssl support. Current build options: PLAIN SASL_SCRAM OAUTHBEARER
   at Confluent.Kafka.Impl.SafeKafkaHandle.Create(RdKafkaType type, IntPtr config, IClient owner)
   at Confluent.Kafka.Producer`2..ctor(ProducerBuilder`2 builder)
   at Confluent.Kafka.ProducerBuilder`2.Build()
```
Does anyone have experience with this kind of problem?
Update:
I ended up following the Confluent Kafka Dotnet Kerberos Support Dockerfile (No provider for SASL mechanism GSSAPI) article.
I made changes to the Dockerfile to complete those steps. After installing the Kerberos packages in the container, I get the following message:
When users attempt to use Kerberos and specify a principal or user name without specifying what administrative Kerberos realm that principal belongs to, the system appends the default realm. The default realm may also be used as the realm of a Kerberos service running on the local machine. Often, the default realm is the uppercase version of the local DNS domain.
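For reference, here is roughly what the producer configuration for this scenario looks like with Confluent.Kafka. This is a sketch only: the broker address, service name, keytab path, principal, and topic are placeholders, and, as the error above says, the runtime image still needs a GSSAPI-enabled librdkafka/SASL setup for it to work.
```
using System;
using Confluent.Kafka;

class KerberosProducerExample
{
    static void Main()
    {
        var config = new ProducerConfig
        {
            BootstrapServers = "broker1.example.com:9092",   // placeholder
            SecurityProtocol = SecurityProtocol.SaslPlaintext, // or SaslSsl
            SaslMechanism = SaslMechanism.Gssapi,            // needs GSSAPI support in librdkafka
            SaslKerberosServiceName = "kafka",
            SaslKerberosKeytab = "/etc/security/app.keytab", // placeholder
            SaslKerberosPrincipal = "appuser@EXAMPLE.COM"    // placeholder
        };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            producer.Produce("test-topic", new Message<Null, string> { Value = "hello" });
            producer.Flush(TimeSpan.FromSeconds(10));
        }
    }
}
```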

Default Connection Policy of DocumentClient in Cosmos DB

In the performance tips for Cosmos DB suggested by Microsoft, it is recommended to use Direct Mode, i.e. the TCP and HTTPS protocols, to query Cosmos DB. I just wanted to know: what is the default connection policy of the Cosmos DB DocumentClient?
```
var client = new DocumentClient(new Uri(endpointUrl), _primaryKey);
```
If I use above code, what Connection Policy will be used?
https://learn.microsoft.com/en-us/azure/cosmos-db/performance-tips
If I use above code, what Connection Policy will be used?
I think the performance tips article already makes this clear. If you do not set Direct Mode in the SDK, it will be Gateway Mode (the default).
You can see the statement:
Gateway Mode is supported on all SDK platforms and is the configured default. If your application runs within a corporate network with strict firewall restrictions, Gateway Mode is the best choice since it uses the standard HTTPS port and a single endpoint.
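If you do want Direct Mode, a minimal sketch of passing a ConnectionPolicy explicitly (endpointUrl and _primaryKey as in the question):
```
using System;
using Microsoft.Azure.Documents.Client;

// Overrides the Gateway default with Direct connection mode over TCP.
var client = new DocumentClient(
    new Uri(endpointUrl),
    _primaryKey,
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct,
        ConnectionProtocol = Protocol.Tcp
    });
```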

DocumentDB Emulator Remote Connection

Do we have any way to connect to the DocumentDB Emulator from a remote system?
Can we create stored procedures, triggers, user-defined functions, etc. in the DocumentDB Emulator?
The Emulator is meant for local dev scenarios. Since it runs exposing a local port, you probably could (never tried, this is purely theoretical) work around the firewall and expose it, then connect from another system using your external IP and the exposed port.
There is also the local SSL certificate that you must deal with (that is probably the biggest issue), though you could try the TCP connection setting; you might want to check this thread about which ports need to be opened.
Also, the Emulator does not have the entire feature set that the live service does:
The DocumentDB Emulator supports only a single fixed account and a well-known master key. Key regeneration is not possible in the DocumentDB Emulator.
The DocumentDB Emulator is not a scalable service and will not support a large number of collections.
The DocumentDB Emulator does not simulate different DocumentDB consistency levels.
The DocumentDB Emulator does not simulate multi-region replication.
The DocumentDB Emulator does not support the service quota overrides that are available in the Azure DocumentDB service (e.g. document size limits, increased partitioned collection storage).
As your copy of the DocumentDB Emulator might not be up to date with the most recent changes to the Azure DocumentDB service, please use the DocumentDB capacity planner to accurately estimate the production throughput (RUs) needs of your application.
So, you are probably better off installing the emulator on the other system via the installer or Chocolatey and avoid all the problems.
UPDATE: My attempted solution below doesn't work: connection timeout at 192.168.0.101:8881 using the Node.js DocumentDB SDK. I think because of SSL. :/ Sorry. Leaving this "answer" as documentation of what doesn't work, in case anyone knows how to bypass the DocumentDB Emulator's SSL.
I am trying to connect DocumentDB Emulator across my local network. (I dev on a virtual machine)
I am trying to do a port forward to local port 8081, which the DocumentDB Emulator listens on. In Command Prompt (Run as Administrator):
```
netsh interface portproxy add v4tov4 listenaddress=192.168.0.101 listenport=8080 connectport=8081 connectaddress=127.0.0.1
```
192.168.0.101 is the network address of the PC.
Now I'm able to navigate to:
https://192.168.0.101:8080/_explorer/index.html and see the data explorer. Optimistic I can get this working for dev, with SSL turned off?
I also tried to use the node.js http-proxy, but couldn't get it working with self-signed certificates. :(
Update: I actually got http-proxy working, but it only works if you start the servers in a specific order:

1. Start the API server.
2. Start the proxy server (on the Windows box) with secure: true.
3. Make a failed connection.
4. Change the proxy server (on the Windows box) to secure: false and restart.

Now it's working... but it's useless for dev, because if you restart the API server after a code change, the connection fails again.
Sample Node.js Proxy to be run on Windows box:
```
var fs = require('fs'),
    httpProxy = require('http-proxy');

// Create the proxy server listening on port 8881, terminating SSL with a
// valid certificate and forwarding to the local DocumentDB Emulator.
httpProxy.createServer({
  ssl: {
    key: fs.readFileSync('valid-ssl-key.pem', 'utf8'),
    cert: fs.readFileSync('valid-ssl-cert.pem', 'utf8')
  },
  target: 'https://localhost:8081',
  secure: true // Set to false to accept the emulator's self-signed certificate.
}).listen(8881);
```
You just need to start the DocumentDB Emulator with additional parameters:
```
start "" "c:\Program Files\Azure Cosmos DB Emulator\CosmosDB.Emulator.exe" /AllowNetworkAccess /NoFirewall /Key=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
```
Check out the emulator's Docker start script for more details: https://github.com/Azure/azure-cosmos-db-emulator-docker/blob/master/package_scripts/startemu.cmd
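Once the emulator is started with /AllowNetworkAccess, connecting from another machine is roughly this sketch; 192.168.0.101 stands in for the emulator host's LAN address, the key is the fixed well-known master key from the command above, and the emulator's SSL certificate still has to be trusted (or validation relaxed) on the client:
```
using System;
using Microsoft.Azure.Documents.Client;

var client = new DocumentClient(
    new Uri("https://192.168.0.101:8081/"),
    "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==");
```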

openldap synchronous vs. asynchronous replication

I've installed two OpenLDAP servers and configured them to replicate via 'syncrepl'. This replication is, however, asynchronous: changes are not immediately written to the other LDAP server. What I would like is synchronous replication, with new or updated entries only committed on the primary LDAP server after they have been written to the second LDAP server.
Does anybody know if this is possible with openLDAP?

Azure Cloud Services vs VMs for Existing Asp.Net website

I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing ASP.NET website that links to a SQL Server database.
The database has CLR user-defined types, hence it can only be hosted in an Azure VM, since Cloud Services don't support said types.
We initially wanted to use a vm for the database and cloud service for the front-end, but then some issues arose:
We use StateServer for storing session state, but Azure doesn't support that. We would need to configure either Table storage, SQL Databases, or a worker role dedicated to state management (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other two options are preferable, but they introduce cost or app-reconfiguration disadvantages.
We use SimpleMembership for user management. We would need to migrate the membership tables from SQL Server on our VM instance to Azure SQL Databases. This is an inconvenience, as we want to keep all our tables in the same database, and splitting the two up may require making some code changes.
We are looking for a quick solution to have this app live as soon as possible, and at manageable cost. We are desperately trying to avoid refactoring our code just to accommodate hosting part of the app in Azure Cloud Services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for SQL Server) and a Cloud Service instance (for the front-end)?
It seems to me every added "background process" in a Cloud Service will require a new worker role. For example, if we wanted to enable SMTP for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR etc, you'll need to run SQL Server in a Virtual Machine.
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless - very easy to scale out/in without worrying about OS setup. And app setup is done through startup scripts upon bootup. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However: If you have any type of complex installations to perform that take a while (or manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out, and then create a master image from that VM. You'll still have OS and app maintenance issues to contend with, just as you would in an on-premises environment.
Let me correct you on your 3rd bullet regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VMs with some scaffolding code for startup and process monitoring. You don't need a separate role for each background process. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grained level.
Some things to consider...
If you want to be cheap, you can have your web/worker role share the same code on a single machine by adding a RoleEntryPoint. Here is a post that shows how to do what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
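The shape of that idea is roughly the following sketch: background work hosted inside the web role itself rather than in a separate worker role. The mail-sending step is a placeholder, not the linked post's actual code.
```
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // Background loop running alongside the web role's IIS workload.
        while (true)
        {
            // e.g. poll a queue and send any pending e-mail here.
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}
```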
Session management is painfully slow in SQL Azure DB; I would use the Azure Cache if you can, as it is fast.
SQL Server on VMs is going to cause problems for you, because you will also need to create a virtual network between it and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM, they communicate over the PUBLIC LOAD BALANCER, causing a potential security concern and network latency. So, first you need to set up a virtual network between them (that is an extra cost), then you also need to host a DNS server to address the SQL Server VM. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE speaking about cross Cloud Service communication (VM->web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013) Azure VMs and Worker/Web Roles are deployed into DIFFERENT "Cloud Services". Therefore communication between them needs to be secured via a Virtual Network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That can be encrypted, but no sane architect/enterprise governance/security governance would allow this scenario in a production environment.
