I'm looking for a way to display all the subjects that currently exist on a NATS Streaming Server during operation. I've looked thoroughly through the documentation but haven't found any mention of this so far.
Is there a way to find a list of all available subjects that can be subscribed to on the server?
I have just merged a PR that adds Monitoring to the NATS Streaming server. If you start the server with a monitoring port, say -m 8222, then you can get the list of channels by pointing to http://localhost:8222/streaming/channelsz
This would return:
{
  "cluster_id": "test-cluster",
  "server_id": "d1dzRa72OpjGRROXKJtfSV",
  "now": "2017-06-08T18:14:54.206006151+02:00",
  "offset": 0,
  "limit": 1024,
  "count": 2,
  "total": 2,
  "names": [
    "bar",
    "foo"
  ]
}
For more information, check out https://github.com/nats-io/nats-streaming-server#monitoring
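If you want the channel list programmatically rather than in a browser, a minimal sketch along these lines should work (assuming the server was started with -m 8222 on localhost; adjust host and port to your setup):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListChannels {
    public static void main(String[] args) throws Exception {
        // Query the NATS Streaming monitoring endpoint exposed with "-m 8222".
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8222/streaming/channelsz"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // The body is the JSON shown above; parse the "names" array with your preferred JSON library.
        System.out.println(response.body());
    }
}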
I am creating a new knowledge base, connecting it to an already existing Azure Cognitive Service. But I am getting the error "No Endpoint keys found." when I click "Create KB".
See capture of the error:
My QnAMaker cognitive service has the endpoint
It seems that the endpoint keys can sometimes only be found if the resource group holding all resources for the QnA Maker service (App Service, Application Insights, Search Service and the App Service Plan) is hosted in the same region as the QnA Maker service itself.
Since the QnA Maker service can only be hosted in West US (as far as I know and was able to find: https://westus.dev.cognitive.microsoft.com/docs/services?page=2), the current workaround for this case is to create a new QnA Maker service with the resource group hosted in the West US region. Then the creation of a knowledge base should work as always.
PS: it seems this issue was already reported, but the problem still occurs for me from time to time (https://github.com/OfficeDev/microsoft-teams-faqplusplus-app/issues/71)
My resources and resource group were all in West US but I still got the same "No Endpoint keys found." error.
Eventually I figured out that the issue was related to my subscription levels. Make sure that they are all the same for all your created resources.
If you are using the deploy.ps1 script in the Virtual Assistant VS template, open the file at .\Deployment\Resources\template.json
That is a template for the resource creation. You can look through it to see exactly which resources will be created and what parameters are sent to Azure for each of the resources.
I am using a My Visual Studio subscription, so it is registered as a free tier in Azure. What worked for me was updating all the "Standard" SKUs to free tiers in the parameters JSON array. I didn't update anything lower down for fear that it might interfere with the creation process too much.
An example is the appServicePlanSku parameter. It was set to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Standard",
"name": "S1"
}
}
I updated it to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Free",
"name": "F0"
}
}
I made multiple of these updates in the parameters array. After those changes, deleting the resource group for the 100th time and running the deployment script again, it worked.
I am using SCDF with Spring Boot 2.x metrics and the SCDF Metrics Collector to collect metrics from my Spring Boot app. I really do not understand the logic of the collector regarding the aggregateMetrics data.
When I fetch the list of metrics collected for my stream, I only have the ones starting with integration.channel.* and thus I only have the mean value. I tried everything to get other metrics to appear, like the ones exposed by the /actuator/prometheus endpoint.
I think I have misunderstood the way the metrics are aggregated. I noticed that SCDF automatically adds some properties to the metrics, and I would like to apply these properties to all my exposed metrics in order to collect them all.
{
  "_embedded": {
    "streamMetricsList": [
      {
        "name": "poc-stream",
        "applications": [
          {
            "name": "poc-message-sink",
            "instances": [
              {
                "guid": "poc-stream-poc-message-sink-v7-75b8f4dcff-29fst",
                "key": "poc-stream.poc-message-sink.poc-stream-poc-message-sink-v7-75b8f4dcff-29fst",
                "properties": {
                  "spring.cloud.dataflow.stream.app.label": "poc-message-sink",
                  "spring.application.name": "poc-message-sink",
                  "spring.cloud.dataflow.stream.name": "poc-stream",
                  "spring.cloud.dataflow.stream.app.type": "sink",
                  "spring.cloud.application.guid": "poc-stream-poc-message-sink-v7-75b8f4dcff-29fst",
                  "spring.cloud.application.group": "poc-stream",
                  "spring.cloud.dataflow.stream.metrics.version": "2.0"
                },
                "metrics": [
                  {
                    "name": "integration.channel.input.send.mean",
                    "value": 0,
                    "timestamp": "2018-10-25T16:34:39.889Z"
                  }
                ]
              }
            ],
            "aggregateMetrics": [
              {
                "name": "integration.channel.input.send.mean",
                "value": 0,
                "timestamp": "2018-10-25T16:34:52.894Z"
              }
            ]
          },
          ...
I have some Micrometer counters whose values I want to get with the Metrics Collector. I know they are properly exposed because I have set all the properties right, and I even went into the launched Docker container to check the endpoints.
I have read that
When deploying applications, Data Flow sets the
spring.cloud.stream.metrics.properties property, as shown in the
following example:
spring.cloud.stream.metrics.properties=spring.application.name,spring.application.index,spring.cloud.application.*,spring.cloud.dataflow.*
The values of these keys are used as the tags to perform aggregation.
In the case of 2.x applications, these key-values map directly onto
tags in the Micrometer library. The property
spring.cloud.application.guid can be used to track back to the
specific application instance that generated the metric.
Does that mean that I need to specifically add these properties myself to the tags of all my metrics? I know I can do that by having a MeterRegistryCustomizer bean returning the following: registry -> registry.config().commonTags(tags), with tags being the properties that SCDF normally sets itself for the integration metrics. Or does SCDF add these properties to ALL metrics?
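For illustration, the kind of customizer I mean would look roughly like this (a minimal sketch; the tag values are placeholders mirroring the SCDF properties listed above):
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsTagsConfig {

    // Adds the SCDF-style properties as common tags so every Micrometer meter
    // (not just integration.channel.*) carries them. The values are placeholders.
    @Bean
    public MeterRegistryCustomizer<MeterRegistry> scdfCommonTags() {
        return registry -> registry.config().commonTags(
                "spring.cloud.dataflow.stream.name", "poc-stream",
                "spring.cloud.dataflow.stream.app.label", "poc-message-sink",
                "spring.cloud.dataflow.stream.app.type", "sink",
                "spring.cloud.application.group", "poc-stream");
    }
}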
Thanks !
While your observation about the MetricsCollector is "generally" correct, I believe there is an alternative (and perhaps cleaner) way to achieve what you've been trying to do by using the SCDF Micrometer metrics collection approach. I will try to explain both approaches below.
As the MetricsCollector predates the Micrometer framework, the two implement quite different metrics processing flows. The primary goal for the Metrics Collector 2.x was to ensure backward compatibility with Spring Boot 1.x metrics. The MetricsCollector 2.x allows mixing metrics coming from both Spring Boot 1.x (pre-Micrometer) and Spring Boot 2.x (i.e. Micrometer) app starters. The consequence of this decision is that the Collector 2.x supports only the common denominator of metrics available in Boot 1.x and 2.x. This requirement is enforced by pre-filtering only the integration.channel.* metrics. At the moment you would not be able to add more metrics without modifying the Metrics Collector code. If you think that supporting additional Micrometer metrics is more important than having backward compatibility with Boot 1.x, then please open a new issue in the Metrics Collector project.
Still, I believe that the approach explained below is better suited for your case!
Unlike the MetricsCollector approach, the "pure" Micrometer metrics are sent directly to the selected metrics registry (such as Prometheus, InfluxDB, Atlas and so on). As illustrated in the sample, the collected metrics can be analyzed and visualized with tools such as Grafana.
Follow the SCDF Metrics samples to set up your metrics collection via InfluxDB (or Prometheus) and Grafana. This would allow you to explore any out-of-the-box or custom Micrometer metrics. The downside of this approach (for the moment) is that you will not be able to visualize those metrics in the SCDF UI's pipeline. Still, if you find it important to have such visualization inside the SCDF UI, please open a new issue in the SCDF project (I have WIP for the Atlas Micrometer registry).
I hope that this sheds some light on the alternative approaches. We would be very interested to hear your feedback.
Cheers!
I have a document called "chat"
"Chat": [
{
"User": {},
"Message": "i have a question",
"Time": "06:55 PM"
},
{
"User": {},
"Message": "will you be able to ",
"Time": "06:25 PM"
},
{
"User": {},
"Message": "ok i will do that",
"Time": "07:01 PM"
}
Every time a new chat message arrives I should be able to simply append to this array.
The MongoDB API (aggregation pipeline, preview) allows me to use operators like $push and $addToSet for that.
If I use the SQL API, I will have to pull the entire document, modify it, and replace the whole document every time.
Other Considerations:
This array can grow rapidly.
This "chat" document might also be nested into other documents as well.
My Question
Does this mean that the MongoDB API is better suited for this, and that the SQL API will take a performance hit in this scenario?
Does this mean that the MongoDB API is better suited for this, and that the SQL API will take a performance hit in this scenario?
It's hard to say which database is the best choice.
Yes, as documented, the Cosmos DB Mongo API supports $push and $addToSet, which is more efficient for appends. However, the Cosmos Mongo API only supports a subset of the MongoDB features and translates requests into the Cosmos DB SQL equivalent, so it may have some different behaviours and results. But the onus is on the Cosmos Mongo API to improve its emulation of MongoDB.
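For example, with the MongoDB Java driver pointed at the Cosmos Mongo API endpoint, an append can be a single in-place update (a sketch; the connection string, database, collection and _id are placeholders):
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.push;

public class AppendChatMessage {
    public static void main(String[] args) {
        // Placeholder connection string and names; replace with your Cosmos Mongo API values.
        MongoCollection<Document> chats = MongoClients
                .create("<cosmos-mongo-connection-string>")
                .getDatabase("chatdb")
                .getCollection("chats");

        // $push appends one element to the embedded "Chat" array without
        // reading and rewriting the whole document on the client side.
        chats.updateOne(eq("_id", "conversation-1"),
                push("Chat", new Document("User", new Document())
                        .append("Message", "a new message")
                        .append("Time", "07:15 PM")));
    }
}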
When it comes to the Cosmos DB SQL API, partial update is not supported so far, but it is on the roadmap. You could submit feedback here. Currently, you need to update the entire document. Of course, you could use a stored procedure to do this job and take the pressure off your client side.
The next thing I want to mention, which is the most important, is the limitation raised by @David. The document size limit is 2 MB in the SQL API and 4 MB in the Mongo API: What is the size limit of a cosmosdb item?. Since your chat data keeps growing, you need to consider splitting it across documents. Then give the documents a partition key such as "type": "chatdata" to classify them.
I am trying to copy data from Cosmos DB to Data Lake Store with Data Factory.
However, the performance is poor, about 100 KB/s, and the data volume is 100+ GB and keeps increasing. It will take 10+ days to finish, which is not acceptable.
The Microsoft documentation at https://learn.microsoft.com/en-us/azure/data-factory/data-factory-copy-activity-performance mentions that the max speed from Cosmos DB to Data Lake Store is 1 MB/s. Even at that rate, the performance would still be too slow for us.
The Cosmos DB migration tool doesn't work: no data is exported and there is no error log.
Data Lake Analytics U-SQL can extract from external sources, but currently only Azure SQL DB/DW and SQL Server are supported, not Cosmos DB.
How/what tools can improve the copy performance?
Based on your description, I suggest you try setting a higher cloudDataMovementUnits value to improve the performance.
A cloud data movement unit (DMU) is a measure that represents the power (a combination of CPU, memory, and network resource allocation) of a single unit in Data Factory. A DMU might be used in a cloud-to-cloud copy operation, but not in a hybrid copy.
By default, Data Factory uses a single cloud DMU to perform a single Copy Activity run. To override this default, specify a value for the cloudDataMovementUnits property as follows. For information about the level of performance gain you might get when you configure more units for a specific copy source and sink, see the performance reference.
Notice: Setting of 8 and above currently works only when you copy multiple files from Blob storage/Data Lake Store/Amazon S3/cloud FTP/cloud SFTP to Blob storage/Data Lake Store/Azure SQL Database.
Since Cosmos DB is not one of those sources, the max DMU you could set here is 4.
Besides, if this speed doesn't match your requirement, I suggest you write your own logic to copy from DocumentDB to Data Lake.
You could create multiple WebJobs that copy from DocumentDB to Data Lake in parallel. You could split the documents by index range or partition key and have each WebJob copy a different part; in my opinion, this will be faster.
About the DMU, can I use it directly or do I need to apply for it first? Are the WebJobs you mention a .NET activity? Can you give some more details?
As far as I know, you can use the DMU setting directly; just add the cloudDataMovementUnits value in the copy activity JSON as below:
"activities":[
{
"name": "Sample copy activity",
"description": "",
"type": "Copy",
"inputs": [{ "name": "InputDataset" }],
"outputs": [{ "name": "OutputDataset" }],
"typeProperties": {
"source": {
"type": "BlobSource",
},
"sink": {
"type": "AzureDataLakeStoreSink"
},
"cloudDataMovementUnits": 32
}
}
]
A WebJob can run programs or scripts in your Azure App Service web app in three ways: on demand, continuously, or on a schedule.
That means you could write a C# program (or use another language) to copy the data from DocumentDB to Data Lake (all of the copy logic has to be written by yourself).
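As a rough sketch of that parallel-copy idea (not the Data Factory mechanism), each worker copies one partition or ID range; readPartition and writeToDataLake are hypothetical stubs standing in for the DocumentDB and Data Lake Store SDK calls you would implement:
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelExport {

    public static void main(String[] args) throws InterruptedException {
        // Placeholder partition ranges; derive these from your collection's
        // partition keys or ID ranges.
        List<String> partitions = List.of("range-00", "range-01", "range-02", "range-03");

        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());
        for (String partition : partitions) {
            pool.submit(() -> {
                // Hypothetical helpers: read one slice of documents from
                // DocumentDB and append them to a file in Data Lake Store.
                String documents = readPartition(partition);
                writeToDataLake("/export/chat-" + partition + ".json", documents);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // Stub implementations; replace with real SDK calls.
    private static String readPartition(String partition) { return "[]"; }
    private static void writeToDataLake(String path, String content) { }
}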
I have two asp.net core apps which both are deployed via github integration directly to their own respective azure websites. One site has a custom domain and the other doesn't.
When initially configuring the integration, both sites failed with space-related warnings, so I scaled the sites to Basic (1 Small). I don't know why I needed to do this, as both apps are considerably smaller than the 1 GB limit I believe a shared web app has (the two sites on my local HDD are 117 MB and 120 MB respectively).
As a result, I have two sites sharing the same service plan at £41 a month, rather than having one site on Free and the other on Shared at £7 a month (as it needs a custom domain).
If I try to scale down the service plan I get the following error (redacted as expected).
{
  "authorization": null,
  "caller": null,
  "channels": null,
  "claims": {},
  "correlationId": null,
  "description": "Failed to update App Service plan defaultserviceplan: {\"Code\":\"Conflict\",\"Message\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\",\"Target\":null,\"Details\":[{\"Message\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\"},{\"Code\":\"Conflict\"},{\"ErrorEntity\":{\"ExtendedCode\":\"11006\",\"MessageTemplate\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\",\"Parameters\":[],\"InnerErrors\":[],\"Code\":\"Conflict\",\"Message\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\"}}],\"Innererror\":null}",
  "eventDataId": null,
  "eventName": null,
  "eventSource": null,
  "category": null,
  "eventTimestamp": "Wed Jun 21 2017 11:01:25 GMT+0100 (GMT Summer Time)",
  "id": "Failed to update App Service plan_Wed Jun 21 2017 11:01:25 GMT+0100 (GMT Summer Time)",
  "level": "1",
  "operationId": null,
  "operationName": {
    "value": "Failed to update App Service plan",
    "localizedValue": "Failed to update App Service plan"
  },
  "resourceGroupName": null,
  "resourceProviderName": null,
  "resourceType": null,
  "resourceId": null,
  "status": {
    "value": "Error",
    "localizedValue": "Error"
  },
  "subStatus": null,
  "submissionTimestamp": null,
  "subscriptionId": null,
  "properties": {
    "correlationIds": "REDACTED"
  },
  "relatedEvents": []
}
How can I diagnose what is taking up the space, or report this issue?
The first thing to look at is your file-system usage. You can see what App Service thinks you are using by going to the App Service plan and clicking on Filesystem in the left-hand menu.
This will give you an aggregated view of how much space is being used by all apps in the App Service plan.
If this value is > 1 GiB then you won't be able to scale down to Shared (I suspect this is what's causing your issue).
The next step would be to look at the storage used by each of the apps in your App Service plan.
In the Web App UX you should be able to go to "Quota" and see what each app in the App Service plan is using.
If you find an app that is using more space than you think it should, here are a few things to look at:
Logs: if you are logging to the app's filesystem this can use up space quickly depending on the verbosity level.
MySQL In App: if you have enabled this feature, the db is stored as a file on disk, and will use up space as well.
Site extensions installed on your app
You should be able to use Kudu and the debug console to get a good idea of what is using space.