I'm running a cluster where the manager (publisher, store, key manager, etc.) is on one node and the gateway workers are on other nodes. I'm using the SVN deployment synchronizer to sync between them, and APIs published on the manager are indeed added properly to the gateway.
I also use tiers.xml to set up a rate limit for unauthenticated APIs. This is done by editing the registry resource /_system/governance/apimgt/applicationdata/tiers.xml on the manager node.
<throttle:ID throttle:type="ROLE">Unauthenticated</throttle:ID>
<wsp:Policy>
<throttle:Control>
<wsp:Policy>
<throttle:MaximumCount>1000</throttle:MaximumCount>
<throttle:UnitTime>60000</throttle:UnitTime>
</wsp:Policy>
</throttle:Control>
</wsp:Policy>
However, changes made to this tiers.xml are not propagated to the worker nodes, so if I change the maximum count in tiers.xml on the manager, the new rate limit does not take effect on the workers.
Any idea how to propagate tiers.xml changes to the worker nodes?
tiers.xml, being a registry resource, is not synchronized by SVN. Instead, the registry partitions are shared across nodes by mounting them to a common database, as explained in the following document: https://docs.wso2.com/display/shared/Config+and+Governance+Partitions+in+a+Remote+Registry
Regarding database setup, instructions can be found here: https://docs.wso2.com/display/CLUSTER44x/Setting+up+the+Database
This information is valid for Carbon 4.4.x / APIM 1.10. My understanding is that there are changes/improvements in APIM 2.x, but I have not upgraded to release 2 yet.
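As a rough sketch, the governance-partition mount described above looks like this in registry.xml on each node (the datasource name and URL here are assumptions, not taken from your setup):

```xml
<dbConfig name="govregistry">
    <dataSource>jdbc/WSO2GovRegDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
    <id>gov</id>
    <dbConfig>govregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>

<mount path="/_system/governance" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```

With a mount like this on every node, tiers.xml lives in the shared database, so changes made on the manager become visible to the workers (subject to registry cache timeouts).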
I'm using Airflow's EcsOperator, ECS tasks writing to Cloudwatch.
Sometimes Airflow log fetcher collects logs from CloudWatch and sometimes does not.
On the CloudWatch console, I always see the logs.
On tasks that take a long time, I usually see the log or at least part of it.
Has anyone had the same issue with ECSOperator?
First, ECSOperator is deprecated and was removed in provider version 5.0.0.
You should switch to EcsRunTaskOperator.
In EcsRunTaskOperator there is awslogs_fetch_interval, which controls the interval at which logs are fetched from ECS. The default is 30 seconds. If you want more frequent polls, set the parameter accordingly.
You didn't mention which provider version you are on, but this part of the code was refactored in version 5.0.0 (PR), so upgrading the Amazon provider might also resolve your issue.
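A minimal sketch of the operator arguments, assuming the Amazon provider >= 5.0.0; the cluster, task definition, and log-group values are placeholders, and only awslogs_fetch_interval is the point here:

```python
from datetime import timedelta

# Keyword arguments you would pass to EcsRunTaskOperator in a DAG.
# All names/values below except awslogs_fetch_interval are placeholders.
ecs_task_kwargs = {
    "task_id": "run_my_ecs_task",
    "cluster": "my-cluster",                  # placeholder
    "task_definition": "my-task-def",         # placeholder
    "launch_type": "FARGATE",
    "awslogs_group": "/ecs/my-task",          # placeholder
    "awslogs_stream_prefix": "ecs",           # placeholder
    "awslogs_fetch_interval": timedelta(seconds=5),  # default is 30 s
}
# In a DAG: EcsRunTaskOperator(**ecs_task_kwargs)
```

Lowering the interval makes the Airflow log view catch up with CloudWatch sooner, at the cost of more frequent CloudWatch API calls.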
I'm trying to allow my search indexer to connect to a cosmosdb behind a vnet. When adding a shared private access, the provisioning state is set to failed, without giving me an explanation. I have a private endpoint on the CosmosDB setup already. How do I make this work?
I had the same issue occur on the same day as you reported it.
I had set up this connection via an Azure Pipeline the day before, but suddenly the same pipeline stopped working.
I raised it as an issue with MS; it was quickly reproduced by first-line support and passed on to the escalation engineers, who confirmed there was a recent change in the Fluent SDK used: when an ARM deployment is initiated for Shared Private Link resources, both the template link and the template ID end up being specified (incorrectly/by accident). As a result, customers creating SPL resources are failing to do so with a 500.
I am told a worldwide fix is being rolled out, to be completed on Monday, Pacific time.
I wanted to resize the RAM and CPU of my machine, so I stopped the VM instance, and when I tried to start it again I got an error:
The zone 'projects/freesarkarijobalerts/zones/asia-south1-a' does not
have enough resources available to fulfill the request. Try a
different zone, or try again later.
Here you can see the screenshot.
I tried to start the VM instance again today, but the result was the same and I got the error message again:
The zone 'projects/freesarkarijobalerts/zones/asia-south1-a' does not
have enough resources available to fulfill the request. Try a
different zone, or try again later.
Then I tried to move my instance to a different zone, but I got an error message:
sarkarijobalerts123@cloudshell:~ (freesarkarijobalerts)$ gcloud compute instances move wordpress-2-vm --zone=asia-south1-a --destination-zone=asia-south1-b
Moving gce instance wordpress-2-vm...failed.
ERROR: (gcloud.compute.instances.move) Instance cannot be moved while in state: TERMINATED
My website has been DOWN for a couple of days, please help me.
The standard procedure is to create a snapshot of the stopped VM instance's disk [1] and then create a new instance from it in another zone [2].
[1] https://cloud.google.com/compute/docs/disks/create-snapshots
[2] https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots#restore_a_snapshot_of_a_persistent_disk_to_a_new_disk
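Those two steps could look roughly like this with gcloud (the snapshot, disk, and new-instance names are placeholders, and asia-south1-b stands in for any zone with capacity):

```shell
# 1. Snapshot the stopped instance's boot disk.
gcloud compute disks snapshot wordpress-2-vm \
    --snapshot-names=wordpress-2-snap --zone=asia-south1-a

# 2. Create a new disk from the snapshot in another zone.
gcloud compute disks create wordpress-2-disk-b \
    --source-snapshot=wordpress-2-snap --zone=asia-south1-b

# 3. Boot a new instance from that disk.
gcloud compute instances create wordpress-2-vm-b \
    --disk=name=wordpress-2-disk-b,boot=yes --zone=asia-south1-b
```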
Let's have a look at the cause of this issue:
When you stop an instance it releases some resources like vCPU and memory.
When you start an instance it requests those resources back, and if there aren't enough resources available in the zone you'll get an error message:
Error: The zone 'projects/freesarkarijobalerts/zones/asia-south1-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
More information is available in the documentation:
If you receive a resource error (such as ZONE_RESOURCE_POOL_EXHAUSTED
or ZONE_RESOURCE_POOL_EXHAUSTED_WITH_DETAILS) when requesting new
resources, it means that the zone cannot currently accommodate your
request. This error is due to Compute Engine resource obtainability,
and is not due to your Compute Engine quota.
Resource availability depends on user demand and is therefore dynamic.
There are a few ways to solve your issue:
Move your instance to another zone by following the instructions above.
Wait for a while and try to start your VM instance again.
Reserve resources for your VM by following the documentation, to avoid such issues in the future:
Create reservations for Virtual Machine (VM) instances in a specific
zone, using custom or predefined machine types, with or without
additional GPUs or local SSDs, to ensure resources are available for
your workloads when you need them. After you create a reservation, you
begin paying for the reserved resources immediately, and they remain
available for your project to use indefinitely, until the reservation
is deleted.
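As a sketch, such a reservation could be created like this (the reservation name, machine type, and count here are assumptions; match them to your VM):

```shell
# Reserve capacity for one VM in the zone, so future starts cannot fail
# with ZONE_RESOURCE_POOL_EXHAUSTED. Billing starts immediately.
gcloud compute reservations create wordpress-reservation \
    --zone=asia-south1-a \
    --vm-count=1 \
    --machine-type=n1-standard-2
```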
I'm working with WSO2 API Manager 2.6.0 in a distributed environment. When I make changes from the Publisher, they are not reflected in the Store right away. Any suggestions on how to make changes appear as soon as I publish?
Try disabling the registry cache. Open registry.xml and set the caches to false:
<currentDBConfig>wso2registry</currentDBConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache> <!-- set this to false -->
<registryRoot>/</registryRoot>
<remoteInstance url="https://localhost">
<id>gov</id>
<cacheId>wso2carbon#jdbc:h2:./repository/database/WSO2SHARED_DB;DB_CLOSE_ON_EXIT=FALSE</cacheId>
<dbConfig>govregistry</dbConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache> <!-- set this to false -->
<registryRoot>/</registryRoot>
</remoteInstance>
Please note this change is not recommended for production.
When running Corda nodes for testing or demo purposes, I often find a need to delete all the node's data and start it again.
I know I can do this by:
Shutting down the node process
Deleting the node's persistence.mv.db file and artemis folder
Starting the node again
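The middle step can be sketched as shell commands, run from the node's base directory with the node process already stopped (the restart command is the usual corda.jar invocation, adjust to your setup):

```shell
# Delete the node's persisted state (H2 database file and Artemis broker folder).
rm -f persistence.mv.db
rm -rf artemis
# Then restart the node, e.g.: java -jar corda.jar
```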
However, I would like to know if it is possible to delete the node's data without restarting the node, as this would be much faster.
It is not currently possible to delete the node's data without restarting the node.
If you are "resetting" the nodes for testing purposes, you should make sure that you are using the Corda testing APIs to allow your contracts and flows to be tested without actually starting a node. See the testing API docs here: https://docs.corda.net/api-testing.html.
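If the goal is just a clean slate between test runs, the MockNetwork API from those docs keeps everything in memory, so there is no state to delete at all. A rough sketch, assuming the Corda 3.x testing DSL (the CorDapp package name is a placeholder):

```kotlin
import net.corda.testing.node.MockNetwork

// Spin up an in-memory network; nothing is written to disk.
val network = MockNetwork(listOf("com.example.flows"))  // placeholder package
val nodeA = network.createPartyNode()
val nodeB = network.createPartyNode()
// ... start flows on nodeA/nodeB and call network.runNetwork() ...
network.stopNodes()  // all state is discarded; next test starts clean
```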
One alternative to restarting the nodes would be to run the demo environment in a VMware Workstation VM, take a snapshot while the nodes are still "clean", run the demo, and then reload the snapshot.