I am trying to deploy two analytics server nodes that point to one database. I configured the local shard configuration file so that the 1st node indexes shards 0, 1, and 2, and the 2nd node indexes shards 3, 4, and 5. I set indexReplicationFactor to 1. My question is: will the above configuration keep indexing data consistently? Another concern: does an indexReplicationFactor of 1 help performance, given my deployment plan?
This is my first time setting up an OpenStack instance on Ubuntu, and I'm having some difficulty setting up additional compute nodes. I've set up a controller node following the devstack instructions here with the stable/xena release, and I'm trying to add an additional compute node, so I've gone through the setup here, but I have a few questions.
The additional compute node does not show up as a hypervisor (although it shows up under the compute service list); does someone have a resource on how to add the compute node as a hypervisor?
I ran the discover_hosts tool within the devstack repo so that the compute node gets picked up by the DB, but what transport URL and database connection should the additional compute node use? Do I copy the transport URL and database connection URL used by the controller node?
Does OpenStack use the resources (storage, RAM, CPUs) of the additional compute node to create new VMs as well?
If someone could provide advice on how to go about setting up this compute node that would be greatly appreciated.
Thanks in advance!
Note: In the comments below I mention some steps I tried, so I'll just sum them up here with their results.
nova-manage cell_v2 discover_hosts --verbose gave this output:
Found 3 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': <random_string1>
Found 0 unmapped computes in cell: <random_string1>
Getting computes from cell: <random_string2>
Checking host mapping for compute host 'vmname': <random_string3>
Found 0 unmapped computes in cell: <random_string2>
So the command runs, but I think there's an issue with how things are set up in the DB, since the compute node doesn't seem to be linked to a cell.
nova-manage cell_v2 list_hosts gives 2 hosts, the controller and the VM I am trying to add, but the cell name for the compute node I'm trying to add is None.
nova-manage cell_v2 list_cells gives 3 cells. One has no name value, but its cell UUID matches <random_string2> from the output above; its transport URL has no /nova_cell1 ending, and its DB connection string is the same as cell0's.
So I think there is an issue with how the compute node is being added to the DB?
1. Try running nova-manage cell_v2 discover_hosts on the controller node to discover the hypervisor.
2. You should not need to do anything else (no separate transport URL or database connection) if step 1 works.
3. Yes, if step 1 works, OpenStack will use the additional compute node's resources for new VMs as well. A verification sketch follows below.
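For what it's worth, a minimal verification sequence on the controller might look like the sketch below; the host name 'vmname' and the cell UUID are placeholders taken from the question:

```bash
# Re-run discovery, then confirm the compute host is mapped to cell1
# (not None) and appears as a hypervisor.
nova-manage cell_v2 discover_hosts --verbose
nova-manage cell_v2 list_hosts        # cell name for 'vmname' should not be None
openstack compute service list        # nova-compute on the new node should be up
openstack hypervisor list             # the new node should now appear here

# If the host got mapped into the wrong cell, remove the stale mapping
# and re-run discovery:
nova-manage cell_v2 delete_host --cell_uuid <cell_uuid> --host vmname
nova-manage cell_v2 discover_hosts --verbose
```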
So I have set up my Cosmos DB account in two regions (say West US and South Central US). I also have app services running in these two regions, connecting to Cosmos. For the app service running in each region I have configured a preferred region list: for the app service in the WUS region the list is [WUS, SCUS], and for the app service in the SCUS region it is [SCUS, WUS].
I want to verify that this configuration is working and that my data is returned from the Cosmos regions in the order I specified. For example, if accessed from the WUS app service, verify that the region chosen to execute the query was WUS, and vice versa.
Is there any way to verify this?
NOTE: I am using spring-data-cosmosdb-2.1.2 to connect to Cosmos.
Not sure if you can get that information from the response in code, but you can make use of Cosmos DB metrics in the Azure portal. There you can filter the metrics by region.
So, attempt a request through the app from region 1 and then verify in the portal that the expected Cosmos region served it. Test the same way for region 2.
Yes, just dump the diagnostics returned in the response object and look for the FQDN of the endpoint. It will have a regional subdomain prepended to the URI.
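The asker is on spring-data-cosmosdb-2.1.2, which wraps an older SDK; as a minimal sketch, assuming you can test with the newer azure-cosmos v4 Java SDK (account, key, database, container, and item names below are all placeholders), the served region can be read from the response diagnostics:

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.PartitionKey;

import java.util.Arrays;

public class RegionCheck {
    // Minimal document type; the id field must match the stored item's id.
    public static class MyDoc {
        public String id;
    }

    public static void main(String[] args) {
        // Endpoint and key are placeholders; preferredRegions mirrors the
        // per-region ordering described in the question (here, the WUS app).
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/")
                .key("<key>")
                .preferredRegions(Arrays.asList("West US", "South Central US"))
                .buildClient();

        CosmosItemResponse<MyDoc> response = client
                .getDatabase("mydb")
                .getContainer("mycontainer")
                .readItem("item-1", new PartitionKey("pk-1"), MyDoc.class);

        // The diagnostics embed the endpoint FQDN that served the request;
        // a regional subdomain (e.g. "...-westus.documents.azure.com")
        // reveals which region executed the read.
        System.out.println(response.getDiagnostics());

        client.close();
    }
}
```

Because the client is handed the same preferred-region ordering the question configures, the first healthy region in that list should show up in the diagnostics for reads.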
We have downloaded 4 pre-configured Corda nodes from https://testnet.corda.network.
These nodes have X.500 distinguished names containing subparts - Organization, Location and Country.
Question 1: Can we replace the values in the above subparts of the X.500 distinguished name with our own values?
Question 2: Can we add an "organizationUnit" to the above X.500 name?
The Testnet was built for the community to experience the Corda network.
It is a pathway to the Corda Network (TCN), which is run by the Corda Foundation, an independent council.
I will log your requests with the testnet team as a feature enhancement request, but they will probably need more information from you and your team to make any further changes to the testnet.
You can reach me at http://slack.corda.net. We can resume the conversation there.
Yes, every node specifies its own X500 name. This is done in node.conf using the myLegalName field: https://docs.corda.net/corda-configuration-file.html#configuration-file-fields. This field is used during initial registration, on the node's first startup, to register with the network's identity service: https://docs.corda.r3.com/node-commandline.html#sub-commands
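For illustration, a node.conf fragment might look like the sketch below; the organisation, unit, location, and address values are placeholders, and Corda's X500 name format does accept an optional OU (organisation unit) attribute, which speaks to Question 2 on networks whose identity rules allow it:

```
// Hypothetical node.conf fragment; values are placeholders, not
// names guaranteed to be accepted on any particular network.
myLegalName = "O=MegaCorp, OU=Trading, L=London, C=GB"
p2pAddress = "my-node.example.com:10002"
```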
The X500 name of a node is extremely important, as it represents the identity that the node uses when signing transactions. Because of this, the X500 identity rules vary by network.
Testnet
The onboarding tool pre-generates an X500 name for you based on your marketplace account. Your account is automatically built into a generated node.conf, which install.sh downloads for your node.
In Testnet there are no restrictions on identity, and all registration requests are automatically approved. Therefore anyone can specify any identity they would like, which is why Testnet must never be used for real financial transactions.
UAT
X500 names must follow a specific set of rules to be approved. Guidelines on how to select an X500 name are here: https://corda.network/participation/distinguishedname.html
Examples of real-world identity selection are here: https://corda.network/participation/legalentity.html
In UAT, registration requests are all manually approved by the Corda Network Foundation. Follow the steps outlined here to onboard your node: https://uat.network.r3.com/pages/joining/joining.html
The Corda Network (TCN)
The production Corda Network follows the same guidelines for X500 names.
The onboarding process for nodes is also the same, with different URLs: https://corda.network/participation/index.html
I'm setting up WSO2 APIM HA in a distributed environment, and I have some challenges using this documentation.
Documentation states: Note: When configuring clustering, ignore the WSO2_CARBON_DB data source configuration.
My question is: can I really not use the CARBON DB instead of the UM and REG databases in HA?
The documentation mentions configuring the following:
AM DB - in the Publisher, Store, and Key Manager nodes
UM DB - in the Publisher, Store, and Key Manager nodes
REG DB - in the API Publisher and Store nodes. (single tenant)
MB DB - in the Traffic Manager nodes (each TM has its own DB)
My question is: can I completely fill one master-datasources.xml file and overwrite it on all components, so that I would not have to edit it on each server? (only editing the second TM's datasource to point to the second MB DB)
Yes, it is fine to completely fill only one master-datasources.xml file and overwrite it on all the other components, except for the WSO2_MB_STORE_DB datasource (the MB DB).
But the MB DB (WSO2_MB_STORE_DB) has to be separate for each node, as this DB is used for traffic as well as internally by throttling policies, which generate a very high rate of DB transactions.
It will work if you don't keep WSO2_MB_STORE_DB separate, but it will produce a large number of DB transactions, which can slow down your single DB. So it is highly advisable to maintain a separate DB on each node. It will also make debugging in production environments much easier.
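As a sketch of what that looks like in practice, the WSO2_MB_STORE_DB entry in each Traffic Manager's master-datasources.xml might resemble the fragment below; the JNDI name, host, database name, and credentials are placeholders (check your product's defaults), and only the <url> would differ between the two TM nodes:

```xml
<datasource>
    <name>WSO2_MB_STORE_DB</name>
    <description>Per-node MB store datasource (Traffic Manager 1)</description>
    <jndiConfig>
        <name>WSO2MBStoreDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Each Traffic Manager points at its own database; on the
                 second TM node this URL would target mb_store_tm2. -->
            <url>jdbc:mysql://db-host:3306/mb_store_tm1</url>
            <username>wso2user</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <testOnBorrow>true</testOnBorrow>
        </configuration>
    </definition>
</datasource>
```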
I'm trying to deploy MoonMail on AWS. However, I receive this exception from CloudFormation:
Subscriber limit exceeded: Only 10 tables can be created, updated, or deleted simultaneously
Is there another way to deploy without opening a support case and asking them to remove my limit?
This is an AWS limit for APIs: (link)
API-Specific Limits
CreateTable/UpdateTable/DeleteTable
In general, you can have up to 10 CreateTable, UpdateTable, and DeleteTable requests running simultaneously (in any combination). In other words, the total number of tables in the CREATING, UPDATING, or DELETING state cannot exceed 10.
The only exception is when you are creating a table with one or more secondary indexes. You can have up to 5 such requests running at a time; however, if the table or index specifications are complex, DynamoDB might temporarily reduce the number of concurrent requests below 5.
You could try to open a support request with AWS to raise this limit for your account, but I don't feel this is necessary. It seems that you could create the DynamoDB tables a priori, using the AWS CLI or AWS SDK, and use MoonMail with read-only access to those tables. Using the SDK (example), you could create those tables sequentially, without reaching the simultaneous-creation limit, as in the sketch below.
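A minimal sketch with the AWS CLI; the table names and key schema here are placeholders, and the real definitions would come from MoonMail's s-resources-cf.json:

```bash
# Create the tables one at a time so the number of tables in the
# CREATING state never exceeds the limit.
for table in MoonMail-table-1 MoonMail-table-2 MoonMail-table-3; do
  aws dynamodb create-table \
    --table-name "$table" \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
  # Block until this table is ACTIVE before starting the next one.
  aws dynamodb wait table-exists --table-name "$table"
done
```

Because each wait call blocks until the previous table is ACTIVE, at most one table is ever in the CREATING state.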
Another option is to edit the s-resources-cf.json file to include only 10 tables and deploy. After that, add the missing tables and deploy again.
Whatever solution you apply, consider creating an issue in MoonMail's repo, because as it stands now, it does not work on the first try (there are 12 tables in the resources file).