I have a kind events that is stored in different namespaces, depending on the service stage (dev/stage/prod).
I want to add a composite index only for that kind in the dev namespace, but I can't find a way to configure it.
I use this command to create the index:
gcloud datastore indexes create ~/myapp/index.yaml
index.yaml
indexes:
- kind: events
  properties:
  - name: created
    direction: desc
  - name: approved
    direction: desc
Do you know a way to create an index only for one namespace?
Composite indexes are per database, and thus cross namespaces. There is no way to configure a composite index per namespace.
There are some more details in Are datastore indexes same across multiple namespaces?
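To illustrate the point, here is a hedged sketch using the Java client library (the namespace name "dev" and the surrounding class are purely illustrative): the namespace is chosen per query, while the composite index from index.yaml is defined once for the whole database and backs the same query in every namespace.
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.EntityQuery;
import com.google.cloud.datastore.Query;
import com.google.cloud.datastore.QueryResults;
import com.google.cloud.datastore.StructuredQuery.OrderBy;

public class NamespaceQueryExample {
    public static void main(String[] args) {
        Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

        // The namespace is picked at query time; index.yaml has no namespace field,
        // so the same composite index on (created desc, approved desc) serves dev,
        // stage and prod alike.
        EntityQuery query = Query.newEntityQueryBuilder()
                .setNamespace("dev")   // illustrative namespace name
                .setKind("events")
                .setOrderBy(OrderBy.desc("created"), OrderBy.desc("approved"))
                .build();

        QueryResults<Entity> results = datastore.run(query);
        results.forEachRemaining(e -> System.out.println(e.getKey()));
    }
}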
I am using Redis as my data store, with spring-boot-starter-data-redis as the dependency and a CrudRepository for CRUD operations. When I call deleteById it works, but when I call deleteByName (Name is not an id column) it says the query method is not supported. When the data source is Redis and we use spring-boot-starter-data-redis, is it only possible to delete by the id column and not by other columns?
I had the same problem. After a lot of searching, and also checking the following link:
Redis 2.1.2.RELEASE doc reference
I realized that since Redis is a key-value store, the following solution should be used:
// find the matching records first, then delete each one by its id
List<SampleClass> lst = sampleRepository.findAllBySampleFields(... sampleFields);
lst.forEach(item ->
    sampleRepository.deleteById(item.getId())
);
In this solution we look up the records that match our conditions and then delete each of them by its id.
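For completeness, here is a minimal sketch (class, field and repository names are illustrative) of what the entity and repository could look like. Note that in Spring Data Redis a derived finder such as findAllByName only works on fields annotated with @Indexed, because Redis itself has no secondary indexes.
import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.redis.core.RedisHash;
import org.springframework.data.redis.core.index.Indexed;
import org.springframework.data.repository.CrudRepository;

// Illustrative entity: @Indexed tells Spring Data Redis to maintain the secondary
// index it needs to resolve finder methods on the "name" field.
@RedisHash("samples")
class SampleClass {
    @Id
    private String id;
    @Indexed
    private String name;

    public String getId() { return id; }
    public String getName() { return name; }
}

// Illustrative repository: findAllByName works because "name" is indexed; deleting
// by a non-id field still goes through deleteById for each matching record.
interface SampleRepository extends CrudRepository<SampleClass, String> {
    List<SampleClass> findAllByName(String name);
}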
In order to use multi-storage in Scalar DB, I am implementing it with MySQL and DynamoDB Local, but the endpoint override setting for DynamoDB Local does not work.
I have configured the following settings, but are they correct?
## Dynamo DB for the transaction tables
scalar.db.multi_storage.storages.dynamo.storage=dynamo
scalar.db.multi_storage.storages.dynamo.contact_points=ap-northeast-1
scalar.db.multi_storage.storages.dynamo.username=fakeMyKeyId
scalar.db.multi_storage.storages.dynamo.password=fakeMyKeyId
scalar.db.multi_storage.storages.dynamo.contact_port=8000
scalar.db.multi_storage.storages.dynamo.endpoint-override=http://localhost:8000
The format of the storage definition in Multi-storage configuration is as follows:
scalar.db.multi_storage.storages.<storage name>.<property name without the prefix 'scalar.db.'>
For example, if you want to specify the scalar.db.contact_points property for the cassandra storage, you can specify scalar.db.multi_storage.storages.cassandra.contact_points.
In your case, the storage name is dynamo, and you want to specify the scalar.db.dynamo.endpoint-override property, so you need to specify scalar.db.multi_storage.storages.dynamo.dynamo.endpoint-override as follows:
scalar.db.multi_storage.storages.dynamo.dynamo.endpoint-override=http://localhost:8000
Please see the following document for the details:
https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md
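Putting that together with the values from the question, the dynamo storage definition would look something like the following (a sketch built from the question's own values, with only the endpoint override line changed):
## DynamoDB Local for the transaction tables
scalar.db.multi_storage.storages.dynamo.storage=dynamo
scalar.db.multi_storage.storages.dynamo.contact_points=ap-northeast-1
scalar.db.multi_storage.storages.dynamo.username=fakeMyKeyId
scalar.db.multi_storage.storages.dynamo.password=fakeMyKeyId
scalar.db.multi_storage.storages.dynamo.contact_port=8000
# The storage-specific property scalar.db.dynamo.endpoint-override keeps its 'dynamo.' prefix
scalar.db.multi_storage.storages.dynamo.dynamo.endpoint-override=http://localhost:8000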
Hypothetically, given the GraphQL query files, it could generate appropriate indexes itself, or just do so at runtime. Searching the docs for 'index', I got nothing.
Hasura does not automatically generate any indexes based on GraphQL query files. You can verify this by querying the metadata in your Postgres instance; some helpful links for doing that:
https://www.postgresqltutorial.com/postgresql-indexes/postgresql-list-indexes/
List columns with indexes in PostgreSQL
You can add indexes manually via a migration.
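For example, the up migration can be a plain SQL statement; the table and column names below are purely illustrative:
-- Hypothetical migration: index the column your GraphQL queries filter or order by
CREATE INDEX idx_articles_author_id ON articles (author_id);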
Related Github issue: https://github.com/hasura/graphql-engine/issues/2219
I am using Apache Solr to create collections and shards. I am able to create the collection using
sudo curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=demo&numShards=2&replicationFactor=1'
Here, collection name = "demo"
number of Shards = "2"
but when I add a new shard using
sudo curl 'http://localhost:8983/solr/admin/collections?action=CREATESHARD&shard=shard3&collection=demo'
It gives this error:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int name="QTime">1</int></lst>
<lst name="error"><str name="msg">shards can be added only to 'implicit' collections</str><int name="code">400</int></lst>
</response>
From the documentation for CREATESHARD:
Shards can only be created with this API for collections that use the 'implicit' router. Use SPLITSHARD for collections using the 'compositeId' router. A new shard with a name can be created for an existing 'implicit' collection.
So the proper way to do this is to issue a SPLITSHARD command instead, and then remove the old shard after the two new shards have been created. From the SPLITSHARD documentation:
Splitting a shard will take an existing shard and break it into two pieces. The original shard will continue to contain the same data as-is but it will start re-routing requests to the new shards. The new shards will have as many replicas as the original shard. After splitting a shard, you should issue a commit to make the documents visible, and then you can remove the original shard (with the Core API or Solr Admin UI) when ready.
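In the same curl style as above, splitting one of the existing shards might look like this (shard1 is an illustrative target; issue a commit afterwards and remove the original shard when ready):
sudo curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=demo&shard=shard1'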
Shards can only be created with this API for collections that use the 'implicit' router (i.e., when the collection was created, router.name=implicit). A new shard with a name can be created for an existing 'implicit' collection.
Reference: https://solr.apache.org/guide/8_6/collection-management.html
I intend to use JdbcTokenStore.
As far as I can tell it uses two tables: oauth_access_token and oauth_refresh_token
I can reverse engineer the table structure, but it isn't quite clear whether there are references from one table to the other for which I should create a foreign key.
Is there a postgres specific schema somewhere? Or another schema that I can refer to?
Spring Batch, for instance, includes the schemas in its distribution. I wonder if OAuth2 could do that as well?
Many thanks,
Matt
The schema is checked into Git as well:
https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-oauth2/src/test/resources/schema.sql
When using Postgres you should use bytea instead of LONGVARBINARY.
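For reference, here is a Postgres-flavoured sketch of the two token tables, adapted from that schema.sql; double-check the column list against the linked file. Note that the schema defines no foreign key between the two tables: the only link is the refresh_token value stored in oauth_access_token.
-- Sketch adapted from the linked schema.sql, with LONGVARBINARY swapped for bytea
CREATE TABLE oauth_access_token (
  token_id VARCHAR(256),
  token BYTEA,
  authentication_id VARCHAR(256) PRIMARY KEY,
  user_name VARCHAR(256),
  client_id VARCHAR(256),
  authentication BYTEA,
  refresh_token VARCHAR(256)
);

CREATE TABLE oauth_refresh_token (
  token_id VARCHAR(256),
  token BYTEA,
  authentication BYTEA
);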