I have an entity that uses the Doctrine extensions tree behaviour (nested set), and I found problems in the tree whose cause I don't know.
My entity mapping:
MyEntity:
  type: entity
  gedmo:
    tree:
      type: nested
  id:
    id:
      type: integer
      generator:
        strategy: AUTO
  fields:
    # ...
    lft:
      type: integer
      gedmo:
        - treeLeft
    rgt:
      type: integer
      gedmo:
        - treeRight
    lvl:
      type: integer
      gedmo:
        - treeLevel
    root:
      type: integer
      gedmo:
        - treeRoot
    createdAt:
      column: created_at
      type: datetime
      gedmo:
        timestampable:
          on: create
    updatedAt:
      column: updated_at
      type: datetime
      gedmo:
        timestampable:
          on: update
  oneToMany:
    children:
      targetEntity: MyEntity
      mappedBy: parent
      orderBy:
        lft: ASC
  manyToOne:
    parent:
      targetEntity: MyEntity
      inversedBy: children
      joinColumn:
        name: parent_id
        referencedColumnName: id
        onDelete: "restrict"
      gedmo:
        - treeParent
Tree problems (reported by the verify() method):
0 => "index [8], duplicate on tree root: 1"
1 => "index [9], duplicate on tree root: 1"
2 => "index [20], duplicate on tree root: 1"
3 => "index [21], duplicate on tree root: 1"
4 => "node [8], left is greater than right on tree root: 1"
5 => "node [10] left is less than parent`s [7] left value"
6 => "node [16] right is greater than parent`s [7] right value"
7 => "node [19] right is greater than parent`s [6] right value"
8 => "node [20] right is greater than parent`s [6] right value"
9 => "node [21] right is greater than parent`s [6] right value"
10 => "node [22] right is greater than parent`s [6] right value"
11 => "node [23] right is greater than parent`s [6] right value"
12 => "node [24] right is greater than parent`s [6] right value"
13 => "node [31] left is less than parent`s [30] left value"
14 => "node [35] left is less than parent`s [8] left value"
15 => "node [36] left is less than parent`s [8] left value"
(table data omitted)
I've encountered exactly the same issue. Perhaps this answer will help someone else.
Gedmo/Tree with the nested set strategy executes its changes to the tree nodes before the deletion transaction begins. Therefore, even if the transaction is rolled back, those updates persist. To avoid that, you should start the transaction manually beforehand.
Ref. https://github.com/doctrine-extensions/DoctrineExtensions/issues/1062 & https://github.com/doctrine-extensions/DoctrineExtensions/blob/main/doc/transaction-safety.md
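A minimal sketch of that workaround in PHP, assuming $em is the EntityManager and $node is the entity being removed:

$em->getConnection()->beginTransaction();
try {
    $em->remove($node);
    $em->flush(); // the tree listener's lft/rgt updates now run inside our transaction
    $em->getConnection()->commit();
} catch (\Exception $e) {
    $em->getConnection()->rollBack(); // lft/rgt updates are undone together with the DELETE
    throw $e;
}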
I found the problem: when I delete an entity via $em->remove(), the tree extension assumes onDelete=cascade for the entity and updates the lft and rgt values of the tree, then runs the query to remove the entity (and all of its children). But my entity has onDelete=restrict, so the lft and rgt values get updated while the children are not deleted, and this corrupts the tree.
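One way to avoid that mismatch (a sketch, assuming your repository extends Gedmo's NestedTreeRepository and $id is the id of the node to delete) is to let the tree repository remove the node instead of calling $em->remove() directly:

$repo = $em->getRepository(MyEntity::class); // instance of Gedmo\Tree\Entity\Repository\NestedTreeRepository
$node = $repo->find($id);

// removeFromTree() deletes only this node and re-parents its children,
// keeping the lft/rgt values consistent
$repo->removeFromTree($node);

// clear the EntityManager, since in-memory nodes may hold stale lft/rgt values
$em->clear();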
We hit the "too many index entries for entity" error:
Error: 3 INVALID_ARGUMENT: too many index entries for entity
    at Object.callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:19)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:195:52)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
    at /workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:188:78
at processTicksAndRejections (node:internal/process/task_queues:78:11)
It seems I have to exempt a field from automatic indexing. But how do I know which field? The document has a lot of data with nested maps. Here is an example of the document:
{"products": {
  "p1": {"id": "p1", "name": "Product1", "group": "g1"},
  "p2": {"id": "p2", "name": "Product2", "group": "g2"},
  "p3": {"id": "p3", "name": "Product3", "group": "g3"}
}}
Any suggestions which field would cause the error?
Override field products in firestore.indexes.json:
{
  "fieldOverrides": [
    {
      "collectionGroup": "menu",
      "fieldPath": "products",
      "indexes": []
    }
  ]
}
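Then deploy the exemption with the Firebase CLI:

firebase deploy --only firestore:indexes

Keep in mind that exempting products from automatic indexing means queries can no longer filter or order on that field without an explicit index.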
I have a query with hardcoded dates in the Parameters section. Instead, I want to pass them in as environment variables. Any suggestions on how to parameterize the QueryString parameter?
service: service-name
frameworkVersion: '2'

provider:
  name: aws
  runtime: go1.x
  lambdaHashingVersion: 20201221
  stage: ${opt:stage, self:custom.defaultStage}
  region: us-east-1
  tags: ${self:custom.tagsObject}
  logRetentionInDays: 1
  timeout: 10
  deploymentBucket: lambda-repository
  memorySize: 128
  tracing:
    lambda: true

plugins:
  - serverless-step-functions

configValidationMode: error

stepFunctions:
  stateMachines:
    callAthena:
      name: datasorting-dev
      type: STANDARD
      role: ${self:custom.datasorting.${self:provider.stage}.iam}
      definition:
        Comment: "Data Refresh"
        StartAt: Refresh Data
        States:
          Refresh Data:
            Type: Task
            Resource: arn:aws:states:::athena:startQueryExecution.sync
            Parameters:
              QueryString: >-
                ALTER TABLE table.raw_data ADD IF NOT EXISTS
                PARTITION (YEAR=2021, MONTH=02, DAY=15, hour=00)
              WorkGroup: primary
              ResultConfiguration:
                OutputLocation: s3://output/location
            End: true
You can replace any value in your serverless.yml by enclosing it in ${} brackets; see the Serverless Framework guide to variables:
https://www.serverless.com/framework/docs/providers/aws/guide/variables/
For example, you can create a custom: section that looks up environment variables and falls back to default values when they are not present:
service: service-name
frameworkVersion: '2'

custom:
  year: ${env:YEAR, 'default-year'}
  month: ${env:MONTH, 'default-month'}
  day: ${env:DAY, 'default-day'}
  hour: ${env:HOUR, 'default-hour'}

stepFunctions:
  stateMachines:
    callAthena:
      ...
      Parameters:
        QueryString: >-
          ALTER TABLE table.raw_data ADD IF NOT EXISTS
          PARTITION (YEAR=${self:custom.year}, MONTH=${self:custom.month}, DAY=${self:custom.day}, hour=${self:custom.hour})
      ...
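The partition values can then be supplied at deploy time, since ${env:} variables are resolved when the service is packaged. A hypothetical invocation:

YEAR=2021 MONTH=02 DAY=15 HOUR=00 serverless deploy --stage dev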
I am trying to write a HOT template for an OpenStack volume and need to have the volume_type as a parameter. I also need to support the case where the parameter is not given, defaulting to the Cinder default volume type.
My first attempt was to pass null as the volume_type, hoping it would select the default volume type. However, no matter what I pass (null, ~, default, ""), there seems to be no way to get the default volume type:
type: OS::Cinder::Volume
properties:
  name: test
  size: 1
  volume_type: {if: ["voltype_given", {get_param: [typename]}, null]}
Is there any way to get the default volume type when you have the volume_type property defined?
Alternatively, is there any way to put the volume_type property itself behind a conditional? I tried several ways, but no luck. Something like:
type: OS::Cinder::Volume
properties:
  if: ["voltype_given", [volume_type: {get_param: [typename]}], ""]
  name: test
  size: 1
ERROR: TypeError: : resources.kk-test-vol: : 'If' object is not iterable
Could you do something like this?
---
parameters:
  typename:
    type: string
    default: ''   # empty default makes the condition work when the parameter is omitted

conditions:
  use_default_type: {equals: [{get_param: typename}, '']}

resources:
  MyVolumeWithDefault:
    condition: use_default_type
    type: OS::Cinder::Volume
    properties:
      name: test
      size: 1

  MyVolumeWithExplicit:
    condition: {not: use_default_type}
    type: OS::Cinder::Volume
    properties:
      name: test
      size: 1
      volume_type: {get_param: typename}

  # e.g. if you need to refer to the volume from another resource
  MyVolumeAttachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: some-instance-uuid
      volume_id:
        if:
          - use_default_type
          - {get_resource: MyVolumeWithDefault}
          - {get_resource: MyVolumeWithExplicit}
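A hypothetical invocation, assuming the template is saved as volume.yaml:

openstack stack create -t volume.yaml my-stack --parameter typename=fast-ssd

Omitting --parameter typename=... (together with the empty-string default above) makes the use_default_type branch create the volume with the Cinder default volume type.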
I am using Firebase as follows:
Post--
-Kwc6asRRI1SUqrigYeD <- First input
-> Date: 1:00pm
-> ID: 1
-> Content: Hello!
-Kwc6fXQsN2xIQtHOofZ
-> Date: 2:00pm
-> ID: 2
-> Content: How are you?
-Kwc6fXQsN2xRO39LDPD <-Most recent one
-> Date: 3:00pm
-> ID: 3
-> Content: I am good.
These are "pushed", so a unique key is generated, which can be used to display them from most recent to oldest (ID: 3 down to ID: 1).
Now, when I update a post, say changing post ID: 1's content from "Hello!" to "My name is Steve", it still keeps its original key even though it is now the most recent one:
Post--
-Kwc6asRRI1SUqrigYeD <-Most recent one
-> Date: 4:00pm
-> ID: 1
-> Content: My name is Steve
-Kwc6fXQsN2xIQtHOofZ
-> Date: 2:00pm
-> ID: 2
-> Content: How are you?
-Kwc6fXQsN2xRO39LDPD
-> Date: 3:00pm
-> ID: 3
-> Content: I am good.
I guess I could delete the post and set a new one, but that seems inefficient, especially if each child holds more data.
Is there a way to re-set the key so that it reflects the time change (like below)?
Post--
-Kwc6fXQsN2xIQtHOofZ
-> Date: 2:00pm
-> ID: 2
-> Content: How are you?
-Kwc6fXQsN2xRO39LDPD
-> Date: 3:00pm
-> ID: 3
-> Content: I am good.
-Kwc6asRRI1KDodkeixk <-Most recent one
-> Date: 4:00pm
-> ID: 1
-> Content: My name is Steve
There is no way to update the key of an existing node (see 1, 2, 3). If you want a new key, you'll have to generate a new node with the same content.
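A minimal sketch of that move with the JavaScript SDK, assuming postRef points at the existing node:

var postsRef = firebase.database().ref("Post");
postRef.once("value").then(function(snapshot) {
  var newRef = postsRef.push();           // reserves a fresh push key, written below
  var updates = {};
  updates[newRef.key] = snapshot.val();   // copy the content under the new key
  updates[snapshot.key] = null;           // delete the old node
  return postsRef.update(updates);        // both writes are applied atomically
});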
But in this case it seems much more likely that you want to keep your data structure as is and instead add a lastUpdated timestamp to each post:
Post--
-Kwc6asRRI1SUqrigYeD
-> Date: 1:00pm
-> lastUpdated: 1508215054096
-> ID: 1
-> Content: Hello!
-Kwc6fXQsN2xIQtHOofZ
-> Date: 2:00pm
-> lastUpdated: 1507610306270
-> ID: 2
-> Content: How are you?
-Kwc6fXQsN2xRO39LDPD
-> Date: 3:00pm
-> lastUpdated: 1508128668412
-> ID: 3
-> Content: I am good.
With this structure you can use a Firebase query to get the results in the order you want. In JavaScript this would look something like:
var ref = firebase.database().ref("Post");
ref.orderByChild("lastUpdated")
   .once("value")
   .then(function(snapshot) {
     snapshot.forEach(function(post) {
       console.log(post.key + ": " + post.val().Content);
     });
   });
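Note that Firebase queries always return results in ascending order, so to show the newest post first you can either reverse the results client-side or store an inverted timestamp (e.g. lastUpdated: -Date.now()) and order by that instead.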
I am using the gcloud-python library for querying data from the Cloud Datastore. Consider my snippet to be like this:
from google.appengine.ext import ndb
from datetime import datetime, date, time

class Order(ndb.Model):
    order_name = ndb.StringProperty(required=True)
    date_created = ndb.DateTimeProperty(default=datetime.now())

# code for querying the cloud datastore
from gcloud.datastore.query import Query

date_start = datetime.combine(date(year=2015, month=8, day=1), time())
date_end = datetime.combine(date(year=2015, month=8, day=3), time())

query = Query(kind='Order')
query.add_filter('order_name', '=', 'grand-line-order')
query.add_filter('date_created', '<', date_end)
query.add_filter('date_created', '>', date_start)

iterator = query.fetch(limit=10)
records, more, cursor = iterator.next_page()
print records
For the above snippet I am getting:
File "/Users/sathyanarrayanan/Desktop/app/services/cdr_helper.py", line 528, in fetch_cdr
records, more, cursor = iterator.next_page()
File "/Users/sathyanarrayanan/Desktop/app/gcloud/datastore/query.py", line 388, in next_page
transaction_id=transaction and transaction.id,
File "/Users/sathyanarrayanan/Desktop/app/gcloud/datastore/connection.py", line 257, in run_query
datastore_pb.RunQueryResponse)
File "/Users/sathyanarrayanan/Desktop/app/gcloud/datastore/connection.py", line 108, in _rpc
data=request_pb.SerializeToString())
File "/Users/sathyanarrayanan/Desktop/app/gcloud/datastore/connection.py", line 85, in _request
raise make_exception(headers, content, use_json=False)
PreconditionFailed: 412 no matching index found.
My index.yaml file is like this:
indexes:

- kind: Order
  ancestor: yes
  properties:
  - name: date_created

- kind: Order
  ancestor: yes
  properties:
  - name: date_created
    direction: desc

- kind: Order
  ancestor: yes
  properties:
  - name: order_name
    direction: asc
  - name: date_created
    direction: desc

- kind: Order
  ancestor: yes
  properties:
  - name: order_name
    direction: asc
  - name: date_created
    direction: asc
Am I doing something wrong? Please help me out.
All of your indexes use ancestor: yes, so an ancestor key should be added to your query. Without an ancestor, your query needs another index with ancestor: no:
- kind: Order
  ancestor: no
  properties:
  - name: order_name
    direction: asc
  - name: date_created
    direction: desc
Note: each query shape needs its own matching index.
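If you instead want to keep the ancestor: yes indexes, the query has to be scoped to an ancestor key. A rough sketch (the Customer kind and its id are hypothetical, and the ancestor argument may differ between gcloud-python versions):

from gcloud.datastore.key import Key
from gcloud.datastore.query import Query

# scope the query to a parent entity so the ancestor indexes apply
ancestor = Key('Customer', 'some-customer-id')
query = Query(kind='Order', ancestor=ancestor)
query.add_filter('order_name', '=', 'grand-line-order')
query.add_filter('date_created', '<', date_end)
query.add_filter('date_created', '>', date_start)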
The index configuration docs indicate that, for Java App Engine apps, the index configuration goes in an XML file called datastore-indexes.xml (Python apps use index.yaml, as above).