Azure DevOps: enable Cosmos DB TTL without a default, using a runbook or template

I need to edit an existing PowerShell runbook which uses a template to create a Cosmos DB in Azure.
I need to enable TTL without a default TTL value. In the examples I have found so far there is always a value, which means that value is used to delete expired documents.
How do I enable only TTL, without setting a default?
My reference: https://learn.microsoft.com/en-us/azure/cosmos-db/manage-with-powershell#create-container-unique-key-ttl

After digging into the Microsoft documentation I found this key table with examples:
+-------------+--------------------------------------------------------------------+
| TTL on item | Result                                                             |
+-------------+--------------------------------------------------------------------+
| TTL on container is set to null (DefaultTimeToLive = null)                        |
| ttl = null  | TTL is disabled. The item will never expire (default).             |
| ttl = -1    | TTL is disabled. The item will never expire.                       |
| ttl = 2000  | TTL is disabled. The item will never expire.                       |
+-------------+--------------------------------------------------------------------+
| TTL on container is set to -1 (DefaultTimeToLive = -1)                            |
| ttl = null  | TTL is enabled. The item will never expire (default).              |
| ttl = -1    | TTL is enabled. The item will never expire.                        |
| ttl = 2000  | TTL is enabled. The item will expire after 2000 seconds.           |
+-------------+--------------------------------------------------------------------+
| TTL on container is set to 1000 (DefaultTimeToLive = 1000)                        |
| ttl = null  | TTL is enabled. The item will expire after 1000 seconds (default). |
| ttl = -1    | TTL is enabled. The item will never expire.                        |
| ttl = 2000  | TTL is enabled. The item will expire after 2000 seconds.           |
+-------------+--------------------------------------------------------------------+
This is not exactly about runbooks or templates, but setting -1 achieves my intent: as the table above shows, setting the container TTL to -1 enables TTL, and the ttl value set on each document is then honored.
Using Get-Help New-CosmosDbCollection -Full I found the parameter -DefaultTimeToLive; this is what I am going to use, because it looks like there is no option to do it in the ARM template.
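For reference, a minimal sketch of the call I plan to add to the runbook, assuming the community CosmosDB PowerShell module (the one providing New-CosmosDbCollection) and placeholder account, key, and collection names:

# Sketch only: placeholder names; assumes the community CosmosDB module.
# DefaultTimeToLive = -1 enables TTL on the container without a default,
# so only items carrying their own ttl property will expire.
$key = ConvertTo-SecureString -String '<primary key>' -AsPlainText -Force
$context = New-CosmosDbContext -Account 'myaccount' -Database 'mydb' -Key $key
New-CosmosDbCollection -Context $context -Id 'mycollection' `
    -PartitionKey 'id' -DefaultTimeToLive -1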

Related

OpenStack Mistral workflow error while executing using GUI

I am getting an error while executing a simple Mistral workflow on an OpenStack (Wallaby) devstack environment. I can execute the workflow successfully from the CLI, but it fails when I try the same thing from the GUI.
root@openstack:~# openstack workflow definition show test_get
---
version: '2.0'

test_get:
  description: Test Get.
  tasks:
    my_task:
      action: std.http
      input:
        url: http://www.google.com
root@openstack:~# openstack workflow execution create test_get
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| ID | 482e3803-45ef-411e-a0f4-1427abfc8649 |
| Workflow ID | 9dc0d4a4-8c5b-4288-8126-e1147da3bd02 |
| Workflow name | test_get |
| Workflow namespace | |
| Description | |
| Task Execution ID | <none> |
| Root Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2021-06-21 16:58:54 |
| Updated at | 2021-06-21 16:58:54 |
| Duration | ... |
+--------------------+--------------------------------------+
But while executing it in the GUI I get:
Execution is missing field "workflow_identifier"
I faced the same issue in the Yoga release. I spent a few hours investigating it and found something interesting in /usr/local/lib/python3.8/dist-packages/mistralclient/api/v2/executions.py:
class ExecutionManager(base.ResourceManager):
    resource_class = Execution

    def create(self, wf_identifier='', namespace='',
               workflow_input=None, description='', source_execution_id=None,
               **params):
        self._ensure_not_empty(
            workflow_identifier=wf_identifier or source_execution_id
        )
But in the web form we are passing workflow_identifier instead of wf_identifier, in /usr/local/lib/python3.8/dist-packages/mistraldashboard/workflows/forms.py:
def handle(self, request, data):
    try:
        data['workflow_identifier'] = data.pop('workflow_name')
        data['workflow_input'] = {}
        for param in self.workflow_parameters:
            value = data.pop(param)
            if value == "":
                value = None
            data['workflow_input'][param] = value
        ex = api.execution_create(request, **data)
The fix is to rename workflow_identifier to wf_identifier in the form, like:
data['wf_identifier'] = data.pop('workflow_name')
After that, mistral-dashboard creates executions from the GUI without errors.
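A stripped-down illustration of why the kwarg mismatch produces exactly that error (hypothetical names, mirroring the signature above): the misspelled keyword falls into **params, so wf_identifier keeps its empty default and the emptiness check fires.

# Hypothetical minimal reproduction of the kwarg mismatch described above.
def create(wf_identifier='', **params):
    if not wf_identifier:
        raise ValueError('Execution is missing field "workflow_identifier"')
    return 'RUNNING'

create(wf_identifier='test_get')        # OK: matches the parameter name
create(workflow_identifier='test_get')  # raises: swallowed by **params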

MariaDB InnoDB table: how to find statement causing "waiting for table metadata lock"

How do I determine what the SQL statement is of the thread ID showing up in a metadata lock info row (SELECT * FROM information_schema.metadata_lock_info) on MariaDB?
Server version: 10.0.15-MariaDB MariaDB Server
All of the related questions dive into "Waiting for table metadata lock" from a MySQL perspective, but that does not help with MariaDB, since its introspection is implemented differently from what I can tell. Googling around does not turn up much.
A "show full processlist" gives rows like:
| 57295 | main | localhost | joints | Execute | 50 | Waiting for table metadata lock | select ...
This does show the statement, but does not show whether the thread holds the lock. So I turned on metadata lock info as explained here [0]. That only provides the thread ID of the lock holder, not the statement:
MariaDB [joints]> SELECT * FROM information_schema.metadata_lock_info;
+-----------+--------------------------+-----------------+----------------------+--------------+----------------+
| THREAD_ID | LOCK_MODE | LOCK_DURATION | LOCK_TYPE | TABLE_SCHEMA | TABLE_NAME |
+-----------+--------------------------+-----------------+----------------------+--------------+----------------+
| 57322 | MDL_INTENTION_EXCLUSIVE | MDL_EXPLICIT | Global read lock | | |
| 57322 | MDL_SHARED_NO_READ_WRITE | MDL_EXPLICIT | Table metadata lock | joints | 16_study |
| 57322 | MDL_INTENTION_EXCLUSIVE | MDL_EXPLICIT | Schema metadata lock | joints | |
| 57269 | MDL_SHARED_READ | MDL_TRANSACTION | Table metadata lock | joints | authentication |
| 57301 | MDL_SHARED_READ | MDL_TRANSACTION | Table metadata lock | joints | authentication |
| 57280 | MDL_SHARED_READ | MDL_TRANSACTION | Table metadata lock | joints | authentication |
| 57317 | MDL_SHARED_READ | MDL_TRANSACTION | Table metadata lock | joints | ship |
| 57271 | MDL_SHARED_READ | MDL_TRANSACTION | Table metadata lock | joints | administration |
| 57264 | MDL_SHARED_READ | MDL_TRANSACTION | Table metadata lock | joints | server |
+-----------+--------------------------+-----------------+----------------------+--------------+----------------+
What I really want is to see the "join" of both of those outputs at the moment the locking is happening. I do not see a way to join the data from these two "tables" since the former does not appear to be a table. I'd like to avoid getting:
ERROR 1933 (HY000): Target is not running an EXPLAINable command
while attempting to do it in real-time, due to the thread ending while being inspected.
[0] https://mariadb.com/kb/en/mariadb/metadata_lock_info/
THREAD_ID maps to information_schema.PROCESSLIST.ID (the first column in SHOW [FULL] PROCESSLIST), i.e.:
SELECT * FROM information_schema.METADATA_LOCK_INFO AS mli
JOIN information_schema.PROCESSLIST AS pl ON mli.THREAD_ID = pl.ID
I prefer something like the following to make it easier to see what is happening (the newlines don't work well with the CLI, though):
SELECT
    mli.THREAD_ID, mli.LOCK_MODE, mli.LOCK_TYPE,
    CAST(GROUP_CONCAT(DISTINCT CONCAT(mli.TABLE_SCHEMA, '.', mli.TABLE_NAME)
        ORDER BY mli.TABLE_SCHEMA, mli.TABLE_NAME SEPARATOR '\n') AS CHAR) AS locked_tables,
    pl.USER, pl.HOST, pl.DB, pl.COMMAND, pl.TIME, pl.STATE, pl.INFO, pl.QUERY_ID, pl.TID
FROM information_schema.METADATA_LOCK_INFO AS mli
JOIN information_schema.PROCESSLIST AS pl ON mli.THREAD_ID = pl.ID
GROUP BY mli.THREAD_ID, mli.LOCK_MODE, mli.LOCK_TYPE
ORDER BY pl.TIME DESC, pl.ID;
Especially interesting is when pl.COMMAND = 'Sleep' as that indicates some connection pool or other (mostly read-only) program is holding open connections that have locks on them.
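Building on that, a small variant of the join above (same information_schema tables) that narrows the output to sleeping connections still holding metadata locks; once identified, the offending connection can be terminated with KILL <ID>:

-- Narrow the join above to idle connections that still hold MDL locks.
SELECT pl.ID, pl.USER, pl.HOST, pl.TIME, mli.LOCK_MODE,
       CONCAT(mli.TABLE_SCHEMA, '.', mli.TABLE_NAME) AS locked_table
FROM information_schema.METADATA_LOCK_INFO AS mli
JOIN information_schema.PROCESSLIST AS pl ON mli.THREAD_ID = pl.ID
WHERE pl.COMMAND = 'Sleep';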

Firebase security rules with external id

For some applications, my team creates authenticated users with a password/email combination. This gets the user a Firebase user uid. The problem is that the keys in Firebase itself are external ids, and they do not match auth.uid. How would I go about creating security rules then?
Sample auth.uid:
9dkad6c7-s649-9623-99e2-5a0dbgf5dfdz
Then a sample of the structure:
database
|
+-- conversations
|   +-- {external id 1}
|       +-- {external id 2}
|           +-- {data here}
|
+-- messages
|   +-- {externalid1|externalid2}
|       +-- {-KFasdahsduids}
|           +-- {data here}
|
+-- users
    +-- {externalId}
    |   +-- {first name}
    |   +-- {last name}
    |   +-- {firebaseUID}
    |   +-- {more data here}
    +-- {externalId2}
        +-- {first name}
        +-- {lastname}
        +-- {firebaseUID}
        +-- {more data here}
The problem really is that auth.uid is not the same as the external ids, and we really need those external ids. Can I do something with the UID that is stored under /users/? Any suggestions?
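One possible approach, sketched below in the Realtime Database rules language: since each /users/{externalId} node already stores the owner's firebaseUID (per the structure above), the rules can compare that stored value against auth.uid. The exact paths are assumptions based on the sample structure, not a tested ruleset.

{
  "rules": {
    "users": {
      "$externalId": {
        ".read": "data.child('firebaseUID').val() === auth.uid",
        ".write": "data.child('firebaseUID').val() === auth.uid"
      }
    },
    "conversations": {
      "$externalId": {
        ".read": "root.child('users').child($externalId).child('firebaseUID').val() === auth.uid"
      }
    }
  }
}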

Error!: SQLSTATE[42000] [1226] ‘max_user_connections’ resource (current value: 30), but max_user_connections is configured to 1000

Visitors get a MySQL error when a MySQL user has exceeded max_user_connections; the error reports the current value as 30, but max_connections and max_user_connections are set to 1000. When the problem occurs, the CPU reaches almost 98%.
In the MySQL error logs we received a lot of access-denied errors from another user, around 5000 denied connections. My problem is not why the PHP script takes all these connections, but why the configured variables max_user_connections and max_connections are not applied. They are configured to 1000, but the error message reports 30. How is this possible?
I activated log_warnings=2 to get more information, but we don't get anything extra. Any idea why this behavior occurs, or how to audit MySQL to find the source of this problem?
The error message received is :
Error!: SQLSTATE[42000] [1226] User ‘some_user’ has exceeded the ‘max_user_connections’ resource (current value: 30)
select @@session.max_user_connections, @@global.max_connections;
+--------------------------------+--------------------------+
| @@session.max_user_connections | @@global.max_connections |
+--------------------------------+--------------------------+
|                           1000 |                     1000 |
+--------------------------------+--------------------------+
show global variables like '%connections%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| extra_max_connections | 1 |
| max_connections | 1000 |
| max_user_connections | 1000 |
+-----------------------+-------+
show status like '%connected%';
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| Threads_connected | 4 |
+-------------------+-------+
select user, max_user_connections from mysql.user where host='localhost'\G
user: some_user
max_user_connections: 0
user: another_user
max_user_connections: 0
The error seems to be:
Error: 1226 SQLSTATE: 42000 (ER_USER_LIMIT_REACHED)
Message: User '%s' has exceeded the '%s' resource (current value: %ld)
and not:
Error: 1203 SQLSTATE: 42000 (ER_TOO_MANY_USER_CONNECTIONS)
Message: User %s already has more than 'max_user_connections' active connections
We are using MariaDB, version:
select version();
+------------------------+
| version() |
+------------------------+
| 5.5.44-MariaDB-cll-lve |
+------------------------+
Solution:
You can reproduce the error with the following command:
mysqlslap -a --concurrency=500 --number-of-queries 5000 --iterations=500 --engine=innodb --debug-info -utest -p
The problem was caused by Governor. We have CloudLinux installed on the server; this option is off by default, but on this server it was set to abusers. If the CPU goes higher than 400, Governor sets max_user_connections for the user to 30.
You can check the logs in /var/log/dbgovernor-restrict.log.
The solution is to set this value correctly, or to turn the restriction off:
dbctl --lve-mode off
/etc/container/mysql-governor.xml:
<lve use="abuser"></lve>
<restrict level1="60s" level2="15m" level3="1h" level4="1d"
          timeout="1h" log="/var/log/dbgovernor-restrict.log"
          user_max_connections="30"></restrict>
<statistic mode="on"></statistic>
<default>
    <limit name="cpu" current="400" short="380" mid="350" long="300">
    </limit>
</default>
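To verify what Governor is actually enforcing before or after changing the config, the CloudLinux dbctl utility can be used; a short sketch (command names from the CloudLinux governor tooling, assuming dbctl is on the PATH):

# Show the per-user limits Governor currently applies
dbctl list
# Disable restriction mode entirely, as noted above
dbctl --lve-mode off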

CherryPy 3.6 - reading Multipart Post http request

I coded a Java client that sends a string of meta information and a byte array through a multipart POST HTTP request to my server running CherryPy 3.6.
I need to extract both values. I coded this in Python 3 on the server side to figure out how to manipulate the result, as I can't find any relevant documentation on the internet that explains how to read this HTTP part:
def controller(self, meta, data):
    print("meta", meta)
    print("data", type(data))
outputs:
my meta information
<class 'cherrypy._cpreqbody.Part'>
Note: the data part contains raw binary data.
How can I read the HTTP part content into a buffer or write it out to a disk file?
Thanks for your help.
Thanks for your answer.
I've already read that doc, but unfortunately the methods read_into_file(), make_file(), read(), and so on don't work for me. For example, when trying to read a zip file sent from my Java client (assuming data is the HTTP post parameter):
make_file():
fp = data.make_file()
print("fp type", type(fp))  # _io.BufferedRandom
zipFile = fp.read()
outputs:
AttributeError: 'bytes' object has no attribute 'seek'
line 651, in read_lines_to_boundary
    raise EOFError("Illegal end of multipart body.")
EOFError: Illegal end of multipart body.
read_into_file():
file = data.read_into_file()
print("file type", type(file))
zipFile = io.BytesIO(file.read())
# zipFile = file.read()  # => raises the same error
outputs:
line 651, in read_lines_to_boundary
    raise EOFError("Illegal end of multipart body.")
EOFError: Illegal end of multipart body.
I don't understand what happens...
Actually, "data" is not a file-like object but a cherrypy._cpreqbody.Part. It holds a "file" attribute, which is an _io.BufferedRandom instance.
Its read() method returns the whole body content in binary form (bytes).
So, in the end, the straightforward solution is:
class BinReceiver(object):
    def index(self, data):
        # data is a cherrypy._cpreqbody.Part; data.file is a file-like object
        zipFile = data.file.read()  # the whole part body, as bytes
        path = "/tmp/data.zip"
        fp = open(path, 'wb')       # open for binary writing
        fp.write(zipFile)
        fp.close()
        print("saved data into", path, "size", len(zipFile))
    index.exposed = True
and this works fine...
FYI: I'm running Python 3.2.
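For completeness, a small sketch of the read_into_file() route that failed earlier; per the help text below, it writes the part body into make_file()'s temporary file and returns it, so the returned file has to be rewound before reading. This assumes the multipart body is well-formed, unlike the failing case above.

import io

# read_into_file() copies the part body into a temporary file (make_file()
# by default) and returns it; the file position is at EOF after the copy.
fp = data.read_into_file()
fp.seek(0)                      # rewind before reading
zipFile = io.BytesIO(fp.read())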
It seems like data is a file-like object which you can call .read() on. In addition, CherryPy provides a read_into_file method.
See the full documentation by typing help(cherrypy._cpreqbody.Part) in your REPL.
class Part(Entity)
| A MIME part entity, part of a multipart entity.
|
| Method resolution order:
| Part
| Entity
| __builtin__.object
|
| Methods defined here:
|
| __init__(self, fp, headers, boundary)
|
| default_proc(self)
| Called if a more-specific processor is not found for the
| ``Content-Type``.
|
| read_into_file(self, fp_out=None)
| Read the request body into fp_out (or make_file() if None).
|
| Return fp_out.
|
| read_lines_to_boundary(self, fp_out=None)
| Read bytes from self.fp and return or write them to a file.
|
| If the 'fp_out' argument is None (the default), all bytes read are
| returned in a single byte string.
|
| If the 'fp_out' argument is not None, it must be a file-like
| object that supports the 'write' method; all bytes read will be
| written to the fp, and that fp is returned.
|
| ----------------------------------------------------------------------
| Class methods defined here:
|
| from_fp(cls, fp, boundary) from __builtin__.type
|
| read_headers(cls, fp) from __builtin__.type
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| attempt_charsets = ['us-ascii', 'utf-8']
|
| boundary = None
|
| default_content_type = 'text/plain'
|
| maxrambytes = 1000
|
| ----------------------------------------------------------------------
| Methods inherited from Entity:
|
| __iter__(self)
|
| __next__(self)
|
| fullvalue(self)
| Return this entity as a string, whether stored in a file or not.
|
| make_file(self)
| Return a file-like object into which the request body will be read.
|
| By default, this will return a TemporaryFile. Override as needed.
| See also :attr:`cherrypy._cpreqbody.Part.maxrambytes`.
|
| next(self)
|
| process(self)
| Execute the best-match processor for the given media type.
|
| read(self, size=None, fp_out=None)
|
| readline(self, size=None)
|
| readlines(self, sizehint=None)
|
| ----------------------------------------------------------------------
| Data descriptors inherited from Entity:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| type
| A deprecated alias for :attr:`content_type<cherrypy._cpreqbody.Entity.content_type>`.
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from Entity:
|
| charset = None
|
| content_type = None
|
| filename = None
|
| fp = None
|
| headers = None
|
| length = None
|
| name = None
|
| params = None
|
| part_class = <class 'cherrypy._cpreqbody.Part'>
| A MIME part entity, part of a multipart entity.
|
| parts = None
|
| processors = {'application/x-www-form-urlencoded': <function process_u...
