If I'm describing an API in RAML and I have a request object schema with a property whose default value is calculated (rather than explicitly set), how would I describe that?
For instance, say I have an integer property whose default value is the current UTC time in epoch format. How would I describe that?
The default keyword is used to specify default values, but there is no way to say that a value is calculated, other than stating it in the description.
foo:
description: This value is calculated somehow
type: integer
minimum: 10
maximum: 200
default: 30
example: 50
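Besides the description, one option is a custom RAML 1.0 annotation; the annotation name below is made up for illustration, and tooling will ignore it unless you configure it to do something with it:

```yaml
annotationTypes:
  computedDefault:
    type: string
types:
  Request:
    properties:
      createdAt:
        type: integer
        description: Defaults to the current UTC time in epoch seconds.
        (computedDefault): current UTC epoch time
```

This at least makes the "calculated" nature machine-readable, rather than buried in free-text prose.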
In OpenAPI, both example and enum can be defined with type: string, such as:
type: string
example:
- FOOD
- WATER
type: string
enum:
- FOOD
- WATER
What difference does it make in terms of validation when using either of the above structures? Can they be used interchangeably?
enum and example have different meanings and are used for different purposes.
enum specifies possible values for an instance (in other words, it limits possible values). It's an analog of enums in C#, Java, and other programming languages. For example, enum: [FOOD, WATER] means that the value can be either "FOOD" or "WATER", and nothing else.
example specifies an example value for documentation or other purposes (but not for validation). Say, if you have the User schema with the username property, you can specify demoUser as an example username. But the example value is not the only possible value for the property, it can have other values.
User:
type: object
properties:
username:
type: string
example: demoUser
A schema can have both enum and example:
type: string
enum:
- FOOD
- WATER
example: WATER
Unlike enum, example does not affect validation. However, tools like code generators and documentation generators may expect that the example values match their schemas. Such tools would flag the example in your question as invalid, because the schema is defined as string, but the example value is an array (i.e. a different data type).
type: string
# Incorrect
example:
- FOOD
- WATER
# Correct
example: FOOD
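To make the distinction concrete, here is a minimal Python sketch of how a validator treats the two keywords (the schema dict and the validate helper are illustrative, not a real OpenAPI library): enum participates in validation, example does not.

```python
# Illustrative model of schema validation: `enum` restricts values,
# `example` is documentation only and is ignored by the validator.

def validate(schema, value):
    """Return True if `value` satisfies the schema's type and enum."""
    if schema.get("type") == "string" and not isinstance(value, str):
        return False
    # `enum` limits the set of allowed values...
    if "enum" in schema and value not in schema["enum"]:
        return False
    # ...but `example` plays no part in the decision.
    return True

schema = {"type": "string", "enum": ["FOOD", "WATER"], "example": "WATER"}

print(validate(schema, "WATER"))   # in the enum: valid
print(validate(schema, "COFFEE"))  # not in the enum: invalid
```

Note that the example value itself never influences the outcome; only the type and enum do.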
I've read multiple questions on Stack Overflow and the documentation, but I couldn't find several validations that I imagine must exist.
For example, it is possible to check whether request.resource.data.description is a string, but is it possible to do the same for a number, float, timestamp, or even an array/list? I couldn't even find the string check in the documentation, so I imagine more than just that one is missing.
You might want to watch my video on data types in Firebase security rules. In it, I list all the different data types that you can check:
value is bool
value is int
value is float
value is number
value is string
value is list
value is map
value is timestamp
value is duration
value is path
value is latlng
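For instance, a write rule that type-checks several fields at once might look like this (the collection and field names here are made up for illustration):

```
service cloud.firestore {
  match /databases/{database}/documents {
    match /products/{productId} {
      allow write: if request.resource.data.description is string
                   && request.resource.data.price is number
                   && request.resource.data.tags is list
                   && request.resource.data.createdAt is timestamp;
    }
  }
}
```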
As the title says, I wonder whether big data can be stored in a node.
I'd also like to know whether there's any capacity limitation, and
whether it's possible to store a large value as a single value in a vertex.
E.g., would it be okay to store a value of binary data type?
I believe the value of a vertex can be stored as a property.
This property is stored internally as the jsonb type, and the maximum size of a jsonb object is about 256 MB.
When a "value" is stored, the data allowed internally in jsonb falls into four types: numeric, text, null, and object.
Big data can be stored as the text type, but its size limit should be kept in mind.
I'm looking to set a date in YYYYMMDD format that should reference a date at a specific utcOffset, but I'm not sure what the best and most elegant way is to do this. The closest I can get is the following, but it's not the result I want. I need a way to first set the offset and then set the YYYYMMDD date based on that offset.
moment.utc(ymdDate, 'YYYYMMDD').utcOffset(timeOffset)
Example:
If I had a date like 20190420 that must be used in a moment object referring to a different timezone, and I did the following, the date would come out as April 19th.
moment.utc(20190420, 'YYYYMMDD').utcOffset(-300).format()
Result:
2019-04-19T19:00:00-05:00
Expected Result:
2019-04-20T00:00:00-05:00
You can use utcOffset, passing true as the second parameter.
The utcOffset function has an optional second parameter which accepts a boolean value indicating whether to keep the existing time of day.
Passing false (the default) will keep the same instant in Universal Time, but the local time will change.
Passing true will keep the same local time, but at the expense of choosing a different point in Universal Time.
Here a live sample:
console.log( moment.utc(20190420, 'YYYYMMDD').utcOffset(-300, true).format() );
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.24.0/moment.min.js"></script>
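For comparison, the expected result is simply midnight at a fixed -05:00 offset. A quick sketch with Python's standard datetime (just to illustrate the target instant, not a moment.js replacement):

```python
from datetime import datetime, timezone, timedelta

# Build 2019-04-20 at midnight in a fixed UTC-05:00 offset (-300 minutes),
# which is the instant the "Expected Result" above describes.
offset = timezone(timedelta(minutes=-300))
d = datetime(2019, 4, 20, 0, 0, 0, tzinfo=offset)

print(d.isoformat())  # 2019-04-20T00:00:00-05:00
```

This matches what utcOffset(-300, true) produces: the local wall-clock time is kept and the underlying UTC instant shifts.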
I have a lot of objects with unique IDs. Every object can have several labels associated to it, like this:
123: ['a', 'hello']
456: ['dsajdaskldjs']
789: (no labels associated yet)
I'm not planning to store all objects in DynamoDB, only these sets of labels. So it would make sense to add labels like that:
find a record with (id = needed_id)
if there is one, and it has a set named label_set, add a label to this set
if there is no record with such id, or the existing record doesn't have an attribute named label_set, create a record and an attribute, and initialize the attribute with a set consisting of the label
If I used sets of numbers, I could just use the ADD operation of the UPDATE command, which does exactly what I described. However, this does not work with sets of strings:
If no item matches the specified primary key:
ADD— Creates an item with supplied primary key and number (or set of numbers) for the attribute value. Not valid for a string type.
so I have to use a PUT operation with Expected set to {"label_set":{"Exists":false}}, followed (in case it fails) by an ADD operation. These are two operations, and it kinda sucks (since you pay per operation, the costs of this will be 2 times more than they could be).
This limitation seems really weird to me. Why would something that works with number sets not work with string sets? Maybe I'm doing something wrong.
Using many records like (123, 'a'), (123, 'hello') instead of one record per object with a set is not a solution: I want to get all the values in the set at once, without any scans.
I use string sets from the Java SDK the way you describe all the time and it works for me. Perhaps it has changed? I basically follow the pattern in this doc:
http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/API_UpdateItem.html
ADD— Only use the add action for numbers or if the target attribute is
a set (including string sets). ADD does not work if the target
attribute is a single string value or a scalar binary value. The
specified value is added to a numeric value (incrementing or
decrementing the existing numeric value) or added as an additional
value in a string set. If a set of values is specified, the values are
added to the existing set. For example if the original set is [1,2]
and supplied value is [3], then after the add operation the set is
[1,2,3], not [4,5]. An error occurs if an Add action is specified for
a set attribute and the attribute type specified does not match the
existing set type.
If you use ADD for an attribute that does not exist, the attribute and
its values are added to the item.
When your set is empty, it means the attribute isn't present. You can still ADD to it. In fact, a pattern that I've found useful is to simply ADD without even checking for the item. If it doesn't exist, it will create a new item using the specified key and create the attribute set with the value(s) I am adding. If the item exists but the attribute doesn't, it creates the attribute set and adds the value(s). If they both exist, it just adds the value(s).
The only piece that caught me up at first was that the value I had to add was a SS (String set) even if it was only one string value. From DynamoDB's perspective, you are always merging sets, even if the existing set is an empty set (missing) or the new set only contains one value.
IMO, from the way you've described your intent, you would be better off not specifying an existing condition at all. You are having to do two steps because you are enforcing two different situations but you are trying to perform the same action in both. So might as well just blindly add the label and let DynamoDB handle the rest.
Maybe you could (pseudo code):
try:
    add_with_update_item(hash_key=42, "label")
except:
    element = new Element(hash_key=42, labels=["label"])
    element.save()
With this graceful recovery approach, you need 1 call in the general case, 2 otherwise.
You are unable to use sets to do what you want because Dynamo Db doesn't support empty sets. I would suggest just using a string with a custom schema and building the set from that yourself.
To avoid two operations, you can add a "ConditionExpression" to your request.
For example, add this parameter to the PutItem request:
"ConditionExpression": "attribute_not_exists(RecordID) and attribute_not_exists(label_set)"
Source documentation.
Edit: I found a really good guide on how to use conditional expressions.