Cannot add a two-dimensional array property to a relationship - collections

I tested a two-dimensional array like:
RETURN [[0,1],[2,3],[4,5],[6,7],[8,9]] AS collection
It works.
But when I try to add a two-dimensional array property to a relationship like:
MATCH (station_44:STATION {id:44}), (station_38:STATION {id:38}) CREATE UNIQUE (station_44)-[:test2 { path:[[1,2],[2,3],[3,4]] } ]->(station_38)
I get the error: Collections containing mixed types can not be stored in properties.
How can I do this? Is it a bug?

You cannot have an array containing an array as a node or relationship property value.
You can only have an array of one primitive type, e.g. int or string.
Documentation reference on property values
In any case, if you need to query the sub-dimensions of the arrays, your model is probably wrong, so I would suggest redefining it around the queries you need to run.
If you just want to store the array as a property for later retrieval, you can store it as a JSON string and serialize/deserialize it at the application level.
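As a sketch of that serialize/deserialize approach (plain Python standard library, independent of any Neo4j driver; the variable names are illustrative):

```python
import json

# The nested list that cannot be stored directly as a Neo4j property
path = [[1, 2], [2, 3], [3, 4]]

# Serialize to a plain string before writing it as the relationship property
path_property = json.dumps(path)

# Deserialize after reading the property back
restored = json.loads(path_property)
assert restored == path
```

The stored property is then a single string, which Neo4j accepts; the trade-off is that the sub-arrays are opaque to Cypher and can only be inspected after deserialization in the application.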

Related

How can I read the Parquet dictionary in Java

I have seen that the Parquet format uses dictionaries to store some columns, and that these dictionaries can be used to speed up filters if useDictionaryFilter() is used on the ParquetReader.
Is there any way to access these dictionaries from Java code?
I'd like to use them to build a list of distinct values in my column, and thought it would be faster to read only the dictionary than to scan the whole column.
I have looked into the org.apache.parquet.hadoop.ParquetReader API but did not find anything.
The methods in org.apache.parquet.column.Dictionary allow you to:
Query the range of dictionary indexes: Between 0 and getMaxId().
Look up the entry corresponding to any index, for example for an int field you can use decodeToInt().
Once you have a Dictionary, you can iterate over all indexes to get all entries, so the question boils down to getting a Dictionary. To do that, use ColumnReaderImpl as a guide:
private Dictionary getDictionary(ColumnDescriptor path, PageReader pageReader) throws IOException {
    // The dictionary page is null if the column chunk is not dictionary-encoded
    DictionaryPage dictionaryPage = pageReader.readDictionaryPage();
    if (dictionaryPage != null) {
        return dictionaryPage.getEncoding().initDictionary(path, dictionaryPage);
    }
    return null;
}
Please note that a column chunk may contain a mixture of data pages, some dictionary-encoded and some not. If the dictionary "gets full" (reaches the maximum allowed size), the writer emits the dictionary page and the dictionary-encoded data pages written so far, then stops using dictionary encoding for the remaining data pages.

DynamoDB index with JSON attribute

I am referring to a thread about creating an index with JSON.
I have a column called data in my DynamoDB table. It contains JSON with the following structure:
{
  "config": "aasdfds",
  "state": "PROCESSED",
  "value": "asfdasasdf"
}
The AWS documentation says that I can create an index on a top-level JSON attribute, but I don't know how to do this exactly. When I create the index, should I specify the partition key as data.state and then, in my code, query on data.state with the value set to PROCESSED? Or should I create the partition key as data and then, in my code, query on data looking for state = "PROCESSED"?
"Top-level attribute" means that DynamoDB supports creating an index on scalar attributes only (String, Number, or Binary).
The JSON attribute is stored as a Document data type, so an index can't be created on it.
The key schema for the index. Every attribute in the index key schema must be a top-level attribute of type String, Number, or Binary. Other data types, including documents and sets, are not allowed.
Scalar Types – A scalar type can represent exactly one value. The scalar types are number, string, binary, Boolean, and null.
Document Types – A document type can represent a complex structure with nested attributes—such as you would find in a JSON document. The document types are list and map.
Set Types – A set type can represent multiple scalar values. The set types are string set, number set, and binary set.
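A common workaround (a sketch only, not tied to any particular SDK; the attribute names come from the question) is to copy the nested field you want to index into its own top-level scalar attribute before writing the item, and create the index on that attribute instead:

```python
def promote_to_top_level(item, nested_attr, key):
    """Copy item[nested_attr][key] to a top-level attribute so that an
    index (e.g. a GSI partition key) can be created on it."""
    promoted = dict(item)  # shallow copy; the original item is left unchanged
    promoted[key] = item[nested_attr][key]
    return promoted

item = {
    "id": 1,
    "data": {"config": "aasdfds", "state": "PROCESSED", "value": "asfdasasdf"},
}
flat = promote_to_top_level(item, "data", "state")
# flat now carries a top-level "state" attribute usable as an index key
```

The index would then be created on the top-level state attribute, and queries would use state = "PROCESSED" directly rather than a path into the document.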

where clause does not work in GAE datastore viewer

I am new to GAE, please excuse me for being naive.
A Datastore Viewer query with a where clause returns "No results in empty namespace.".
For instance:
select * from GaeUser
returns all the entries.
Something like,
select * from GaeUser where firstName = 'somename'
or
select * from GaeUser where dayOfBirth = 5
returns nothing but the message "No results in empty namespace".
I am expecting some pointers on how to debug this.
Thanks for reading this!!
Most likely you just wrote an incorrect or misspelled query.
Note that the GAE datastore is schema-less. Writing a query for a nonexistent entity or property, or specifying a filter condition with an incorrect data type, will not result in an error but rather in an empty result.
Being schema-less also means that two entities of the same kind might have the same property with different types. For example, you might have a Person entity with an age property of type int, and another Person with an age property of type String. In this case, a query like
select * from Person where age='5'
will not return the person whose age property is the int value 5.
So double-check the names and types of your entities and properties, and try again.
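To illustrate why the property type matters, here is a simplified in-memory model of the datastore's equality filter (plain Python, not actual GAE code; the entity data is invented for the example):

```python
# Two "Person" entities of the same kind whose age property has different types
people = [
    {"name": "Alice", "age": 5},    # int property
    {"name": "Bob",   "age": "5"},  # string property
]

def query(entities, prop, value):
    """Datastore-style equality filter: both the value AND its type must match."""
    return [e for e in entities
            if prop in e and type(e[prop]) is type(value) and e[prop] == value]

query(people, "age", "5")  # matches only the entity with the string "5"
query(people, "age", 5)    # matches only the entity with the int 5
```

Neither query is an error; each simply returns only the entities whose stored type matches, which is why a type mismatch in the viewer shows up as an empty result rather than a failure.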
Another important note:
Properties are indexed by default. This means that when an entity is saved, index records are automatically created and saved for each indexed property, which is what allows you to find the entity by that property. Properties can also be made unindexed: when you save an entity with an unindexed property, no index records are written for it (and any existing ones are removed), so you will not be able to query/find the entity by that property.

Riak inserting a list and querying a list

I was wondering if there is an efficient way of handling arrays/lists in Riak. Right now I'm storing the whole array as a string and searching the string to find out whether an element exists in the array.
ID (key) : int[] (Value)
And also, how do I write a map/reduce query that returns all keys for which the value array contains a given element?
For example: 1 : 2,3,4
2 : 2,5
How would I write an M/R query that gives me all keys for which the value contains 2? The result would be 1,2 in this case.
Any help is appreciated
If you are searching for a specific element in the list and are using the LevelDB backend, you could create a secondary index that will contain the values of the array. Secondary indexes in Riak may contain multiple values and can be searched for equality, which should allow you to search for single elements in the array without having to resort to MapReduce.
If you need to make more complicated queries based on either several elements in the list or other parameters, you could retrieve a subset of records based on the secondary index and then process them further on the client side or perhaps even through a MapReduce job.
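Conceptually, a secondary index is an inverted mapping from element value to object keys. This stdlib-only sketch (not an actual Riak client call; the data is the example from the question) shows the lookup that a 2i equality query performs:

```python
from collections import defaultdict

# Objects as stored: key -> list value (the example data from the question)
objects = {1: [2, 3, 4], 2: [2, 5]}

# Build the "secondary index": element value -> set of keys containing it.
# In Riak this is what tagging each object with one index entry per
# list element achieves.
index = defaultdict(set)
for key, values in objects.items():
    for element in values:
        index[element].add(key)

# Equality query against the index, analogous to a 2i lookup on an element
sorted(index[2])  # keys whose list contains 2
```

With the LevelDB backend, maintaining one integer index entry per list element lets Riak answer "which keys contain element X" directly from the index, without a MapReduce pass over every object.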

DynamoDB ordered list

I'm trying to store a List as a DynamoDB attribute, but I need to be able to retrieve the list in order. At the moment the only solution I have come up with is to create a custom hash map by appending a key to each value, converting the combined value to a String, and storing those as a list.
e.g. key = position1, value = value1, string to be stored in the DB = "position1#value1"
To use the list I then need to filter, sort, substring, and convert each entry back to its original type. It seems like a long way round, but at the moment it's the only solution I can come up with.
Does anybody have any better solutions or ideas?
The List type in the newly added Document Types should help.
Document Data Types
DynamoDB supports List and Map data types, which can be nested to represent complex data structures.
A List type contains an ordered collection of values.
A Map type contains an unordered collection of name-value pairs.
Lists and maps are ideal for storing JSON documents. The List data type is similar to a JSON array, and the Map data type is similar to a JSON object. There are no restrictions on the data types that can be stored in List or Map elements, and the elements do not have to be of the same type.
I don't believe it is possible to store an ordered list as an attribute, as DynamoDB only supports single-valued and (unordered) set attributes. However, the performance overhead of storing a string of comma-separated values (or using some other separator scheme) is probably minimal, given that all the attributes of an item must together fit in 64 KB.
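A sketch of that separator scheme (plain Python, independent of any DynamoDB SDK; the separator and cast are choices, not requirements), showing that order survives the string round trip:

```python
SEP = ","

def encode_list(values):
    """Join values into a single string attribute, preserving order."""
    return SEP.join(str(v) for v in values)

def decode_list(s, cast=int):
    """Split the stored string attribute back into an ordered list."""
    return [cast(part) for part in s.split(SEP)] if s else []

stored = encode_list([10, 7, 42])     # a single string attribute: "10,7,42"
assert decode_list(stored) == [10, 7, 42]
```

The obvious caveat is that the separator must never appear inside a value; for values where that cannot be guaranteed, JSON-encoding the whole list into the string attribute is the safer variant of the same idea.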
(source: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataModel.html)
Add a range attribute to your primary keys.
Composite Primary Key for Range Queries
A composite primary key enables you to specify two attributes in a table that collectively form a unique primary index. All items in the table must have both attributes. One serves as a “hash partition attribute” and the other as a “range attribute.” For example, you might have a “Status Updates” table with a composite primary key composed of “UserID” (hash attribute, used to partition the workload across multiple servers) and a “Time” (range attribute). You could then run a query to fetch either: 1) a particular item uniquely identified by the combination of UserID and Time values; 2) all of the items for a particular hash “bucket” – in this case UserID; or 3) all of the items for a particular UserID within a particular time range. Range queries against “Time” are only supported when the UserID hash bucket is specified.