We have one data stream created and want to know whether we can create multiple index patterns so that we can push logs from multiple applications.
It's my understanding that the PartitionedTableAppender method of the DolphinDB Python API can implement concurrent data writes. I'm trying to write data to a DFS table with a COMPO domain where the partitions are determined by the values of "datetime" and "symbol". The data I'd like to write consists of one day's records for 150 symbols. This is what I tried:
But it seems only one partitioning column can be specified in partitionColName. Please let me know if I'm writing this the wrong way.
Just specify one partitioning column in this case, even though the table uses a COMPO domain. Based on the given information, set partitionColName to "symbol" so that the writes can run concurrently. The script still works if you set it to "datetime", but the data cannot be written concurrently, because it contains only one day's records and therefore falls into a single date partition.
Refer to these basic operating rules when using PartitionedTableAppender:
DolphinDB does not allow multiple writers to write data to one partition at the same time. Therefore, when multiple threads are writing to the same table concurrently, it is recommended to make sure each of them writes to a different partition. The Python API provides a convenient way to do this by dividing the data by partition automatically.
With DolphinDB server version 1.30 or above, we can write to DFS tables with the PartitionedTableAppender object in the Python API. The user first specifies a connection pool. The system obtains the partitioning information and assigns the partitions to the connection pool for concurrent writing. A partition can only be written to by one connection at a time.
Therefore, only one partitioning column needs to be specified, even for a table with a COMPO domain. Just specify a high-cardinality partitioning column so that many partitions are created and distributed across the connections in the pool; the data can then be written to the DFS table concurrently.
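As a minimal sketch of this, assuming a COMPO-partitioned DFS database at "dfs://compoDB" with a table "pt" (the path, credentials, and schema below are placeholders for your own setup):

    import dolphindb as ddb
    import numpy as np
    import pandas as pd

    # Placeholder host/port/credentials; threadNum=8 opens 8 connections.
    pool = ddb.DBConnectionPool("localhost", 8848, 8, "admin", "123456")

    # Partition on "symbol": with 150 symbols the data is split across many
    # partitions, which the pool can write to concurrently.
    appender = ddb.PartitionedTableAppender("dfs://compoDB", "pt", "symbol", pool)

    # One day's records for 150 symbols; columns must match the table schema.
    df = pd.DataFrame({
        "datetime": pd.to_datetime(["2021-06-01 09:30:00"] * 150),
        "symbol": ["S" + str(i).zfill(3) for i in range(150)],
        "price": np.random.rand(150),
    })
    appender.append(df)
    pool.shutDown()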
From the old version of analytics I have 1 property with many data streams.
Each data stream represents 1 Android app.
The reports are not relevant as they aggregate all data streams with no way to segment each individual app.
How can I get separate reporting for each app using the same data streams, i.e. WITHOUT creating new Firebase config files?
I have already tried creating a new property, but when adding the data stream there is no way to use an existing one.
Just found out a way to get what I need:
In a new report, click All Users, change the comparison dimension from Audience name to Stream ID, and set the dimension value to the app you want the report for.
I'm building a travel-planning app using Flutter. The app will help people plan their trips by offering a few route options to choose from: cheapest, fastest, shortest, etc.
I'm quite new to Firebase and need some advice on the data structure. I'm thinking of modelling public transportation such as trains, where each train has its own schedule. What is the best way to structure this schedule inside Firestore, so that I can create a view that displays a train's schedule?
If it were me, I would want to map out how I intend to interact with the application. I usually draw this out by hand, as I find it much quicker than trying to model something electronically. I also review similar apps at the same time and try to identify data that are missing. Start simple and build up the various data points that you want to include.
The main thing to note is that Cloud Firestore is a NoSQL, document-oriented database; there are no tables or rows. You store data in documents, which contain sets of key-value pairs and are organised into collections.
Once you have a basic structure, you can opt for one of the Firestore data structures and test it out:
Documents
Multiple collections
Subcollections within documents
Nested data
A typical use case for this might be a chat app where you want to store a user's three most recently visited chat rooms as a nested list in their profile.
Subcollections
You can create collections within documents when you have data that might expand over time; for example, you might create a collection of users or messages within chat room documents.
Root-level collections
You can create collections at the root level to organise distinct data sets, such as one collection for users and another for rooms and messages. A sketch of the root-collection-plus-subcollection approach follows below.
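For a train schedule, a minimal sketch of that shape might look like this, using the Python client (google-cloud-firestore); the collection and field names are invented, and the same structure carries over to the Flutter cloud_firestore plugin:

    from google.cloud import firestore

    db = firestore.Client()

    # Root-level collection: one document per train.
    train = db.collection("trains").document("ktm-1234")
    train.set({"name": "KTM Komuter", "line": "Port Klang Line"})

    # Subcollection: schedule entries can grow over time without
    # bloating the parent train document.
    train.collection("schedule").add({
        "station": "KL Sentral",
        "departure": "08:15",
        "direction": "northbound",
    })

    # The schedule view queries one train's stops, ordered by departure.
    for stop in train.collection("schedule").order_by("departure").stream():
        print(stop.to_dict())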
I have multiple flat files (CSV), each with multiple records, and the files arrive in random order. I have to combine their records using unique ID fields.
How can I combine them if there is no common unique field across all files, and I don't know which one will be received first?
Here are some file examples:
In reality there are 16 files.
There are many more fields and records than in this example.
I would avoid trying to do this purely in XSLT, BizTalk orchestrations, or C# code. These are fairly simple flat files: load them into SQL and create a view to join your data up.
You can still use BizTalk to pick up and load the files, and to execute the view or procedure that joins the data and sends your final message.
There are a few questions that might help guide how this would work here:
When do you want to join the data together? What triggers that (a time of day, a certain number of messages received, a certain type of message, a particular record, etc)? How will BizTalk know when it's received enough/the right data to join?
What does a canonical version of this data look like? Does all of the data from all of these files truly get correlated into one entity (e.g. a "Trade" or a "Transfer" etc.)?
I'd probably start by defining my canonical entity, and then work out how to assemble a "complete" picture of that canonical entity in SQL.
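As a rough sketch of the SQL-side flow, here is the idea using Python's built-in sqlite3 purely for illustration; in a BizTalk environment this would normally be SQL Server, and the file names, table names, and columns below are invented:

    import csv
    import sqlite3

    conn = sqlite3.connect("staging.db")
    conn.execute("CREATE TABLE IF NOT EXISTS orders (OrderId TEXT, CustomerId TEXT, Amount REAL)")
    conn.execute("CREATE TABLE IF NOT EXISTS customers (CustomerId TEXT, Name TEXT)")

    def load_csv(path, table, columns):
        # Stage each flat file as it arrives, regardless of arrival order.
        with open(path, newline="") as f:
            rows = [tuple(row[c] for c in columns) for row in csv.DictReader(f)]
        placeholders = ", ".join("?" * len(columns))
        conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
        conn.commit()

    load_csv("orders.csv", "orders", ["OrderId", "CustomerId", "Amount"])
    load_csv("customers.csv", "customers", ["CustomerId", "Name"])

    # The view does the join; query it once all expected files are staged.
    conn.execute("""
        CREATE VIEW IF NOT EXISTS combined AS
        SELECT o.OrderId, c.Name, o.Amount
        FROM orders o
        JOIN customers c ON c.CustomerId = o.CustomerId
    """)
    print(conn.execute("SELECT * FROM combined").fetchall())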
Is it possible to manage data stored amongst several files?
Let's say I have several files data1.realm, data2.realm, data3.realm, etc. containing objects with the same model. Is it possible to get a single RLMRealm instance that will access the data in all of these files?
If not, what is the best way to handle this situation? Migration?
It's definitely possible to manage data stored amongst separate Realms, but it wouldn't be automatic. You would need to manage access to this data in your own app's logic.
RLMRealm instances themselves represent a single file on disk and cannot be dynamically created to reference combinations of other Realms. Once an RLMObject has been added to a parent RLMRealm, it cannot be moved to, or backed by, another RLMRealm representing a different file.
It most likely depends on your specific use cases, but the simplest solution would be to query for your objects in a separate RLMRealm instance for each file and place the resulting RLMResults objects in an NSArray.
While data can't be directly shared between Realms, you could use globally unique primary keys (For example NSUUID) to indicate relationships between objects in different Realms.
If you do end up wanting to move objects between Realms, it's also possible to create Realmless (unmanaged) copies of RLMObjects:
    Dog *savedDog = [[Dog allObjects] firstObject];
    // -initWithValue: creates an unmanaged copy that can later be added to a different Realm.
    Dog *copiedDog = [[Dog alloc] initWithValue:savedDog];