Can a connection be in multiple groups at once? (e.g., 'chatRoom51' and 'adminUsers')
Yes. A connection can belong to any number of groups simultaneously.
It's my understanding that the PartitionedTableAppender method of the DolphinDB Python API can implement concurrent data writes. I'm trying to write data to a DFS table with a COMPO domain where the partitions are determined by the values of "datetime" and "symbol". The data I'd like to write consists of records for 150 symbols in one day. This is what I tried:
But it seems only one partitioning column can be specified in partitionColName. Please let me know if I'm writing this the wrong way.
Just specify one partitioning column in this case, even though the table uses a COMPO domain. Based on the information given, it is suggested to set partitionColName to "symbol" so that the writes can proceed concurrently. The script still works if you set it to "datetime", but the data cannot be written concurrently, because it contains only one day's records and therefore only one partition is involved.
Refer to the basic operating rules when you are using PartitionedTableAppender:
DolphinDB does not allow multiple writers to write data to one partition at the same time. Therefore, when multiple threads are writing to the same table concurrently, it is recommended to make sure each of them writes to a different partition. The Python API provides a convenient way to do this by dividing the data by partition automatically.
With DolphinDB server version 1.30 or above, we can write to DFS tables with the PartitionedTableAppender object in the Python API. The user needs to first specify a connection pool. The system obtains the partitioning information and assigns the partitions to the connections in the pool for concurrent writing. A partition can only be written to by one connection at a time.
Therefore, only one partitioning column needs to be specified, even for a table with a COMPO domain. Just pick a partitioning column with many distinct values so that the data is split across a number of partitions, which can then be assigned to the connections in the pool. That way the data can be written concurrently to the DFS table.
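A minimal sketch of this setup with the DolphinDB Python API is shown below. The host, port, credentials, database path, table name, and sample data are all placeholders, and the exact constructor arguments may differ slightly between API versions:

import dolphindb as ddb
import pandas as pd
import numpy as np

# Connection pool with several worker threads for concurrent writes
pool = ddb.DBConnectionPool("localhost", 8848, 8, "admin", "123456")

# Appender for a DFS table partitioned by COMPO(datetime, symbol).
# Only one partitioning column is passed; "symbol" is chosen because the
# one-day data set spans 150 symbols and therefore many partitions.
appender = ddb.PartitionedTableAppender("dfs://stockDB", "quotes", "symbol", pool)

# Example DataFrame: 150 symbols, one trading day
df = pd.DataFrame({
    "datetime": pd.to_datetime(["2023-01-03 09:30:00"] * 150),
    "symbol": ["S%03d" % i for i in range(150)],
    "price": np.random.rand(150) * 100,
})

rows_written = appender.append(df)  # rows are routed by "symbol" and written in parallel
print(rows_written)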
For example, I have an order-assigning system, and different types of orders need to be sent to different clients. Should I define one single hub to handle all types of orders, or one hub per order type? The messages for different order types could be different too.
Also, if I use only one hub, will the different types of messages coming from different services/threads be sent sequentially in one thread or simultaneously in multiple threads?
Check my answer to a similar question here. Officially, there is no performance difference in having more than one hub. A link to the documentation is there as well.
I have GoldenGate on Oracle 12c. Is it possible to get a subset of the data (like a WHERE condition) from the source to replicate into the target database?
Second question: Is it possible to replicate data using GoldenGate into two different databases, and if so, how? From one source to two target schemas.
Yes, it is possible to get a subset of the data. Use the FILTER or WHERE clause in the Extract or Replicat parameter file.
Yes, it is possible to replicate to two targets. There are two possibilities:
You can use just one Extract process and two Replicat processes (both reading the same trail), or
You can create two Extract processes writing data to two trail files, plus two Replicat processes.
In either setup, each Replicat process writes data to a separate target database.
I am using vis.js for network diagramming, and I need to cluster some nodes.
For example: create a new cluster when a node property is 'aaa' (suppose two nodes have 'aaa'; they form one cluster).
Then create another cluster when a node property is 'bbb' (suppose two nodes have 'bbb'; they form a second cluster).
I have two problems:
1) I don't want to hard-code 'aaa' or 'bbb'.
2) I don't want to create multiple clusterOptionsByData objects and invoke network.cluster(clusterOptionsByData) multiple times.
Is there any way to pass multiple join conditions while clustering in vis.js?
I'm the developer of the vis.js network. The answer to 2) is no.
What is the use case for not wanting to call the method twice? You can construct your own join condition based on variables and pass that into the cluster method. You don't need to hardcode anything.
Every cluster call makes one cluster. Multiple join conditions do not make sense. It's up to you to come up with a good join condition that covers all your cases.
Next time you have a question, post it on our github issues page. We get email updates on these so we'd be able to help you quicker.
Cheers
I'm using SignalR in a prototype I'm building. I need to broadcast messages to a number of clients, but there is some logic as to which clients will get which message, and that logic is complex enough to rule out using Groups. Instead, I'm basically checking each connected client - if they are applicable, they're added to a List<>. I then send the message using:
var clients = DetermineClients(msg);
foreach (var client in clients)
client.Send(msg);
Of course, if I were able to use Groups, I could do...:
var group = DetermineGroup(msg);
group.Send(msg);
... since the 'Send' method of a group appears to do basically the same thing - enumerate the clients in the group and call 'Send' on each of them. Is this the 'correct' way to do this? Or is there some way to create a temporary group on the fly? The 'dynamic' type of a group or a single client makes it difficult for me to determine whether I'm doing this right. If there is some magic going on behind the scenes to optimise a broadcast to a number of clients, I'd obviously rather use that!
Any advice would be appreciated. Let me know if you need more info.
If you're simply trying to send the same message to a number of different clients based on complex criteria, your best bet would be to have a group that contains all of your clients and then query to find out which clients DO NOT fit your criteria. With the list of connection IDs that do not fit your criteria you can do:
Clients.Group("myGroupThatHasAllMyClients", myExcludedConnectionIds).bar();