How does the GUI load a partitioned table on a remote server in DolphinDB?

I run scripts in the GUI and want to load a partitioned table that lives on a remote server. How can I do this?

The first method is to connect to a data node of the remote server and then load the table with the loadTable() function.
Establish a connection to the remote server with the xdb function, then execute a script on the remote node with the remoteRun function and return the result. Sample code:
// Establish a remote connection
conn = xdb("115.239.209.234", 28948, "admin", "123456")
// Execute a script on the remote node; the list variable lives on the remote server
remoteRun(conn, "list = exec top 500 distinct(ID) from loadTable('dfs://DataDB','pt') where TradeDate=2019.09.06")
// Load part of table pt in DataDB on the remote server into the memory of the data
// node the GUI is currently connected to (not the remote server); the result is t2
t2 = remoteRun(conn, "select * from loadTable('dfs://DataDB','pt') where TradeDate=2019.09.06 and ID in list")
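Note that remoteRun returns the result of the last statement to the calling session, so t2 ends up as an in-memory table on the data node the GUI is connected to and can be queried locally from then on.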

Related

[airflow]hive_hook: can not connect to hive metastore

I installed it and ran a HivePartitionSensor to test the Hive metastore connection. It threw an exception telling me it could not connect to the metastore:
thrift.transport.TTransport.TTransportException: Could not connect to ...
but I'm sure the host and port are right. I found that the port returned from MySQL is a long such as 9083L, so I changed the source code in hive_hooks.py from
socket = TSocket.TSocket(ms.host, ms.port)
to
socket = TSocket.TSocket(ms.host, int(ms.port))
i.e. just cast ms.port to int, and it works!
Has anyone come across this problem before?
Env: Python 2.7, Airflow v1-8-stable, MySQL 5.7

Terminating (graciously) MonetDB process in R

I'm using MonetDB on a variety of platforms (e.g. OS X and Linux CentOS) with a shiny application.
It is inconvenient to disconnect the db every time, so currently my approach is to terminate the shiny app without disconnecting the db.
This means that, before accessing the data, the shiny app tries to stop any "old" process with:
monetdb.server.stop(pid)
From the source of the command I understand that it basically kills the process associated with the pid provided (which means, among other things, that the user running the app must be allowed to kill that process).
This works OK some of the time, but sometimes when I try to start MonetDB again I get:
!FATAL: GDKlockHome: Database lock '.gdk_lock' denied
Warning in socketConnection(host = host, port = port, blocking = TRUE, open = "r+b", :
localhost:50000 cannot be opened
Error in socketConnection(host = host, port = port, blocking = TRUE, open = "r+b", :
cannot open the connection
Is there a way to avoid this error (without having to disconnect the database properly every time I use it in shiny)?
It can indeed take a couple of seconds for MonetDB to shut down. If a new process is started on the same dbfarm directory in the meantime, you will get the !FATAL: GDKlockHome: Database lock '.gdk_lock' denied error. I'm considering adding a wait parameter to monetdb.server.stop.
In the meantime, waiting a bit before the server is started again is a good idea. Otherwise, consider using monetdbd to manage your MonetDB servers.
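For example (assuming the MonetDB.R helpers shown above), after monetdb.server.stop(pid) you could simply Sys.sleep(2) before calling monetdb.server.start() again; a couple of seconds is usually enough for the .gdk_lock file to be released.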

Newbie to TcpClient Server

I have inherited a project which involves a client connection to a server which, in some instances, in turn connects to another server.
The client starts a transaction which is sent to Server A, which in turn sends xml to Server B. Server B returns xml to Server A, which returns it to the client.
Each transaction requires this loop to be executed 3 or 4 times, depending on client selections after the first loop has completed.
Server B requires that the connection from Server A remain open for the duration of the transaction, and a session id is assigned as part of the return xml message after the initial connection is made.
My problem comes in when another client connects to Server A and a new transaction is therefore triggered between Server A and Server B; due to my lack of experience with TcpClient programming, I am unable to identify which connection is linked to which individual client. Currently there are over 200 clients, and at times there can be up to 50 transactions at differing stages of completion.
Each client sends a unique identifier with every transaction and Server B sends a unique session id with every connection; I need to figure out a way of linking the two on Server A.
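Since each client already sends a unique identifier and Server B returns a session id on its first reply, one common approach is to keep a concurrent map on Server A, keyed by the client identifier, that holds the open Server B connection and its session id for the lifetime of the transaction. A minimal sketch of the idea (in Java for illustration, since the project above is .NET, where ConcurrentDictionary plays the same role; all names here are hypothetical):
import java.net.Socket;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// One entry per in-flight transaction: the value holds the Server B socket,
// which must stay open, plus the session id Server B issued.
class SessionState {
    final Socket serverB;
    final String sessionId;
    SessionState(Socket serverB, String sessionId) {
        this.serverB = serverB;
        this.sessionId = sessionId;
    }
}

class SessionRegistry {
    private final ConcurrentMap<String, SessionState> sessions = new ConcurrentHashMap<>();

    // Called once the first Server B reply of a transaction has been parsed.
    void register(String clientId, SessionState state) {
        sessions.put(clientId, state);
    }

    // Called on each subsequent loop of the same transaction.
    SessionState lookup(String clientId) {
        return sessions.get(clientId);
    }

    // Called when the transaction completes (after the 3rd or 4th loop).
    void complete(String clientId) {
        sessions.remove(clientId);
    }
}
Each incoming client request is then routed to the right Server B connection by looking up the client's identifier, regardless of how many transactions are in flight.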

H2 Database Server to serve TCP clients from connection pool

I have an H2 server that I start from the console. A client on a different machine accesses the server and calls a function alias (registered at the database). The problem is that this function is called more than a million times, so the connection times out. I worked around that by adding AUTORECONNECT=TRUE to the client connection string, which solves the problem but adds a reconnection time delay (which I want to avoid).
Is there any flag/command we can use to tell the server to dedicate X connections?
Also, I looked into the possibility of starting the server from within Application. Like,
JdbcConnectionPool cp = JdbcConnectionPool.create(
"jdbc:h2:tcp://IPADDRESS:9092/~/test", "sa", "");
cp.setMaxConnections(MAX_CONN_IN_POOL);
// start the TCP Server
Server server = Server.createTcpServer().start();
Connection conn = cp.getConnection();
Statement stat = conn.createStatement();
stat.execute("SELECT myFunctionAlias(arg)");
cp.dispose();
server.stop();
The above sample code does start the server and will only run once. I want the server to be open and keep listening to clients, and serve them from the connection pool. Any pointers?
You should open the database in AUTO_SERVER mode. This way the first process to open the database file becomes the server (a simple form of leader election) and starts a TCP server for all other clients.
The elected process reads the database file directly and is therefore very fast, while the other clients are as fast as the network allows.
This is transparent to the user as long as you use the same connection string.
Connection connection;
String dataBaseString = "jdbc:h2:/path/to/db" + File.separator + "db;AUTO_SERVER=TRUE;AUTO_RECONNECT=TRUE";
try
{
// Make sure the H2 driver is on the classpath
Class.forName("org.h2.Driver");
log.info("getConnection(), driver found");
}
catch (java.lang.ClassNotFoundException e)
{
log.error("getConnection(), ClassNotFoundException: " + e.getMessage(), e);
Main.quit();
}
try
{
connection = DriverManager.getConnection(dataBaseString);
}
catch (java.sql.SQLException e)
{
log.error("getConnection(), SQLException: " + e.getMessage(), e);
Main.quit();
}
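If you would rather keep an explicit TCP server process running and serve remote clients while it stays up, a minimal sketch (assuming the function alias myFunctionAlias from the question is already registered in the database; the port and the argument value are illustrative):
import java.sql.Connection;
import java.sql.Statement;
import org.h2.jdbcx.JdbcConnectionPool;
import org.h2.tools.Server;

public class H2ServerMain {
    public static void main(String[] args) throws Exception {
        // Start the TCP server; remote clients connect to jdbc:h2:tcp://<host>:9092/~/test
        Server server = Server.createTcpServer("-tcpPort", "9092", "-tcpAllowOthers").start();

        // Pool for this process's own statements; remote clients keep their own pools.
        JdbcConnectionPool cp = JdbcConnectionPool.create("jdbc:h2:tcp://localhost:9092/~/test", "sa", "");
        cp.setMaxConnections(50);

        // Dispose of the pool and stop the server only on shutdown (Ctrl+C),
        // not after a single statement as in the question's snippet.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            cp.dispose();
            server.stop();
        }));

        try (Connection conn = cp.getConnection(); Statement stat = conn.createStatement()) {
            stat.execute("SELECT myFunctionAlias(1)");
        }

        // Block so the TCP server keeps listening for clients.
        Thread.currentThread().join();
    }
}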

HTTP Error: 400 when sending msmq message over http

I am developing a solution which will utilize MSMQ to transmit data between two machines. Due to the separation of said machines, we need to use HTTP transport for the messages.
In my test environment I am using a Windows 7 x64 development machine, which is attempting to send messages using a homebrew app to any of several test machines I have control over.
All machines are either windows server 2003 or server 2008 with msmq and msmq http support installed.
For any test destination, I can use the following queue path name with success:
FORMATNAME:DIRECT=TCP:[machine_name_or_ip]\private$\test_queue
But for any test destination, the following always fails
FORMATNAME:DIRECT=HTTP://[machine_name_or_ip]/msmq/private$/test_queue
I have used all permutations of machine names/IPs available. I have also created mappings using the method described in this blog post. All attempts result in the same HTTP Error: 400.
The following is the code used to send messages:
MessageQueue mq = new MessageQueue(queuepath);
System.Messaging.Message msg = new System.Messaging.Message
{
Priority = MessagePriority.Normal,
Formatter = new XmlMessageFormatter(),
Label = "test"
};
msg.Body = txtMessageBody.Text;
msg.UseDeadLetterQueue = true;
msg.UseJournalQueue = true;
msg.AcknowledgeType = AcknowledgeTypes.FullReachQueue | AcknowledgeTypes.FullReceive;
msg.AdministrationQueue = new MessageQueue(@".\private$\Ack");
if (SendTransactional)
mq.Send(msg, MessageQueueTransactionType.Single);
else
mq.Send(msg);
Additional Information: in the IIS logs on the destination machines I can see each message I send being recorded as a POST with a status code of 200.
I am open to any suggestions.
The problem can be caused by the IP address of the destination server having been NAT'ed through a firewall.
In this case the IIS server receives the message okay and passes it on to MSMQ. MSMQ then reads the message and sees a destination that differs from the known IP addresses of the server. At this point MSMQ rejects the message and IIS returns HTTP status 400.
Fortunately the solution is fairly straightforward. Look in %windir%\System32\msmq\mapping. This folder can contain a number of XML files (sample files are often provided), each containing mappings between one address and another. The file name can be anything you like; here is an example of the XML contents:
<redirections xmlns="msmq-queue-redirections.xml">
<redirection>
<from>http://external_host/msmq/external_queue</from>
<to>http://internal_host/msmq/internal_queue</to>
</redirection>
</redirections>
The MSMQ service then needs restarting to pick up the new configuration, for instance from the command line:
net stop msmq
net start msmq
References:
http://blogs.msdn.com/b/johnbreakwell/archive/2008/01/29/unable-to-send-msmq-3-0-http-messages.aspx
http://msdn.microsoft.com/en-us/library/ms701477(v=vs.85).aspx
Maybe you have to encode the $ as %24.
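With that encoding (and a hypothetical host name), the HTTP format name from above would read: FORMATNAME:DIRECT=HTTP://myserver/msmq/private%24/test_queue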
