Does Calcite support custom push-down?

I have three tables, a, b, and c, of which a and b come from MySQL. When running a join query across all three tables, I want the join of a and b to be pushed down to MySQL for execution. Does Calcite support this kind of optimization?
From what I have found, it seems it does not.
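For context, a minimal sketch of the federated setup in question, using Calcite's JDBC adapter (the connection URL, driver, and credentials are placeholders, not from the original question):

import java.sql.Connection;
import java.sql.DriverManager;
import javax.sql.DataSource;
import org.apache.calcite.adapter.jdbc.JdbcSchema;
import org.apache.calcite.jdbc.CalciteConnection;
import org.apache.calcite.schema.SchemaPlus;

public class FederationSketch {
    public static void main(String[] args) throws Exception {
        Connection connection = DriverManager.getConnection("jdbc:calcite:");
        CalciteConnection calcite = connection.unwrap(CalciteConnection.class);
        SchemaPlus root = calcite.getRootSchema();

        // Register the MySQL schema that holds tables a and b via the JDBC adapter.
        DataSource mysql = JdbcSchema.dataSource(
                "jdbc:mysql://localhost:3306/mydb", "com.mysql.cj.jdbc.Driver", "user", "pass");
        root.add("mysql", JdbcSchema.create(root, "mysql", mysql, null, null));

        // Table c would be registered under a separate schema. Whether the
        // a JOIN b subtree is then executed inside MySQL depends on the JDBC
        // adapter's rules (e.g. JdbcRules.JdbcJoinRule) firing during planning.
    }
}

With both MySQL tables registered under one JDBC schema, their join is at least a candidate for push-down; a join that also involves table c cannot be pushed down as a whole.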

Related

ClickHouse compatibility between different versions for Merge tables

I have a question regarding compatibility between different ClickHouse server versions. I have a ReplicatedMergeTree table engine, let's call it Table A, on ClickHouse server version 20.3. I have another ReplicatedMergeTree table engine, let's call it Table B, on ClickHouse server version 21.8. I have a Merge table engine, let's call it Table C, running on ClickHouse server version 20.3, which merges data from Table A and Table B. Since these tables are on different versions of ClickHouse, I wanted to know whether there would be any issues because of it.
In short
Node 1 (version 20.3) - Local Table A.
Node 2 (version 21.8) - Local Table B.
Node 3 (version 20.3) - Distributed Table A, Distributed Table B, Merge Table C (of Distributed Table A and Distributed Table B).
Is it supported by ClickHouse or not?
Not in general. Distributed queries across different server versions may produce incorrect results; I know of two issues for this case with 20.3 + 21.8.
Also, I don't understand how you are going to use Engine=Merge; you need a Distributed table to query a remote server. And I don't think you need Engine=Merge at all: this is a simple two-shard schema for a Distributed table, as sketched below.
Either way, I don't recommend using this setup. It is possible and will work, but only for limited use cases.
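To illustrate that two-shard alternative, a hedged sketch (the cluster, host, table names, and columns are hypothetical, and it assumes both local tables share one structure): a single Distributed table over a two-shard cluster replaces the Merge-over-Distributed construction, here issued over JDBC.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TwoShardSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; my_cluster would list node 1 and node 2 as its
        // two shards in the remote_servers section of the querying node's config.
        try (Connection conn = DriverManager.getConnection("jdbc:clickhouse://node3:8123/default");
             Statement stmt = conn.createStatement()) {
            // One Distributed table fanning out to a same-named local table on
            // both shards; the column list is illustrative only.
            stmt.execute("CREATE TABLE events_all (d Date, id UInt64) "
                    + "ENGINE = Distributed(my_cluster, default, events_local)");
        }
    }
}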

How to get mixed results from Realm and display them in a ListView/RecyclerView

Let's say I have two models A, and B. I need to get results with both A, and B in it. The list view could be something like
A 1
A 2
A 3
B 1
B 2
A 4
A 5
B 3
I realize it's not possible to get mixed results from a single Realm query. It also appears we can't inherit from a RealmObject (https://github.com/realm/realm-java/issues/761).
Here is one way, through composition:
Best practice and how to implement RealmList that need to support different types of objects.
What's the best way to achieve this?
The best practice you've linked is the current best approach to achieve what you're looking for; a sketch of it is below.
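For concreteness, a minimal sketch of that composition pattern with realm-java (MixedItem is a hypothetical name; A and B are your model classes from the question): a container object carries at most one of the two types, and the adapter branches on which field is set.

import io.realm.RealmObject;

public class MixedItem extends RealmObject {
    // Exactly one of these two fields is expected to be non-null.
    private A a;
    private B b;

    public boolean isA() { return a != null; }

    public A getA() { return a; }
    public void setA(A a) { this.a = a; }

    public B getB() { return b; }
    public void setB(B b) { this.b = b; }
}

A RealmResults<MixedItem> can then back the adapter directly, with getItemViewType() branching on isA() to inflate the right row layout.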

Symfony2: how to work with data from another db?

I have one master database that hosts all master and transaction details of my users.
Now I would like to have three web applications (Symfony2), A, B, and C, connected to this database. Each of them also has its own local database.
A is for my users; B and C will be admin applications. (Basically these are for different departments within my company that do distinctly different things with info from the master database.) The master database is fed by A, but should also be modifiable by B and C.
Let's say in B I need to assign an activity to a user that created an account in A, and store that info in B's local database.
I have been pondering this for a month: how do I work with a User entity in B (coming from A) so that I am still able to do something like:
$activity = new Activity();
$activity->setUser($user);
Or something like:
$activity->getUser();
How can an object relationship be maintained if info needs to come from two different databases?
I am very new to this way of working. Do I need to work with abstract classes, or APIs, or something else?
If someone has at least some tips as to what I should take into consideration, I would be most grateful.
EDIT:
Does this then mean I would need to have two entity classes for, for example, the user entity? One in A, and one in B (and possibly one in C).
The problem is that in A, the user entity has different relationships to other entities than it does in B. Remember that A and B are different applications.
I thought maybe I should have a reusable bundle that is used in both A and B? But then again, the problem is that some entities have relationships in A that do not exist in B.
In other words, how would I map the same data from the database to entities differently in multiple applications?
You will need to set up multiple entity managers in your configuration. This link will teach you how; a rough sketch of the resulting pattern is below.
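The configuration itself is Symfony/Doctrine-specific, but since Doctrine2's design closely follows JPA, the pattern can be sketched in Java (the persistence unit names and entity stubs are hypothetical, and two persistence units are assumed to exist in persistence.xml): one entity manager per database, with the cross-database reference stored as a plain id rather than a mapped association.

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
class User {
    @Id Long id;
}

@Entity
class Activity {
    @Id @GeneratedValue Long id;
    Long userId; // cross-database reference by id, not a mapped relation
}

public class CrossDbSketch {
    public static void main(String[] args) {
        // One entity manager per database; associations cannot span the two.
        EntityManagerFactory masterEmf = Persistence.createEntityManagerFactory("masterUnit");
        EntityManagerFactory localEmf = Persistence.createEntityManagerFactory("localUnit");
        EntityManager master = masterEmf.createEntityManager();
        EntityManager local = localEmf.createEntityManager();

        User user = master.find(User.class, 42L); // read from the master database

        local.getTransaction().begin();
        Activity activity = new Activity();
        activity.userId = user.id; // the stand-in for $activity->setUser($user)
        local.persist(activity);
        local.getTransaction().commit();
    }
}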

Does versionable behaviour impact query speed in Doctrine 1.2.4?

I am seeing odd behaviour when using Doctrine::getTable() and running queries with it.
In some cases there is little to no overhead, and in other cases there is 200+ ms of overhead when Doctrine::getTable() is first called (with little overhead for subsequent calls on the same table).
The action works like this:
a. Doctrine::getTable() is called to run a query on table A (the table in question was used to generate the action's module files), with little to no overhead.
b. The form is saved.
c. Doctrine::getTable() is called to run a query on an unrelated table (table B) and incurs considerable overhead (200+ ms).
d. Doctrine::getTable() is called on another table (table C) to run another query, with little to no overhead.
I've tried the action with the Doctrine::getTable() query in step c removed, to see if it was a general loading issue, but the query in step d still runs with little to no overhead. I've also run the queries using Doctrine_Query directly in the action to see if that made a difference, and the speed impact is still there.
It doesn't matter what the query on the problem table is; the same overhead/performance penalty is there.
The only difference with the slow table (table B) is that it has the Versionable behaviour, whereas the other tables (tables A and C) don't. Could that be impacting the speed of the initial query (subsequent queries to that table are fast once the first one is done)?
After more digging and testing, I can confirm that the Versionable behaviour on a table adds overhead when running queries (and inserts) against that table.

ASP.Net MVC - Data Design - Single Wide Record versus Many small record retrievals

I am designing a web application that we estimate may have about 1500 unique users per hour. (We have no stats for concurrent users.) I am using ASP.NET MVC3 with an Oracle 11g backend, and all retrieval will be through packaged stored procedures, not inline SQL. The application is read-only.
Table A has about 4 million records in it.
Table B has about 4.5 million records.
Table C has less than 200,000 records.
There are two other tiny lookup tables that are also linked to table A.
Tables B and C both have a one-to-one relationship with Table A; Tables A and B are required, C is not. Tables B and C contain many string columns (some up to 256 characters).
A search will always return 0, 1, or 2 records from Table A, with its mate in Table B and any related data in C and the lookup tables.
My data access process would create a connection and command, execute the query, return a reader, load the appropriate object from that reader, close the connection, and dispose.
My question is this:
Is it better (performance-wise) to return a single, wide record set all at once (using only one connection), or to query one table right after the other (one connection per query), returning narrower records and joining them in the code?
EDIT:
Clarification: I will always need all the data, in either option. Both options eventually put the same amount of data on the screen. But one uses a single connection to get everything at once (wider rows, so maybe slower?), while the other uses multiple connections, one right after the other, each getting a smaller amount. I don't know whether the number of connections should influence the decision here.
Also - I have the freedom to denormalize the table design, if I decide it's appropriate.
You only ever want to pull as much data as you need; whichever way moves less data from the database to your code is the way to go. At first glance I would pick your second suggestion.
-Edit-
Since you need to pull all of the records regardless, you will only want to establish a connection once. You're getting the same amount of data either way, so keep the number of connections down and fetch everything over a single connection, as in the sketch below.
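As a rough sketch of the single-wide-query option (the question's stack is ADO.NET, but the shape is the same over JDBC; the connection string and the search_pkg.find_records procedure are hypothetical names):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class WideFetchSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "app", "secret")) {
            // One round trip on one connection: the packaged procedure joins
            // A, B, C, and the lookup tables and returns the 0-2 wide rows.
            try (CallableStatement call = conn.prepareCall("{call search_pkg.find_records(?, ?)}")) {
                call.setString(1, "search-term");
                call.registerOutParameter(2, oracle.jdbc.OracleTypes.CURSOR);
                call.execute();
                try (ResultSet rs = (ResultSet) call.getObject(2)) {
                    while (rs.next()) {
                        // map the wide row into the A/B/C objects here
                    }
                }
            }
        }
    }
}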
