How to define a ProDataSet with a join table - OpenEdge

How do you actually define a dataset with a join table? Whenever we do this we get the error "buffer could only have 1 active parent relation".
DEFINE DATASET FOR eOrder, eOrderLine, eProduct
DATA-RELATION r1 for eOrder, eOrderLine
RELATION-FIELDS (OrderID, OrderID)
DATA-RELATION r2 for eOrder, eProduct.
RELATION-FIELDS(ProductID, ProductID)

It looks like your syntax is a little off. You don't have a dataset name in there. The syntax is:
DEFINE DATASET <DatasetName> FOR...
Also, the period at the end of DATA-RELATION r2 is ending the statement before the RELATION-FIELDS phrase. Here is an example that will work with the Sports database:
DEFINE TEMP-TABLE eOrder LIKE Order.
DEFINE TEMP-TABLE eOrderLine LIKE Order-Line.
DEFINE TEMP-TABLE eCustomer LIKE Customer.
DEFINE DATASET dsOrder FOR eOrder, eOrderLine, eCustomer
DATA-RELATION r1 FOR eOrder, eOrderLine
RELATION-FIELDS (Order-Num, Order-Num)
DATA-RELATION r2 FOR eOrder, eCustomer
RELATION-FIELDS (Cust-Num, Cust-Num).

Related

Is it possible to store the result of a transformation in a variable (R Script/Power Query style) in SQL Server?

Is it possible to store the result of a transformation in a variable, as in R or Power Query?
For example, with R/Power Query the following can be achieved:
X <- table1
Y <- table2
Z <- table3
A <- X LEFT JOIN Y
B <- Z LEFT JOIN A
and so on ...
It would be so much easier if this could be done in SQL Server: the result of a transformation could be stored in a variable and reused later throughout the code.
Is it possible? Thank you in advance.
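For what it's worth, the closest single-statement analogue in T-SQL is a chain of common table expressions, where each named CTE plays the role of one of the R variables. A minimal sketch, assuming hypothetical tables table1(id, a), table2(id, b), table3(id, c):
WITH X AS (SELECT id, a FROM table1),
     Y AS (SELECT id, b FROM table2),
     Z AS (SELECT id, c FROM table3),
     -- A <- X LEFT JOIN Y
     A AS (SELECT X.id, X.a, Y.b FROM X LEFT JOIN Y ON Y.id = X.id),
     -- B <- Z LEFT JOIN A
     B AS (SELECT Z.id, Z.c, A.a, A.b FROM Z LEFT JOIN A ON A.id = Z.id)
SELECT * FROM B;
If the intermediate results need to outlive a single statement, SELECT ... INTO #X temp tables (or table variables) are the usual alternative, at the cost of materializing each step.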

Converting a date to a varchar using "like" in PL/SQL

I need to go through a few million rows searching for a year that is sent as a parameter to a method. The year comes as a varchar.
This is the query I'm working with:
SELECT X,Y
FROM A
WHERE mch_code = 'KN'
AND contract = '15KTN'
AND to_char(cre_date, 'YYYY') = year_;
cre_date is of type DATE and year_ is of type VARCHAR.
When performing this query, it takes around 25 minutes to complete.
Does anyone know of a different approach for quicker execution?
Please help.
This didn't work out:
SELECT X,Y
FROM A
WHERE mch_code = 'KN'
AND contract = '15KTN'
AND cre_date LIKE '%2013';
The reason might be that cre_date is a DATE while '%2013' is a string: LIKE forces an implicit TO_CHAR(cre_date) using the session's NLS_DATE_FORMAT, so the text being matched will generally not end in the four-digit year.
If you have an index on (mch_code, contract, cre_date) columns, you can improve performance by doing something like:
select x, y
from a
where mch_code = 'KN'
and contract = '15KTN'
and cre_date >= to_date('01/01/'||year_, 'dd/mm/yyyy')
and cre_date < add_months(to_date('01/01/'||year_, 'dd/mm/yyyy'), 12);
Even better would be to declare the start of the year as a DATE variable prior to running the SQL, e.g.:
v_year_dt := to_date('01/01/'||year_, 'dd/mm/yyyy');
which would make the query:
select x, y
from a
where mch_code = 'KN'
and contract = '15KTN'
and cre_date >= v_year_dt
and cre_date < add_months(v_year_dt, 12);
If you don't have an index on those three columns, you could create a function based index on (mch_code, contract, to_char(cre_date, 'yyyy')) that should help speed up your query, depending on the percentage of rows you're expecting to select. It may help even more if you added the x and y columns into the index, so that no table access was required at all.
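For illustration, such a function-based index might be created like this (the index name is a hypothetical assumption):
-- Hypothetical function-based index matching the TO_CHAR predicate
CREATE INDEX a_year_fbi
ON a (mch_code, contract, TO_CHAR(cre_date, 'YYYY'));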
Alternatively, you could think about partitioning the table on cre_date, monthly or yearly.
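A yearly range-partitioning sketch could look like the following (the column types, partition names, and bounds are assumptions, not taken from the question):
-- Hypothetical yearly range partitioning on cre_date
CREATE TABLE a_partitioned (
mch_code VARCHAR2(10),
contract VARCHAR2(10),
cre_date DATE,
x        NUMBER,
y        NUMBER
)
PARTITION BY RANGE (cre_date) (
PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01'),
PARTITION p2014 VALUES LESS THAN (DATE '2015-01-01'),
PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);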
The reason your query is slow is that you're applying a function to a column on every row in your table. Let's try it another way:
SELECT X,Y
FROM A
WHERE mch_code = 'KN' AND
contract = '15KTN' AND
CRE_DATE >= TO_DATE('01/01/' || year_, 'DD/MM/YYYY') AND
CRE_DATE < TO_DATE('01/01/' || year_, 'DD/MM/YYYY') + INTERVAL '1' YEAR;
This eliminates the need to apply a function against every row in the table, and should allow any indexes on CRE_DATE to be used.
Best of luck.
You can try the EXTRACT function:
SELECT X,Y
FROM A
WHERE mch_code = 'KN'
AND contract = '15KTN'
AND EXTRACT(YEAR FROM cre_date) = year_;
Note that, like TO_CHAR, this still applies a function to every row (so a plain index on cre_date will not be used), and comparing the numeric result with the VARCHAR year_ relies on implicit conversion.

Creating an embedded R windowing function in monetdb

I'm trying to create a windowing aggregate function using embedded R in monetdb. The function I have used is:
CREATE AGGREGATE r_sw(val double, part varchar(255), endtime timestamp, starttime timestamp) RETURNS double LANGUAGE R {
library(data.table)
library(zoo)
DT=data.table(ag=aggr_group,pa=part,va=val,et=endtime,st=starttime)
setorder(DT,pa,et)
DT[, o:=mapply(function(x,y) DT[(et>=x & pa==y),.N], DT$st, DT$pa)]
DT[, s := rollapply(va, o, sum), by = pa]
as.data.frame(DT$s)
};
When attempting to select from the function I am getting an error claiming the aggregate doesn't exist:
Error: SELECT: no such operator 'r_sw'
SQLState: 22000
ErrorCode: 0
I think this is an issue with the number of parameters I am passing, and nothing to do with the R code. I have created aggregates with 2 parameters which work perfectly, but 3 or more seems to cause a problem. Is there something else I need to be doing to get this to work?
(EDIT) Steps to reproduce:
CREATE TABLE mytable (myval double, mypart varchar(255), myend timestamp, mystart timestamp);
INSERT INTO mytable VALUES (10,'A','2016-01-01 00:00:00','2016-01-07 00:00:00');
INSERT INTO mytable VALUES (200,'A','2016-01-04 00:00:00','2016-01-12 00:00:00');
SELECT mypart, r_sw(myval, mypart, myend, mystart) FROM mytable GROUP BY mypart;

How to get the count of non-null values in all columns of a table using PL/SQL?

Is there any PL/SQL function that lets you pass a table name and returns, for each column, the count of non-null values?
I have a huge number of columns and don't want to query each and every column individually. I'm new to PL/SQL and would highly appreciate your help.
As suggested in a comment on the question, one approach is the following query, which reads the optimizer statistics:
SELECT t.table_name,
t.num_rows,
c.column_name,
c.num_nulls,
t.num_rows - c.num_nulls num_not_nulls,
c.data_type,
c.last_analyzed
FROM all_tab_cols c
JOIN sys.all_all_tables t
ON c.table_name = t.table_name
AND c.owner = t.owner
WHERE c.table_name LIKE 'EXT%'
AND c.nullable = 'Y'
GROUP BY t.table_name,
t.num_rows,
c.column_name,
c.num_nulls,
c.data_type,
c.last_analyzed
ORDER BY t.table_name,
c.column_name
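The query above relies on optimizer statistics (num_rows, num_nulls), so it is only as accurate as of last_analyzed. If exact counts are needed, a minimal PL/SQL sketch using dynamic SQL might look like this (the table name EXT_TABLE is a hypothetical example; COUNT(col) counts only non-null values):
DECLARE
v_count PLS_INTEGER;
BEGIN
-- Loop over the (hypothetical) table's columns and count
-- the non-null values in each one via dynamic SQL.
FOR c IN (SELECT column_name
FROM user_tab_cols
WHERE table_name = 'EXT_TABLE'
AND hidden_column = 'NO') LOOP
EXECUTE IMMEDIATE
'SELECT COUNT(' || c.column_name || ') FROM EXT_TABLE'
INTO v_count;
DBMS_OUTPUT.PUT_LINE(c.column_name || ': ' || v_count);
END LOOP;
END;
/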

Impact of ADD COLUMN on a superprojection in Vertica DB

I have a conceptual question about Vertica DB. If I create a table 'abc' in Vertica with columns a, b, c and ORDER BY a, b, it will automatically create a superprojection for it.
Now, if I alter table 'abc' to add column 'd', it will create a new superprojection.
The question is: will the 'ORDER BY a, b' be impacted in this new superprojection? Will Vertica retain this ORDER BY in the new superprojection? Also, will it include column 'd' in the ORDER BY? What is the default behaviour?
Will vertica retain this order by in the new superprojection?
It will retain the order by specified in the initial CREATE TABLE statement.
Also, will it also include the column 'd' to this order by?
Vertica will only add the new column to the super projection's column list; it does not add it to the ORDER BY (this is the default behavior).
Walk through
Let's create the table & add data:
CREATE TABLE public.abc (
a int,
b int,
c int
) ORDER BY a, b;
INSERT INTO public.abc (a, b, c) VALUES (1, 2, 3);
A super-projection is automatically added when data is added to the table:
CREATE PROJECTION public.abc /*+createtype(P)*/
(
a,
b,
c
)
AS
SELECT abc.a,
abc.b,
abc.c
FROM public.abc
ORDER BY abc.a,
abc.b
SEGMENTED BY hash(abc.a, abc.b, abc.c) ALL NODES KSAFE 1;
Let's add a new column to the table:
ALTER TABLE public.abc ADD COLUMN d int;
The new column is added only to the projection's column list and select list in any super-projections, not to the ORDER BY:
CREATE PROJECTION public.abc /*+createtype(P)*/
(
a,
b,
c,
d -- Added here
)
AS
SELECT abc.a,
abc.b,
abc.c,
abc.d -- Added here
FROM public.abc
ORDER BY abc.a,
abc.b
SEGMENTED BY hash(abc.a, abc.b, abc.c) ALL NODES KSAFE 1;
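If you do want the new column in the sort order, one option (a sketch, not the only approach) is to create a replacement superprojection explicitly and refresh it; the projection name abc_super2 is a hypothetical assumption:
-- Hypothetical replacement projection that includes d in the ORDER BY
CREATE PROJECTION public.abc_super2
(
a,
b,
c,
d
)
AS
SELECT a, b, c, d
FROM public.abc
ORDER BY a, b, d
SEGMENTED BY hash(a, b, c, d) ALL NODES KSAFE 1;
SELECT REFRESH('public.abc');
-- Once the refresh completes, the old superprojection can be dropped.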
