How to get the count of non-null values in all columns of a table using PL/SQL?

Is there any PL/SQL function that accepts a table name and returns, for each column, the count of values that are not null?
I have a huge number of columns and don't want to query each one individually. I'm new to PL/SQL and would highly appreciate your help.

As noted in a comment on the question, one approach is the following query. It relies on the optimizer statistics num_rows and num_nulls, so the counts are only as fresh as the last statistics gathering (see last_analyzed):
SELECT t.table_name,
       t.num_rows,
       c.column_name,
       c.num_nulls,
       t.num_rows - c.num_nulls AS num_not_nulls,
       c.data_type,
       c.last_analyzed
  FROM all_tab_cols c
  JOIN sys.all_all_tables t
    ON c.owner = t.owner              -- join on owner too, or same-named tables
   AND c.table_name = t.table_name   -- in other schemas produce duplicate rows
 WHERE c.table_name LIKE 'EXT%'
   AND c.nullable = 'Y'
 ORDER BY t.table_name,
          c.column_name
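If the statistics are stale or missing, you can count the live data instead. Below is a minimal PL/SQL sketch, not a definitive implementation: the procedure name, the use of USER_TAB_COLUMNS, and the example table name are assumptions made for illustration. It takes a table name and prints the non-null count for every column:

SET SERVEROUTPUT ON

CREATE OR REPLACE PROCEDURE count_not_nulls (p_table IN VARCHAR2) IS
    l_count NUMBER;
BEGIN
    FOR col IN (SELECT column_name
                  FROM user_tab_columns
                 WHERE table_name = UPPER(p_table)
                 ORDER BY column_id)
    LOOP
        -- COUNT(col) skips NULLs by definition. Note this scans the table
        -- once per column; for very large tables, build a single SELECT
        -- with one COUNT(...) per column instead.
        EXECUTE IMMEDIATE
            'SELECT COUNT("' || col.column_name || '") FROM '
            || DBMS_ASSERT.SQL_OBJECT_NAME(UPPER(p_table))
            INTO l_count;
        DBMS_OUTPUT.PUT_LINE(RPAD(col.column_name, 32) || l_count);
    END LOOP;
END;
/

-- usage (table name is hypothetical):
EXEC count_not_nulls('EXT_SOMETHING')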

Related

How to find the cardinality of all columns in Kusto?

I'm trying to find the number of distinct values in every column for some query. I found that dcount works well, but you have to supply a specific column. I want to do this across all columns, where the column names and the number of columns are dynamic.
You'll have to explicitly include all columns of interest.
Note that each additional column you add to the query increases its resource utilization, so if you know which columns are likely to be of high cardinality, consider including only those.
FWIW: you can generate the query (for all columns, with the caveat above) dynamically, then invoke the result of this:
let tableName = "my_table";
let datetime_column_name = "my_datetime_column";
let lookback_period = 1h;
let column_names = toscalar(
    table(tableName)
    | getschema
    | summarize make_set(ColumnName)
);
print query = strcat(
    tableName,
    "\n| where ", datetime_column_name, " > ago(timespan(", lookback_period, "))",
    "\n| summarize dcount(", strcat_array(column_names, "),\ndcount("), ")")

how to use gcheckboxgroup in a for loop in R

I am very new to R GUI programming.
I want the user to dynamically select columns in the data frame, and then dynamically select the levels of the selected columns.
My intent is to let users select columns and filter values, and then filter the data frame on those. For the column names I get the correct values; however, while fetching the levels of the selected column, the for loop exits and the selected values are not captured in the cba and cbv variables.
items <- colnames(joined_final)
items <- levels(joined_final$State)
cbg <- gcheckboxgroup(items, cont = TRUE, use.table = TRUE, index = TRUE, container = w)
cb <- svalue(cbg, index = TRUE)
j <- length(cb)
func(joined_final, j, cb)

func <- function(joined_final, j, cb) {
    cbv <- c()
    for (i in seq(j)) {
        items_1 <- levels(joined_final[, cb[i]])
        cba <- gcheckboxgroup(items_1, cont = TRUE, use.table = TRUE, container = w)
        cbv <- svalue(cba)
    }
    return(cbv)
}
Please help me with this. Thanks in advance.

Converting a date to a varchar using "like" in PL/SQL

I need to search through a few million rows for a year that is sent as a parameter to a method. The year comes in as a varchar.
This is the query I'm working with:
SELECT X, Y
  FROM A
 WHERE mch_code = 'KN'
   AND contract = '15KTN'
   AND to_char(cre_date, 'YYYY') = year_;
cre_date is of type DATE and year_ is of type VARCHAR.
This query takes around 25 minutes to run to completion.
Does anyone know of a different approach that would execute quickly?
Please help.
This didn't work out:
SELECT X, Y
  FROM A
 WHERE mch_code = 'KN'
   AND contract = '15KTN'
   AND cre_date LIKE '%2013';
The reason might be that cre_date and '%2013' are of different types.
If you have an index on the (mch_code, contract, cre_date) columns, you can improve performance with something like:
select x, y
  from a
 where mch_code = 'KN'
   and contract = '15KTN'
   and cre_date >= to_date('01/01/' || year_, 'dd/mm/yyyy')
   and cre_date <  add_months(to_date('01/01/' || year_, 'dd/mm/yyyy'), 12);
Even better would be to declare the start of the year as a DATE variable before running the SQL, e.g.:
v_year_dt := to_date('01/01/'||year_, 'dd/mm/yyyy');
which would make the query:
select x, y
  from a
 where mch_code = 'KN'
   and contract = '15KTN'
   and cre_date >= v_year_dt
   and cre_date <  add_months(v_year_dt, 12);
If you don't have an index on those three columns, you could create a function-based index on (mch_code, contract, to_char(cre_date, 'yyyy')), which should help speed up the query, depending on the percentage of rows you expect to select. It may help even more if you add the x and y columns to the index, so that no table access is required at all.
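For illustration, such a function-based index might look like the sketch below (the index names are assumptions; the indexed expression must match the query's predicate exactly for the optimizer to use it):

CREATE INDEX a_mch_con_year_ix
    ON a (mch_code, contract, TO_CHAR(cre_date, 'YYYY'));

-- covering variant: including x and y avoids the table access entirely
CREATE INDEX a_mch_con_year_xy_ix
    ON a (mch_code, contract, TO_CHAR(cre_date, 'YYYY'), x, y);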
Alternatively, you could think about partitioning the table on cre_date, monthly or yearly.
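A sketch of the partitioning option, assuming Oracle 11g+ interval partitioning (the column datatypes and the cutoff date are placeholders; an existing table would have to be rebuilt, or converted online with DBMS_REDEFINITION):

CREATE TABLE a (
    mch_code  VARCHAR2(10),
    contract  VARCHAR2(20),
    cre_date  DATE,
    x         NUMBER,
    y         NUMBER
)
PARTITION BY RANGE (cre_date)
INTERVAL (NUMTOYMINTERVAL(1, 'YEAR'))   -- one partition per year, created on demand
(
    PARTITION p_hist VALUES LESS THAN (DATE '2013-01-01')
);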
The reason your query is slow is that you're applying a function to a column on every row in your table. Let's try it another way:
SELECT X, Y
  FROM A
 WHERE mch_code = 'KN'
   AND contract = '15KTN'
   AND cre_date >= TO_DATE('01/01/' || year_, 'DD/MM/YYYY')
   AND cre_date <  TO_DATE('01/01/' || year_, 'DD/MM/YYYY') + INTERVAL '1' YEAR;
Note the half-open range: a BETWEEN ending at TO_DATE(...) + INTERVAL '1' YEAR would be inclusive at both ends and wrongly pick up rows stamped exactly at midnight on 1 January of the following year. This eliminates the need to apply a function to every row in the table and allows any index on cre_date to be used.
Best of luck.
You can try the EXTRACT function, though note that it, too, applies a function to every row, so on its own it has the same performance problem as TO_CHAR:
SELECT X, Y
  FROM A
 WHERE mch_code = 'KN'
   AND contract = '15KTN'
   AND EXTRACT(YEAR FROM cre_date) = year_;
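As with TO_CHAR, the optimizer can only avoid a full scan here if there is a function-based index on the same expression; a sketch, with an assumed index name:

CREATE INDEX a_cre_year_ix
    ON a (EXTRACT(YEAR FROM cre_date));

Also note that EXTRACT returns a NUMBER while year_ is a VARCHAR, so the comparison relies on implicit conversion; writing TO_NUMBER(year_) makes it explicit.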

Update data.table column changes data type

I am testing a small-scale scenario before rolling it out in a larger production environment and am experiencing a strange occurrence.
I have two data sets:
dtL <- data.table(URN = c(1, 2, 3, 4, 5),
                  DonorType = c("Cash", "RG", "Emergency", "Emergency", "Cash"))
dtL[, c("EmergVal", "EmergDate") := list(as.numeric(NA), as.Date(NA))]
setkey(dtL, URN)

dtR <- data.table(URN   = c(1, 1, 1, 2, 3, 3, 3, 4, 4, 4, 4, 5),
                  class = c(5, 5, 5, 1, 5, 40, 40, 5, 40, 5, 40, 5),
                  xx    = c(25, 50, 25, 10, 100, 20, 25, 20, 40, 35, 20, 25),
                  xdate = as.Date(c("2013-01-01", "2013-06-05", "2014-05-27", "2014-10-14",
                                    "2014-06-09", "2014-04-07", "2014-10-16",
                                    "2014-07-16", "2014-10-21", "2014-10-22",
                                    "2014-09-18", "2013-12-19")))
setkey(dtR, URN)
I want to update dtL where the DonorType is equal to "Emergency", but only for a subset of records from dtR. I have seen Update subset of data.table based on join and have used it as the foundation for my solution.
dtL[dtR[class == 40, list(maxxx = max(xx)), by = URN],
    EmergVal := ifelse(DonorType == "Emergency", i.maxxx, as.numeric(NA))]
dtL[dtR[class == 40, list(maxdate = max(xdate)), by = URN],
    EmergDate := ifelse(DonorType == "Emergency", as.Date(i.maxdate), as.Date(NA)),
    nomatch = 0]
I don't get any errors; however, when I look at the data in dtL now, the data type of EmergDate has changed to num rather than what it originally was (i.e. Date).
So, three questions:
Why has it changed the data type (especially when it is a Date when first created in dtL, and I tell it to return a date in my ifelse statement)?
How do I get it to keep the Date type when I assign it, or will I have to do some post-assignment conversion/casting?
Is there a clean way to assign EmergVal and EmergDate in a single statement, given that dtR has no DonorType field and I don't want to add one (so I can't use a multi-column key for the join)?

How to find the total sum of a particular column using C#?

Suppose there is a column in the database called INCOME_PER_DAY, and I bring this column's data into a GridView.
Now my question is: I want to find the total sum of the INCOME_PER_DAY column using C#. How do I do this?
Please tell me.
Do this on the server side (in the database).
Return two recordsets: one with the details and a second one (one row) with SUM(INCOME_PER_DAY).
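For example, a stored procedure along these lines could return both recordsets in one round trip (the procedure name is an assumption; MYSALES and the column names are taken from the query below):

CREATE PROCEDURE GetSalesWithTotal
AS
BEGIN
    -- first recordset: the detail rows shown in the grid
    SELECT FIELD1, FIELD2, FIELD3, INCOME_PER_DAY FROM MYSALES;
    -- second recordset: a single summary row
    SELECT SUM(INCOME_PER_DAY) AS INCOME_TOTAL FROM MYSALES;
END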
Or use this single query:
SELECT ROW_TYPE = 1, FIELD1, FIELD2, FIELD3, INCOME_PER_DAY FROM MYSALES
UNION ALL
SELECT ROW_TYPE = 2, NULL, NULL, NULL, INCOME_PER_DAY = SUM(INCOME_PER_DAY) FROM MYSALES
ROW_TYPE = 1 marks a detail row; ROW_TYPE = 2 marks the summary row.
On the page, in (for example) the datagrid's ItemDataBound event handler, check ROW_TYPE to apply the appropriate CSS style (detail or summary).
Unfortunately, if you do it on the client side, you have to loop through the column and add up the rows line by line.
