I'm defining my own table-valued functions in SQLite. I'd like to be able to get a list of these functions, along with the names of their arguments and return columns.
I can get everything except a list of their arguments.
Arguments of table-valued functions are essentially hidden columns in SQLite. This is convenient because they don't show up in the results, but my problem is I can't inspect them using PRAGMA table_info.
Is there any way to get a list of hidden columns for a (virtual) table?
PRAGMA table_xinfo("table_name")
table_xinfo = table_info + hidden columns; the output also includes an extra hidden flag column so you can tell which columns are hidden.
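For example, with the generate_series table-valued function (assuming your SQLite build includes the series extension and is 3.26.0 or later, when table_xinfo was added), the start/stop/step arguments show up as hidden columns, roughly like this:
PRAGMA table_xinfo(generate_series);
-- cid | name  | type | notnull | dflt_value | pk | hidden
-- 0   | value |      | 0       | NULL       | 0  | 0
-- 1   | start |      | 0       | NULL       | 0  | 1
-- 2   | stop  |      | 0       | NULL       | 0  | 1
-- 3   | step  |      | 0       | NULL       | 0  | 1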
xquery version "1.0-ml";
declare function local:sortit(){
  for $i in ('a','e','f','b','d','c')
  order by $i
  return
    element Result{
      element N{1},
      element File{$i}
    }
};
local:sortit()
The above code is a sample; I need the data in this format. This sorting function is used in multiple places, and in some places I need only the N element data, in others only the File element data.
But the moment I use local:sortit()//File, the sorting order is lost and the output comes back in a seemingly random order. Please let me know the best way to do this or how to handle it.
The data in the File elements is calculated and comes from multiple files; after all the joins and calculations it is assembled as XML with many elements in it. So sorting by an index is not possible here; only the order by clause can be used.
XPath expressions always return their results in document order.
You lose the sorting when you apply an XPath step to the sequence returned from that function call, because the step reorders the selected nodes into document order.
If you want to select only the File elements in sorted order, try using the simple mapping operator !, plucking the File element from each item as you map over the sequence:
local:sortit() ! File
Or, if you like typing, you can use a FLWOR expression to iterate over the sequence and return the File elements:
for $result in local:sortit()
return $result/File
I have an update policy that populates a target table column of dynamic type. The update policy logic for populating the dynamic column is as follows:
project target_calculated_column = pack(
    "key1", src_col1,
    "key2", src_col2,
    "key3", src_col3,
    ...
    "keyN", src_colN)
The columns src_col1, src_col2, ..., src_colN are a fixed set of columns coming from the specific table that is the source of the update policy. They are of various datatypes, mostly strings and integers. The main thing is that these columns may or may not contain values for a given input row: an integer column's value could be null, and a string column's could be blank. The update policy function is obviously written beforehand, so it can't know which rows will have nulls or blanks; that is only known once the update policy runs. So when the update policy runs, we end up with the following kind of data in the target column target_calculated_column (showing one sample value from a target row):
{
    "key1": "sometext",
    "key2": 30,
    "key3": null,
    "key5": "hello",
    "key6": "",
    "key7": 112,
    "key8": "",
    "key9": "",
    ...
    "keyN": 10
}
This demonstrates the problem. I don't want to keep the key-value pairs in target_calculated_column that are empty (nulls, blanks, etc.). I think what I am asking for is a conditional version of pack() that can ignore key-value pairs with empty values, but I don't think such an option exists. Is there a way I can postprocess target_calculated_column to eliminate such key-value pairs? For this example I should get the following output:
{
    "key1": "sometext",
    "key2": 30,
    "key5": "hello",
    "key7": 112,
    ...
    "keyN": 10
}
The pack_all([ignore_null_empty]) function lets you ignore null/empty values. If you then want to remove some columns from the resulting bag, you can use the bag_remove_keys() function. The pack() function itself does not provide this option.
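A minimal sketch of how this could look, assuming hypothetical source columns src_col1..src_col3 and a source named SourceTable:
SourceTable
| project src_col1, src_col2, src_col3               // keep only the columns to pack
| extend target_calculated_column = pack_all(true)   // true = ignore_null_empty: drops nulls and blanks
// If the row still carries columns that shouldn't end up in the bag, strip their keys:
// | extend target_calculated_column = bag_remove_keys(pack_all(true), dynamic(["unwanted_col"]))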
I've noticed that when the predicate uses a dynamic field name to compare against, it doesn't work.
For example:
db:open("library")//book[$filterFields = $pattern]
For this I get 0 results, but when I put, for example, category in place of $filterFields, I do get results.
How can I use a variable as the field in a predicate?
In your original query, the predicate compares the value of the variable $filterFields against $pattern; it never selects a child element by that name, so the result does not depend on the book at all. If $filterFields is supposed to contain a list of element names, you can possibly use the following query:
db:open("library")//book[*[name() = $filterFields] = $pattern]
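For example, with hypothetical bindings for $filterFields and $pattern, this returns every book that has a category or author child element equal to 'fiction':
let $filterFields := ('category', 'author')
let $pattern := 'fiction'
return db:open("library")//book[*[name() = $filterFields] = $pattern]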
Apologies in advance, I'm new to RStudio...
There are two parts to this question:
1) I have a large database with almost 6,000 tables in it. Many of these tables contain no data. Is there a way, using R, to pull a list of only the table names that have data in them?
I know how to pull a list of all table names and how to pull specific table data using the code below.
library(RODBC)

test <- odbcDriverConnect('driver={SQL Server};server=(SERVER);database=(DB_Name);trusted_connection=true')
rest <- sqlQuery(test, 'SELECT * FROM information_schema.tables')
Table1 <- sqlFetch(test, "PROPERTY")
Above is the code I use to access the database and tables.
"test" is the connection
"rest" shows the list of 5,803 tables names.. one of which is called "PROPERTY"
"Table1" is simply pulling one of the tables named "PROPERTY".
I am looking to make "rest" only show the data tables that have data in them.
2) My ultimate goal, which leads to the second question, is to create a table that lists every table in this database in column 1, while columns 2, 3, 4, etc. contain every column header from each table. Any idea how to do that?
Thanks so much!
The Tables object below is a data frame giving all of the tables in the database and how many rows are in each. As a condition, it requires that any selected table have at least one record. This is probably the fastest way to get your list of non-empty tables. I pulled the query from https://stackoverflow.com/a/14163881/1017276
My only reservation about that query is that it doesn't give the schema name, and it is possible to have tables with the same name in different schemas. So this is likely only going to work well within one schema at a time.
library(RODBCext)

# Parameterized query: one row per user table (xtype = 'U') that has
# at least one record, along with its row count
Tables <-
  sqlExecute(
    channel = test,
    query = "SELECT T.name TableName, I.rows Records
             FROM sysobjects T, sysindexes I
             WHERE T.xtype = ? AND I.id = T.id AND I.indid IN (0,1) AND I.rows > 0
             ORDER BY TableName;",
    data = list(xtype = "U"),
    fetch = TRUE,
    stringsAsFactors = FALSE
  )
This next part uses the tables you found above and gets the column information for each of them. Lastly, it binds everything into one single data frame of all the column names.
Columns <- lapply(Tables$TableName,
                  function(x) sqlColumns(test, x))
Columns <- do.call("rbind", Columns)
sqlColumns is a function in RODBC.
sqlExecute is a function in RODBCext that allows for parameterized queries. I tend to use that anytime I need to use quoted strings in a query.
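If you also want the wide layout from question 2 (table names in column 1 and each table's column headers across the remaining columns), one way is to reshape the Columns data frame built above. This is just a sketch; it assumes Columns keeps the TABLE_NAME and COLUMN_NAME fields that sqlColumns returns:
# One character vector of column names per table
ColumnList <- split(Columns$COLUMN_NAME, Columns$TABLE_NAME)
MaxCols <- max(lengths(ColumnList))

# Pad every table's vector to the same length, then bind rows into a data frame
pad <- function(x) c(as.character(x), rep(NA, MaxCols - length(x)))
Wide <- data.frame(TableName = names(ColumnList),
                   t(sapply(ColumnList, pad)),
                   stringsAsFactors = FALSE)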
I am trying to use the hash package in R to replicate dictionary behavior in Python. I have created it like this:
library(hash)
titles = hash(NAME = list("exact"=list('NAME','Age'), "partial"=list()),
Dt = list("exact"=list('Dt'), "partial"=list()),
CC = list("exact"=list(), "partial"=list()))
I can access the keys in the hash using keys(titles), values using values(titles), and values for a particular key using values(titles['NAME']).
But how can I access the elements of the inner list, e.g. list('NAME','Age')?
I need to access the elements by name, in this case "exact"; otherwise I need to know which element of the outer list a given element belongs to, whether it's "exact" or "partial".
Simply:
titles[["NAME"]][["exact"]]
as hrbmstr wrote. There's nothing special about this whatsoever.
In your nested-list, "exact" and "partial" are simply two string keys. Again, there's no special magic significance to their names.
Also, this is in fact the recommended, proper R syntax (especially when the key is a variable); it's not "bringing gosh-awful Python syntax".
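For completeness, a few access patterns against the titles hash defined in the question:
titles[["NAME"]]                  # the whole nested list stored under "NAME"
titles[["NAME"]][["exact"]]       # list("NAME", "Age")
titles[["NAME"]][["exact"]][[2]]  # "Age"
titles[["Dt"]][["partial"]]       # the empty list()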