What Parts of a SQLite Statement Can Have Bound Parameters? - sqlite

What parts of a SQLite statement are able to take bound parameters? For instance, I have discovered that the following is invalid:
SELECT #column1 FROM #table WHERE #column2 = #criteria
The only parameter in the example that I've been able to get to work properly is #criteria, which leads me to think that only values on the right-hand side of a comparison can be bound as parameters.
I'm having a hard time finding an answer to this in the official documentation on bound parameters or in my searching on the internet, so could anybody please tell me definitively which parts of a SQLite statement can be bound?

That documentation says:
literals may be replaced by a parameter
A literal value is:
a constant of some kind. Literal values may be integers, floating point numbers, strings, BLOBs, or NULLs.
Table and column names are identifiers, not literal values (although SQLite sometimes allows you to write them with string syntax for compatibility with MySQL).
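A minimal sketch of what does and doesn't work, using SQLite's ? and :name placeholders (my_table and the column names are made up):
-- valid: only literal values can be bound
SELECT column1 FROM my_table WHERE column2 = :criteria;
-- invalid: identifiers (table and column names) cannot be bound
SELECT ? FROM ? WHERE ? = :criteria;
If the table or column name has to vary, it must be built into the SQL text itself (after validating it), not bound as a parameter.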

Related

How to create an aggregate UDF in Snowflake

I'm trying to create an aggregate UDF, for example something like sum or median.
The documentation and examples at https://docs.snowflake.net/manuals/sql-reference/udf-sql.html and https://docs.snowflake.net/manuals/sql-reference/sql/create-function.html don't explain how to do so.
Can someone please explain how and/or provide a MWE?
You want to use a JavaScript user-defined table function (UDTF); the documentation has a sum example here.
The main gotcha to look out for is that inside the UDF code the SQL parameter names are uppercase. You can see this in the examples, but if you miss it, it can lead to a lot of head-banging.
Also, JavaScript has no integer types, so all values have to go in and out via a double; an int32 can safely be stored in a double, so that is not a major concern. If you need more precision, you might want your function to return not the final "sum" but multiple partial values, one of which is an aggregate key, and then do the final sum in SQL.
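A rough sketch (not the exact example from the docs; my_sum and the column names are made up) of a JavaScript UDTF that sums its input, showing the uppercase names the answer warns about:
CREATE OR REPLACE FUNCTION my_sum(x FLOAT)
  RETURNS TABLE (total FLOAT)
  LANGUAGE JAVASCRIPT
  AS '{
    processRow: function (row, rowWriter, context) {
      // the SQL argument x is exposed as row.X (uppercase) inside the JavaScript body
      this.total = (this.total || 0) + row.X;
    },
    finalize: function (rowWriter, context) {
      // output column names are uppercase as well
      rowWriter.writeRow({TOTAL: this.total});
    }
  }';
You would then call it from SQL via TABLE(my_sum(...)) against your source table, partitioning the input as appropriate.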

tsrange or daterange for sqlite?

Is there a datatype equivalent to tsrange or daterange (or actually any of the range types available in PostgreSQL) in e.g. sqlite?
If not, what is the best way to work around it?
Sorry, there are only 5 datatypes in SQLite: https://www.sqlite.org/datatype3.html
NULL, INTEGER, REAL, TEXT, and BLOB (plus NUMERIC, which is a column affinity rather than a storage class).
As @a_horse_with_no_name mentioned, "You would typically use two columns of that type that store the start and end value of the range". This does make it a bit tricky if you want to do database calculations on intervals, but such functionality might be available as a run-time loadable extension.
You would typically use two columns of that type that store the start and end value of the range. – a_horse_with_no_name
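A minimal sketch of that two-column approach (the reservation table and its columns are made up); an overlap test then becomes two ordinary comparisons:
CREATE TABLE reservation (
  id       INTEGER PRIMARY KEY,
  start_ts TEXT NOT NULL,   -- ISO-8601 text, e.g. '2016-02-17T00:00:00Z'
  end_ts   TEXT NOT NULL
);
-- rows whose [start_ts, end_ts) range overlaps a given [:from, :to) window
SELECT * FROM reservation
WHERE start_ts < :to AND end_ts > :from;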
Be cautious; SQLite is quite forgiving in what it accepts for each data type:
SQLite uses a more general dynamic type system. In SQLite, the datatype of a value is associated with the value itself, not with its container. The dynamic type system of SQLite is backwards compatible with the more common static type systems of other database engines in the sense that SQL statements that work on statically typed databases should work the same way in SQLite. However, the dynamic typing in SQLite allows it to do things which are not possible in traditional rigidly typed databases.
This means that you can throw text at integer columns; if the text is an integer, that's fine, and if it's not, that's also fine: it will be stored and returned to you when retrieved. The difference is that if the value could be converted to an integer, it will be, and you will get an integer back; if it could not be converted, you will get a text string back. This can make programming with SQLite databases interesting.
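You can see this with typeof(); a small sketch (the table t is made up):
CREATE TABLE t (n INTEGER);
INSERT INTO t VALUES ('123');   -- convertible, stored as the integer 123
INSERT INTO t VALUES ('abc');   -- not convertible, stored as the text 'abc'
SELECT n, typeof(n) FROM t;     -- returns 123|integer and abc|text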

API design: naming "I want one more value outside time boundaries"

I'm designing an API to query the history of a value over a time period. Think about a temperature value, and you want to query all the values for today.
I have a from and a to parameter to specify the boundaries of the query.
The values available may not exactly match the boundaries requested. For example, if from is 2016-02-17T00:00:00Z, the first value may be on 2016-02-17T00:04:30Z. To fully represent a graph of the period, it is necessary to retrieve one more value outside the given range. The value on 2016-02-16T23:59:30Z is useful and it would be convenient for the user to not have to make another query to retrieve it.
So as the API designer I'm thinking about a parameter with a pair of boolean values that would say, for each boundary: give me one more value if there is no value exactly on the boundary.
My question is how to name this parameter as English is not my native language.
Here are a few ideas I have so far but with which I'm not totally satisfied:
overflow=true,true
overstep=true,true
edges=true,true
I would also appreciate any links to existing APIs with that feature, either web API or in programming languages.
Is it possible to make this more of a function/RPC than a traditional REST resource endpoint? So rather than requesting data for a resource between two dates like
/myResource?from=x&to=x
something more like
/getGraphData?graphFrom=x&graphTo=x
Whilst it's only a naming thing, it makes it a bit more acceptable to retrieve results for a task wrapped with outer data, rather than violating the from/to parameters and potentially giving unexpected or confusing results.
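For illustration only, a hypothetical sketch combining that RPC style with the pair-of-booleans idea from the question (the name padBoundaries is made up, not a recommendation):
/getGraphData?from=2016-02-17T00:00:00Z&to=2016-02-18T00:00:00Z&padBoundaries=true,true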

Teradata - Column name on which error occurred

We don't have access to our Teradata PROD and we develop scripts and test in SIT, UAT. When promoted to PROD, occasionally the following errors occur:
Invalid Date/Timestamp
Numeric overflow occurred
Untranslatable character
....
Why doesn't Teradata show the exact column name on which the error occurred?
We need to go through the script, where around 20 columns are being cast from varchar to date/timestamp and around 10 columns are prone to numeric overflow, and check each column individually on the assumption that it might be the culprit. It would be a relief if the error showed the column name.
I am sure that since this has not been implemented so far, it must be complex to do for run-time errors.
However, the ET_/UV_ error tables do capture some of these errors, I guess (maybe not all).
Can you please explain why, if it was possible for the ET_/UV_ tables, it can't be implemented for a normal SQL query to show which column the error occurred on?
These runtime errors are associated with an operation on some value, not necessarily with a particular column; the failing value could also be the result of an expression.
I imagine associating all the fallible expressions in a query with the corresponding parts of the original SQL would incur some overhead, and it would definitely require a non-trivial amount of development work. You might want to ask your Teradata representative about this.
The ET/UV tables are maintained by TPT, which handles external data and is more likely to encounter unexpected values.
If this is a common situation, perhaps you need to cleanse your data. There's usually a way to find the rows that cause the listed errors using built-in SQL functions or UDFs, for example:
Invalid Date/Timestamp - isdate() UDF or SQL
Numeric overflow occurred - comparisons, possibly after cast(... as BIGINT)
Untranslatable character - TRANSLATE_CHK()
(There doesn't appear to be a common way to check if a CAST will succeed.)
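For illustration, hedged sketches of two of those checks (staging_table, src_id, name_col and amount_vc are hypothetical names):
-- Untranslatable character: TRANSLATE_CHK returns 0 when the value converts cleanly
SELECT src_id, name_col
FROM   staging_table
WHERE  TRANSLATE_CHK(name_col USING LATIN_TO_UNICODE) <> 0;
-- Numeric overflow: cast up to a wider type first, then compare against the target type's range
SELECT src_id, amount_vc
FROM   staging_table
WHERE  CAST(amount_vc AS BIGINT) NOT BETWEEN -2147483648 AND 2147483647;   -- target column is INTEGER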

REST resources with a triple as a parameter

When needing to create a URL that takes a finite set of parameters, where all of said parameters are semantically at the same "level", what is the current consensus around the use of delimiters within URLs? Here's an example:
/myresource/thing1,thing2,thing3
/myresource/thing2,thing1
/myresource/thing1;thing2;thing3
/myresource/thing1;thing3
That is to say, the parameter here could be a single, a pair or a triple. They can be specified in any order because they are not a logical tree, and thing2 is not a subordinate resource of thing1, so doing something like this seems "wrong":
/myresources/thing1/thing2/thing3
This bothers me because it implies a tree-like relationship between the elements of the triple, and that is not the case (despite many HTTP frameworks seemingly pushing this, wrongly in my view). In addition, using a query string doesn't feel right as this is not a search operation, it is a known triple in a very finite space - there's nothing to query or search, so to speak.
I suppose the other option would be to make it a POST request and supply a body that details the parts of the triple being supplied. This doesn't give me warm fuzzies though, for some reason.
How have others handled this? Delimiters seem clean to me, and communicate the intended semantics of the resource, but I know there are folks who would take a different view, and I was looking to understand the experiences of others who've had similar use cases.
Since any value can be missing and values can appear in any order, how would you know which value is for which parameter (if that matters)?
I would use the query string for a GET, or put the values in the payload for a POST.
Use query parameters
/path/to/the/resource?key1=value1&key2=value2&key3=value3
or matrix parameters
/path/to/the/resource;key1=value1;key2=value2;key3=value3
Without a proper example, I'm not sure exactly about your needs.
However, a little-known fact is that any HTTP parameter can have multiple values. It is the way to go when you have a set of objects (see the Google Maps Static API for an example).
/path/to/the/resource?things=thing1&things=thing2&things=thing3
Then you can use the same API for single, pairs, triples (and more).
