Is there a datatype equivalent to tsrange or daterange (or, really, any of the range types available in PostgreSQL) in, for example, SQLite?
If not, what is the best way to work around it?
Sorry, there are only five datatypes in SQLite: https://www.sqlite.org/datatype3.html
NULL, INTEGER, REAL, TEXT, and BLOB (NUMERIC exists only as a column type affinity).
As a_horse_with_no_name mentioned, "You would typically use two columns of that type that store the start and end value of the range". This does make it a bit tricky if you want to do database calculations on intervals, but that kind of functionality might be available as a run-time loadable extension.
You would typically use two columns of that type that store the start and end value of the range. – a_horse_with_no_name
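As a minimal sketch of that two-column approach in SQLite (the table and column names here are made up), an overlap check against a candidate range becomes a pair of comparisons:
-- Emulating a daterange with two columns; ISO-8601 dates compare correctly as text.
CREATE TABLE booking (
    id         INTEGER PRIMARY KEY,
    start_date TEXT NOT NULL,
    end_date   TEXT NOT NULL,
    CHECK (start_date <= end_date)
);
-- Which bookings overlap the range 2024-01-10 .. 2024-01-20?
SELECT id, start_date, end_date
FROM booking
WHERE start_date <= '2024-01-20'
  AND end_date   >= '2024-01-10';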
Be cautious; SQLite is quite forgiving in what it accepts for each data type:
SQLite uses a more general dynamic type system. In SQLite, the datatype of a value is associated with the value itself, not with its container. The dynamic type system of SQLite is backwards compatible with the more common static type systems of other database engines in the sense that SQL statements that work on statically typed databases should work the same way in SQLite. However, the dynamic typing in SQLite allows it to do things which are not possible in traditional rigidly typed databases.
This means that you can throw text at integer fields; if the text is an integer, that's fine, and if it's not, that's also fine: it will be stored and handed back to you when retrieved. The difference is that if the value could be converted to an integer, it will be, and you will get an integer back; if it could not be converted, you will get a text string back. This can make programming with SQLite databases interesting.
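A quick illustration of that behaviour (the table and column names are arbitrary):
CREATE TABLE t (n INTEGER);
INSERT INTO t VALUES ('42');   -- convertible: stored as the integer 42
INSERT INTO t VALUES ('abc');  -- not convertible: stored as the text 'abc'
SELECT n, typeof(n) FROM t;    -- returns 42|integer and abc|text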
I have a piece of software which logs data into a table with the current date and time, accurate to milliseconds. There is no problem in PostgreSQL and MS SQL Server, but with MDB I get a primary key violation. When I look at my table in MS Access, it shows datetimes accurate only to seconds.
Could milliseconds be written to MDB at all?
The Date/Time field in Access is accurate to seconds (it's actually stored as a floating-point number, but it reads and writes with one-second resolution). If you want to store milliseconds, you could store them in a separate field.
You can store the millisecond part of the date/time in an integer field, and then use a composite primary key covering the two fields. I've never heard a solid argument against composite primary keys, though this one is a bit odd at best.
Just for completeness it's probably worth mentioning that you could save the timestamps as yyyy-mm-dd hh:nn:ss.fff in a TEXT(23) column. That column could be used as a primary key, and the values could be sorted and compared directly if the numeric parts were all zero-padded. Date arithmetic would be a bit awkward, however.
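As a rough sketch of the composite-key idea (the table and field names are hypothetical), the Access/Jet DDL could look something like this:
CREATE TABLE LogEntries (
    LogTime   DATETIME NOT NULL,
    LogMillis INTEGER  NOT NULL,   -- 0-999, the part Access cannot hold
    Payload   TEXT(255),
    CONSTRAINT pk_LogEntries PRIMARY KEY (LogTime, LogMillis)
);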
I am facing a performance issue in one of my stored procedures.
Following is the pseudo-code:
CREATE OR REPLACE PROCEDURE SP_GET_EMPLOYEEDETAILS(P_EMP_ID IN NUMBER, CUR_OUT OUT SYS_REFCURSOR)
IS
BEGIN
  OPEN CUR_OUT FOR
    SELECT EMP_NAME, EMAIL, DOB FROM T_EMPLOYEES WHERE EMP_ID = P_EMP_ID;
END;
/
The above stored procedure takes around 20 seconds to return the result set with let's say P_EMP_ID = 100.
However, if I hard-code employee ID as 100 in the stored procedure, the stored procedure returns the result set in 40 milliseconds.
So the same stored procedure behaves differently for the same value, depending on whether the value is hard-coded or passed in as a parameter.
The table T_EMPLOYEES has around 1 million records and there is an index on the EMP_ID column.
I would appreciate any help on how I can improve the performance of this stored procedure, or on what the problem might be.
This may be an issue with skewed data distribution and/or incomplete histograms and/or bad system tuning.
The fast version of the query is probably using an index. The slow version is probably doing a full-table-scan.
In order to know which to do, Oracle has to have an idea of the cardinality of the data (in your case, how many results will be returned). If it thinks a lot of results will be returned, it will go straight ahead and do a full-table-scan as it is not worth the overhead of using an index. If it thinks few results will be returned it will use an index to avoid scanning the whole table.
The issues are:
If using a literal value, Oracle knows exactly where to look in the histogram to see how many results would be returned. If using a bind variable, it is more complicated: Oracle peeks at the bind value on the first parse ("bind variable peeking") and builds a plan for that value, which may be a poor fit for other values. Oracle 11 improves on this with adaptive cursor sharing - see also SQL Plan Management.
Even if it does know the actual value, if your histogram is not up-to-date, it will get the wrong values.
Even if it works out an accurate guess as to how many results will be returned, you are still dependent on the Oracle system parameters being correct.
For this last point ... basically, Oracle has some parameters that tell it how fast it thinks an FTS is versus an index look-up. If these are not correct, it may do an FTS even when it is a lot slower. See Burleson
My experience is that Oracle tends to flip to doing FTS way too early. Ideally, as the result set grows in size there should be a smooth transition in performance at the point where it goes from using an index to using an FTS, but in practice the systems seem to be set up to favour bulk work.
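If you want to see which plan you are actually getting, and to refresh the statistics the optimizer relies on, something along these lines should help (the table and column names are taken from the question; the histogram bucket count is just an example):
-- Show the plan Oracle picks for the bind-variable form of the query.
EXPLAIN PLAN FOR
  SELECT EMP_NAME, EMAIL, DOB FROM T_EMPLOYEES WHERE EMP_ID = :P_EMP_ID;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Refresh statistics, including a histogram on EMP_ID, so the cardinality
-- estimate for the bind value reflects the current data distribution.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'T_EMPLOYEES',
    method_opt => 'FOR COLUMNS EMP_ID SIZE 254',
    cascade    => TRUE);
END;
/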
Hello, I'm trying to connect to a UniVerse DB with ODBC.
I have successfully imported data for most of the tables into Access.
(I'm using Access just to look at the data and get a general idea of everything.)
There are a few tables that will not import due to precision errors.
I'm just starting out with this database type, so I'm fairly new to all this, although I do have past AS/400 (DB2) experience from back in the day. The dictionary files remind me of that a bit.
Anyway, the problem is with a field containing amounts. It works fine unless the amount is greater than 999.99; then I get an error about the field being too small. Apparently ODBC is assuming the field has a precision of 5 with 2 decimal places. I looked at the dictionary file and, as far as I can tell, the field is set to 10R with a conversion code of MR2, which seems like it should be adequate.
Where do I set this in UniVerse so that ODBC knows the field is larger than that?
Thanks for any help.
Update: I was looking at the wrong field; the output format of the field I need in the dictionary is actually 7R, if that makes any difference.
Try setting attribute 6 in your dictionary entry to DECIMAL then run HS.UPDATE.FILEINFO at TCL:
>ED DICT MYFILE I.PAY
10 lines long.
----: 6
0006:
----: R DECIMAL
0006: DECIMAL
----: FI
Check out Rocket's ODBC documentation (pages 75-76) for how to optionally set custom precision and scale in the dictionary entry for the DECIMAL SQL data type.
I use SQL Server, and when I create a new table I make a specific field an auto-increment primary key. The problem is that some people told me that because the auto-increment value keeps growing even when records are deleted (the deleted numbers are never reused), at some point - if the field is an integer, for example - the whole integer range will be consumed and I will be in trouble. So they tell me not to use this feature any more.
They say the best solution is to do this in code by getting the max of the primary key: if no value exists, the new key is 1, otherwise it is max + 1.
Any suggestions about this problem? Can I use the auto-increment feature?
I would also like to know the cases where auto-increment is not preferable, and the alternatives...
Note: this question is general, not specific to any DBMS; I want to know whether this is also true for DBMSs like Oracle, MySQL, Informix, etc.
Thanks so much.
You should use identity (auto increment) columns. The bigint data type can store values up to 2^63-1 (9,223,372,036,854,775,807). I don't think your system is going to reach this value soon, even if you are inserting and deleting lots of records.
If you implement the method you propose properly, you will end up with a lot of locking problems. Otherwise, you will have to deal with exceptions thrown because of constraint violation (or even worse - non-unique values, if there is no primary key constraint).
An int datatype in SQL Server can hold values from -2,147,483,648 through 2,147,483,647.
If you seed your identity column with -2,147,483,648, e.g. FooId INT IDENTITY(-2147483648, 1), then you have over 4 billion values to play with.
If you really think this is still not enough, you could use a bigint, which can hold values from -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807, but this is almost guaranteed to be overkill. Even with large data volumes and/or a large number of transactions, you will probably either run out of disk space or exhaust the lifetime of your application before you exhaust the identity values of an int, and almost certainly of a bigint.
To summarise, you should use an identity column and you should not care about gaps in the values since a) you have enough candidate values and b) it's an abstract number with no logical meaning.
If you were to implement the solution you suggest, with the code deriving the next identity column, you would have to consider concurrency, since you will have to synchronise access to the current maximum identity value between two competing transactions. Indeed, you may end up introducing a significant performance degradation, since you will have to first read the max value, calculate and then insert (not to mention the extra work involved in synchronising concurrent transactions). If, however, you use an identity column, concurrency will be handled for you by the database engine.
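For reference, a minimal sketch of the identity approach (the table and column names are invented):
CREATE TABLE dbo.Orders (
    OrderId INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    Placed  DATETIME NOT NULL DEFAULT GETDATE()
);
-- The engine assigns OrderId atomically; no SELECT MAX(...) + 1 is needed.
INSERT INTO dbo.Orders (Placed) VALUES (DEFAULT);
SELECT SCOPE_IDENTITY() AS NewOrderId;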
The solution they suggest can, and most likely will, create a concurrency problem and/or scalability problem. If two sessions use the Max technique you describe at the same time, they can come up with the same number and then both try to add it at the same time. This will create a constraint violation.
You can work around that problem by locking the table, or by catching exceptions and re-inserting, but that's a really bad way to do things. Locking will reduce performance and cause scalability issues (and if you're planning on enough records to worry about overflowing an int, you will certainly need scalability).
Identity generation is an atomic operation. Two sessions cannot be given the same identity value, so this problem does not exist when using it.
If you're concerned that an identity field may overflow, then use a larger datatype, such as bigint. You would be hard pressed to generate enough records to overflow that.
Now, there are valid reasons NOT to use an identity field, but this is not one of them.
Continue to use the identity feature with your PK in SQL Server. In MySQL there is also an auto-increment feature. Don't worry about running out of integer range; you will run out of hard disk space before that happens.
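The MySQL equivalent, as a sketch (the table is hypothetical):
CREATE TABLE orders (
    order_id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    placed_at DATETIME NOT NULL
);
INSERT INTO orders (placed_at) VALUES (NOW());
SELECT LAST_INSERT_ID();   -- the value MySQL just assigned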
I would advise AGAINST using identity/auto-increment, because:
Its implementation is broken in SQL Server 2005/2008. Read more
It doesn't work well if you are going to use an ORM to map your database to objects. Read more
I would advise you to use a Hi/Lo generator if you usually access your database through a program and don't depend on sending insert statements manually to the DB. You can read more about it in the second link.
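Roughly, a hi/lo generator keeps a single "hi" counter in the database: the application bumps it once to reserve a whole block of ids and then hands out hi * block_size + lo values locally without touching the database again. A minimal T-SQL sketch of the reservation step (all names are hypothetical):
CREATE TABLE dbo.HiLo (NextHi BIGINT NOT NULL);
INSERT INTO dbo.HiLo (NextHi) VALUES (1);
-- Atomically take the current hi value and advance it; the caller then
-- generates ids hi * 100 + lo for lo = 0..99 with no further round trips.
DECLARE @hi BIGINT;
UPDATE dbo.HiLo SET @hi = NextHi, NextHi = NextHi + 1;
SELECT @hi AS ReservedHi;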
Possible Duplicate:
Are there any disadvantages to always using nvarchar(MAX)?
Is there a general downside to choosing 'ntext' as the column type instead of a type that holds characters but has a limited maximum size, like 'char' or 'varchar'?
I'm not sure whether a limited column size is applicable to all my columns, so I would use 'ntext' for every column containing text. Might this lead to problems in the future?
(I'm using LINQ to SQL in an ASP.NET Web Forms application.)
NTEXT is being deprecated for a start, so you should use NVARCHAR(MAX) instead.
You should always try to use the smallest datatype possible for a column. If you do need to support more than 4000 characters in a field, then you'll need to use NVARCHAR(MAX). If you don't need to support more than 4000 characters, then use NVARCHAR(n).
I believe NTEXT would always be stored out of row, incurring an overhead when querying. NVARCHAR(MAX) can be stored in row if possible. If it can't fit in row, then SQL Server will push it off row. See this MSDN article.
Edit:
For NVARCHAR, the maximum supported explicit size is 4000 characters. After that, you need to use MAX, which takes you up to 2^31-1 bytes.
For VARCHAR, the maximum supported explicit size is 8000 characters before you need to switch to MAX.
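As a small sketch of that advice (the table and column names are invented): size the column where you know the bound, and keep MAX for genuinely unbounded text.
CREATE TABLE dbo.Articles (
    Title NVARCHAR(200) NOT NULL,  -- bounded: can be used in index keys
    Body  NVARCHAR(MAX) NULL       -- unbounded: pushed off-row when it grows too large
);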
In addition to what AdaTheDev said, most of the standard T-SQL string functions do not work with NTEXT data types. You are much better off using VARCHAR(MAX) or NVARCHAR(MAX).
NVARCHAR, furthermore, is for wide characters, e.g. non-Latin letters.
I had a stored procedure which ran with NVARCHAR parameters; when I changed it to use VARCHAR instead, performance more than doubled.
So if you know you won't need wide characters in your columns, you're best off using VARCHAR.
And as the other answers say, don't use TEXT/NTEXT at all; they're deprecated.
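To illustrate the NVARCHAR-vs-VARCHAR point, the classic pitfall is comparing a VARCHAR column with an NVARCHAR parameter: the column side is implicitly converted, which can prevent an index seek (the table and column here are hypothetical):
-- LastName is declared VARCHAR(50) and indexed.
DECLARE @p_nvar NVARCHAR(50);
SET @p_nvar = N'Smith';
SELECT * FROM dbo.Customers WHERE LastName = @p_nvar;  -- implicit conversion: likely a scan
DECLARE @p_var VARCHAR(50);
SET @p_var = 'Smith';
SELECT * FROM dbo.Customers WHERE LastName = @p_var;   -- matching type: an index seek is possible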
You can never have an index on any text column, because an index key is limited to 900 bytes.
And ntext can't be indexed anyway, but there are still limitations on newer BLOB types too.
Do you plan on having only non-unique text columns? Or never plan to search them?
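For example, attempting to index such a column fails outright (hypothetical table):
CREATE TABLE dbo.Notes (Body NVARCHAR(MAX));
CREATE INDEX ix_Notes_Body ON dbo.Notes (Body);
-- Fails: nvarchar(max) columns are not valid as index key columns.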