Hello, I'm trying to connect to a UniVerse DB with ODBC.
I have successfully imported some data into Access for most of the tables.
(I'm using Access just to look at the data and get a general idea of everything.)
There are a few tables that will not import due to precision errors.
I'm just starting out with this database type, so I'm fairly new to all this, although I do have past AS/400 (DB2) experience from back in the day. The dictionary files remind me of that a bit.
Anyway, the problem is with a field containing amounts. It works fine unless the amount is greater than 999.99; then I get an error about the field being too small. Apparently ODBC assumes the field has a precision of 5 with 2 decimal places. I looked at the dictionary file, and as far as I can tell the field is set to 10R with a conversion code of MR2, which seems like it should be adequate.
Where do I set this in UniVerse so that ODBC knows it is larger than that?
Thanks for any help.
Update: I was looking at the wrong field. The output format of the field I need in the dictionary is actually 7R, if that makes any difference.
Try setting attribute 6 in your dictionary entry to DECIMAL, then run HS.UPDATE.FILEINFO at TCL:
>ED DICT MYFILE I.PAY
10 lines long.
----: 6
0006:
----: R DECIMAL
0006: DECIMAL
----: FI
Check out Rocket's ODBC documentation (pages 75-76) for how to optionally set a custom precision and scale in the dictionary entry for the DECIMAL SQL data type.
Is there a datatype equivalent to tsrange or daterange (or indeed any of the range types available in PostgreSQL) in e.g. SQLite?
If not, what is the best way to work around it?
Sorry, there are only five storage classes in SQLite: https://www.sqlite.org/datatype3.html
NULL, INTEGER, REAL, TEXT, and BLOB. (NUMERIC also appears in the documentation, but as a column type affinity rather than a storage class.)
As @a_horse_with_no_name mentioned, "You would typically use two columns of that type that store the start and end value of the range." This does make it a bit tricky if you want to do database calculations on intervals, though such functionality might be available as a run-time loadable extension.
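A minimal sketch of that two-column workaround, using Python's built-in sqlite3 module (the table and column names here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Store each range as a start/end pair; ISO 8601 strings sort chronologically.
con.execute("""
    CREATE TABLE booking (
        id       INTEGER PRIMARY KEY,
        start_ts TEXT NOT NULL,
        end_ts   TEXT NOT NULL
    )
""")
con.executemany(
    "INSERT INTO booking (start_ts, end_ts) VALUES (?, ?)",
    [("2023-01-01T10:00", "2023-01-01T12:00"),
     ("2023-01-01T13:00", "2023-01-01T14:00")],
)

# "Which rows overlap the window [11:00, 13:30)?"
# Two half-open ranges overlap when start1 < end2 AND start2 < end1.
rows = con.execute(
    "SELECT id FROM booking WHERE start_ts < ? AND ? < end_ts",
    ("2023-01-01T13:30", "2023-01-01T11:00"),
).fetchall()
print(rows)  # -> [(1,), (2,)]
```

The WHERE clause is the manual equivalent of what PostgreSQL's `&&` overlap operator does for you on a real range type.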
Be cautious; SQLite is quite forgiving in what it accepts for each data type:
SQLite uses a more general dynamic type system. In SQLite, the
datatype of a value is associated with the value itself, not with its
container. The dynamic type system of SQLite is backwards compatible
with the more common static type systems of other database engines in
the sense that SQL statements that work on statically typed databases
should work the same way in SQLite. However, the dynamic typing in
SQLite allows it to do things which are not possible in traditional
rigidly typed databases.
This means that you can throw text at integer fields; if the text is an integer, that's fine, and if it's not, that's also fine: it will be stored and returned to you when retrieved. The difference is that if the value could be converted to an integer, it will be, and you will get back an integer; if it could not be converted, you will get back a text string. This can make programming with SQLite databases interesting.
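A quick demonstration of that behaviour with Python's sqlite3 module (the table name is arbitrary):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")
# Text that looks like an integer is coerced by the column's INTEGER affinity...
con.execute("INSERT INTO t VALUES ('42')")
# ...while text that cannot be converted is stored, and returned, as text.
con.execute("INSERT INTO t VALUES ('forty-two')")
print([(v, type(v).__name__) for (v,) in con.execute("SELECT n FROM t")])
# -> [(42, 'int'), ('forty-two', 'str')]
```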
I'm trying to save a property value that looks similar to a date, or part of a date, and Azure Cosmos DB with the Graph API (Gremlin) gives me an error like the following:
g.V('id').property('PartReference', '2016-02');
The error message
Gremlin Query Compilation Error: Data type 'Date' not yet supported by
Binary Comparison functions
To me it seems like Gremlin or Cosmos DB is trying to guess the datatype and getting it wrong?
At the time of writing, Azure's graph API cares only about three types of data: bool, string, and number. At the root of it, you should be able to convert any complex or contextual data into its primitive representation and bypass this delight of theirs.
For date and time data, I have settled on using ticks, which can be saved as a number and are filterable.
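For reference, .NET's DateTime.Ticks counts 100-nanosecond intervals since 0001-01-01T00:00:00, so the conversion can be sketched outside .NET too. A hedged Python sketch (integer arithmetic avoids float rounding at this magnitude):

```python
from datetime import datetime, timedelta

DOTNET_EPOCH = datetime(1, 1, 1)  # .NET ticks count 100 ns units from this instant

def to_ticks(dt: datetime) -> int:
    delta = dt - DOTNET_EPOCH
    return (delta.days * 86_400 + delta.seconds) * 10_000_000 + delta.microseconds * 10

def from_ticks(ticks: int) -> datetime:
    return DOTNET_EPOCH + timedelta(microseconds=ticks // 10)

d = datetime(2016, 2, 1)
assert from_ticks(to_ticks(d)) == d                     # round-trips losslessly
assert to_ticks(datetime(1, 1, 2)) == 864_000_000_000   # one day's worth of ticks
```

Storing the tick count as a plain number keeps the property filterable and sidesteps the date-guessing entirely.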
I disagree that it only cares about bool, string, and number, as it is obviously trying to process a date string as a date. I have hit this problem where I serialised to ISO format and got back a US format with only seconds.
I do agree that the workaround for now is to use ticks; I have switched to ticks, and hope that when this problem is solved I can reprocess the data and go back to ISO format.
I have not tried the Gremlin.Net API; it might handle dates consistently.
I have a binary file from a mobile phone in which messages and phonebook contacts are stored. I have extracted the messages from it, but now I have to extract the contacts saved in the phonebook. The data in this binary file is stored in SQLite format, as I found the string 53514C69746520666F726D617420330000 in it. How do I extract the list of contacts saved in the phonebook?
You need to first work out the format of the file from which you are extracting information, then write code to extract it. A good starting point would be The SQLite Database File Format.
The first part of that string you give (53514C69746520666F726D6174203300) is ASCII hex for SQLite format 3<nul>, which matches the header shown in that link above, so that may go some way toward helping you figure out how best to process it.
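You can verify that decoding yourself in a couple of lines of Python (the `looks_like_sqlite` helper name is made up for illustration):

```python
# Decode the hex string from the question: it is the 16-byte SQLite magic header.
magic = bytes.fromhex("53514C69746520666F726D6174203300")
print(magic)  # -> b'SQLite format 3\x00'

def looks_like_sqlite(path: str) -> bool:
    """Return True if the file starts with the SQLite 3 magic header."""
    with open(path, "rb") as f:
        return f.read(16) == b"SQLite format 3\x00"
```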
Although, given the fact it appears to be just a normal SQLite database file, you may get lucky and be able to use it as-is with a normal SQLite instance. That would be the first thing I'd try since you can then use regular SQL queries to output the data in a more usable form.
For example, if the file is called pax.db, simply run:
sqlite3 pax.db
to open it, then you may find you can use all the regular investigative commands like .databases, .schema, .tables and so on.
I think the answer to my question may be that it is not possible, but I am working on a multinational ASP.NET application where users will want to copy a column of numbers from out of an Excel worksheet and into a web GridView. This could be from any client operating system and we want to avoid any client plug-ins, etc.
The problem is that in many countries, the delimiters for the decimal and thousands portions of a number are completely reversed. For instance, a value of 999.999.999,999 in Germany translates to 999,999,999.999 in the USA. For the raw text "999,999", without knowledge of a format and/or locale number preference, it is not known whether that should be (in USA format) 999,999.000 or 999.999.
As far as I have been able to ascertain, in copy/paste operations from an OS clipboard into a web page, there is no way to also transfer the underlying original Excel data and datatype, e.g. a number represented without these textual delimiters. The only way the data is transmitted is through the formatted text.
Does anyone know otherwise, or can offer helpful advice?
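Unless the server is told the user's culture (e.g. via the request's Accept-Language header), the raw text really is ambiguous. A hypothetical normalisation helper, sketched in Python, shows both the fix and the problem:

```python
# Hypothetical helper: normalize a number string given separators that are
# already known from the user's culture (not guessable from the text alone).
def parse_number(text: str, thousands_sep: str, decimal_sep: str) -> float:
    cleaned = text.replace(thousands_sep, "").replace(decimal_sep, ".")
    return float(cleaned)

# German vs. US conventions for the same value:
assert parse_number("999.999.999,999", ".", ",") == 999999999.999
assert parse_number("999,999,999.999", ",", ".") == 999999999.999

# The ambiguity from the question: the same raw text means different values.
assert parse_number("999,999", ".", ",") == 999.999
assert parse_number("999,999", ",", ".") == 999999.0
```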
I am building an application in ASP.NET, C#, MVC3 and SQL Server 2008.
A form is presented to a user to fill out (name, email, address, etc).
I would like to allow the admin of the application to add extra, dynamic questions to this form.
The amount of extra questions and the type of data returned will vary.
For instance, the admin could add 0, 1 or more of the following types of questions:
Do you have a full, clean driving licence?
Rate your driving skills from 1 to 5.
Describe the last time you went on a long journey.
etc ...
Note, that the answers provided could be binary (Q.1), integer (Q.2) or free text (Q.3).
What is the best way of storing random data like this in MS SQL?
Any help would be greatly appreciated.
Thanks in advance.
I would create a table with the following columns and store the name of the variable along with its value in the appropriate column, with all other value columns null.
id:int (primary)
name:varchar(100)
value_bool:bit(nullable)
value_int:int (nullable)
value_text:varchar(100) (nullable)
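That layout can be sketched like so (illustrated here with Python and SQLite for brevity; in SQL Server the columns would be BIT, INT, and VARCHAR as listed above, and the sample names and answers are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE answer (
        id         INTEGER PRIMARY KEY,
        name       VARCHAR(100),
        value_bool INTEGER,       -- BIT in SQL Server
        value_int  INTEGER,
        value_text VARCHAR(100)
    )
""")
# Exactly one value_* column is filled per row; the others stay NULL.
con.execute("INSERT INTO answer (name, value_bool) VALUES ('clean_licence', 1)")
con.execute("INSERT INTO answer (name, value_int)  VALUES ('driving_skill', 4)")
con.execute("INSERT INTO answer (name, value_text) VALUES ('last_journey', 'Drove to Berlin.')")

rows = con.execute(
    "SELECT name, value_bool, value_int, value_text FROM answer ORDER BY id"
).fetchall()
print(rows)
# -> [('clean_licence', 1, None, None),
#     ('driving_skill', None, 4, None),
#     ('last_journey', None, None, 'Drove to Berlin.')]
```

Reading a row back means checking which of the three columns is non-null to recover the answer's type.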
Unless space is an issue, I would use VARCHAR(MAX). It gives you up to 8,000 characters and stores numbers and text.
edit: Actually, as Aaron points out below, that will give you 2 billion characters (enough for a book). You might go with VARCHAR(8000) or the like then, which does give you up to 8,000 characters. Since it is VARCHAR, it will not take empty space (so a 0 or 1 will not take up 8,000 characters' worth of space, only 1).