My database field is defined as: num_in_stock, mediumint(8), NULL allowed (yes).
I'm generating my services using the Connect to Data/Service tool in Flash Builder 4.
What is the best course of action if I want my num_in_stock field to have a value of NULL, rather than zero (0), when the field is left empty?
Thanks!
In Flash, an int can't be null: int and uint default to 0. Number, however, defaults to NaN. You could always convert your int to Number before it's sent over to Flash and make sure it's NaN. How exactly you do that depends on your data transport protocol (AMF, JSON, etc.).
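For example, if the service tier happens to be Java exposed over AMF (an assumption -- the Connect to Data/Service wizard also targets PHP and ColdFusion), the value object could expose the stock count as a double and map the database NULL to NaN before it ever reaches Flex. A minimal sketch; the Product class is made up for illustration:

/** Value-object sketch, assuming a Java service tier serialized over AMF. */
public class Product {
    // Boxed Integer so the database NULL is preserved up to this point.
    private final Integer numInStock;

    public Product(Integer numInStock) {
        this.numInStock = numInStock;
    }

    /** Expose the count as a double: an empty field travels as NaN
     *  instead of 0, and arrives as NaN in a Flex Number property. */
    public double getNumInStock() {
        return numInStock == null ? Double.NaN : numInStock.doubleValue();
    }
}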
I'm using DataNucleus as my persistence layer (JDO) and am forced to use Oracle 11g as the database (11g XE for development). Unfortunately, since version 11.2 the default length semantics (NLS_LENGTH_SEMANTICS) for VARCHAR columns has been BYTE instead of CHAR, which means that all columns that only provide a size will result in a type definition like this:
VARCHAR2(50 byte)
instead of:
VARCHAR2(50 char)
From Oracle's perspective there are three ways to deal with this:
declare a unit in your size definition (what I'm trying to achieve with JDO metadata)
alter the NLS_LENGTH_SEMANTICS attribute on every session (see the JDBC sketch below)
alter the NLS_LENGTH_SEMANTICS attribute globally (which seems not to work with the "Express Edition", I've already wasted a whole day on this)
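For reference, the per-session option is easy enough to run from plain JDBC -- a sketch with placeholder connection details; what I can't find is a DataNucleus metadata attribute or a hook that would apply it to every pooled connection:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CharSemanticsDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the 11g XE instance.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "user", "password");
        Statement stmt = con.createStatement();
        // From here on, this session creates VARCHAR2 columns with CHAR
        // semantics, i.e. VARCHAR2(50) becomes VARCHAR2(50 CHAR).
        stmt.execute("ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR");
        stmt.close();
        // ... run schema generation / DDL on this connection ...
        con.close();
    }
}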
I've searched the Mapping and Persistence documentation up and down to find metadata attributes that would let me do this. I can specify jdbcType, sqlType and, of course, length for a column -- but no unit.
Any help would be greatly appreciated.
I am having problems inserting values into SQL Server columns of type decimal(38,20) from BizTalk 2013 using the WCF-SQL adapter. I get an InvalidCastException with the message: "System.InvalidCastException: Specified cast is not valid"
If I test with a column of type decimal(18,18), it works.
It seems the WCF-SQL adapter does not handle decimals with very high precision. The question is: what is the limitation, and is there a workaround?
When I generate the XSD from the database table information, decimal(38,20) turns into xs:string with a length restriction of 40. Maybe this is a sign that the WCF-SQL adapter cannot handle such precision? I have also tried altering the XSD to xs:decimal, but it makes no difference.
Anyone?
ADDITION:
I did not find any "good" way to handle this limitation.
The final setup is: XML => WCF-SQL adapter => stored procedure with a table type parameter containing varchar(40) columns => CAST the table variable columns to decimal(38,20) one by one => INSERT into the destination table.
So the solution was to modify the table type to accept varchar and convert manually in the stored procedure.
I would be happy if someone could explain a better solution!
Decimal precision is limited by the .NET Framework decimal type. See here.
It is also described in the BizTalk documentation here: "Decimal if precision <= 28. String if precision > 28".
So your approach of handling it with strings is an option. Use the Round functoid in your map to the SQL schema if you don't really need more than 29 digits.
Another option you could consider is changing the regional settings for the BizTalk host user running the send port. It may be that the decimal separator in that user's settings/language is a comma instead of a dot (or the other way around) and therefore does not match what the SQL Server data type expects. For this option you have to keep the type as string in your schema and keep it as decimal in your SQL Server table.
May a Cassandra (CQL 3) map hold null values? I thought null values were permitted, but the failure of my program suggests otherwise. Or is there a bug in the driver I am using?
The official documentation for CQL maps says:
A map is a typed set of key-value pairs, where keys are unique. Furthermore, note that the map are internally sorted by their keys and will thus always be returned in that order.
So the keys may not be null (otherwise sorting would be impossible), but there is no mention of a requirement that map values are not null.
I have a field that is a map<timestamp,uuid>, which I am trying to write to using values in a Java Map<Date, UUID>. One of the map values (UUIDs) is null. This seems to cause an NPE in the Cassandra client code (Cassandra version 1.2.6, called from DataStax Java driver 1.0.1) when marshalling the UUID of the map:
java.lang.NullPointerException
at org.apache.cassandra.utils.UUIDGen.decompose(UUIDGen.java:82)
at org.apache.cassandra.cql.jdbc.JdbcUUID.decompose(JdbcUUID.java:55)
at org.apache.cassandra.db.marshal.UUIDType.decompose(UUIDType.java:187)
at org.apache.cassandra.db.marshal.UUIDType.decompose(UUIDType.java:43)
at org.apache.cassandra.db.marshal.MapType.decompose(MapType.java:122)
at org.apache.cassandra.db.marshal.MapType.decompose(MapType.java:29)
at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:188)
at [my method]
The UUIDGen.decompose(UUID) method has no special handling of a null UUID, hence the NPE. Contrast with JdbcBoolean.decompose(Boolean), which decomposes a null Boolean to an empty byte-buffer. Similarly, JdbcDate.decompose(Date) decomposes a null Date to an empty byte-buffer.
I can produce a similar problem if I have a map holding null integers (using a Java Map<Date, Integer> with a null value, for a Cassandra map<timestamp,int>), so this problem is not restricted to uuid values.
You are right: null values are not (yet?) supported inside maps. I ran into this before and, like you, couldn't find any documentation about it -- in situations like this I help myself with cqlsh.
A small test gives you the answer:
CREATE TABLE map_example (
id text,
m map<text, text>,
PRIMARY KEY ((id))
)
try
insert into map_example (id, m ) VALUES ( 'a', {'key':null});
> Bad Request: null is not supported inside collections
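Until null values are supported inside collections, one client-side workaround is to drop (or substitute) entries whose value is null before binding the map. A minimal sketch against the DataStax Java driver 1.x API used in the question; the table and column names here are made up:

import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class MapBindHelper {

    // Returns a copy of the map without entries whose value is null,
    // since Cassandra rejects nulls inside collections.
    static <K, V> Map<K, V> withoutNullValues(Map<K, V> in) {
        Map<K, V> out = new HashMap<K, V>();
        for (Map.Entry<K, V> e : in.entrySet()) {
            if (e.getValue() != null) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }

    static void insert(Session session, String id, Map<Date, UUID> history) {
        PreparedStatement ps = session.prepare(
                "INSERT INTO example_table (id, history) VALUES (?, ?)");
        // Binding the raw map with a null value would hit the NPE in
        // UUIDType.decompose shown in the question's stack trace.
        BoundStatement bs = ps.bind(id, withoutNullValues(history));
        session.execute(bs);
    }
}

Whether dropping the entry or writing some sentinel value is appropriate depends on what a null value was supposed to mean in your model.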
HTH, Carlo
I am using the Telerik Data Access fluent model (OpenAccess) code-first approach to generate the database. Everything is going right except for a couple of issues.
• I have created a property as decimal in code, but in the database its data type is numeric, not decimal. I need the data type to be decimal, yet it is generated as numeric.
• The same kind of issue occurs with a bool property in code, which gives me tinyint as the data type in the database instead of bit. I also set the property as Boolean in the C# code and the generated column is still tinyint. I need it to be bit in the database.
Here are images of my properties and the generated columns (generated from these properties, in the database).
These are the properties as written in code:
http://screencast.com/t/sOXOi3as0N
And this is the image of the generated table in the database:
http://screencast.com/t/9KmmEK1IL
It seems to be the default behavior of the product to map decimal CLR-type properties to columns of the numeric SQL type and bool properties to tinyint columns. You need to slightly change the mapping configuration for the package persistent type by specifying the correct SQL types for the underlying columns mapped to these properties, in the following way:
mappingConfiguration.HasProperty(x => x.BasicPrice).HasColumnType("decimal").HasPrecision(18).HasScale(2);
mappingConfiguration.HasProperty(x => x.IsActive).HasColumnType("bit");
I am trying to read a 10-12 digit signed integer value stored in a SQLite database. I want to read it into an int variable in my C++ code. I am trying the following query, but I know I am going wrong somewhere, because the value I retrieve from the database is always a negative number, different from the one in the database.
"SELECT _id FROM Picture where Time<%lld"
I am then formatting the integer value into the above string with sprintf before sending it to SQLite. When I print out the query, it shows a negative long int number. What am I doing wrong with the query?
Thanks,
Ab
I figured out where I was going wrong. The field I was trying to read holds a 64-bit integer, and I was using sqlite3_column_int instead of sqlite3_column_int64. I changed it to the latter and got the data back into my C++ signed long long variable.
Thanks for giving me a little bit of direction there.