Select command in ServiceStack.OrmLite differs between 4.0.54 and 4.0.56 when profiling

When I profile the same select command, each version produces different SQL:
4.0.50:
SELECT "CustomerID", "CustomerCode", "CustomerName"
FROM "dbo"."Customer"
WHERE "CustomerCode" In ('871110000','864483025')
4.0.56:
exec sp_executesql N'SELECT "CustomerID", "CustomerCode", "CustomerName"
FROM "Customer"
WHERE "CustomerCode" In (@0,@1)',N'@0 nvarchar(max) ,@1 nvarchar(max) ',@0=N'871110000',@1=N'864483025'
Why did ServiceStack make this change?
My CustomerCode column is a VARCHAR field, but the generated command uses NVARCHAR parameters, so it doesn't use my index and the command is very slow.
How can I fix that?
Thank you!

I'm unable to find a way to set the IN parameter types, so we'll choose the nuclear option.
If you are still using 4.0.56, you can add the following line to your application startup:
OrmLiteConfig.UseParameterizeSqlExpressions = false;
This makes OrmLite create SQL queries the "old", unparameterized way (as it did pre-4.0.54). Note that this property is marked as deprecated, so if you've upgraded OrmLite you'll have to check whether it still exists.
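For context, the slowdown comes from SQL Server's data type precedence: NVARCHAR outranks VARCHAR, so comparing the VARCHAR column against NVARCHAR parameters implicitly converts the column and defeats an index seek. A minimal hand-written illustration of a query shape that keeps the seek (the VARCHAR(20) length is an assumption, and OrmLite itself does not emit this):

-- Casting the NVARCHAR parameters back to the column's VARCHAR type
-- (length assumed) lets SQL Server seek the index on CustomerCode.
SELECT "CustomerID", "CustomerCode", "CustomerName"
FROM "Customer"
WHERE "CustomerCode" IN (CAST(@0 AS VARCHAR(20)), CAST(@1 AS VARCHAR(20)))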

Related

Can I use the 'WHERE' clause in an 'INSERT' command in SQLite3?

I am using SQLite3 with Python. I am quite certain that a 'WHERE' clause does not work with an 'INSERT' operation, but I really need a workaround for my issue. I have a prepopulated database (with family, model, date, and duration columns).
I was hoping to come up with an SQL statement where I can add the VALUES ('2021-01-13', '36.8') to the table WHERE family='FAA' AND model='MAA'. I have read a lot of stuff online but still no luck on my side.
I think you want an update here:
UPDATE yourTable
SET date = '2021-01-13', duration = 36.8
WHERE family = 'FAA' AND model = 'MAA';
You want to update your table; INSERT is only for new rows. When you want to change a value, you must use an UPDATE statement.
UPDATE table_name
SET date = '2021-01-13', duration = '36.8'
WHERE family='FAA' AND model='MAA';
https://www.sqlite.org/lang_update.html
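If the (family, model) row might not exist yet, SQLite 3.24+ also supports an upsert; a minimal sketch, assuming the table is named yourTable and has a UNIQUE constraint on (family, model):

-- Insert the row, or update date/duration if (family, model) already exists.
INSERT INTO yourTable (family, model, date, duration)
VALUES ('FAA', 'MAA', '2021-01-13', '36.8')
ON CONFLICT (family, model)
DO UPDATE SET date = excluded.date, duration = excluded.duration;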

Microsoft Database Project - How to change column type and avoid data loss error

I am trying to change the type of a column from VARCHAR to INT.
During deployment, the database project stops with the "data loss" error:
RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127)
I know the data is convertible, and if I run a manual ALTER TABLE script it works fine. However, I cannot integrate that properly into this scenario to avoid the error during deployment.
How can I resolve this? Is there a way to override this behaviour in a database project and, for this particular case, use a custom script?
One way in this scenario is to use a pre-deployment script and deploy twice:
Change the column's data type in the table definition as usual.
Add this to the pre-deployment script:
-- This script has to be idempotent, and should be removed after some time.
IF EXISTS (SELECT 1
           FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'table_name'
             AND TABLE_SCHEMA = 'schema_name'
             AND COLUMN_NAME = 'column_name'
             AND DATA_TYPE != 'int')
BEGIN
    -- Use NULL or NOT NULL to match the new table definition.
    ALTER TABLE schema_name.table_name ALTER COLUMN column_name INT NULL;
END
The first publish changes the data type during the pre-deployment script, and the deployment then fails with the potential data loss error.
The second publish skips that part of the pre-deployment script (the IF condition is no longer true), and schema compare detects no changes because the type has already been changed.
The final step should be removing the manual part from the pre-deployment script.
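To confirm up front that the data really is convertible, a quick pre-flight query can help; a sketch using the same placeholder names, with TRY_CAST (SQL Server 2012+):

-- Any row returned here would make the VARCHAR -> INT conversion fail.
SELECT column_name
FROM schema_name.table_name
WHERE TRY_CAST(column_name AS INT) IS NULL
  AND column_name IS NOT NULL;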

dbms_metadata bug when generating DDL for Java sources?

I have to copy some objects, among them Java sources, from one schema to another on the same database. dbms_metadata.get_ddl(object_type, object_name, schema_name) includes the schema name in the DDL. Because I want to execute this DDL in the new schema, the old schema name in the DDL doesn't help me. To avoid this problem, I run the following beforehand:
execute dbms_metadata.set_transform_param(dbms_metadata.session_transform,'EMIT_SCHEMA', false);
For a table it works (that is, it omits the schema name in the DDL):
select dbms_metadata.get_ddl('TABLE', object_name, schema_name) from dual;
but for a Java source:
select dbms_metadata.get_ddl('JAVA_SOURCE', object_name, schema_name) from dual;
it doesn't!
I've also tested these functions on a VM with Oracle Database 12.2. Same behavior.
Is it a bug? Any workaround?
Regards,
Jacek
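A possible workaround while EMIT_SCHEMA is ignored for JAVA_SOURCE: post-process the generated DDL and strip the schema prefix yourself. A sketch with placeholder names (OLD_SCHEMA, MY_JAVA_SOURCE); Oracle's REPLACE with the replacement argument omitted removes all occurrences:

-- Remove every "OLD_SCHEMA". prefix from the DDL returned for the Java source.
select replace(
         dbms_metadata.get_ddl('JAVA_SOURCE', 'MY_JAVA_SOURCE', 'OLD_SCHEMA'),
         '"OLD_SCHEMA".'
       ) as ddl
from dual;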

Using LIMIT and OFFSET in an SQLite UPDATE statement

update table set column_name limit 3 offset 2;
The above query is not working. It throws this error:
sql error: syntax error near 'limit'
An UPDATE statement expects a new value after the column_name, like this:
update thetable set column_name = 'some new value'
Furthermore, the documentation mentions that LIMIT on UPDATE only works if SQLite was compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT option, which is not enabled by default.
SQLite does not allow LIMIT and OFFSET clauses in an UPDATE statement the way MySQL does. You will have to work around it with a nested query, or use two queries.
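A minimal sketch of the nested-query workaround, assuming an ordinary rowid table and that "skip 2, update 3" is meant in rowid order:

-- Update only the 3 rows after the first 2, selected by rowid.
UPDATE thetable
SET column_name = 'some new value'
WHERE rowid IN (SELECT rowid
                FROM thetable
                ORDER BY rowid
                LIMIT 3 OFFSET 2);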

Query to get all table names

Can anyone tell me how to get the names of all tables in a database using ASP.NET?
A newer method on SQL Server is to use the INFORMATION_SCHEMA Views to get the information:
SELECT table_name FROM INFORMATION_SCHEMA.Tables WHERE table_type='BASE TABLE'
This particular view also includes views in its list of tables, which is why you need the WHERE clause.
You didn't mention which database engine you are using. On SQL Server, you can query the sysobjects table and filter for objects with type U:
SELECT name FROM sysobjects WHERE type = 'U'
In case you are interested in the MySQL way to achieve this, you can use:
SHOW TABLES;
(DESCRIBE tableName lists a table's columns, not the database's tables.)
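The INFORMATION_SCHEMA approach from the SQL Server answer also works on MySQL; a sketch with the database name as a placeholder:

-- List base tables (not views) in one MySQL database.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'your_database'
  AND table_type = 'BASE TABLE';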
