How to ignore the data loss warning during schema comparison? - asp.net

While trying to update the database in SQL Schema Comparison in Visual Studio, I am getting the below error.
(48,1): SQL72014: .Net SqlClient Data Provider: Msg 50000, Level 16, State 127, Line 6 Rows were detected. The schema update is terminating because data loss might occur.
An error occurred while the batch was being executed.
I understand that the tool has detected that data loss could occur if the update proceeds.
I was thinking there would be some option to ignore this.
After googling I found the link below:
https://social.msdn.microsoft.com/Forums/en-US/ce95ac1d-a31c-4e83-904e-78a8491d0761/shema-compare-force-update-with-data-loss?forum=vstsdb
But in Visual Studio 2012 I can't find any such option in my schema compare options.

In Visual Studio 2015 the sequence is: create a comparison, click on the gear icon, go to the General tab, and untick "Block on data loss". I have to set this each time I create a new comparison; I have been unable to find a way to set a default that sticks, other than saving the comparison.

I had this same problem, and unchecking "Block Incremental Deployment if data loss might occur" didn't fix the issue; I still got lots of errors about column size changes that I couldn't work around. I also had to uncheck the "Verify deployment" checkbox, the last item in the lower section.

If you are deploying the dacpac with the sqlpackage.exe command-line utility (used to automate builds/deployments, e.g., in DevOps pipelines), then you need to pass the argument: /p:BlockOnPossibleDataLoss=False
More info here -> https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver15
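For illustration, a publish command with that flag might look like the following sketch (the dacpac path and connection string are placeholders, not values from the question):
sqlpackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /TargetConnectionString:"Server=.;Database=MyDatabase;Integrated Security=True" /p:BlockOnPossibleDataLoss=False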

Related

How to change a field's type in CRM 2016

We are running CRM 2016 SP1 on-premise. We have DEV, QA, Staging, and Production environments. The solution in our DEV, QA and Staging is unmanaged but in Production is managed.
We have a requirement to change some fields' data types from single line text to multiple line text in our production environment.
I have been researching this and have found the following links:
https://debajmecrm.com/2014/04/12/change-field-data-type-in-mscrm-without-dropping-and-recreating-the-field/
https://community.dynamics.com/crm/b/workandstudybook/archive/2014/07/28/converting-single-line-text-to-multi-line-text-using-crm-sdk-s-configuration-migration-tool
Convert Single line text to Multiline text (MS CRM 2016)
From what I understand from these pages, my options are as follows:
Change the field types directly in the database
Use tools such as Configuration Manager or Attribute Manager (XrmToolbox) to export the data, delete the field, create a new field, and import the data back into the new field.
Option 1 requires making changes directly to the database, which is something we would rather not do as it will cause problems with Microsoft licensing.
Option 2 requires deleting the old field and creating a new field with the same name in EVERY environment. This means the new field will have the same name but different GUID values in every environment.
Am I right in assuming that option 2 will result in errors in the future when we want to deploy a solution from one environment to another because the GUIDs for the new field are different?
Also, Option 2 requires the solution to be unmanaged in all environments. However, in our case, it is managed in Production.
With all these in mind, what are my choices? What is the best way of achieving this?
Your comments are greatly appreciated.
Kind regards
Easiest way to do this:
Hide the original field from forms, views, reports, etc.
Optional - Set the original field to be non-searchable (so it doesn't appear in advanced find).
Optional - Rename the original field so it's clear it shouldn't be used; some people like to prefix it with a 'z' so it appears at the bottom of lists.
Create your new field, put it in all the same places as the original.
Migrate the data from the original field to the new one. A workflow executed in bulk could do it, or perhaps an export, edit, and re-import.
Optional - Delete the original field.
In terms of your options above: option 1 is unsupported (unlikely to mess up your licensing, but it has a good chance of ruining CRM with no sensible way to recover); option 2 looks similar to my suggestion above.

KNIME too slow - performance

I just started to use KNIME. It is supposed to manage a huge amount of data, but it isn't: it's slow and often unresponsive. I will need to manage even more data than I'm using now. What am I doing wrong?
I set the following in my configuration file knime.ini:
-XX:MaxPermSize=1024m
-Xmx2048m
I also read data from a database node (millions of rows), but I can't limit it with SQL (I don't really mind, since I need this data). For example, this query:
SELECT * FROM foo LIMIT 1000
fails with the error:
WARN Database Reader com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'LIMIT 0' at line 1
I had the same issue and was able to solve it really simply. KNIME has a knime.ini file, which holds the parameters KNIME uses when it runs.
The real issue is that the JDBC driver is set to a fetch size of 10. By default, when Oracle JDBC runs a query, it retrieves a result set of 10 rows at a time from the database cursor. This is the default Oracle row fetch size, so whenever you read from the database you will have a painful wait while all the rows are retrieved.
The fix is simple: go to the folder where KNIME is installed, look for the file knime.ini, open it, and add the following lines at the bottom. This overrides the default JDBC fetch size, and you will get the data in seconds:
-Dknime.database.fetchsize=50000
-Dknime.url.timeout=9000
Hope this helps!
See http://tech.knime.org/forum/knime-users/knime-performance-reading-from-a-database for the rest of this discussion and further solutions.
I'm not sure if your question is about the performance problem or the SQL problem.
For the former, I had the same issue and only found a solution when I started searching for Eclipse performance fixes rather than KNIME performance fixes. It's true that increasing the Java heap size is a good thing to do, but my performance problem (and perhaps yours) was caused by something bad going on in the saved workspace metadata. Solution: Delete the contents of the knime/workspace/.metadata directory.
As for the latter, not sure why you're getting that error; maybe try adding a semicolon at the end of the SQL statement.
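If the 'LIMIT 0' in the error message means the Database Reader is appending its own clause after your statement (that is only an assumption based on the message), one workaround sometimes suggested is to wrap your query in a subquery so your own LIMIT sits inside it:

SELECT *
FROM (SELECT * FROM foo LIMIT 1000) AS limited_foo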

Why "Error: Subreport could not be shown" for some reports and not others?

I'm using VS2010 and the built-in visual Report Designer to create RDLC templates for rendering reports with sub-reports as PDF files in an ASP.NET application using a ReportViewer control and the .LocalReport member. The code iterates over a set of records, producing one report (with its sub-reports) for each record.
I noticed recently that for a small number of the reports, one of the sub-reports was failing and giving the "Error: Subreport could not be shown" message. What's puzzling me about this case, in contrast to the many posts about this error that I've read (and previous times I've wrestled with it myself), is that it is only occurring for a subset of cases; from what I've seen elsewhere, the problem is usually all-or-nothing -- this error always appears until a solution is found, then this error never appears.
So... what could cause this error for only a subset of records? I can run the offending sub-report directly without errors; I can open the .xsd file and preview the DataSet for the offending records without errors; I can run the query behind the DataSet in SQL Server Mgt Studio without errors... I'm not sure where else to look for the cause(s) of this problem, which only appears when I run the report with sub-reports.
I tracked this down to an out-of-date .xsd file (DataSet) -- somewhere along the way a table column string width was increased, but the DataSet was not updated or regenerated, so it still had the old width limit on that element, e.g., <xs:maxLength value="50" /> in the .xsd XML instead of the new width of 125 characters. The error was being thrown for those cases where at least one record in the subreport had a data value (string) in that column that exceeded the old width of 50.
An important clue came from adding a handler for the DataSet's .Selected event; I was already using the .Selecting event to set the sub-report's parameter (to tie it to the parent record), but I couldn't see anything useful when breaking in that event. However, examining the event args variable in the .Selected event, after the selection should have occurred, I found an Exception ("Exception has been thrown by the target of an invocation") with an InnerException ("Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints"). There was also a stack trace which indicated the point of failure was executing Adapter.Fill(dataTable).
While this turned out to be pretty misleading -- I had no such constraints in place on the tables involved in the query behind the DataSet -- it at least got me focusing on the specific records in the subreports. After much fruitless searching for anomalies in the subreport record data in SQL Server Mgt Studio, I eventually started removing the records one by one from one of the offending subreport cases, re-running the report each time to see if I had fixed the error. Eventually I removed a subreport record and the report worked -- the remaining subreport records appeared!
Now I had a specific sub-report record to examine more closely. By chance (wish I could call it inspired intuition...), I decided to edit that record in the web app instead of looking at it as I had been in SQL Server. One of the fields was flagged with an alert saying the string value was too long! That was a mystery to me for a moment: if the string value was too long, how could it already be saved in the database?! I double-checked the column definition in the table, and found it was longer than what the web-app front-end was trying to enforce. I then realized that the column had been expanded without updating the app UI, and I suspected immediately that the .xsd file also had not been updated... Bingo!
There are probably a number of morals to this story, and it leaves me with a familiar and unwelcome feeling that I'm not doing some things as intelligently as I ought. One moral: always update (or better, and usually simpler, just rebuild) your .xsd DataSet files whenever you change a query or table that they are based on... easier said than remembered, however. The queasy feeling I have is that there must be some way that I haven't figured out to avoid building brittle apps where a column width that's defined in the database is also separately coded into the UI and/or code-behind to provide user feedback and/or do data validation... suggestions on how to manage that more robustly are welcome!
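One small step in that direction, for whatever it's worth, is to read the column width from the database metadata at run time instead of duplicating it in the UI or code-behind, along these lines (the table and column names here are hypothetical):

-- Look up the defined width of a string column so validation limits
-- can be driven by the schema rather than by hard-coded constants.
SELECT CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Customer'       -- hypothetical table
  AND COLUMN_NAME = 'CompanyName';  -- hypothetical column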

Restoring an item from an SQL Database in ASP.net

I'm currently learning ASP.Net and I was wondering how you can restore an item that was deleted from an SQL Data source.
The item was deleted through a Details View using the "Enable Deleting" option, so I was testing each of the buttons and this happened.
I understand that I could simply recover an untouched copy of the database file and replace the one I'm currently using with it.
However, I'd like to know if restoring the item is possible through other means, like creating a button which will restore the Database Item from deletion. Thanks!
As far as "standard SQL" is concerned, once you do an SQL DELETE on a row, it's gone for good.
If you need "undelete" functionality in your system, you could instead add a "deleted" column to the database and set that instead of deleting the row. The major downside is that all your SQL then needs to take that column into account, so that rows with the deleted flag set are not returned. That's quite a lot of work, and it does not sound like what you're looking for in this case.
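A minimal sketch of that soft-delete idea (the table and column names are just examples):

-- Add a flag instead of physically deleting rows.
ALTER TABLE Items ADD IsDeleted bit NOT NULL DEFAULT 0;

-- "Delete" a row by flagging it.
UPDATE Items SET IsDeleted = 1 WHERE ItemId = @ItemId;

-- Restore it later, e.g. from a "restore" button.
UPDATE Items SET IsDeleted = 0 WHERE ItemId = @ItemId;

-- Every query now has to filter out the flagged rows.
SELECT * FROM Items WHERE IsDeleted = 0;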
In certain RDBMSs (SQL Server comes to mind), there are ways to recover deleted rows from the transaction log, but that does not sound like what you're looking for either, and it's quite an advanced topic.

How to handle concurrency control in ASP.NET Dynamic Data?

I've been quite impressed with dynamic data and how easy and quick it is to get a simple site up and running. I'm planning on using it for a simple internal HR admin site for registering people's skills/degrees/etc.
I've been watching the intro videos at www.asp.net/dynamicdata and one thing they never mention is how to handle concurrency control.
It seems that DD does not handle it right out of the box (unless there is some setting I haven't seen), as I manually generated a change conflict exception and the app failed without any user-friendly message.
Anybody know if DD handles it out of the box? Or do you have to somehow build it into the site?
Concurrency is not handled out of the box by DD.
One approach would be to implement this on the database side, by adding a "last updated" timestamp column (or other unique stamp, such as a GUID) to each table.
You then create an update trigger for each table. For each row being updated, is the "last updated" stamp passed in the same as the one on the row in the database?
If so, update the row, but give it a new "last updated" stamp.
If not, raise a specific "Data is out of date" exception.
On the client side, for each row you update, you'd need to refresh the "last updated" stamp.
In the client code you watch for the "Data is out of date" exception and display a helpful message to the user, asking them to refresh the data and re-submit their change.
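To make that concrete, here is a rough T-SQL sketch of such a trigger. The table, key, and column names are illustrative, and it assumes the client writes the stamp it last read back into the "last updated" column so the trigger can compare it with the value already on the row:

CREATE TRIGGER trg_Employee_Concurrency ON Employee
AFTER UPDATE
AS
BEGIN
    -- inserted holds the stamp the client sent; deleted holds the stamp
    -- that was on the row. If they differ, someone else changed the row
    -- since the client read it.
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN deleted d ON d.EmployeeId = i.EmployeeId
               WHERE i.LastUpdated <> d.LastUpdated)
    BEGIN
        RAISERROR('Data is out of date', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END;

    -- Otherwise accept the change and issue a fresh stamp.
    -- (Direct trigger recursion is off by default in SQL Server.)
    UPDATE e
    SET LastUpdated = SYSUTCDATETIME()
    FROM Employee e
    JOIN inserted i ON i.EmployeeId = e.EmployeeId;
END;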
Hope this helps.
It all depends on the definition: what do you mean by "out of the box"? Of course you have to write a fair amount of code to handle concurrency, but some features help us implement it.
My favorite model is "optimistic concurrency" based on the rowversion datatype of SQL Server. It is like a "last updated" timestamp, but you don't need an update trigger on each table: SQL Server updates the corresponding "timestamp" column automatically every time data in the table row is updated. I describe it in my old answer Concurrency handling of Sql transactrion. I hope it will be helpful for you.
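A sketch of that rowversion pattern (table and column names are illustrative):

-- SQL Server maintains this column automatically on every change to the row;
-- no trigger is needed.
ALTER TABLE Employee ADD RowVer rowversion;

-- Read the row together with its current version.
SELECT EmployeeId, Name, RowVer FROM Employee WHERE EmployeeId = @EmployeeId;

-- Update only if the row has not changed since it was read.
UPDATE Employee
SET Name = @Name
WHERE EmployeeId = @EmployeeId AND RowVer = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR('Data is out of date', 16, 1);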
I was under the impression that Dynamic Data does the update on the underlying data source. Maybe you can specify the concurrency model (pessimistic/optimistic) on the data meta model that gets registered in the App_Init section. But you would probably get an "unable to save changes" error, so by default it would be pessimistic: last in loses...
Sorry to reply late. Yes, DD is very strong when it comes to fast project development. Not only that, it has been enhanced and included in .NET 4.0.
DD mostly works on LINQ to SQL, so I suggest you have a look at that part.
In LINQ to SQL, when you go to the properties of a table you will find a property there which specifies whether to check the old value before updating to the new value. If you set that to true, I think your problem will be handled.
Wish you the best of luck.
Let's learn from each other.
The solution given by Binary Worrier works, and it's widely used on platforms providing a GUI to merge the changes (e.g. source control programs, wiki engines, etc.). That way none of the users lose their changes. On the other hand, it requires a fair amount of code, or using external components or DLLs.
If you are not happy with that, another approach is simply to lock the record that is being edited. Nobody else will be able to edit that record until the user commits the changes or their session expires. It has pros and cons, but it requires little code compared with the first option.
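A rough sketch of such an edit lock (column names and the stale-lock timeout are illustrative, and this is only one way to do it):

-- Record who is editing the row and since when.
ALTER TABLE Employee ADD LockedBy nvarchar(100) NULL, LockedAtUtc datetime2 NULL;

-- Try to take the lock; succeeds only if the row is free, already ours,
-- or the previous lock has gone stale.
UPDATE Employee
SET LockedBy = @UserName, LockedAtUtc = SYSUTCDATETIME()
WHERE EmployeeId = @EmployeeId
  AND (LockedBy IS NULL
       OR LockedBy = @UserName
       OR LockedAtUtc < DATEADD(MINUTE, -20, SYSUTCDATETIME()));

IF @@ROWCOUNT = 0
    RAISERROR('The record is being edited by another user.', 16, 1);

-- Release the lock when the user saves or cancels.
UPDATE Employee
SET LockedBy = NULL, LockedAtUtc = NULL
WHERE EmployeeId = @EmployeeId AND LockedBy = @UserName;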
