Setup:
ASP.NET Web API 2 (running in an Azure Cloud Service, 3 instances), Entity Framework 6.1 and SQL Azure
Problem
My application started reporting a ton of weird errors all of a sudden.
When using EF to get entities from the database, these types of errors are reported:
"The '{PropertyName}' property on '{TableName}' could not be set to a 'System.String' value. You must set this property to a non-null value of type 'System.Int64'"
And
"The '{PropertyName}' property on '{TableName}' could not be set to a 'System.Int64' value. You must set this property to a non-null value of type 'System.String'."
My interpretation is that the database is returning values whose types don't match the properties I'm trying to map them to, but I can't see why that would start happening out of the blue after running just fine for millions of requests.
While I was writing this I rebooted the API instances and now the errors are gone.
Any help in figuring this out will be highly appreciated.
It turned out the probable cause was a mistake in the setup of the unit of work pattern. Note to self: do not create a new DbContext to replace a DbContext that is still being used :)
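As a rough illustration (with made-up names, not the actual code), the kind of mistake meant here looks like this sketch:

using System.Data.Entity;
using System.Linq;

public class Order
{
    public long Id { get; set; }
    public string Name { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class UnitOfWork
{
    // In the faulty setup this instance was shared while requests were in flight.
    private MyDbContext _context = new MyDbContext();

    public IQueryable<Order> Orders
    {
        get { return _context.Set<Order>(); }
    }

    // Anti-pattern: swapping in a fresh context while another caller is still
    // materializing a query against the old instance can surface as the
    // seemingly random type-mapping errors described above.
    public void Reset()
    {
        _context = new MyDbContext();
    }
}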
I'm using Azure Cosmos DB .NET SDK version 3.0 and I want to create a container programmatically without a partition key. Is that possible? I always get an error saying: Value cannot be null.
Parameter name: partitionKey
I use the method CosmosContainers.CreateContainerIfNotExistsAsync.
I can reproduce your issue on my side every time.
The exception is caused by a check inside the SDK; decompiling the DLL source code shows the detailed logic.
It seems we can't get past this check for now, because the Cosmos DB team is planning to deprecate the ability to create non-partitioned containers, as they do not allow you to scale elastically. (Mentioned in my previous answer: Is it still a good idea to create cosmos db collection without partition key?)
But you can still create non-partitioned containers with the DocumentDB .NET package or the REST API.
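For example, with the older DocumentDB SDK (Microsoft.Azure.DocumentDB package), a collection created without a PartitionKey definition comes out non-partitioned. A rough sketch, with placeholder endpoint, key and names:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class FixedCollectionSample
{
    public static async Task CreateAsync()
    {
        var client = new DocumentClient(
            new Uri("https://youraccount.documents.azure.com:443/"), // placeholder endpoint
            "yourAuthKey");                                          // placeholder key

        // No PartitionKey is set on the DocumentCollection, so the collection
        // is created as a non-partitioned (fixed) collection.
        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("mydb"),
            new DocumentCollection { Id = "mycollection" },
            new RequestOptions { OfferThroughput = 400 });
    }
}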
I have been searching for an answer on MS, SE and Google and cannot find it. I want to use the GRS option for Azure Storage (Cloud Block Blobs) but I cannot figure out how to properly do that.
I created my storage object in Azure and chose the GRS option.
I get that I have a primary and secondary connection string and know how to get that from the Azure portal.
What I do not know, in ASP.NET 4.0, is how to set both connection strings in the CloudBlockClient and gracefully handle the primary storage being unavailable.
- What exception is thrown, and where, when the primary is unavailable? Is it thrown when I create the client, or when I try to get a blob reference?
- How do I then use the secondary?
Do I have to just test for any exception and then try using the secondary connection string in a new CloudBlockClient if the primary does not work? Or is there anything in the API for this? I would think there would be, but I cannot find it.
None of the "How to use Azure Storage" tutorials I have seen go into this. Most of the documentation seems to date from before mid-2014 when this feature became generally available.
This blog post should help you. In short, if you want to read from both the primary and the secondary, you want to enable RA-GRS - essentially read access from the secondary. If you are using our storage client libraries, you can also enable a retry policy that will first try to read from the primary and then from the secondary if the first read fails.
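For example, with the WindowsAzure.Storage client library the read fallback can be turned on per client. A sketch, assuming an RA-GRS account and placeholder container/blob names:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.RetryPolicies;

public static class RaGrsReadSample
{
    public static string ReadBlobText(string primaryConnectionString)
    {
        var account = CloudStorageAccount.Parse(primaryConnectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();

        // Read from the primary first and fall back to the read-only secondary
        // endpoint if the primary is unavailable.
        client.DefaultRequestOptions.LocationMode = LocationMode.PrimaryThenSecondary;
        client.DefaultRequestOptions.RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 3);

        CloudBlobContainer container = client.GetContainerReference("mycontainer");
        CloudBlockBlob blob = container.GetBlockBlobReference("example.txt");
        return blob.DownloadText(); // retried against the secondary if the primary fails
    }
}

Keep in mind the secondary is read-only, so writes always have to target the primary.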
I'm developing an application using Symfony 2.4 with Doctrine. As a caching system I'm using Memcached via BabelCache (http://docs.webvariants.de/babelcache/latest/).
When I start my Memcached server, the application starts behaving very strangely: when I make changes to an object that I previously put into the cache, my application throws "Unique violation: 7 ERROR: duplicate key value violates unique constraint". It's strange because the object exists in the database. When I turn the Memcached server off, everything works fine.
I started debugging the problem and it seems that Doctrine treats this object as new (it sets the entity state to STATE_DETACHED in UnitOfWork.php).
Can anyone help me with this problem? (I'm new to Symfony.) What might be causing it?
Thanks for the help.
I have a stored procedure that returns a simple table containing several records:
DECLARE @STEPS_TABLE AS TABLE (OrchestrationID uniqueidentifier, [Message] nvarchar(1000));
-- LOADING THE VALUES HERE
SELECT * FROM @STEPS_TABLE As Step FOR XML AUTO, XMLDATA, ELEMENTS
I used the SQL Transport Schema Generation Wizard to create my schema and was able to configure the port correctly. If I use this schema in my orchestration, it works perfectly. BizTalk starts one instance of the orchestration every time @STEPS_TABLE has more than one record.
Reading the Microsoft technical documentation, they recommend retrieving several messages in one call and then using the XML pipeline to disassemble the multi-row BizTalk message into single-row BizTalk messages.
I haven't used the XML pipeline before, so I tried the provided steps but couldn't get it to work.
Could somebody provide me with a link to a how-to (I haven't found anything so far, after several hours of searching) or give me some hints on how to succeed?
Thanks in advance.
...some hours later I figured it out myself. So if anybody comes across the same issue, here are some guidelines to make it work in your environment.
In the end I followed a different walkthrough from Microsoft and avoided the pipeline recommendation altogether. The documentation I found is called "Disassembling Result Sets Using the SQL Adapter" and does exactly what I was looking for. You can follow the whole walkthrough from Microsoft, but skip the creation of the send port and make a small adjustment on the receive port.
After following the technical document you will end up with two schemas; I will call them message and envelope (which contains several messages) for the sake of this exercise. In your orchestration you can create a receive port that maps to the message. Then, when you configure it as a SQL port and link it to your stored procedure (or SELECT statement), you only have to change the Document Root Element Name to the envelope root name; the XML Receive pipeline (provided by default in BizTalk 2006) will do the magic of disassembling the messages contained in the envelope and instantiating an orchestration for each message.
The Microsoft "Disassembling Result Sets Using the SQL Adapter" walkthrough can be found under:
http://msdn.microsoft.com/en-us/library/aa562098(v=bts.20).aspx
Mission accomplished :)
I have created a connection string in an MVC 3 application and it works fine in the MVC views and controllers; I am able to fetch data. Now I have called the repository/model functions from a unit test in the test project and I am getting this error:
System.Data.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details. ---> System.Data.SqlClient.SqlException: Invalid object name 'dbo.tblProduct'.
How can I fix it?
That has nothing to do with MVC in particular. It seems that when testing you are using a connection string to a database that does not have the 'dbo.tblProduct' table/view. Check the connection string and the database. You may need to debug the tests.
Check whether your table has a different schema (other than dbo); if so, change it to dbo using the query shown here:
How do I change db schema to dbo
It's very strange in my case: since mapping is required between the model and the tables, the names have to be the same. When I added an 's' at the end of the table's name, it worked. I don't know if this is part of what LINQ does.
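If the data access here is Entity Framework Code First, the 's' behaviour is most likely its pluralizing table name convention rather than anything LINQ-specific. A hedged sketch (Product, tblProduct and ShopContext are placeholder names) of the two usual ways to make the mapping explicit:

using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Option 1: stop EF from pluralizing generated table names globally.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();

        // Option 2: map this entity to the exact table name in the database.
        modelBuilder.Entity<Product>().ToTable("tblProduct");
    }
}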