Getting a strange error when reading an empty Cosmos DB container - azure-cosmosdb

I am reading data from Cosmos DB in Azure Synapse pipelines.
I can read the data successfully if the container has data. If it is empty, I get the error below:
message":"Job failed due to reason: Cannot resolve column name "_ts"
Below is a snapshot of the code, where I first look up the database names from a table, then for each database I read data from the corresponding Cosmos DB container.
Currently this runs for 4 databases; the containers of 2 databases have data and 2 are empty.
Below is a snapshot of the execution, where the last 2 Data flow executions fail and the first 2 succeed.
Below is the actual error.
Does anyone know how to resolve it?
Thanks a lot for the help in advance.

Related

Dump to Storage Container failed ingestion in ADX

I'm having problems when I ingest data from an IoT Hub into my Azure Data Explorer cluster.
When I execute the command on ADX to see the errors:
.show ingestion failures | where FailedOn >= make_datetime('2021-09-09 12:00:00')
I get the error message:
BadRequest_NoRecordsOrWrongFormat: The input stream produced 0 bytes. This usually means that the input JSON stream was ill formed.
There is an IngestionSourcePath column, but it seems to be an internal URI of the product itself.
I read in another Stack Overflow question that there is a command in ADX to dump the failed ingestion blob into a container, with this syntax:
.dup-next-failed-ingest into TableName to h@'Path to Azure blob container'
The problem is that this command is not covered in the Microsoft documentation.
The questions are:
What is the full syntax of this command?
Can you show me some examples?
Which permissions are needed to run this command over ADX and also over the blob container?
Is there another command to remove this dump after fixing the ingestion errors?
The full syntax of the command is:
.dup-next-failed-ingest into TableName to h@'[Path to Azure blob container];account-key'
or
.dup-next-failed-ingest into TableName to h@'[Path to Azure blob container]?SAS-key'
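For example, an invocation following that template might look like this (the table name, storage account, and account key are placeholders, not real values):
.dup-next-failed-ingest into MyEventsTable to h@'https://mystorageaccount.blob.core.windows.net/failed-ingest;mystorageaccountkey'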
We will add the documentation for this command.
The error you encountered most likely indicates that the JSON flavor you are using does not match the flavor you specified for the data connection, or that the JSON objects are not syntactically valid. My recommendation would be to make sure you use "MultiJSON" as the data connection format for any JSON payloads.
When looking at the interim blobs created using this command, please keep in mind that you will not be looking at the original events sent into the IoT Hub, but batches of these events, created by ADX internal batching mechanism.

AWS CodeDeploy - running SQL scripts

I run SQL scripts that insert data into the DB as part of my CodeDeploy lifecycle event on an Auto Scaling group. The Auto Scaling group has 2 instances; the SQL scripts run fine on the 1st instance and the deployment succeeds on that instance.
On the 2nd instance, since the data has already been inserted into the DB, the SQL script fails with the error message below:
[stderr]ERROR 1062 (23000) at line 32: Duplicate entry
Any workaround or solution will be of great help.
Thanks
It suggests that the DB already has an entry which you're trying to insert, hence the error. You may want to first check whether the DB has that entry or not.
To identify which part of the script is giving you this error, you can try running subsets of your script to isolate the actual cause.
This is certainly the issue when you already have some record(s) and the DB / table / schema does not allow duplicate entries.
Assuming your deployment group uses a OneAtATime deployment type, your lifecycle hook should check for the entry before it runs the insert, as in the sketch below.
That way, only the first deployed instance applies the change; the other deployments test for the entry and then skip the insert phase.
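A minimal sketch of an idempotent insert, assuming MySQL (error 1062 is a MySQL duplicate-key error) and a hypothetical users table with a unique key on email:
-- Hypothetical table and columns; IGNORE turns duplicate-key errors (1062)
-- into warnings, so rerunning the script on the 2nd instance becomes a no-op.
INSERT IGNORE INTO users (email, display_name)
VALUES ('admin@example.com', 'Admin');
-- Alternative if the existing row should be updated rather than skipped:
-- INSERT INTO users (email, display_name)
-- VALUES ('admin@example.com', 'Admin')
-- ON DUPLICATE KEY UPDATE display_name = VALUES(display_name);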

Why are output value lengths getting reduced when using DB links in Oracle and ASP.NET

We retrieve output from a table over a DB link by executing a stored procedure with input parameters. This worked previously and we got the output in our ASP.NET application. Now we've noticed that outputs returned through the DB link are getting trimmed: if the status is 'TRUE', we get 'TRU', and so on. Why are the output values getting trimmed? The only change we made recently was changing the type of one input parameter from number to varchar on the receiving remote side, but I don't think that is the issue. When we execute the stored procedure remotely on the table it gives the proper output, but through the DB link the outputs are trimmed. Does anyone have any idea about this issue?
My Oracle client was having issues; I reinstalled it and it worked fine. Only my system was having issues, so I decided to reinstall.

SQLite database not persisting between sessions

I'm working with a Windows Phone 8 application using:
C#/XAML
SQLite v3.7.15
sqlite-net 1.0.7
& Peter Huene's sqlite-net-wp8 (https://github.com/peterhuene/sqlite-net-wp8)
When debugging from VS I'm able to create a table, add data to the table, and display the data in the UI. However, when I stop debugging and then resume, the data from the last session is gone.
I create the connection like this
Connection = new SQLiteAsyncConnection("taskDB.db");
I'm not sure where that is putting the database.
I have tried the below so I could be sure where the database was being put, but it results in the error below. I am surprised by this, as I have seen this statement used in multiple examples.
_dbPath = Path.Combine(ApplicationData.Current.LocalFolder.Path, "taskDB.db");
Connection = new SQLiteAsyncConnection(_dbPath);
Which results in this error within SQLite.cs itself:
Error Message
SQLite.SQLiteException was unhandled by user code
HResult=-2146233088
Message=no such table: Tasks
Source=JustSQLite
Any idea why the database is not persisted between debug sessions?
The Emulator instance persists the changes only while it is running.
Once you close the Emulator, the file no longer persists, as it is tied to that Emulator instance.
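Separately, the "no such table: Tasks" error from the explicit-path attempt usually just means the table was never created in the freshly created database file. A minimal sketch (assuming sqlite-net's async API; the Tasks model class here is hypothetical) is to build the path explicitly and create the table on startup before querying:
// Build an explicit path in the app's local folder.
var dbPath = Path.Combine(ApplicationData.Current.LocalFolder.Path, "taskDB.db");
var connection = new SQLiteAsyncConnection(dbPath);
// CreateTableAsync is idempotent in sqlite-net; call it before any queries
// so the Tasks table exists in a freshly created database file.
await connection.CreateTableAsync<Tasks>();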

Timeout when uploading images

I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the Event Viewer, I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager Database is 25GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not reckon the Transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".EDA_ITEMS_UPDATEBINARYCONTENT
After taking a look at this procedure it looks like the following statement could be the root cause
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID as -1, and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the id will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
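Since the issue is attributed to stale statistics and the chosen query plan, a more direct fix, sketched here only as an assumption (it requires rights to update statistics on the Content Manager database), would be to refresh statistics on the table and force the procedure to recompile:
-- Refresh statistics on the table involved in the slow lookup.
UPDATE STATISTICS dbo.BINARIES WITH FULLSCAN;
-- Force a new query plan for the stored procedure named in the error.
EXEC sp_recompile 'EDA_ITEMS_UPDATEBINARYCONTENT';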
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big, you will want to make sure you have a proper database setup, with data files separated from your log files (on different physical partitions/disks) and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily/hourly. Things like backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server, a transaction log of more than 1GB slows the database down drastically; you should always try to keep it below that size through timely backup/truncation). Updating statistics and rebuilding indexes is also something you should not forget to do on a regular basis.
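As a hedged illustration of that kind of maintenance (the database name and backup path below are placeholders, not taken from the thread):
-- Back up, and thereby allow truncation of, the transaction log (FULL recovery model assumed).
BACKUP LOG [Tridion_cm] TO DISK = N'D:\Backups\Tridion_cm_log.trn';
-- Rebuild indexes and refresh statistics on the large table.
ALTER INDEX ALL ON dbo.BINARIES REBUILD;
UPDATE STATISTICS dbo.BINARIES;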
