Mapping reference 'myMappingName' of type 'mappingReference' in database 'myDatabaseName' could not be found. (In ADX) - azure-data-explorer

Upon ingestion, a failure is always reported about a missing mapping reference. This mapping reference never existed, nor is it used anywhere. The ingestion itself nevertheless goes through fine, and all data is present. Any help is greatly appreciated!
I have looked into the logs to see the full reason, and the cause is that it can't find a certain mapping reference. This reference isn't used anywhere and was never created (to the best of my knowledge). Going through all the ingestion mappings on the cluster didn't give any information as to the reason why.
A bit more info on the log:
"OriginatesFromUpdatePolicy": false,
"ErrorCode": BadRequest_MappingReferenceWasNotFound,

Reason for the error:
If the ingestion does not point at the proper database (onkdb) with matching column data, you get the mapping error described above.
Resolution for this issue:
After providing an existing database name (onkdb2) and the correct column data, the mapping reference is found and the mapping data comes back as expected.
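To check which mapping references actually exist on the target, you can list them per table with a management command; this is only a sketch, with MyTable standing in for your own table name and JSON/CSV assumed as the formats in use:

    .show table MyTable ingestion json mappings
    .show table MyTable ingestion csv mappings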

In case anyone wanted to know how we are dealing with the problem: for now we have created a mapping that produces empty rows (thus silencing the error). These rows are not mapped out of the source table, so they do not affect the child tables.
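The shape of such a mapping command is roughly the following. This is only a sketch: MySourceTable and DummyColumn are placeholder names, a JSON source format is assumed, and DummyColumn has to exist (and stay nullable) in the source table; mapping it to a path that never occurs in the data just yields null values.

    .create table MySourceTable ingestion json mapping "myMappingName"
        '[{"column": "DummyColumn", "Properties": {"Path": "$.fieldThatNeverExists"}}]'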

Related

Can't access a variable from another variable in Google Tag Manager

I am having trouble reading between variables in Google Tag Manager.
I am able to fetch data from the data layer, but when I try to access this data from another variable I get "undefined".
Then I tried setting a variable as a constant and reading from this variable, but I can't even read a constant value. Any idea why this might be the case?
I tried pulling data from the data layer which worked. The initial value is filled out.
Then I tried to access this value from another variable using {{Variable Name}}, but this value returns "undefined"
I just ran a quick test to reproduce what you have, and I see both of them working just fine, as expected, on every event. You have a problem in your testing somewhere.
Like maybe you're looking at the wrong container? Wrong environment? Forgot to save code changes?
Oh! There's one more thing! Custom JavaScript (CJS) variables in GTM (and other TMSes) require the CSP to allow unsafe-eval, because all CJS is executed with eval. Without unsafe-eval, all CJS variables will always return undefined. That's most likely your issue. Ask your devs about it. You can read more about GTM and CSP here.
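To make the CSP point concrete, a Custom JavaScript variable is just an anonymous function like the sketch below; {{DL - My Value}} is a placeholder for whatever Data Layer variable you defined, not a variable from your container:

    function() {
      // GTM executes this function via eval. If the page's CSP script-src does
      // not include 'unsafe-eval', the code never runs and the variable
      // silently resolves to undefined.
      return {{DL - My Value}};
    }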

How to recover from "missing docs" in xtdb?

I'm using xtdb in a testing environment with a RocksDB backend. All was well until yesterday, when the system stopped ingesting new data. It tells me that this is because of "missing docs", and gives me the id of the allegedly missing doc, but since it is missing, that doesn't tell me much. I have a specific format for my xt/ids (basically type+guid) and this doesn't match that format, so I don't think this id is one of mine. Calling history on the entity id just gives me an empty vector. I understand the block on updates for consistency reasons, but how to diagnose and recover from this situation (short of trashing the database and starting again)? This would obviously be a massive worry were it to happen in production.
In the general case this "missing docs" error indicates a corrupted document store and the only proper resolution is to manually restore/recover based on a backup of the document store. This almost certainly implies some level of data loss.
However, there was a known bug in the transaction function logic prior to 1.22.0 which could intermittently produce this error (but without any genuine data loss), see https://github.com/xtdb/xtdb/commit/1c30550fb14bd6d09027ff902cb00021bd6e57c4
If you weren't using transaction functions, however, then there may be another, as-yet-unknown explanation.
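As a first diagnostic, you can at least confirm what the node itself reports for the offending id; a minimal sketch against the xtdb 1.x Clojure API, where node and bad-id are placeholders for your running node and the id quoted in the "missing docs" error:

    (require '[xtdb.api :as xt])

    ;; current snapshot: nil if the entity cannot be resolved at all
    (xt/entity (xt/db node) bad-id)

    ;; full known history, including the documents themselves
    (xt/entity-history (xt/db node) bad-id :asc {:with-docs? true})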

bad int8 external representation "VM000141210"

While running an Ab Initio PSET I'm getting an error like: bad int8 external representation "VM000141210". What the PSET does is take data from different source tables and load it into a target table with transformations. Can anyone help me with this?
I have no knowledge of Ab Initio, but in other ETL tools you can receive similar error messages if the metadata inside the tool differs from that of the target table itself.
Could it be that this particular PSET command is dependent on all columns being filled out, and in the same order as they occur in the database?
Write back with more specifics if you need more help...
what data types are in play?
what do you see if you stage data in a file instead?
if you apply filters on the source, can you then make the session succeed (binary search)?

Is it possible to get information where a variable, handle, buffer are defined?

I write some little log files where I can see from which program something is called, with the help of PROGRAM-NAME(i).
It would be really interesting if I could also get information about my variables, handles, buffers, ... and where they are defined.
SOURCE-PROCEDURE:GET-SIGNATURE is a small step in the right direction, but it gives me only the possible inputs and outputs of my source procedure.
Handle-based objects have an INSTANTIATING-PROCEDURE property of type handle that references the handle of the procedure that created the instance.
Alternatively, you can use the DynObjects.* log-manager entry types to write that information to the current client log file whenever a handle-based object is created or deleted.
If you want a lot of run-time data, check out the "LOG-MANAGER" handle in general, particularly the 4GLTrace setting.
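To illustrate both suggestions, a minimal ABL sketch; the Customer table, the log file name, and the logging level are placeholders, not something taken from the original question:

    /* Hedged sketch: hBuffer stands in for any handle-based object you created. */
    DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.
    CREATE BUFFER hBuffer FOR TABLE "Customer".

    /* Which procedure created this handle-based object? */
    MESSAGE hBuffer:INSTANTIATING-PROCEDURE:FILE-NAME VIEW-AS ALERT-BOX.

    /* Log creation/deletion of dynamic objects plus 4GL call tracing. */
    LOG-MANAGER:LOGFILE-NAME    = "client.log".
    LOG-MANAGER:LOGGING-LEVEL   = 4.
    LOG-MANAGER:LOG-ENTRY-TYPES = "4GLTrace,DynObjects.DB,DynObjects.XML".

    DELETE OBJECT hBuffer.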

Why "Error: Subreport could not be shown" for some reports and not others?

I'm using VS2010 and the built-in visual Report Designer to create RDLC templates for rendering reports with sub-reports as PDF files in an ASP.NET application using a ReportViewer control and the .LocalReport member. The code iterates over a set of records, producing one report (with its sub-reports) for each record.
I noticed recently that for a small number of the reports, one of the sub-reports was failing and giving the "Error: Subreport could not be shown" message. What's puzzling me about this case, in contrast to the many posts about this error that I've read (and previous times I've wrestled with it myself), is that it is only occurring for a subset of cases; from what I've seen elsewhere, the problem is usually all-or-nothing -- this error always appears until a solution is found, then this error never appears.
So... what could cause this error for only a subset of records? I can run the offending sub-report directly without errors; I can open the .xsd file and preview the DataSet for the offending records without errors; I can run the query behind the DataSet in SQL Server Mgt Studio without errors... I'm not sure where else to look for the cause(s) of this problem which only appears when I run the report-with-subreports?
I tracked this down to an out-of-date .xsd file (DataSet) -- somewhere along the way a table column string width was increased, but the DataSet was not updated or regenerated, so it still had the old width limit on that element, e.g., <xs:maxLength value="50" /> in the .xsd XML instead of the new width of 125 characters. The error was being thrown for those cases where at least one record in the subreport had a data value (string) in that column that exceeded the old width of 50.
An important clue came from adding a handler for the DataSet's .Selected event; I was already using the .Selecting event to set the sub-report's parameter (to tie it to the parent record), but I couldn't see anything useful when breaking in that event. However, examining the event args variable in the .Selected event, after the selection should have occurred, I found an Exception ("Exception has been thrown by the target of an invocation") with an InnerException ("Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints"). There was also a stack trace which indicated the point of failure was executing Adapter.Fill(dataTable).
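For reference, a minimal sketch of that kind of handler, assuming the sub-report data comes through an ObjectDataSource (the control name odsSubreport is a placeholder; the original post only says it used the DataSet's Selecting/Selected events):

    // The Selected event fires after the select method has run, so a failed
    // Adapter.Fill surfaces here as e.Exception.
    protected void odsSubreport_Selected(object sender, ObjectDataSourceStatusEventArgs e)
    {
        if (e.Exception != null)
        {
            // The outer exception is the generic "Exception has been thrown by the
            // target of an invocation"; the InnerException carries the real cause.
            Exception cause = e.Exception.InnerException ?? e.Exception;
            Trace.Warn("Subreport", cause.Message);
        }
    }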
While that constraints exception turned out to be pretty misleading -- I had no such constraints in place on the tables involved in the query behind the DataSet -- it at least got me focusing on the specific records in the subreports. After much fruitless searching for anomalies in the subreport record data in SQL Server Mgt Studio, I eventually started removing the records one-by-one from one of the offending subreport cases, re-running the report each time to see if I had fixed the error. Eventually I removed a subreport record and the report worked -- the remaining subreport records appeared!
Now I had a specific sub-report record to examine more closely. By chance (wish I could call it inspired intuition...), I decided to edit that record in the web app instead of looking at it as I had been in SQL Server. One of the fields was flagged with an alert saying the string value was too long! That was a mystery to me for a moment: if the string value was too long, how could it already be saved in the database?! I double-checked the column definition in the table, and found it was longer than what the web-app front-end was trying to enforce. I then realized that the column had been expanded without updating the app UI, and I suspected immediately that the .xsd file also had not been updated... Bingo!
There are probably a number of morals to this story, and it leaves me with a familiar and unwelcome feeling that I'm not doing some things as intelligently as I ought. One moral: always update (or, better and usually simpler, just re-build) your .xsd DataSet files whenever you change a query or table they're based on... easier said than remembered, however. The queasy feeling I have is that there must be some way that I haven't figured out to avoid building brittle apps where a column width that's defined in the database is also separately coded into the UI and/or code-behind to provide user feedback and/or do data validation... suggestions on how to manage that more robustly are welcome!
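One option, at least for string widths, is to read the limit off the regenerated DataSet at run time instead of hard-coding it, so the .xsd stays the single source of truth; a rough sketch, where MyDataSet, "Orders", "CustomerRef" and txtCustomerRef are all placeholder names rather than anything from the original app:

    // Derive the UI limit from the typed DataSet schema (the .xsd) at run time.
    var ds = new MyDataSet();
    int maxLen = ds.Tables["Orders"].Columns["CustomerRef"].MaxLength;  // -1 means no limit defined
    if (maxLen > 0)
    {
        // The TextBox now enforces the same width as the schema; regenerating
        // the .xsd after a column change updates the UI automatically.
        txtCustomerRef.MaxLength = maxLen;
    }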
