I am having an issue with my cube processing. It is failing in production after I made an update to the view used to build one of my dimensions. Here are the steps I took:
The update added a single field and did not touch anything else that was already present.
I updated the cube dimension in BIDS and did a full process of the cube successfully.
I have a scheduled job that reprocesses the cube every 15 minutes, and it ran for 16 hours without issue.
Then the job started failing with the error "could not find attribute key".
The key it could not find was the ID column.
The ID column already existed, and is a numeric column.
I double checked to make sure there were no null ID fields, and there weren't.
The ID field was aliased in the data source view, so I tried updating the table definition to use a named query and aliasing the ID field directly in the query.
When I did that, I started getting errors that it could not find the [ID] field (the original field name).
Eventually I had to roll the changes back because I couldn't figure out the cause and needed to get production back up. Now, 17 hours later, the cube processing is failing again even though no changes have been made. I'm again getting the error that the attribute key cannot be found during processing. When I look for the actual ID value the error gives me, I find it in both of the views that make up my dimension and in my fact table.
My underlying data source is Oracle 11g. Any help would be greatly appreciated.
Can you process the updated dimension on a local environment (if you can restore the production database and BIDS project there)? Check whether processing the dimension on its own gives any error. Then try processing the related measure group alone, and then the complete cube and OLAP database. That will let you pinpoint the failing step more precisely, which should help the analysis.
You can also check this:
http://toddmcdermid.blogspot.in/2009/01/ssas-quick-reference-attribute-key.html
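Since the job reprocesses every 15 minutes, one common culprit is a race: a row with a new key lands in the fact table after the dimension was last processed, so the key exists in the source views but not yet in the processed dimension. It can help to first rule out source-side problems (orphan keys, duplicate keys) directly against Oracle. Below is a rough diagnostic sketch using Node.js with the node-oracledb driver; the object names (fact_table, dim_view), the id column, and the credentials are all placeholders for your own schema:

const oracledb = require('oracledb');

async function main() {
    // Placeholder connection details.
    const conn = await oracledb.getConnection({
        user: 'olap_reader', password: 'secret', connectString: 'orahost/ORCL'
    });

    // Fact keys with no matching row in the dimension view - the classic
    // source-side cause of "attribute key cannot be found" errors.
    const orphans = await conn.execute(
        "SELECT DISTINCT f.id FROM fact_table f " +
        "LEFT JOIN dim_view d ON d.id = f.id WHERE d.id IS NULL");
    console.log('Fact keys missing from the dimension view:', orphans.rows);

    // Duplicate keys in the dimension view can also break attribute processing.
    const dups = await conn.execute(
        "SELECT id, COUNT(*) AS cnt FROM dim_view GROUP BY id HAVING COUNT(*) > 1");
    console.log('Duplicate dimension keys:', dups.rows);

    await conn.close();
}

main().catch(console.error);

If both queries come back empty, the source data is consistent and the failures are most likely a processing-order issue; in that case, having the 15-minute job process the dimension before the measure groups is worth trying.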
I have written a function that runs every x amount of time and changes a field in a document in Cloud Firestore if a specific condition applies. However, the function fails because the query needs an index, and I don't know which index to create.
This is the code that gives the error:
const snapshot = db.collectionGroup('people')
    .where('dateEndFunctional', '<', today).where('state', '==', 'active').get()
    .then(function(querySnapshot){
        querySnapshot.forEach(function(doc){ /* ... */ });
    });
This is the error message I'm getting:
Error: 9 FAILED_PRECONDITION: The query requires an index. You can create it here: https://console.firebase.google.com/v1/r/project/fmis-online-dev/firestore/indexes?create_composite=Ck5wcm9qZWN0cy9mbWlzLW9ubGluZS1kZXYvZGF0YWJhc2VzLyhkZWZhdWx0KS9jb2xsZWN0aW9uR3JvdXBzL3Blb3BsZS9pbmRleGVzL18QAhoJCgVzdGF0ZRABGhUKEWRhdGVFbmRGdW5jdGlvbmFsEAEaDAoIX19uYW1lX18QAQ
If I follow the link, I always get an error. It states:
Loading failed.
An error occurred while loading [link]. Try again.
However, every time I try again, it gives the same error.
I have tried creating a combined index covering both of the fields the where clauses test, as well as creating two separate indexes, one per field. Both resulted in the same error. What index do I need for this query to work properly?
Thanks in advance for your help!
The link should work; just make sure you are logged into the right account on Google Cloud.
For the people who want the answer:
A collection group index on people with state ASC and dateEndFunctional ASC
I don't know why it didn't work before, because I think I had tried this. Perhaps the order of the fields matters; the link should take care of that.
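If you would rather keep the index in source control than click the console link, the same index can be declared in firestore.indexes.json and deployed with firebase deploy --only firestore:indexes. A minimal sketch of that file, assuming the standard Firebase CLI project setup:

{
  "indexes": [
    {
      "collectionGroup": "people",
      "queryScope": "COLLECTION_GROUP",
      "fields": [
        { "fieldPath": "state", "order": "ASCENDING" },
        { "fieldPath": "dateEndFunctional", "order": "ASCENDING" }
      ]
    }
  ]
}

Note the COLLECTION_GROUP query scope: a plain single-collection index will not satisfy a collectionGroup() query, which may be why the manually created indexes didn't help.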
I'm working on a scenario where I have to compare data records coming from a file with data from a table, as a validation check before loading the file into the staging table. I have come up with a couple of possible approaches, but they involve changing the load mapping; my team suggested putting the change somewhere easy to notice, since it is a non-standard approach.
Is there any way to handle this within Workflow Manager, using any of the workflow tasks or session properties?
Create a mapping that reads the file, joins the data with the table, does the required validation, and writes nothing out (use a Filter with a FALSE condition), setting a workflow variable to 0/1 to indicate whether the load should start.
Next, run the loading session only if the validation passed.
This can be improved a bit if you want to store the validation errors in some audit table. Then you don't need a variable - the link condition can refer to the $PMTargetName@numAffectedRows built-in variable (with your audit target's instance name in place of TargetName). If it's more than zero - meaning there were some errors - don't start the load.
Create a workflow with a Command task that runs a script which pulls the data from the table over a JDBC connection, compares it with the data in the file, and sets a flag indicating whether to load or not.
Based on the command's output, decide whether to run the staging workflow.
awk works well for the comparison and gives you the flexibility to compare date parts within a column.
FYR: http://www.cs.unibo.it/~renzo/doc/awk/nawkA4.pdf
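As a concrete illustration of the command-line approach, here is a sketch in Node.js using the node-oracledb driver in place of a JDBC client; every name in it (the stg_ref table, input.csv, the id/amount columns, the credentials) is a hypothetical stand-in. The script exits with 0 when the file matches the table and 1 otherwise, and the workflow link after the Command task can check that exit status before starting the staging session:

const fs = require('fs');
const oracledb = require('oracledb');

async function main() {
    // Placeholder connection details.
    const conn = await oracledb.getConnection({
        user: 'etl_user', password: 'secret', connectString: 'dbhost/ORCL'
    });

    // Load the reference rows into a map keyed by id.
    const result = await conn.execute('SELECT id, amount FROM stg_ref');
    const reference = new Map(result.rows.map(([id, amount]) => [String(id), String(amount)]));
    await conn.close();

    // Compare each file record against the table.
    const lines = fs.readFileSync('input.csv', 'utf8').trim().split('\n');
    const mismatches = lines.filter(line => {
        const [id, amount] = line.split(',');
        return reference.get(id) !== amount;
    });

    // The exit code is the flag the workflow branches on:
    // 0 = validation passed, start the load; 1 = mismatches found, skip it.
    process.exit(mismatches.length === 0 ? 0 : 1);
}

main().catch(err => { console.error(err); process.exit(2); });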
I have a form in MS Access where I am trying to upload an Excel spreadsheet into the database.
This has been working for other spreadsheets, but when I try to upload data into this particular table, I keep getting the error:
"you cannot record changes because a value you entered violates the settings defined for this table or list (for example, a value is less than the minimum or greater than the maximum). correct the error and try again"
Can someone explain to me why I am getting this error, and how to fix it?
I have already made sure that:
- In the Access table, there are no "required" fields
- The appropriate data type is associated with each field
In the "Simulator" built into the Firebase site is it possible to simulate deleting a node?
I tried entering the path to a node in the URL field (e.g. /my/path/-JCNAUFZJFJMGX1RYWJL) and entering {} into the data field, but I think this just simulates writing nothing, as opposed to deleting.
In Firebase, writing a null value is equivalent to removing the data at the given reference (i.e. ref.set(null) is effectively the same as ref.remove()), so entering null in the simulator's data field, rather than {}, is an effective way to test removing data.
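For illustration, here is the same equivalence in the JavaScript client of that era (the URL and path are placeholders):

var ref = new Firebase('https://example.firebaseio.com/my/path/-JCNAUFZJFJMGX1RYWJL');

// Both calls remove the node; entering null in the simulator's data
// field corresponds to the set(null) form.
ref.set(null);
// ...which has the same effect as:
ref.remove();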
How can I make this BP error go away: Unique index error: Unique index introduced. Upgrade script required.
http://msdn2.microsoft.com/en-gb/library/aa884122.aspx tells me to implement an upgrade script.
How do I implement an upgrade script and will this make the BP error go away?
Or, even better, how can I get rid of this error without the scripts, given that the project has not yet been deployed to customers?
If you just want to get rid of the error without the scripts, you can modify \Classes\SysBPCheckTable\checkIndicesMoreUnique accordingly, or simply comment out this.checkIndicesMoreUnique(); in \Classes\SysBPCheckTable\check.
If you choose to ignore the BP warning, beware that synchronization at your customers may fail due to duplicate keys. This is especially true if (1) the customer's table already contains records and (2) the new index includes a new field.
The way to make an upgrade script is described in the link you provided. You will find lots of examples in the ReleaseUpdate classes.
Before merely ‘getting rid of’ the BP error, you have to investigate the index first. Which fields make up the index?
If the index is not needed, and it is in a layer you can delete from, then delete it. Having said that, you should afterwards compile the AOT as well, to make sure the index isn't referenced somewhere in the code (for example, where selects are done with an index hint).
But first of all you need to establish why the index has been created.