How can I make this BP error go away: "Unique index error: Unique index introduced. Upgrade script required."
http://msdn2.microsoft.com/en-gb/library/aa884122.aspx tells me to implement an upgrade script.
How do I implement an upgrade script and will this make the BP error go away?
Or even better, how can I get rid of this error without the scripts, because the project has not yet been deployed to customers?
If you just want to get rid of the error without the scripts, you can modify \Classes\SysBPCheckTable\checkIndicesMoreUnique accordingly, or simply comment out the call to this.checkIndicesMoreUnique(); in \Classes\SysBPCheckTable\check.
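In the check method the change would look roughly like this (a sketch only; the exact signature and the surrounding calls depend on your version and stay as they are):

// \Classes\SysBPCheckTable\check (sketch; only the relevant part shown)
void check()
{
    // ... other checkXxx() calls unchanged ...

    // Commented out to suppress the "Unique index introduced.
    // Upgrade script required." best-practice error:
    // this.checkIndicesMoreUnique();

    // ... other checkXxx() calls unchanged ...
}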
If you choose to ignore the BP warning, beware that synchronization at your customers may fail due to duplicate keys. This is especially likely if (1) the customer's table already contains records and (2) the new index also includes a new field.
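If you go that route, you can at least check up front whether the existing data would violate the new index. A minimal X++ sketch (MyTable, FieldA and FieldB are placeholders for your table and the fields of the new unique index):

static void checkDuplicatesForNewIndex(Args _args)
{
    MyTable myTable;    // placeholder for the table that gets the new unique index
    ;
    // Count records per combination of the fields in the new index;
    // any combination with more than one record will break synchronization.
    while select count(RecId) from myTable
        group by FieldA, FieldB
    {
        if (myTable.RecId > 1)
        {
            info(strFmt("Duplicate key %1 / %2: %3 records",
                        myTable.FieldA, myTable.FieldB, myTable.RecId));
        }
    }
}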
The way to make an upgrade script is described in the link you provided. You will find lots of examples in the ReleaseUpdate classes.
Before merely ‘getting rid of’ the BP error, you have to investigate the index first. Which fields make up the index?
If the index is not needed, and it is in a layer you can delete from, then delete the index. Having said that, you should afterwards compile the AOT as well to make sure that this index isn't referenced anywhere in the code (for example in selects that use an index hint).
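For example, a statement like the following (hypothetical table and index names) would fail to compile once the index is deleted, which is exactly what the full compile is meant to catch:

static void indexHintExample(Args _args)
{
    MyTable myTable;    // hypothetical table carrying the index in question
    ;
    // If MyNewIdx is deleted from the AOT, this select no longer compiles.
    select firstOnly myTable
        index hint MyNewIdx
        where myTable.FieldA == 'SomeValue';
}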
But first of all you need to establish why the index has been created.
I am asked to automate the tracking of changes in the structure of the database: Any modification, addition or removal of tables, fields, indexes, etc.
I have searched the audit but only found that it can track changes in the "Database schema", which is something else.
Do you know if it is possible to do that?
We use 11.6.3.
One wonders how those magical changes in the schema (I think you clarified that it was actually schema changes you wanted to automate) occur. Optionally, it could be up to those making the changes to also keep track of them. Usually (hopefully) the database is updated using "delta df-files". Those df-files, if kept, already form a changelog of the database.
Another option is to daily/hourly/weekly dump the data definitions:
/* Point the DICTDB alias at the database whose definitions should be dumped */
CREATE ALIAS DICTDB FOR DATABASE sports.
DISPLAY LDBNAME("DICTDB").

/* Dump all data definitions to a df-file */
RUN prodict/dump_df.p ("ALL",
                       "c:/temp/sports.df",
                       "").

DELETE ALIAS DICTDB. /* Optional */
Taken from this entry in the knowledge base: https://community.progress.com/s/article/15884
Then you can diff that df-file using your favorite tool, or keep it as it is.
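For example, if you keep timestamped dumps, a plain diff (or fc on Windows) between two days shows exactly what changed; the file names below are just an illustration:

diff c:/temp/sports_2024-05-01.df c:/temp/sports_2024-05-02.df > c:/temp/sports_changes.txt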
If you actually mean the physical structure (that is more about how the data is stored in different files on disk), you can use the prostrct command to save a new st-file to disk:
prostrct list sports
This will save a file called sports.st. Handle it as above and you will have a changelog of the database structure.
I'm creating a data entity with multiple tables and I'm getting duplicate results. Because of the nature of the duplicates, I thought an easy solution would be to add the relevant fields to the Group By section in the primary datasource. However, when I run the entity as a data project in the DMF, I'm getting the following error:
Has anybody run into this before, and how do I resolve it? I've tried adding the RecId to the group by (even though it shouldn't even be in the select list, as it is not in the field list for the entity). I have noticed that adding fields to the Group By section changes the view in SQL and removes all of the RecId#2, Partition#2 etc. fields. Does the Group By section even work, or is it a broken "feature"?
The entity works perfectly (other than with duplicate results of course) when I remove all fields from the group by section.
UPDATE: From what I can find online, the group by functionality doesn't work. I'll update this question if somebody finds an answer. Luckily this was an XML export, so I was able to use an XSLT file in the transformations under data entity mapping to remove the duplicates.
I'm working on a scenario where I have to compare a data record coming from a file with the data in a table, as a validation check before loading the data file into the staging table. I have come up with a couple of possible approaches that involve changing the load mapping, but my team suggested putting the change somewhere that is easy to notice, since it is a non-standard approach.
Is there any approach that we can handle within the workflow manager using any of the workflow tasks or session properties?
Create a mapping that reads the file, joins the data with the table, does the required validation, writes nothing out (use a filter with a FALSE condition), and sets a variable to 0/1 to indicate whether the loading should start.
Next, run the loading session if the validation passed.
This can be improved a bit if you want to store the validation errors in some audit table. Then you don't need a variable - the condition can refer to the $PMTargetName#numAffectedRows built-in variable. If it's more than zero - meaning there were some errors - don't start the load.
Create a workflow with a command task that runs a script which pulls the data from the table over a JDBC connection, compares it with the data present in the file, and then flags whether to load or not.
Based on that command's output, you then either go ahead with the staging workflow or skip it.
Use awk for the comparison; it gives you the flexibility to compare parts of a column (for example date parts), as in the sketch below.
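A hedged awk sketch (the file names, the comma delimiter and the key sitting in the first column are assumptions) that exits non-zero when the incoming file contains keys missing from the table extract, so the command task fails and the staging workflow is not started:

# usage: awk -F',' -f compare_keys.awk table_extract.csv incoming_data.csv
NR == FNR { keys[$1] = 1; next }     # first file: remember every key from the table extract
!($1 in keys) { missing++ }          # second file: count keys not present in the table
END { exit (missing > 0 ? 1 : 0) }   # non-zero exit means validation failed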
FYR: http://www.cs.unibo.it/~renzo/doc/awk/nawkA4.pdf
I am having an issue with my cube processing. It is failing in production after I made an update to the view used to build one of my dimensions. Here are the steps I took:
The update added a single field and did not touch anything else that was already present.
I updated the cube dimension in BIDS and did a full process of the cube successfully.
I have a scheduled job that reprocesses the cube every 15 mins, and it ran for 16 hours without issue.
Then the job started failing with the error "could not find attribute key".
The key it could not find was the ID column.
The ID column already existed, and is a numeric column.
I double checked to make sure there were no null ID fields, and there weren't.
The ID field was aliased in the data source view, so I tried updating the table definition to use a named query and aliasing the ID field directly in the query.
When I did that, I started getting errors that it could not find the [ID] field (the original field name).
Eventually I had to roll the changes back because I couldn't figure out the cause and I had to get production back up. Now, 17 hours later, the processing of the cube is failing again even though no changes have been made. I'm now getting the error that the attribute key cannot be found when processing. When I look for the actual ID value it gives me in the error, I find it in both of the views that make up my dimension and in my fact table.
My underlying data source is Oracle 11g. Any help would be greatly appreciated.
Can you process the updated dimension on its own (if you can restore the production DB and the BIDS project in a local environment)? Check whether dimension processing gives any error. Then try processing the related measure group alone, and then the complete cube and OLAP database. That will let you pinpoint the failing step more precisely, which should help the analysis.
Also, you can check this:
http://toddmcdermid.blogspot.in/2009/01/ssas-quick-reference-attribute-key.html
I am having some difficulties with the module I am currently working on. As part of this module I have created a few fields that appear on a form. This form is based on a custom entity.
First I am using field_create_field($field); to create the row in the field_config table. I am then using field_create_instance($instance); to create the row in the instance table and also create the table that begins with field_data_field.
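For reference, the creation step looks roughly like this (Drupal 7 Field API; the field name and the custom entity type below are made-up placeholders):

$field = array(
  'field_name' => 'field_my_example',
  'type'       => 'text',
);
field_create_field($field);           // adds the row in field_config

$instance = array(
  'field_name'  => 'field_my_example',
  'entity_type' => 'my_custom_entity',
  'bundle'      => 'my_custom_entity',
  'label'       => 'My example field',
);
field_create_instance($instance);     // adds the row in field_config_instance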
The problem I am running into is how to remove these tables correctly at the end. I have tried manual deletion (via hook_uninstall), I've tried field_delete_field, and I've tried the remove_instance hook that is built into the Commerce module. Either way, I end up with lots of field_deleted_data_xxx tables being created. These don't even have data in them, as I ran a manual query to empty the main data tables before the function that seems to create these tables was called.
Has anyone else ever run into this problem? How do I stop Drupal from creating these tables??
You can't stop Drupal from creating them, but I believe you can rid yourself of them totally using field_purge_batch and its related functions.
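A hedged sketch of how that could look in an uninstall hook (the module and field names are placeholders and the purge batch size is arbitrary; if the field held a lot of data, one call may not purge everything, so run it again or let cron finish the job):

/**
 * Implements hook_uninstall().
 */
function mymodule_uninstall() {
  // Mark the field for deletion; its data tables get renamed to
  // field_deleted_data_N / field_deleted_revision_N at this point.
  field_delete_field('field_my_example');
  // Purge the deleted data straight away so those tables are dropped
  // instead of lingering until cron has worked through them.
  field_purge_batch(1000);
}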
I really wish I knew the answer to your second question (in your comment above). My instinct would be that if you re-attach the field to the bundle, that data would become automatically available again (otherwise it really wouldn't make sense to keep hold of the deleted tables), but I really can't be sure about that.