IBM Cognos Transformer: multiple fact tables, one not supported by a dimension - Cognos 10

I'm currently working on a solution that has two fact tables, and I need to add one more fact table.
The ETL works fine.
I added my new table to the package in IBM Framework Manager, and I created relationships between my fact table and the other tables (the tables that are used for dimensions in Transformer).
I ran some tests in Framework Manager and the relationships are good:
I tested the relationships between the fact table and the dimensions.
But when I create the cube, I can't get any aggregation between the dimensions and my new fact table; it just gives the total sum of my fact table.
When I checked the supported data sources in the dimensions, I realized that my fact table is not supported by the dimensions.
My fact table is not supported.
I don't know how to make that relationship in Transformer.
Any help would be appreciated.
Sincerely,
hope you have a good day.

Table for all companies

I need my conversion table (we integrate data from other system where their values mean another thing in our AX instance) to be encompassing all companies.
When I deploy the project, I'll upload that table's data through an Excel import but I don't want to do it for all 5 of our companies.
I know when code runs as Admin, it fetches data from tables regardless of company (unless you specify so in the where clause) but I want standard users to see the table's data regardless of the company they are in when they run the code.
Is this possible?
Thanks.
Yes, there's a property on tables called SaveDataPerCompany. The default is Yes, but if you change it to No, then essentially the DataAreaId field is no longer applicable and the same data will be seen in all companies. You change the property by finding the table in the AOT (e.g. Data Dictionary > Tables), right-clicking it, and choosing Properties towards the bottom.
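The effect of that property can be simulated outside AX. Here is a minimal sketch in Python with SQLite as a stand-in (the table and column names are illustrative, not real AX objects): a per-company table is implicitly filtered on DataAreaId, while a shared table returns the same rows regardless of the active company.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# SaveDataPerCompany = Yes: rows carry a DataAreaId and reads are
# implicitly filtered to the current company.
cur.execute("CREATE TABLE ConversionPerCompany (DataAreaId TEXT, Code TEXT)")
cur.executemany("INSERT INTO ConversionPerCompany VALUES (?, ?)",
                [("c01", "A"), ("c02", "B")])

# SaveDataPerCompany = No: no DataAreaId column, one shared copy of the data.
cur.execute("CREATE TABLE ConversionShared (Code TEXT)")
cur.execute("INSERT INTO ConversionShared VALUES ('A')")

current_company = "c01"
per_company = cur.execute(
    "SELECT Code FROM ConversionPerCompany WHERE DataAreaId = ?",
    (current_company,)).fetchall()
shared = cur.execute("SELECT Code FROM ConversionShared").fetchall()

print(per_company)  # [('A',)] - only the current company's rows
print(shared)       # [('A',)] - same rows no matter which company is active
```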

Symfony2 / Doctrine views? Or equivalent

I have a question about query optimization and data "grouping".
Here's my setup:
I have a single application for multiple sites.
The items I display on my sites are stored in a single table (one item can be shown on different sites).
So I have joins between my "sites" table and my "items" table.
When a user authenticates on site01, I'd like to present him only the data for that site.
I was wondering if there was a genius doctrine (or other) trick to make it transparent in my code:
For example:
User authenticates for site01
I set in my siteManager that it's 01
And all my entities of items (for this session) now refer to a subset of my data according to the JOIN with site01.
Like a view or something.
My concern is as much about performance optimization as about transparency in the code, with performance coming first.
NB: the choice of using a single items table is not definite yet; it might be a bit too meta and/or conceptual to provide good performance.
Thank you in advance for any input on my question.
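For what it's worth, recent Doctrine versions have SQL filters (Doctrine\ORM\Query\Filter\SQLFilter, if I recall correctly) that append a constraint to every query once enabled, which is the usual way to make this transparent. The database-view idea itself can be sketched like this, using Python with SQLite as a stand-in (the schema and names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE sites (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE item_site (item_id INTEGER, site_id INTEGER);

INSERT INTO sites VALUES (1, 'site01'), (2, 'site02');
INSERT INTO items VALUES (10, 'shared item'), (11, 'site02 only');
INSERT INTO item_site VALUES (10, 1), (10, 2), (11, 2);

-- One view per site: queries against the view are automatically
-- restricted to that site's subset of items.
CREATE VIEW site01_items AS
SELECT i.* FROM items i
JOIN item_site s ON s.item_id = i.id
WHERE s.site_id = 1;
""")

rows = cur.execute("SELECT title FROM site01_items").fetchall()
print(rows)  # [('shared item',)]
```

The performance story is the usual one: a plain view like this is just the JOIN re-run on every query, so it buys transparency rather than speed.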

Dynamically named tables in entity framework queries

I am thinking about ways to manage a table naming issue for an application I'm planning.
Currently there is a table that contains raw materials (approximately 150 rows) and each row has about 20 columns mainly float-value numbers used for calculations. For reference, let's refer to this table as 'materials'.
As these number columns are periodically updated and some users may also wish to create custom versions, I will create a management utility that allows copying/editing of this materials table but, crucially, it will only allow "save as" under a new table name so that data is never destroyed.
Thus, using an automated routine, I'll create "materials_1", "materials_2", and so on. When a calculation is done in another part of the website, I will store the name of the materials table that was used for the calculation so that in future there can be audits done to find where a result came from.
Now... is it possible for me to do this in an Entity Framework / unit of work / repository way (like John Papa's CodeCamper example) without me having to continually edit the data context and add the newly created table names? I can't see how I can pass a variable containing the specific table name in to the repository interface, given that it will always be a 'materials' entity even though the physical table name varies.
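I'm not sure EF can remap an entity's table name at runtime without extra machinery (raw SQL is the usual escape hatch), but the pattern itself, a fixed entity shape plus a physical table name passed in as a value, can be sketched generically. Here is a hedged Python/SQLite stand-in (the table names and the `get_materials` helper are illustrative). The key point is that identifiers cannot be bound as query parameters, so the name must be validated before interpolation:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE materials_1 (name TEXT, density REAL);
INSERT INTO materials_1 VALUES ('steel', 7.85);
CREATE TABLE materials_2 (name TEXT, density REAL);
INSERT INTO materials_2 VALUES ('steel', 7.90);
""")

TABLE_PATTERN = re.compile(r"^materials_\d+$")

def get_materials(conn, table_name):
    # Identifiers cannot be bound as SQL parameters, so validate the
    # table name against a strict whitelist pattern before interpolating it.
    if not TABLE_PATTERN.match(table_name):
        raise ValueError(f"not a materials table: {table_name!r}")
    return conn.execute(f"SELECT name, density FROM {table_name}").fetchall()

print(get_materials(conn, "materials_2"))  # [('steel', 7.9)]
```

The entity shape (name, density, ...) stays fixed; only the validated physical name varies, which matches the audit requirement of recording which snapshot a calculation used.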

Microsoft Lightswitch and Entity Framework Code First Inconsistency

I've begun investigating Microsoft Lightswitch 2011 as a possible solution for developing quick "admin" apps for updating various databases - primarily those containing lookup tables or configuration data for internal corporate websites or applications.
I have a website that was developed using ASP.NET MVC. EF Code First was used in building out the data layer. Some of the relationships are many-to-many which EF CF handles by creating a join table with just two fields containing the primary keys of the two tables involved in the relationship. The primary key of the join table is combination of the two fields. For example, a document entity can have many categories and a category can consist of many documents. Three tables get created: Documents, Categories, and DocumentCategories. DocumentCategories only has two columns: DocumentID and CategoryID.
When this database is attached to Lightswitch as an external database, and a master-detail screen is created for the Documents table (and showing the related Categories), data can be deleted from and added to the related table (the join table) but not modified.
Investigation revealed that Lightswitch requires a join table in a many-to-many relationship to have its own primary key that is not a concatenation of the keys of the related tables. In other words the table must be of the format: DocumentCategoryID, DocumentID, CategoryID. If the join table is structured that way, it becomes possible to update the entries in the related table.
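The two join-table shapes can be sketched side by side (Python with SQLite as a stand-in; the names follow the example above). The point of the surrogate key is that the link row keeps a stable identity for an UPDATE to target, whereas with only the composite key, changing a link amounts to deleting and re-inserting the row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- EF Code First's default join table: composite primary key only.
CREATE TABLE DocumentCategories_CF (
    DocumentID INTEGER NOT NULL,
    CategoryID INTEGER NOT NULL,
    PRIMARY KEY (DocumentID, CategoryID)
);

-- The shape LightSwitch reportedly wants: its own surrogate key.
CREATE TABLE DocumentCategories_LS (
    DocumentCategoryID INTEGER PRIMARY KEY,
    DocumentID INTEGER NOT NULL,
    CategoryID INTEGER NOT NULL
);
""")

conn.execute(
    "INSERT INTO DocumentCategories_LS (DocumentID, CategoryID) VALUES (1, 10)")
# With a surrogate key the link row has a stable identity even when
# both foreign keys change -- that is what an UPDATE can target.
conn.execute(
    "UPDATE DocumentCategories_LS SET CategoryID = 20 "
    "WHERE DocumentCategoryID = 1")
rows = conn.execute("SELECT * FROM DocumentCategories_LS").fetchall()
print(rows)  # [(1, 1, 20)]
```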
I know that I can work around this by not updating records and simply deleting and re-adding them. That's not a big deal since 1) Lightswitch makes that easy, and 2) there are usually not wholesale changes in the related data. It goes against my sensibilities, though, of "what's right".
So at the risk of providing fodder for all the Microsoft tool haters, 1) is this just a case of Microsoft making a mistake and not being consistent or is there some other force at work here, and 2) is there a way to "fix" this without having to rework my ASP.NET MVC, EF CF app and changing the database structure?
1) LightSwitch works. Yes there are 'haters' but then there were haters when nail guns came out ( http://jhurst.blogspot.com/2011/01/nail-gun-or-hand-nail-your-roof-which.html ) The end user who is paying for our services will demand that we use tools that can reduce costs by 90%+
2) Using WCF RIA Services allows you to place a layer between whatever you have going on in your data layer and LightSwitch (see: http://lightswitchhelpwebsite.com/Blog/tabid/61/tagid/21/WCF-RIA-Service.aspx ) Only use the 'data source wizard' when it works for you. If it gives you any problems then a WCF RIA Service, that will only take you 5 minutes to code up, will resolve any situation because you are able to drop into procedural code to handle any transform you need.

SQL Server 2005 - Select From Multiple DB's & Compile Results as Single Query

Ok, the basic situation: Due to a few mixed up starts, a project ends up with not one, but three separate databases, each containing a portion of the overall project data. All three databases are the same, it's just that, say 10% of the project was run into the first, then a new DB was made due to a code update and 15% of the project was run into the new one, then another code change required another new database for the rest of the project. Again, the pertinent tables are all exactly the same across all three databases.
Now, assume I wanted to take all three of those databases - bearing in mind that they can't just be compiled into a single database due to primary key issues and so on - and run a single query that would look through all three of them, select a given set of data from each, then compile those three sets into one single result and return it to the reporting page I'm working on.
For reference, at its endpoint the data is output to an ASP.Net/VB.Net backed page, specifically a Gridview object. It doesn't need to be edited, fortunately, just displayed.
What would be the best way to approach this mess? I'm thinking that creating a temporary table would be my best bet, but honestly I'm stepping into a portion of SQL that I'm not familiar with here, and would appreciate any guidance somebody more experienced might have.
I'd say your best bet is to suck it up and combine the databases, even if it is a major pain to combine the primary keys. It may be a major pain now, but it is going to be 10x as painful over the life of the project.
You can do a union across multiple databases as Scott has pointed out, but you are in for a world of trouble as the application gets more complex. For example, even if you circumvent the technical limitations by having multiple tables/databases for the same entity, having duplicates in the PK for a logical entity is a world of trouble.
Implement the workaround solution if you must, but I guarantee you will hate yourself for it later.
Why not just use 3 part naming on the tables and union them all together?
select db1.dbo.Table1.Field1,
db1.dbo.Table1.Field2
from db1.dbo.Table1
UNION
select db2.dbo.Table1.Field1,
db2.dbo.Table1.Field2
from db2.dbo.Table1
UNION
select db3.dbo.Table1.Field1,
db3.dbo.Table1.Field2
from db3.dbo.Table1
-- where ...
-- order by ...
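Scott's pattern can be tried end to end using SQLite's ATTACH as a rough stand-in for SQL Server's three-part names (the database and table names here are illustrative, via Python's sqlite3):

```python
import sqlite3

# Simulate three separate database files with identical schemas.
conn = sqlite3.connect(":memory:")
for db in ("db1", "db2", "db3"):
    conn.execute(f"ATTACH ':memory:' AS {db}")
    conn.execute(f"CREATE TABLE {db}.Table1 (Field1 TEXT, Field2 INTEGER)")

conn.execute("INSERT INTO db1.Table1 VALUES ('a', 1)")
conn.execute("INSERT INTO db2.Table1 VALUES ('b', 2)")
conn.execute("INSERT INTO db3.Table1 VALUES ('c', 3)")

# Qualified names plus UNION ALL, exactly as in the T-SQL answer above.
rows = conn.execute("""
    SELECT Field1, Field2 FROM db1.Table1
    UNION ALL
    SELECT Field1, Field2 FROM db2.Table1
    UNION ALL
    SELECT Field1, Field2 FROM db3.Table1
    ORDER BY Field2
""").fetchall()
print(rows)  # [('a', 1), ('b', 2), ('c', 3)]
```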
You should create what is called a Partitioned View for each of your tables of interest. These views do a union of the underlying base tables and can add a synthetic column to make the rows unique:
CREATE VIEW vTableXDB
AS
SELECT 'DB1' as db_key, *
FROM DB1.dbo.table
UNION ALL
SELECT 'DB2' as db_key, *
FROM DB2.dbo.table
UNION ALL
SELECT 'DB3' as db_key, *
FROM DB3.dbo.table;
You create one such view for each table and then design your reports on these views, not on the base tables. You must add the db_key to your join conditions. The query optimizer has some understanding of partitioned views and might be able to create plans that do the right thing and avoid joins that span multiple databases, but that is not guaranteed. If things go haywire and the optimizer does not recognize the partitioning, resulting in very bad execution times, you may have to move the db_key into the tables themselves and add some artificial check constraints on the base tables so that the optimizer can understand the partitioning (see the article I linked for details).
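A runnable sketch of the partitioned-view idea, using Python with SQLite as a stand-in (one caveat: in SQLite a view that spans attached databases must be a TEMP view, whereas in SQL Server an ordinary view works; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for db in ("DB1", "DB2"):
    conn.execute(f"ATTACH ':memory:' AS {db}")
    conn.execute(f"CREATE TABLE {db}.tbl (id INTEGER, val TEXT)")

# Deliberately colliding primary-key values across the two databases.
conn.execute("INSERT INTO DB1.tbl VALUES (1, 'from db1')")
conn.execute("INSERT INTO DB2.tbl VALUES (1, 'from db2')")

# Partitioned-view analogue: the synthetic db_key column keeps rows
# unique even when the per-database keys collide.
conn.execute("""
    CREATE TEMP VIEW vTableXDB AS
    SELECT 'DB1' AS db_key, * FROM DB1.tbl
    UNION ALL
    SELECT 'DB2' AS db_key, * FROM DB2.tbl
""")

rows = conn.execute(
    "SELECT db_key, id, val FROM vTableXDB ORDER BY db_key").fetchall()
print(rows)  # [('DB1', 1, 'from db1'), ('DB2', 1, 'from db2')]
```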
You can actually join tables in different databases. If I remember right, the syntax changes from "tablename.columnName" to four-part naming, "Server.Database.Owner.tablename.columnName", when going through a linked server. You will need to run some stored procedures as an admin (e.g. sp_addlinkedserver) to allow this connectivity. It's also pretty slow, but the effort to get it working is low.
If you have time to do it right look at data warehouse concepts. That's basically a temp table that collects the data you need to report on.
Building on Scott Ivey's excellent example above,
Use table name aliasing to simplify your code
Use UNION ALL instead of UNION assuming that your data is unique between the three databases
Code:
select
d1t1.Field1,
d1t1.Field2
from db1.dbo.Table1 AS d1t1
UNION ALL
select
d2t1.Field1,
d2t1.Field2
from db2.dbo.Table1 AS d2t1
UNION ALL
select
d3t1.Field1,
d3t1.Field2
from db3.dbo.Table1 AS d3t1
-- where ...
-- order by ...