In the default perspective of my cube I renamed a dimension, hid another, and changed the default member. For example, something like this:
default [Geography].[Geo].[City].[Madrid]
-[Time]
rename [Geography].[Geo] as [Location]
If I want to see the changes in the Excel Pivot Table I need to redeploy and reload the schema, but as I understand it, loading the schema also reloads the data from the underlying tables. Is it possible to avoid reloading the data and still see these changes in the cube?
Perspectives cannot be defined dynamically, so you'll have to reload the schema.
Nevertheless, you could set up an offline snapshot to be able to reload your schema faster.
Hope that helps.
I have an SQLite database of 5GB which gets updated a few times a day and is used to refresh a Power BI dashboard. While it was below 1GB I could refresh the dashboard in under a minute, but now it takes around 20 minutes.
Should I create views so that merges, joins, etc. are done in the view instead of loading the tables themselves and using Power Query to perform the data manipulation?
Should I use incremental refresh? Is it possible in SQLite?
I would implement views on the database side in this case. In my eyes this is the benefit of having control of the DB yourself: pushing these data transforms down to the DB instead of tinkering in Power Query is a major win.
Once the views are set up, work on incremental refresh if the SQLite3 connector supports it; since it doesn't use the "regular" SQL database connector, it may not, according to the official documentation.
5GB is tiny and your refreshes should not take this long. How complicated are your queries and what kind of transformations are you doing?
I would first make sure you're not breaking query folding. Seemingly simple steps do not always support folding, but by rearranging them you can keep folding intact, which can have a drastic effect on refresh times. Check at what stage folding breaks and see if you can move things around.
Next, I'd look at moving your transformations upstream to a view (Roche's maxim).
Incremental refresh will also work, so it is really up to you to pick from the available options.
I have a use case wherein I save an entity for the first time and, a second after saving it, I fetch it, update it a bit and save it in a batch along with two other entities (different 'kinds'). In a few cases (10 out of 50K), the update to Datastore is ignored.
I mean, it's there in the Objectify cache, but the change didn't happen in Datastore.
The way I can justify the above is that after the save, I fetch the entity again a second later and I'm able to see it.
PS: I also use .now() while saving. This shouldn't happen when now() is used, right?
Sounds like you are seeing eventual consistency in Datastore. (Note that save(...).now() only makes the write itself synchronous; it does not make later global queries strongly consistent.) There is quite a bit of Google documentation available, but this looks to be the most comprehensive:
https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore/
There are a lot of ways to deal with eventual consistency: avoid it (use get-by-key operations), change the structure of your data (use @Parent relationships so you can run ancestor queries), or mask it with UI behavior (say, add the new item to the list in UI code instead of just refreshing the whole list).
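To make the first two approaches concrete, here is a minimal Objectify sketch (the Customer and Order entities are hypothetical, not taken from the question): a load by key is always strongly consistent, and a @Parent relationship lets you run an ancestor query, which is strongly consistent too, whereas a plain global query may lag behind a very recent write.

import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Index;
import com.googlecode.objectify.annotation.Parent;

import java.util.List;

import static com.googlecode.objectify.ObjectifyService.ofy;

// Hypothetical entities; both must be registered with
// ObjectifyService.register(...) at application startup.
@Entity
class Customer {
    @Id Long id;
}

@Entity
class Order {
    @Id Long id;
    @Parent Key<Customer> customer;   // ancestor relationship
    @Index String status;
}

class ConsistencyExamples {

    // Strongly consistent: a get-by-key never goes through the
    // eventually consistent global indexes.
    Order loadByKey(Key<Order> key) {
        return ofy().load().key(key).now();
    }

    // Strongly consistent: an ancestor query is executed against the
    // entity group of the given parent.
    List<Order> ordersOf(Key<Customer> customer) {
        return ofy().load().type(Order.class).ancestor(customer).list();
    }

    // Eventually consistent: a global query may not yet see a write
    // that completed a moment ago, which matches the behavior above.
    List<Order> openOrders() {
        return ofy().load().type(Order.class).filter("status", "OPEN").list();
    }
}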
I am a beginner in AX and I am trying to set up access rights for some users; on a specific operation they get the error that they don't have access to the table SalesCreateReleaseOrderLineTmp. I have manually searched for this table in every category, but without success. I found the full description of the table on a website -> Order Lines - SalesCreateReleaseOrderLineTmp - ID: 995. I've searched for the ID as well, but again no result. With admin rights everything is OK, but that's obviously not a solution.
Is there a fixed location for this table, and can anyone tell me where it is? :) Or is there any way to search for this table (by ID or name)?
I guess by "I have manually searched for this table in every category, but without success" you mean you tried to find the table in the form for maintaining the user group permissions?
If so, this is because temporary tables are hidden from that tree view: while building the tree, SysSecurity.expandSecurityKey calls the class method SysDictTable.allowSecuritySetup, and that method - among other things - checks whether the table is temporary.
So essentially you have 3 options:
Give your permission group the desired access on the security key so that the group 'inherits' access to the table through it - the downside, of course, is that this may be too permissive, but the upside is better maintainability :)
Remove the security key from the temporary table, as putting one there is in general IMHO a wrong decision anyway. The application shouldn't restrict access to temporary tables (which are intrinsically scoped to the user session anyway) but rather enforce access checks in the code that fills the table, or in higher-level processes.
Customize the code which builds the security tree view so that it includes temporary tables.
Try to apply the first of the options above that works for you: the first one does not need any application modification, the second one is only a simple property change (of a property that is in my opinion badly configured anyway), and the last option should be the last resort.
I'm using the promoted activity (http://msdn.microsoft.com/en-us/library/ff642473.aspx) to store information needed to track a workflow.
During execution the values are correctly stored and I can query them through the view, but if the workflow is persisted, the view becomes empty and I can't find the information anymore.
Can someone explain to me how to keep those values until the natural completion of the workflow?
Thanks
Update - a few more details:
I'm using IIS to store workflows
I promote values at the beginning of the workflow and I would rather not do it again at every persistence point (that was the first workaround I thought of)
Each time a workflow is persisted the complete state is persisted. There is no incremental addition. So by not adding the promoted properties on subsequent persists you are effectively removing them from the instance store.
I finally found the problem.
In the web.config I was adding my extension after the element, but order matters.
My configuration looks like this now:
..
Everything works fine now and the promoted values are always available.
I have a very strange problem with a website where several objects are cached.
We have a lot of DataTables, strings, booleans and other stuff that are cached for quick fetching in later requests.
Periodically we get an error where it looks like some of the cache items have been mixed up.
An example of how this shows itself is when a piece of code fetches a DataTable from the cache and then tries to access a certain column of that DataTable.
We then see a yellow screen of death with the exception "Cannot find column [ColumnName]", where "ColumnName" of course is some column name that was supposed to be in the DataTable.
When I inspect the cache item with a little home made tool, I see that a completely different DataTable is in the cache item. It is almost like some of the cache items have been mixed up.
Does anybody have an idea how this happens?
We are not able to reproduce the error. It occurs at apparently random intervals.
What's the issue
When you add items to the cache, you need to lock the code that creates them and adds them to the cache.
First, let's clarify that the cache keeps a reference to your data; it does not clone it, nor does it know anything about what that data is. Reference: http://msdn.microsoft.com/en-us/library/6hbbsfk6(VS.71).aspx
Second, by default session state locks the page, and that is what makes most requests safe: requests from the same user are serialized until a page has fully loaded and been sent.
When does it appear
So the locking issue may appear when you populate the cache from a thread, from a handler, or from a page that has session state turned off.
How to lock
If you use only one pool, then a simple lock(object) { } can work; if you use many pools, then you need to use a Mutex for the lock.
You need to lock the full process of building your data if you change it later while it is still in the cache, or only the cache access if you make a clone of it.
For example, if you read some data that you got from the cache and then edit it, anyone else reading the same cache entry at that moment will get corrupted data, because the cache gives you a reference to it.
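As an illustration of that pattern, here is a language-agnostic sketch written in Java (the original ASP.NET code is not shown; ReportCache and loadRowsFromDb are made-up names): build and insert the item under a lock, and hand out copies instead of the cached reference when callers may modify the data.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class ReportCache {

    private final ConcurrentHashMap<String, List<String>> cache = new ConcurrentHashMap<>();
    private final Object buildLock = new Object();

    // Double-checked locking: only one thread builds and inserts the item,
    // so a half-built object is never visible to other requests.
    public List<String> get(String key) {
        List<String> rows = cache.get(key);
        if (rows == null) {
            synchronized (buildLock) {
                rows = cache.get(key);          // re-check inside the lock
                if (rows == null) {
                    rows = loadRowsFromDb(key); // hypothetical expensive load
                    cache.put(key, rows);
                }
            }
        }
        // Hand out a copy: the cache stores a reference, so mutating the
        // returned list directly would corrupt it for every other caller.
        return new ArrayList<>(rows);
    }

    private List<String> loadRowsFromDb(String key) {
        // Placeholder for the real data access code.
        List<String> rows = new ArrayList<>();
        rows.add("row for " + key);
        return rows;
    }
}

The same idea applies to the ASP.NET cache: an in-process lock only protects a single worker process, which is why a Mutex is needed when several pools are involved.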
Hope that all this helps.