Error Importing Cost from Excel to MS Project

I am trying to import data from Excel to MS Project. I have recorded a map file in MS Project and everything works fine: I can import Tasks, Resources and Assignments, but I have a problem importing the Cost field in Assignments.
I have both Cost and Material type resources. Cost resource assignments import without problems: for them there is no rate × units calculation, you just enter the cost manually. For Material resource assignments, however, I leave the Cost field blank, as MS Project calculates it itself for those resources.
My Excel Assignment sheet simply looks like this:

Units   Cost   Task_ID   Resource_ID
        $100   1         1             => this resource's type is Cost and it imports without problem
2.54           1         2             => this resource's type is Material and it gives the error
When I start the import I get an error on column 4, which is the "Cost" field in my Assignment sheet. When I click "No", everything is imported correctly, but I have to click through the error every time. Is there a way to bypass it?
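If the import can be driven from a macro, MS Project's VBA Alerts method may suppress such confirmation dialogs for the duration of the run. This is a sketch only, not tested against this wizard: the file name and map name are placeholders, and the exact FileOpenEx arguments depend on your saved map.

' Run the recorded import with alerts suppressed (default responses are
' taken automatically, as if the prompt had been dismissed).
Sub ImportAssignmentsQuietly()
    Application.Alerts False
    FileOpenEx Name:="C:\Data\Assignments.xlsx", Map:="My Import Map"
    Application.Alerts True    ' restore normal prompting
End Sub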


Can I use PowerBI to access SharePoint files, and R to write those files to a local directory (without opening them)?

I have a couple of large .xlsb files in 2FA-protected SharePoint. They refresh periodically, and I'd like to automate the process of pulling them across to a local directory. I can do this in PowerBI already by polling the folder list, filtering to the folder/files that I want, importing them and using an R script to write that to an .rds (it doesn't need to be .rds - any compressed format would do). Here's the code:
let
    #"~ Query ~" = "",
    // Address for the SP folder
    SPAddress = "https://....sharepoint.com/sites/...",
    // Poll the content
    Source15 = SharePoint.Files(SPAddress, [ApiVersion = 15]),
    // ... some code to filter the content list down to the 2 .xlsb files I'm
    // interested in - they're listed as nested 'binary' items under column
    // 'Content' within table 'xlsbList'
    // R export within an arbitrary 'add column' instruction
    ExportRDS = Table.AddColumn(xlsbList, "Export", each R.Execute(
        "saveRDS(dataset, file = ""C:/Users/current.user/Desktop/XLSBs/" & [Label] & ".rds"")",
        [dataset = Excel.Workbook([Content])[Data]{0}]))
However, the files are so large that my login times out before the refresh can complete. I've tried using R's file.copy command instead of saveRDS, to pick up the files as binaries (so PowerBI never has to import them):
R.Execute("file.copy(dataset, ""C:/Users/current.user/Desktop/XLSBs/""),[dataset=[Content]])
with dataset=[Content] instead of dataset=Excel.Workbook([Content])[Data]{0} (the latter gives a different error, and would in any event run into the same timeout), but it tells me The Parameter 'dataset' isn't a Table. Is there a way to reference what PowerBI sees as binary objects from within nested R (or Python) code, so that I can copy them to a local directory without PowerBI importing them as data?
Unfortunately I don't have permissions to set the SharePoint site up for direct access from R/Python, or I'd leave PowerBI out entirely.
Thanks in advance for your help
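One possible workaround, sketched below and untested: R.Execute appears to accept only table parameters (hence the "isn't a Table" error), so the binary could be wrapped as base64 text in a one-cell table and decoded back to bytes inside R. The column name b64, the base64enc package, and the output path are assumptions, not part of the original query.

// Hand each binary to R as base64 text; #(lf) embeds newlines in the R script
ExportCopy = Table.AddColumn(xlsbList, "Export", each R.Execute(
    "library(base64enc)#(lf)" &
    "raw <- base64decode(as.character(dataset[[1]][1]))#(lf)" &
    "writeBin(raw, ""C:/Users/current.user/Desktop/XLSBs/" & [Label] & ".xlsb"")",
    // a single-cell table satisfies R.Execute's table-parameter requirement
    [dataset = #table({"b64"}, {{Binary.ToText([Content], BinaryEncoding.Base64)}})]))

Note that the base64 round-trip still materializes the file contents in memory, so very large files may remain slow.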

Dimension lookup hangs AX client?

I have an import interface (not coded by me) that imports XML data and creates LedgerJournalTable (1) and LedgerJournalTrans (1..n) records.
When handling LJT dimensions, the code first checks that the dimension exists in AX, then inserts the data into the dimension[x] field. However, if the dimension doesn't exist, a warning is shown to the user after the import run ends, but the data is still inserted as is.
When the user then goes to the LJT line after the import is complete, the erroneous value is shown in the dimension field. When the lookup/drop-down for this dimension is clicked, the lookup does not open and the AX client hangs. Ctrl+Break will recover it, but the lookup never opens. You can delete the value and save, and the problem persists. You can manually enter an existing value and save, and the problem still persists.
The problem extends to the table browser as well.
Any idea why this is happening and how it can be fixed, other than not saving the erroneous value in the first place? (I have no idea why it was implemented this way.)
Thanks in advance.
Let me know if I'm reading this correctly.
User runs some process to import LJ table/trans records from XML.
If a bad dimension is inside the XML, it shoves the data into the LJ trans dimension[x] field even though it's invalid, and presents a warning to the user.
User views the journal and sees the bad data and attempts to use the lookup to correct it, but the lookup hangs/crashes.
Seems to me the issue may be that you've been shoving a bunch of bad data into AX and the lookup is trying to use table/EDT relations that are invalid.
If I'm right, you need to go to SQL directly, query the ledger trans table, and look for any bad dimension data to correct or remove.
I suspect existing bad data is causing the lookup to fail, and not merely whatever bad data you imported and are looking at.
Perhaps what caused the problem is: a user imported bad data, received a warning, ignored the warning, clicked "post" as-is (with bad data), and now it's in AX? And now, when you do a second import and try to use the lookup, it crashes on that bad-data relation.
Edited: So, while there was corruption in the DB, the actual culprit was found: the standard AX code that creates temp data for the dimension lookup. There was modified code in Dimensions.insert() that wrote an XML file every time a dimension was updated or inserted. In this case that took so long that it hung the client. I put the code inside an if clause like so:
if (!this.isTemp())
{
    // offending code
}
Problem solved.
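For context, a sketch of where that guard sits; Dimensions.insert() is the table method from the edit above, and exportToXml() is a hypothetical stand-in for the mod's actual export code:

// Dimensions table insert() (sketch). The lookup form builds temporary
// copies of the table, so the expensive export is skipped for those.
void insert()
{
    super();

    if (!this.isTemp())
    {
        this.exportToXml(); // hypothetical hook for the mod's XML export
    }
}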

Import custom Python modules into a DAG file without mixing DAG environs and sys.path?

Is there any way to import custom Python modules into a DAG file without mixing DAG environs and sys.path? I can't use something like

from os import environ
import sys

environ["PROJECT_HOME"] = "/path/to/some/project/files"
# import certain project files
sys.path.append(environ["PROJECT_HOME"])
import mymodule
because sys.path is shared among all DAGs, and this causes problems (e.g. values leaking between DAG definitions) if you want to import modules from different places that have the same name for different DAG definitions (and with many DAGs this is hard to keep track of).
The docs for using packaged DAGs (which seemed like a solution) do not seem to avoid the problem:
the zip file will be inserted at the beginning of module search list (sys.path) and as such it will be available to any other code that resides within the same interpreter.
Anyone with more airflow knowledge know how to handle this kind of situation?
* Differs from the linked-to question in that it is less specific about implementation.
Ended up doing something like this:
import imp
import os

module_path = "%s/path/to/specific/module/%s.py" % (PROJECT_HOME, file_name)
if os.path.isfile(module_path):
    f = imp.load_source("custom_module", module_path)
    df = f.myfunc(sparkSession, df)

This gets the needed module file explicitly from known paths, based on the SO post here.
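On Python 3, where the imp module is deprecated, the same approach can be written with importlib.util; a sketch using the same hypothetical names as above:

import importlib.util
import os

module_path = "%s/path/to/specific/module/%s.py" % (PROJECT_HOME, file_name)
if os.path.isfile(module_path):
    # build a module spec directly from the file path, bypassing sys.path
    spec = importlib.util.spec_from_file_location("custom_module", module_path)
    f = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(f)
    df = f.myfunc(sparkSession, df)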

DSpace import failed

I just installed DSpace 5.4 and I am trying to move a collection from Greenstone to DSpace.
I successfully exported the collection from Greenstone, but when I try to load it into DSpace via batch import (zip) I get the following error:
Notice
Import failed
/dspace/imports/New Folder.zip/New Folder/exported_DSpace/dublin_core.xml (No such file or directory)
Can anyone tell me what I have missed?
We do not have a great deal of information to go on from your question, such as how you did the export from Greenstone. From what I can tell, it seems possible that you did not export the data in the correct format for DSpace. The path in your error also suggests DSpace is looking for dublin_core.xml at the top of the extracted folder rather than inside per-item subdirectories.
The structure should be this Simple Archive Format:

archive_directory/
    item_000/
        dublin_core.xml       -- qualified Dublin Core metadata for metadata fields belonging to the dc schema
        metadata_[prefix].xml -- metadata in another schema; the prefix is the name of the schema as registered with the metadata registry
        contents              -- text file containing one line per filename
        file_1.doc            -- files to be added as bitstreams to the item
        file_2.pdf
    item_001/
        dublin_core.xml
        contents
        file_1.png
    ...
To export a collection from Greenstone so that it is suitable for DSpace, it seems you can follow these steps. Here is some information that might help.
It seems possible that you have exported the data from Greenstone, but not in the correct format for DSpace.
For some more information on how the structure should look when importing data into DSpace, you can take a look here.
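For reference, once the archive is laid out as above, a typical command-line import looks something like this; the eperson, collection handle and paths are placeholders, so check dspace import --help on your installation:

# run from [dspace]/bin; --add imports new items, --mapfile records what was created
./dspace import --add --eperson=admin@example.com --collection=123456789/3 \
    --source=/dspace/imports/archive_directory --mapfile=/dspace/imports/mapfile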

Dealing with the error: A different document with value xxxx already exists in the index

What would cause multiple documents in my catalog to have the same "unique id"? Effectively an error like this:
ERROR Products.ZCatalog A different document with value
'xxxx341a9f967070ff2b57922xxxx' already exists in the index.'
And how do I go about fixing it?
I had the same error today.
In short: the UID index in portal_catalog (ZCatalog UUIDIndex) complains that you are trying to index multiple objects with the same UID.
In my case it was caused by a zexp import of a folder that contained images that were already available in another folder.
To reproduce:
1. copy the production buildout, database and blobstorage to a staging server
2. make some changes to staging.com/folder1
3. move staging.com/galleries/gallery1 to staging.com/folder1
4. export staging.com/folder1 to folder1.zexp
5. remove production.com/folder1
6. use ZMI import/export on production.com/manage to import folder1.zexp

You'll get these errors for the gallery1 folder and all of its content items:
2015-06-15T17:58:22 ERROR Products.ZCatalog A different document with value '618a9ee3544a4418a1176ac0434cc63b' already exists in the index.'
diagnosis
production.com/resolveuid/618a9ee3544a4418a1176ac0434cc63b
will take you to production.com/galleries/gallery1/image1,
whereas staging.com/resolveuid/618a9ee3544a4418a1176ac0434cc63b
will take you to staging.com/folder1/gallery1/image1.
production.com/folder1/gallery1/image1 did get cataloged too, but because it has the same UID as production.com/galleries/gallery1/image1, the results of resolveuid, catalog queries, internal links and the like may be random.
how to repair
In my case I think it's probably best to either
delete production.com/galleries/gallery1 and run a clear and rebuild on the portal catalog,
or replace production.com/folder1/gallery1 with production.com/galleries/gallery1 (delete, cut, paste).
If the objects with the same UID are not actually the same (as they were in my case), you might be able to give them new and unique UIDs using object._setUID('new-uid') and rebuild the catalog afterwards.
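A rough sketch of that last option, assuming Archetypes content and a bin/instance debug (or similar) session; portal and the folder path are placeholders, and a catalog clear-and-rebuild should follow:

import uuid

gallery = portal['folder1']['gallery1']           # the imported duplicate
for obj in [gallery] + list(gallery.objectValues()):
    obj._setUID(uuid.uuid4().hex)                 # assign a fresh 32-char UID
    obj.reindexObject()                           # recatalog under the new UID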
