Importing Application data in PeopleSoft 9.2

I want to import recruitment data from a web application into our records, but the Component Interface (CI) of the related component is not working. It gives this error:
First operand of . is NULL, so cannot access member GetParameter. (180,236) HRS_APPLICANT_TRACKING.RelatedActions.AaaS.ProxyGate.OnExecute Name:ProxyGate PCPC:435 Statement:8
Called from:HRS_APP_PROFILE.GBL.PreBuild Statement:5
So I want to know whether there are any other options for importing data into records without using a CI.
Kindly help.
Regards,
Aiman Farid

Related

Azure ADX data ingestion from the Python SDK is failing with "BadRequest_MappingReferenceWasNotFound" even when the mapping is available

I am trying to ingest data into an Azure ADX table using the Python SDK's QueuedIngestClient.ingest_from_dataframe(df, INGESTION_PROPERTIES), but the data is not inserted.
I ran ".show ingestion failures" and it reports: "Mapping reference 'SL_Depth_mapping' of type 'mappingReference' in database 'my-db' could not be found." The mapping exists in the database, and the Service Principal has been given the Ingestor and AllDatabasesAdmin roles. I have validated the input JSON and the mapping, and they work when I ingest from a JSON file using the same mapping. Below is a snippet of the code I am using:
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.ingest import QueuedIngestClient, IngestionProperties, DataFormat

kcsb_ingest = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    self.KUSTO_INGEST_URI, self.CLIENT_ID, self.CLIENT_SECRET, self.AUTHORITY_ID)
KUSTO_INGESTION_CLIENT = QueuedIngestClient(kcsb_ingest)

INGESTION_PROPERTIES = IngestionProperties(
    database=self.KUSTO_DATABASE,
    table=DESTINATION_TABLE,
    data_format=DataFormat.JSON,
    ingestion_mapping_reference=DESTINATION_TABLE_COLUMN_MAPPING,
    additional_properties=additional_properties,
)

KUSTO_INGESTION_CLIENT.ingest_from_dataframe(df, INGESTION_PROPERTIES)
I figured out the issue after some analysis. The mapping was indeed wrong, but the error message made me think ADX could not find the mapping I provided, since it said the mapping reference was not found.
I was dealing with JSON data, so I had created a JSON-based mapping. However, since I was using QueuedIngestClient.ingest_from_dataframe, I was providing the data as a pandas DataFrame, and QueuedIngestClient treats that data as CSV. So I created a CSV mapping and it worked.
P.S.: QueuedIngestClient converts the DataFrame to CSV, uploads it to blob storage, and ingestion happens from there!
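In other words, a DataFrame ingestion needs a CSV mapping reference. A minimal sketch of the working setup (the table name, mapping name, and columns below are illustrative, not from the original post; kcsb_engine and kcsb_ingest are assumed to be connection string builders for the engine and ingest endpoints, and df an existing DataFrame):

from azure.kusto.data import KustoClient
from azure.kusto.ingest import QueuedIngestClient, IngestionProperties, DataFormat

# create a CSV mapping on the engine endpoint (columns are hypothetical)
create_mapping_cmd = (
    ".create-or-alter table SLDepth ingestion csv mapping 'SL_Depth_csv_mapping' "
    "'[{\"Name\": \"Depth\", \"DataType\": \"real\", \"Ordinal\": \"0\"}, "
    "{\"Name\": \"Timestamp\", \"DataType\": \"datetime\", \"Ordinal\": \"1\"}]'"
)
KustoClient(kcsb_engine).execute_mgmt("my-db", create_mapping_cmd)

ingestion_properties = IngestionProperties(
    database="my-db",
    table="SLDepth",
    data_format=DataFormat.CSV,  # DataFrames are serialized to CSV before upload
    ingestion_mapping_reference="SL_Depth_csv_mapping",
)
QueuedIngestClient(kcsb_ingest).ingest_from_dataframe(df, ingestion_properties)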

How to overwrite sqlite database for existing users in a Xamarin Forms app?

We have a Xamarin Forms app.
We have a few existing users who have been using the app since last year, with a database file named MyAppName.db.
Our new app strategy requires us to ship the SQLite .db file along with the app, so that it carries persistent information and does not require an internet connection at install time; this means we want to overwrite the db file for existing users.
After we published this change, with the database file now named MyNewDbFile.db, our users complained that they do not see any data, though the app does not crash.
We capture error reports, and we can see a common error in our tool stating "row value misused"; a quick search indicates the values are not present, so the "SELECT * FROM TABLE WHERE COLUMNNAME" query does not work, causing the exception.
We can confirm we are using
Xamarin.Forms version 4.5.x
sqlite-net-sqlcipher version 1.6.292
We do not have any complex logic and cannot see a specific cause, as the problem does not occur when we test our apps from the Alpha channel or TestFlight.
We investigated the heart of the problem and can confirm that the troublemaker is the device culture information.
We had been testing with English as the device language; the moment the app is used with any language other than English, the query throws an exception.
We use the Xamarin.Essentials library to get the device location (which is optional).
The latitude and longitude returned are doubles, but their string representation is culture-specific.
So if the device language is Spanish, latitude and longitude are formatted with a , (comma) rather than a . (dot) as the decimal separator: a latitude that appears as 77.1234 under English settings becomes 77,1234 under such region settings.
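To make the separator behavior concrete, here is the same effect reproduced outside Xamarin (a Python sketch purely for illustration; it assumes a Spanish locale such as es_ES.UTF-8 is installed on the machine):

import locale

value = 77.1234
print(str(value))  # '77.1234' - plain str() always uses a dot
try:
    # the locale name is OS-specific and may not be installed everywhere
    locale.setlocale(locale.LC_NUMERIC, "es_ES.UTF-8")
    print(locale.str(value))  # '77,1234' - comma as decimal separator
except locale.Error:
    print("Spanish locale not available on this system")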
The query needs the coordinates as strings, and it fails to execute when the decimal separator is not a . (dot):
Query:
if (deviceLocation != null)
    // string interpolation formats the doubles using the current device culture
    queryBuilder.Append($" ORDER BY ((Latitude - {deviceLocation.Latitude}) * (Latitude - {deviceLocation.Latitude})) + ((Longitude - {deviceLocation.Longitude}) * (Longitude - {deviceLocation.Longitude})) ASC");
The deviceLocation object is of type Xamarin.Essentials.Location, which exposes Latitude and Longitude as doubles; but when they are interpolated into a string, or when you explicitly call deviceLocation.Latitude.ToString(), the result is formatted according to the device CultureInfo.
We can confirm the problem is this format mismatch: the query throws an exception, which to the user looks as if there were no data. Formatting the coordinates with the invariant culture, e.g. deviceLocation.Latitude.ToString(CultureInfo.InvariantCulture) or wrapping the interpolated string in FormattableString.Invariant(...), keeps the dot separator regardless of device language.

Airflow Custom Metrics and/or Result Object with custom fields

While running pySpark SQL pipelines via Airflow, I am interested in extracting some business stats, such as:
source read count
target write count
sizes of DFs during processing
error records count
One idea is to push them directly to the metrics, so they get automatically consumed by monitoring tools like Prometheus. Another idea is to obtain these values via some DAG result object, but I wasn't able to find anything about that in the docs.
Please post at least some pseudo code if you have a solution.
I would look to reuse Airflow's statistics and monitoring support in the airflow.stats.Stats class. Maybe something like this:
import logging

from airflow.stats import Stats

PYSPARK_LOG_PREFIX = "airflow_pyspark"

def your_python_operator(**context):
    [...]
    try:
        Stats.incr(f"{PYSPARK_LOG_PREFIX}_read_count", src_read_count)
        Stats.incr(f"{PYSPARK_LOG_PREFIX}_write_count", tgt_write_count)
        # and so on for the other counters
    except Exception:
        # never let metrics reporting break the task itself
        logging.exception("Caught exception during statistics logging")
    [...]
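For the "DAG result object" part of the question, the closest built-in mechanism is XCom: whatever a task pushes (or returns from its callable) is stored per task instance and can be pulled downstream or inspected in the UI. A minimal sketch, with the task id, key, and counts invented for illustration:

import logging

def compute_stats(**context):
    # hypothetical counts produced by your pySpark job
    stats = {"src_read_count": 1000, "tgt_write_count": 990}
    context["ti"].xcom_push(key="pipeline_stats", value=stats)

def report_stats(**context):
    # "compute_stats" is the illustrative task id of the upstream task
    stats = context["ti"].xcom_pull(task_ids="compute_stats", key="pipeline_stats")
    logging.info("pipeline stats: %s", stats)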

Java API to query CommonCrawl to populate Digital Object Identifier (DOI) Database

I am attempting to create a database of Digital Object Identifiers (DOIs) found on the internet.
By manually searching the CommonCrawl Index Server I have obtained some promising results.
However, I wish to develop a programmatic solution.
This may mean my process only needs to read the index files, not the underlying WARC data files.
The manual steps I wish to automate are:
1) for each currently available CommonCrawl index collection,
2) search "Search a url in this collection: (Wildcards -- Prefix: http://example.com/* Domain: *.example.com)", e.g. link.springer.com/*;
3) this returns almost 6 MB of JSON data containing roughly 22K unique DOIs.
How can I browse all available CommonCrawl indexes instead of searching for specific URLs?
From the CommonCrawl API documentation I cannot see how to browse all the indexes in order to extract the DOIs for all domains.
UPDATE
I found this example Java code https://github.com/Smerity/cc-warc-examples/blob/master/src/org/commoncrawl/examples/S3ReaderTest.java
that shows how to access a Common Crawl dataset.
However, when I run it I receive this exception:
"main" org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 404, ResponseStatus: Not Found, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>common-crawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00160-ip-10-164-35-72.ec2.internal.warc.gz</Key><RequestId>1FEFC14E80D871DE</RequestId><HostId>yfmhUAwkdNeGpYPWZHakSyb5rdtrlSMjuT5tVW/Pfu440jvufLuuTBPC25vIPDr4Cd5x4ruSCHQ=</HostId></Error>
In fact, every file I try to read results in the same error. Why is that?
What are the correct Common Crawl URIs for the datasets?
The dataset location changed more than a year ago; see the announcement. However, many examples and libraries still contain the old pointers. You can access the index files for all crawls back to 2013 at s3://commoncrawl/cc-index/collections/CC-MAIN-YYYY-WW/indexes/cdx-00xxx.gz - replace YYYY-WW with the year and week of the crawl, and expand xxx to 000-299 to get all 300 index parts. New crawl data is announced on the Common Crawl group; see also the documentation on how to access the data.
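For example, enumerating the 300 cdx parts of a single crawl and fetching the first one could look like the following (a Python sketch for illustration, even though the question targets Java; it assumes boto3 is installed and uses anonymous access to the public commoncrawl bucket):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# anonymous client for the public commoncrawl bucket
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

crawl = "CC-MAIN-2013-48"  # one collection; repeat for every crawl you need
keys = [f"cc-index/collections/{crawl}/indexes/cdx-{part:05d}.gz" for part in range(300)]

# each gzipped part contains plain-text cdx lines: "urlkey timestamp {json...}"
s3.download_file("commoncrawl", keys[0], "cdx-00000.gz")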
To get the example code to work, replace lines 24 and 25 with:
String fn = "crawl-data/CC-MAIN-2013-48/segments/1386163035819/warc/CC-MAIN-20131204131715-00000-ip-10-33-133-15.ec2.internal.warc.gz";
S3Object f = s3s.getObject("commoncrawl", fn, null, null, null, null, null, null);
Also note that the Common Crawl group has an updated example.

How to create an image database for HP IDOL OnDemand?

For image recognition, we are required to send 'enum' as a parameter.
The description says it is a parameter for the image database.
How do we create such a database?
Thanks
At the time of this response (I work for HPE Haven OnDemand), it is not possible to create your own image database with this API. The API is currently in Preview release and uses the provided default corporatelogos public dataset. For a list of the corporate logos used in the training set for the API, see Corporate Logo Training Set.
