Standard SAP S/4HANA API to query ArchiveLink tables and get metadata - sap-basis

The setup is:
on-prem S/4HANA
third-party document management system (DMS) using ArchiveLink to store all types of documents
By design, ArchiveLink is used only to store the files in the external DMS and to store the links between files and business objects in the tables TOA01, TOA02, and TOA03.
However, in my DMS, I would like to have some metadata such as author, title, document type, etc. for each stored file.
To get that information, the DMS needs to query S/4HANA with the GUID of a stored document as an input parameter.
My question: is it possible to implement that integration using only standard APIs available on api.sap.com?

Related

Import individual files automatically to Firestore with POST: https://firestore.googleapis.com/v1beta1/{database}/documents.commit?

As the title suggests, can I POST a single JSON file directly to my Firestore database with https://firestore.googleapis.com/v1beta1/{database}/documents.commit, and will it be processed and added to the collections, etc.? Or should I go with POST projects.databases.documents.createDocument? I was reading this documentation.
I want to put JSON files from different sources into my Firestore database to build up my collections.
And where should I put the filename of the JSON file that I want to upload?
Thanks!!
You can see the usage of both calls here [1]:
documents.commit = commits a transaction, while optionally updating documents
documents.createDocument = creates a new document
To use the JSON in the API call, you need to send a POST request; check this question [2].
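In case it helps, here is a minimal, hedged sketch of the createDocument call as a plain POST from Java. The project ID, collection name, field values and token are placeholders (assumptions), and note that the request body is a Firestore Document with typed fields, so a raw JSON file has to be mapped into that structure first:

// Hedged sketch: documents.createDocument as a plain HTTP POST.
// Project ID, collection name, token and field values are placeholders.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FirestoreCreateDocument {
    public static void main(String[] args) throws Exception {
        String project = "my-project";                 // placeholder
        String collection = "imports";                 // placeholder
        String accessToken = "<oauth2-access-token>";  // placeholder

        URL url = new URL("https://firestore.googleapis.com/v1beta1/projects/"
                + project + "/databases/(default)/documents/" + collection);

        // The body is a Firestore Document: values are typed, so a plain JSON
        // file has to be mapped into this "fields" structure first.
        String body = "{\"fields\": {\"source\": {\"stringValue\": \"upload.json\"}}}";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}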
Also, regarding your last comment, you can start collections and add documents using the Firestore UI, but you can also do that using client libraries in different languages (Python, Java, Go, etc.). Here is a list of "How to"s regarding Firestore [3].
If you think some features are missing, you can always file a Feature Request following this link [4] (as Firestore is not listed there yet, I would choose Datastore), but keep in mind that Firestore is still in Beta.

GCP encryption through Beam / Dataflow APIs for BigQuery and Cloud SQL

Context: We are trying to load some CSV-format data into GCP BigQuery using GCP Dataflow (Apache Beam). As part of this, we are creating the BQ tables for the first time (for each table) through the BigQueryIO API. One of the customer requirements is that the data on GCP needs to be encrypted using customer-supplied/managed encryption keys.
Problem statement: We are not able to find any way to specify the custom encryption keys through the APIs while creating tables. The GCP documentation details how to specify custom encryption keys through the GCP BQ console, but we could not find anything about specifying them through APIs from within Dataflow code.
Code Snippet:
// Build the destination table spec: "<project>:<dataset>.<table>"
String tableSpec = new StringBuilder().append(PipelineConstants.PROJECT_ID).append(":")
        .append(dataValue.getKey().target_dataset).append(".")
        .append(dataValue.getKey().target_table_name).toString();

ValueProvider<String> valueProvider = StaticValueProvider.of("gs://bucket/folder/");

// Audit branch: count the rows for this source and record the result
dataValue.getValue().apply(Count.globally())
        .apply(ParDo.of(new RowCount(dataValue.getKey())))
        .apply(ParDo.of(new SourceAudit(runId)));

// Main branch: pre-process, convert fixed-width to delimited, build TableRows,
// and write them to BigQuery, creating the table if it does not exist yet
dataValue.getValue().apply(ParDo.of(new PreProcessing(dataValue.getKey())))
        .apply(ParDo.of(new FixedToDelimited(dataValue.getKey())))
        .apply(ParDo.of(new CreateTableRow(dataValue.getKey(), runId, timeStamp)))
        .apply(BigQueryIO.writeTableRows().to(tableSpec)
                .withSchema(CreateTableRow.getSchema(dataValue.getKey()))
                .withCustomGcsTempLocation(valueProvider)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
Query: Could anybody let us know:
Is it possible to provide an encryption key through the Beam API?
If it is not possible with the current version, what could be a possible workaround?
Kindly let us know if additional information is required.
Customer-supplied encryption keys are a new feature; not all libraries have been updated to support them yet.
If you know the table name in advance, you can use the UI/CLI or the API to create the table, then run your normal flow to load data into that table. That might be a workaround for you.
https://cloud.google.com/bigquery/docs/customer-managed-encryption#create_table
API to create table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert
You need to set this section on the table object:
"encryptionConfiguration": {
"kmsKeyName": string
}
More details on table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource
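To make the workaround concrete, here is a rough sketch of creating the table up front with the google-cloud-bigquery Java client (the client library, not BigQueryIO itself); the dataset, table, schema, and KMS key name below are placeholders, so adjust them to your project:

// Hedged sketch: pre-create the table with a CMEK key set, using the
// google-cloud-bigquery client. Dataset, table, schema and key name are placeholders.
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.EncryptionConfiguration;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

public class CreateEncryptedTable {
    public static void main(String[] args) {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        TableId tableId = TableId.of("target_dataset", "target_table");   // placeholders
        Schema schema = Schema.of(
                Field.of("id", LegacySQLTypeName.STRING),
                Field.of("payload", LegacySQLTypeName.STRING));

        EncryptionConfiguration encryption = EncryptionConfiguration.newBuilder()
                .setKmsKeyName("projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key")
                .build();

        TableInfo tableInfo = TableInfo.newBuilder(tableId, StandardTableDefinition.of(schema))
                .setEncryptionConfiguration(encryption)
                .build();

        // Once the table exists, the Dataflow pipeline can keep CREATE_IF_NEEDED /
        // WRITE_APPEND and simply append into the pre-created, encrypted table.
        bigquery.create(tableInfo);
    }
}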

cosmosdb - archive data older than n years into cold storage

I researched in several places and could not find any direction on what options there are to archive old data from Cosmos DB into cold storage. I see that for DynamoDB in AWS it is mentioned that you can move DynamoDB data into S3, but I am not sure what the options are for Cosmos DB. I understand there is a time-to-live option where the data will be deleted after a certain date, but I am interested in archiving rather than deleting. Any direction would be greatly appreciated. Thanks.
I don't think there is a single-click built-in feature in CosmosDB to achieve that.
Still, since you mentioned appreciating any direction, I suggest you consider the DocumentDB Data Migration Tool.
Notes about the Data Migration Tool:
you can specify a query to extract only the cold data (for example, by creation date stored within documents),
it supports exporting to various targets (JSON file, blob storage, DB, another Cosmos DB collection, etc.),
it compacts the data in the process - it can merge documents into a single array document and zip it,
once you have the configuration set up, you can script it to be triggered automatically using your favorite scheduling tool,
you can easily reverse the source and target to restore the cold data to the active store (or to dev, test, backup, etc.).
To remove the exported data you could use the mentioned TTL feature, but that could cause data loss should your export step fail. I would suggest writing and executing a stored procedure to query and delete all exported documents with a single call. That SP would not execute automatically, but it could be included in the automation script and executed only if the data was exported successfully first.
See: Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs.
UPDATE:
These days Cosmos DB has added the change feed. This really simplifies writing a carbon copy somewhere else.
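As a rough illustration of that idea, here is a hedged sketch using the change feed processor from the Azure Cosmos DB Java SDK v4; the endpoint, key, database/container names, and the cold-storage write are all placeholders or stubs, and keep in mind the change feed only surfaces inserts and updates, not deletes:

// Hedged sketch: copy changed documents somewhere else via the change feed
// processor (Azure Cosmos DB Java SDK v4). All connection details are placeholders.
import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;
import com.fasterxml.jackson.databind.JsonNode;
import java.util.List;

public class ColdCopyChangeFeed {
    public static void main(String[] args) throws InterruptedException {
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint("https://my-account.documents.azure.com:443/") // placeholder
                .key("<account-key>")                                    // placeholder
                .buildAsyncClient();

        CosmosAsyncContainer feedContainer = client.getDatabase("mydb").getContainer("items");
        CosmosAsyncContainer leaseContainer = client.getDatabase("mydb").getContainer("leases");

        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
                .hostName("archive-worker-1")
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                .handleChanges((List<JsonNode> docs) -> {
                    for (JsonNode doc : docs) {
                        // Write each document to your cold store (blob storage,
                        // data lake, ...) - stubbed out here.
                        System.out.println("archiving " + doc.get("id"));
                    }
                })
                .buildChangeFeedProcessor();

        processor.start().subscribe();
        Thread.sleep(Long.MAX_VALUE); // keep the worker alive
    }
}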

How to retrieve the source code of an XQuery module stored in eXist-db?

I am looking for a way to retrieve the source code of an XQuery module stored in the database.
Is there any way to do this using eXist-db's REST API, an XQuery extension function, or any other eXist-db interface?
If you are using the REST Server, you have two main options:
Do a GET on the XQuery stored in the db, with the query string parameter _source=yes. You need to change some settings in $EXIST_HOME/descriptor.xml to enable that (a minimal sketch of the call follows below).
Write a query for retrieving queries. A query stored in the database is like any other binary document, so you could use util:binary-doc() to get it.
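For the first option, here is a minimal sketch of the REST call from Java; the host, port, and module path are placeholders, and it assumes the descriptor.xml change mentioned above has already been made:

// Hedged sketch: fetch the source of a stored module over the REST Server.
// Host, port and module path are placeholders.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GetXQuerySource {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/exist/rest/db/apps/myapp/my-module.xqm?_source=yes");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Add Basic auth here if your collection permissions require it.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // the module's XQuery source text
            }
        }
    }
}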

some basic oracle concepts

Hi:
In our new application we have to use Oracle as the DB, and we used MySQL/SQL Server before. When I come to Oracle I am confused by its concepts, for example: the tablespace, the object, the schema, table, index, procedure, database link, ... :(
And the schema seems close to the user; I cannot figure out the difference.
When we used MySQL, I just knew that one database contains many tables and many users, and users have different permissions for different tables.
But in Oracle, everything is different.
Can anyone tell me some basic concepts of Oracle, and point me to some quick-start docs?
Oracle has specific meanings for commonly-used terms, and you're right, it is confusing. I'll build a hierarchy of terms from the bottom up:
Database - In Oracle, the database is the collection of files that make up your overall collection of data. To get a handle on what Oracle means, picture the database management system (dbms) in a non-running state. All those files are your "database."
Instance - When you start the Oracle software, all those files become active, things get loaded into memory, and there's an entity to which you can connect. Many people would use the term "database" to describe a running DBMS, but once everything is up and running, Oracle calls it an "instance."
Tablespace - An abstraction that allows you to think about a chunk of storage without worrying about the physical details. When you create a user, you ask Oracle to put that user's data in a specific tablespace. Oracle manages storage via the tablespace metaphor.
Data file - The physical files that actually store the data. Data files are grouped into tablespaces. If you use all the storage you have allocated to a user, or group of users, you add data files (or make the existing files bigger) to the tablespace they're configured to use.
User - An abstraction that encapsulates the privileges, authentication information, and default storage areas for an account that can log on to an Oracle instance.
Schema - The tables, indices, constraints, triggers, etc. that are owned by a particular user. There is a one-to-one correspondence between users and schemas. The schema has the same name as the user. The difference between the two is that the user concept is all about account information, while the schema concept deals with logical database objects.
This is a very simplified list of terms. There are different states of "running" for an Oracle instance, for example, and it's easy to get into very nuanced discussions of what things mean. Here's a practical exercise that will let you put your hands on these things, and will make the distinctions clearer:
Start an already-created Oracle instance. This step will transform a group of files, or as Oracle would say, a database, into a running Oracle instance.
Create a tablespace with the CREATE TABLESPACE command. You'll have to specify some data files to put into the tablespace, as well as some storage parameters.
Create a user with the CREATE USER command. You'll see that the items you have to specify have to do with passwords, privileges, quotas, and the like. Specify that the user's data be stored in the tablespace you created in step 2.
Connect to Oracle using the credentials of the new user you created in step 3. Type "SELECT * FROM CAT". Nothing should come back yet. Your user has a schema, but it's empty.
Run a CREATE TABLE command. INSERT some data into the table. The schema now contains some objects (a JDBC sketch of steps 4 and 5 follows below).
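If you prefer to try steps 4 and 5 from code, here is a hedged JDBC sketch; the connection URL, user, and password are placeholders, and the Oracle JDBC driver (ojdbc) must be on the classpath:

// Hedged JDBC sketch of steps 4 and 5 of the exercise above.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleFirstSteps {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "newuser", "password"); // placeholders
             Statement stmt = conn.createStatement()) {

            // Step 4: the new user's schema exists but is empty, so CAT returns no rows yet
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM CAT")) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME") + " " + rs.getString("TABLE_TYPE"));
                }
            }

            // Step 5: create a table and insert a row - the schema now contains an object
            stmt.execute("CREATE TABLE demo_notes (id NUMBER PRIMARY KEY, note VARCHAR2(100))");
            stmt.execute("INSERT INTO demo_notes VALUES (1, 'hello from the new schema')");
            conn.commit();
        }
    }
}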
table spaces: these are basically storage definitions. When defining a table or index, etc., you can specify storage options simply by putting your table in a specific tablespace.
table, index, procedure: these are pretty much the same
user, schema: explained well before
database link: you can join table A in instance A and table B in instance B using a database link between the two instances (while logged in to one of them)
object: has properties (like columns in a table) and methods that operate on those properties (pretty much like in OO design); these are not widely used
A few links:
Start page for 11g rel 2 docs http://www.oracle.com/pls/db112/homepage
Database concepts, Table of contents http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/toc.htm
