Best way to export and import all Apigee Edge objects related to an org?

Are there scripts for exporting and importing all Apigee Edge objects, such as developers, users, apps, caches, key value maps, etc?
To clarify, non-runtime objects are the priority rather than the runtime data contained within them. For example, the current contents of caches are not as critical as having the cache object itself available.

I have released a tool that retrieves Apigee organization settings. This tool has been in use internally at Apigee for some time, but this is the first time it has been released to the public. It uses the Apigee management API to pull configuration data, and the set of data to be pulled is configurable. The data is stored in a hierarchical directory structure, which can be archived, explored, or used to compare organizations. It works with both the Apigee Edge cloud and on-prem offerings.
A few caveats:
This tool does not retrieve all data from an organization. For example, it does not retrieve API proxies. Use the Apigee management UI or management API to retrieve API proxies.
The tool is composed of a few bash scripts. It has been successfully run on Linux and Mac OS X.
The tool does not write data back into the organization, although the files it retrieves can often be POSTed back to the organization using the management API.
This tool is released as-is. It is not officially supported by Apigee.
Find the tool at the api-platform samples site (https://github.com/apigee/api-platform-samples) in the tools/org-snapshot directory.
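For a flavor of what the tool does, here is a minimal sketch of pulling a few organization objects with the management API and saving them in a directory tree. The credentials, org, and environment names are placeholders, and the actual scripts are more thorough:
#!/bin/bash
# Sketch: snapshot a few Edge org objects via the management API
AUTH="admin@example.com:password"   # assumption: basic-auth admin account
MGMT="https://api.enterprise.apigee.com/v1"
ORG="myorg"                         # placeholder organization
ENV="test"                          # placeholder environment
mkdir -p "$ORG/$ENV"
curl -s -u "$AUTH" "$MGMT/organizations/$ORG/developers" > "$ORG/developers.json"
curl -s -u "$AUTH" "$MGMT/organizations/$ORG/environments/$ENV/caches" > "$ORG/$ENV/caches.json"
curl -s -u "$AUTH" "$MGMT/organizations/$ORG/environments/$ENV/keyvaluemaps" > "$ORG/$ENV/keyvaluemaps.json"
# A retrieved object can often be POSTed back, e.g. re-creating a developer:
# curl -u "$AUTH" -H "Content-Type: application/json" \
#      -X POST "$MGMT/organizations/$ORG/developers" -d @developer.json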

There is work planned to provide a tool that will export/import provisioning data (such as apps, developers, and products). Other aspects of an org's configuration require access to the production Cassandra database, which cannot be given out publicly. We have a provisioning tool for in-house use that we are currently hardening. If the consumer tool (when it is available) doesn't provide all of the backup support you need, you will need to log a support ticket so that Apigee can run the in-house tool for you.

There are scripts for importing a set of objects (developers, apps, API products) that work with the sample proxies that you can find on GitHub:
https://github.com/apigee/api-platform-samples/tree/master/setup

For Perl programmers: see also Apigee::Edge on CPAN

Related

How to share pact file in bitbucket

I want to share the pact file from the consumer to Bitbucket, so that the provider can use it from the same location.
Has anybody implemented this?
Thanks in advance.
It's worth noting that it's very much a non-standard way of doing it and I would highly recommend you don't do it this way (see https://docs.pact.io/pact_nirvana/step_4/). You're going to have to build a lot of the workflows again, and that will require investment in building tooling and coming up with ideas to evolve the contract. At some point, you'll be rebuilding key features the Pact Broker already has and would be better off just running that or using a hosted service like pactflow.io.
So, without a pact broker, you don't get the can-i-deploy tool, versioning, environment management and all of these powerful workflows.
With that said - if you do want to use bitbucket:
You'll need to create a manual process (i.e. a script) to upload (from the consumer) and download (from the provider) to/from Bitbucket (see the sketch after this list)
You'll only want to upload from CI, so that's easier to control
For provider verification, you really want to be able to do this on a laptop, so you'll need a standard approach for pulling down the correct contract to verify both there and on CI. Every team member will need credentials to read from that repo
Pact doesn't have a mechanism to pull from git protocols, but you could potentially add that feature to the languages you need it for
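A minimal sketch of such a manual process. The repo URL, file names, and CI variables are all assumptions, not a standard:
# Consumer CI: publish the generated pact to a shared Bitbucket repo
git clone https://bitbucket.org/myteam/shared-pacts.git
cp target/pacts/consumer-provider.json shared-pacts/
cd shared-pacts
git add consumer-provider.json
git commit -m "pact from consumer build $BUILD_NUMBER"
git push
# Provider (laptop or CI): fetch the latest contract before verification
git clone https://bitbucket.org/myteam/shared-pacts.git /tmp/pacts
# ...then point your Pact verifier at /tmp/pacts/consumer-provider.json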

Github Enterprise API - determine if a user is dormant?

Running Github Enterprise 2.18. Is there any way to determine via the API that a user is dormant? I don't see a specific call for it anywhere...
To answer your question....
Running Github Enterprise 2.18. Is there any way to determine via the API that a user is dormant?
No, it's not possible using the GitHub API (Enterprise or not).
GitHub has a strict privacy agreement for its users. Since repositories can be public as well as private, you're out of luck getting dormancy info via their API.
GitHub's own help page mentions it in their "Reports" section:
If you need to get information on the users, organizations, and repositories in your GitHub Enterprise Server instance, you would ordinarily fetch JSON data through the GitHub API. Unfortunately, the API may not provide all of the data that you want and it requires a bit of technical expertise to use. The site admin dashboard offers a Reports section as an alternative, making it easy for you to download CSV reports with most of the information that you are likely to need for users, organizations, and repositories.
Specifically, you can download CSV reports that list
all users
all users who have been active within the last month
all users who have been inactive for one month or more
all users who have been suspended
all organizations
all repositories
The help page also goes on to show examples of how to retrieve the report data via curl (you could do this with other methods as well, such as PowerShell, which I prefer):
curl -L -u username:password/token http(s)://hostname/stafftools/reports/dormant_users.csv
You could make use of that data without the API and parse it in your application. The world is your oyster.
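For example, a quick shell sketch that flags whether a particular account shows up in the dormant-users report. The hostname, credentials, and exact CSV columns here are assumptions; check your instance's report format:
curl -sL -u "admin:token" "https://hostname/stafftools/reports/dormant_users.csv" -o dormant_users.csv
# Assumption: each row contains the user's login; adjust the pattern to the real columns.
if grep -qi 'someuser' dormant_users.csv; then
  echo "someuser is dormant"
fi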

Google stackdriver database agent for OracleDB?

We know that Google's Stackdriver supports monitoring for third-party applications like PostgreSQL, MySQL, CouchDB, and others mentioned here. They have also defined the service configuration files for the monitoring agent here.
As I understand it, they somehow use collectd's third-party plugins for this. Also, since a collectd plugin exists for Oracle, Stackdriver should support that too. But I can't see Oracle in the list of supported third-party applications. So, does Stackdriver support it or not?
The Stackdriver monitoring agent package does not bundle the Oracle plugin, so it's not supported. You may be able to write a shell script (invoked via the exec plugin) or a Python script (invoked via the python plugin) to query your database, and then use the custom metrics mechanism to ingest the results.
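A rough sketch of the exec-plugin approach, assuming sqlplus is installed and a monitoring account exists. The connection string, credentials, and metric naming are all assumptions:
#!/bin/bash
# Sketch: collectd exec-plugin script polling an Oracle session count.
# Launched from collectd.conf with something like:
#   LoadPlugin exec
#   <Plugin exec>
#     Exec "nobody" "/opt/oracle-metrics.sh"
#   </Plugin>
HOST="${COLLECTD_HOSTNAME:-$(hostname -f)}"
INTERVAL="${COLLECTD_INTERVAL:-60}"
while sleep "$INTERVAL"; do
  SESSIONS=$(printf "set heading off feedback off\nSELECT COUNT(*) FROM v\$session;\n" \
             | sqlplus -s monitor/secret@ORCL | tr -d '[:space:]')
  echo "PUTVAL \"$HOST/exec-oracle/gauge-sessions\" interval=$INTERVAL N:$SESSIONS"
done
You would still need to map the resulting collectd metric into Stackdriver's custom metrics namespace, as described in the custom metrics documentation.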
You could also try BindPlane from our partner, Blue Medora.
Disclaimer: I'm an engineer on the Stackdriver team.

Firebase Reporting Options

All my app's data is stored in Firebase. I'd like to build some reports with my data that aren't necessarily accessible through the web/app front-end. I don't see any good options for this in the Console. Has anyone found a good reporting solution for Firebase? I am looking for something like Crystal Reports or just an easy way to render Firebase data based on a query.
Thanks,
Rima.
I found an issue with the solution above. Firebase stores its data in JSON format, which cannot be consumed directly by solutions such as BigQuery, because BigQuery expects JSONL format and rejects the export with an error. It beats me why Google is not providing an elegant integration between two of its own products, but I believe they have something planned.
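As a workaround, the export can be flattened to JSONL before loading. A sketch, assuming the export has a top-level users object keyed by ID (your structure will differ):
# Convert a Firebase JSON dump to newline-delimited JSON for BigQuery
# (hypothetical shape: {"users": {"uid1": {...}, "uid2": {...}}})
jq -c '.users | to_entries[] | {id: .key} + .value' export.json > export.jsonl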
Firebase does not have any "built-in" reporting tools other than the defined querying APIs.
If your database is small, you can dump the JSON from the Firebase Console and then manually run analysis on it.
If your database is large, you can upgrade to the Flame or Blaze plans and sign up for daily private backups. This will create a JSON dump in the background without affecting your database performance and store it in the cloud. You could then use tools to grab that dump and perform advanced reporting on it.
1) Via BigQuery
Official Docs:
https://cloud.google.com/bigquery/docs/loading-data-cloud-firestore
Codelab Walkthrough:
https://codelabs.developers.google.com/codelabs/modern-data-pipeline-firestore-bigquery-dataflow-templates/index.html?index=..%2F..next17
Connect whatever BI tool you want to BigQuery. Google Data Studio is free, as is Metabase. Almost every enterprise BI tool has a BigQuery connector.
From https://www.reddit.com/r/Firebase/comments/arps42/reportingbi_tools_and_firestore/
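For reference, the export-then-load flow from the docs linked above looks roughly like this. The bucket, collection, dataset, and export-prefix names are placeholders:
# Export a Firestore collection to Cloud Storage, then load it into BigQuery
gcloud firestore export gs://my-export-bucket --collection-ids=users
bq load --source_format=DATASTORE_BACKUP mydataset.users \
  "gs://my-export-bucket/EXPORT_PREFIX/all_namespaces/kind_users/all_namespaces_kind_users.export_metadata"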
2) Via "Custom Data Sources"
Cloud Firestore (and probably the Realtime Database) has a RESTful API. Many popular reporting tools support "custom", "RESTful", "AJAX", and/or "HTTP" sources.
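For instance, a sketch of pulling documents over Firestore's REST API. The project and collection names are placeholders, and real calls need an OAuth access token (or a database whose rules allow unauthenticated reads):
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://firestore.googleapis.com/v1/projects/my-project/databases/(default)/documents/orders?pageSize=100"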
You should check whether your favorite reporting tools support such sources, and search the web accordingly.
I can see that Stimulsoft seems to support custom/RESTful sources. A Power BI data connector seems to provide a lot of latitude - https://github.com/Microsoft/DataConnectors
Of course, this means that you need to create several data sources, and they probably won't be as optimised as a built-in source type. For example, the report engine probably won't know how to translate any front-end UI filters into custom-source query filters. Perhaps some platforms support the ability for you to create your own adaptors.

Working With Java Callout in Apigee?

Can anyone explain Java callouts? A little help will do. Actually, I have several doubts regarding where to add the expressions and message-flow JARs, and where to add my custom JAR.
Can I access the resources/java folder directly, and can I use it to store my data?
First, check the Apigee docs at
Customize an API using Java
http://apigee.com/docs/api-services/content/customize-api-using-java
Keep in mind Java callouts are only supported in the paid Apigee Edge product, not the free Developer platform.
As you decide how to use Java, you should consider this basic hierarchy of policy management:
Policy Configuration First: Apigee policy configurations are in broad use, tested daily by clients, and the most performant option.
JavaScript Callout: For stuff you can't do in a standard policy, there is JavaScript. Keep in mind this is "compiled JavaScript", which means that at the time you deploy your project, the JS gets interpreted by the Java Rhino engine and then runs like native code. Very fast, very scalable, and very easy to manage, as your code is all in plain text files.
Java: You have to have a pretty compelling reason to use Java. The most common cases are complex connections that need to be negotiated with custom encryption schemes, or manipulating binary content. While performant, it's the most difficult code to manage (you upload compiled JARs, so if someone takes over your work, the source code lives in a separate place from your deployment bundle), and it's the most difficult to debug in the event of a failure.
To your specific question: all Apigee variables are available in Java, and Java gives you pretty much god-like powers on the local server where the code is executed. Keep in mind, Apigee's physical architecture is distributed: your JAR may run on different servers for different API calls, so any persistent data (that you might want to store locally) should really be put into a Key Value Map and read as needed. Keep your API development as stateless as possible.
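For example, a sketch of creating such a Key Value Map through the management API. The credentials, org, environment, and map name are placeholders:
# Create an environment-scoped KVM to hold state shared across servers
curl -u admin@example.com:password -H "Content-Type: application/json" \
  -X POST "https://api.enterprise.apigee.com/v1/organizations/myorg/environments/test/keyvaluemaps" \
  -d '{"name": "my-java-callout-state", "entry": [{"name": "seed", "value": "42"}]}'
At runtime, your Java callout (or a KeyValueMapOperations policy) can then read entries from that map instead of relying on server-local state.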
Hope that helps.
