Cross-account DynamoDB access - amazon-dynamodb

I want to migrate data from DynamoDB from one AWS account to another. Could you please advise whether this is possible using AWS Data Pipeline? If not, what other options are there to do this?
I have tried migrating data within a single account using Data Pipeline with HiveCopyActivity, but I need more details on how it can be done across accounts.

Yes, you can use Data Pipeline:
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-importexport-ddb.html
For cross-account migration, you need to share the S3 bucket in the source account with the other account.
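As a rough illustration of that sharing step, here is a minimal TypeScript sketch (AWS SDK for JavaScript v3) that attaches a bucket policy in the source account granting the other account read access to the export bucket. The bucket name, account ID, and region are placeholders, and you may need extra statements (e.g. s3:PutObject) depending on which account writes the export.

import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

// Placeholders: the export bucket in the source account and the other account's ID.
const EXPORT_BUCKET = "my-ddb-export-bucket";
const OTHER_ACCOUNT_ID = "111122223333";

// Bucket policy letting the other account list the bucket and read the exported data.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "CrossAccountListBucket",
      Effect: "Allow",
      Principal: { AWS: `arn:aws:iam::${OTHER_ACCOUNT_ID}:root` },
      Action: "s3:ListBucket",
      Resource: `arn:aws:s3:::${EXPORT_BUCKET}`,
    },
    {
      Sid: "CrossAccountReadObjects",
      Effect: "Allow",
      Principal: { AWS: `arn:aws:iam::${OTHER_ACCOUNT_ID}:root` },
      Action: "s3:GetObject",
      Resource: `arn:aws:s3:::${EXPORT_BUCKET}/*`,
    },
  ],
};

async function shareExportBucket(): Promise<void> {
  // Run this with credentials for the source account.
  const s3 = new S3Client({ region: "us-east-1" });
  await s3.send(
    new PutBucketPolicyCommand({
      Bucket: EXPORT_BUCKET,
      Policy: JSON.stringify(policy),
    })
  );
  console.log(`Shared s3://${EXPORT_BUCKET} with account ${OTHER_ACCOUNT_ID}`);
}

shareExportBucket().catch(console.error);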

Related

How to get the role assignments of a resource through Resource Graph API?

I want to use the Azure Resource Graph API to get the role assignments of a resource (who are owners, contributors, etc.). That is, I want to create a query that finds the role assignments for a specific resource id that I provide. I've been going through the documentation, but I haven't found any way to get this information.
The only thing I found was this question from a couple of years ago, where it is mentioned as something that could be done somehow ("query the RBAC of each one of those resources").
Could anyone point me to how this could be done? Or is it not possible to do in Resource Graph API, and I need to use the Management API or something else?
I searched through the Azure Resource Graph table and resource type reference and the Advanced Resource Graph query samples, but didn't find an answer.
I tried to reproduce this in my environment as follows.
I created an Azure AD application and added API permissions.
I generated an access token using the parameters below:
https://login.microsoftonline.com/TenantID/oauth2/v2.0/token
client_id:xxxxxx-xxx-xxx-xxxx-xxxxxxxx
client_secret:ClientSecret
scope:https://management.azure.com//.default
grant_type:client_credentials
To list the role assignments at the subscription scope, I used the query below:
GET https://management.azure.com/subscriptions/subscriptionId/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01
Based on your requirement, you can change the scope and add a filter to get the role assignments you need. Refer to the following Microsoft documentation:
List Azure role assignments using the REST API - Azure RBAC
Currently it is not possible to retrieve role assignments via Azure Resource Graph. Alternatively, you can use Azure PowerShell or the Azure CLI:
Get-AzRoleAssignment -Scope "/subscriptions/SubscriptionId/resourcegroups/RGName/providers/Providername/ResourceType/Resource"
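If you would rather script the REST approach than request tokens by hand, here is a minimal TypeScript sketch using @azure/identity and Node 18+'s built-in fetch. The tenant/client/subscription values are placeholders; you can change the scope in the URL or append a $filter (for example atScope() or principalId eq '{objectId}') to narrow the results.

import { ClientSecretCredential } from "@azure/identity";

// Placeholders: the same values used in the manual token request above.
const tenantId = "<tenant-id>";
const clientId = "<client-id>";
const clientSecret = "<client-secret>";
const subscriptionId = "<subscription-id>";

async function listRoleAssignments(): Promise<void> {
  // Client credentials flow against https://management.azure.com/.default.
  const credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
  const token = await credential.getToken("https://management.azure.com/.default");

  // Subscription scope; swap in a resource group or resource ID to change the scope.
  const url =
    `https://management.azure.com/subscriptions/${subscriptionId}` +
    `/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01`;

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${token.token}` },
  });
  const body = (await response.json()) as { value: unknown[] };
  console.log(JSON.stringify(body.value, null, 2));
}

listRoleAssignments().catch(console.error);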

Effective way to query BMC Helix Remedy database other than Smart Reporting

Currently we are connecting to the AR System through the Oracle database for this purpose. I need to know: is there any alternative way to access or query the Remedy database effectively? Is there any built-in API we could utilise that would increase the efficiency of this work?
What could be used is the REST API, with which you can query the forms directly.
Please check the following URL:
REST API Doc
Calls return a JSON object containing all the data.
To get access to the forms, you need to create a "service" user with a fixed license and permissions to the forms you would like to read via the API.
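If it helps, here is a rough TypeScript sketch of that flow against the standard AR System REST endpoints (/api/jwt/login and /api/arsys/v1/entry/{form}), run as the "service" user. The server URL, credentials, form name, and qualification are placeholders.

// Minimal sketch: log in, query a form, log out. Requires Node 18+ for global fetch.
const server = "https://remedy.example.com"; // placeholder
const username = "svc-report-user";          // the "service" user with a fixed license
const password = "********";

async function queryForm(): Promise<void> {
  // 1. Log in to get a JWT token (returned as plain text).
  const loginResponse = await fetch(`${server}/api/jwt/login`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ username, password }),
  });
  const jwt = await loginResponse.text();

  // 2. Query entries on a form; the q parameter takes a Remedy qualification.
  const form = encodeURIComponent("HPD:Help Desk");                   // example form
  const qualification = encodeURIComponent(`'Status' = "Assigned"`);  // example query
  const entriesResponse = await fetch(
    `${server}/api/arsys/v1/entry/${form}?q=${qualification}`,
    { headers: { Authorization: `AR-JWT ${jwt}` } }
  );
  console.log(await entriesResponse.json()); // JSON object containing the entries

  // 3. Release the token when done.
  await fetch(`${server}/api/jwt/logout`, {
    method: "POST",
    headers: { Authorization: `AR-JWT ${jwt}` },
  });
}

queryForm().catch(console.error);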
You can query the Oracle back end directly, with a few caveats. It should only be used for reading data, not writing or modifying it; otherwise you could break data integrity and bypass workflow that should be fired. Direct access also does not enforce any permissions, nor does it translate any of the data: for example, selection fields come back as a number instead of their string value, dates are in epoch format, and so on.
There is a Remedy ODBC driver, which isn't being updated, nor does it support joins. However, you can open multiple connections with it and join them manually. Plus, it does handle permissions and translations for you.
https://docs.bmc.com/docs/ars1911/odbc-database-access-introduction-896318914.html
If you know in advance what joins you will be doing, you should set up join forms within Remedy; that way the joins are done efficiently in the database. Otherwise, you are stuck with either of the above approaches or with one of the APIs, which don't support ad hoc joins.
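To make the translation caveats above concrete, here is a rough TypeScript sketch using the node-oracledb driver against a per-form SQL view. The connection details, view name, column names, and the status mapping are placeholders that depend on your own AR System schema; treat it strictly as read-only.

import oracledb from "oracledb";

// Example mapping from a selection field's integer values to labels (placeholder).
const STATUS_LABELS: Record<number, string> = { 0: "New", 1: "Assigned", 2: "In Progress" };

async function readIncidents(): Promise<void> {
  const connection = await oracledb.getConnection({
    user: "remedy_read",                   // placeholder read-only account
    password: "********",
    connectString: "dbhost:1521/ORCLPDB1", // placeholder
  });
  try {
    // Read-only query against a form view; dates come back as epoch seconds
    // and selection fields as integers, so translate them client-side.
    const result = await connection.execute(
      `SELECT Incident_Number, Status, Submit_Date FROM HPD_Help_Desk WHERE ROWNUM <= 10`
    );
    const rows = (result.rows ?? []) as [string, number, number][];
    for (const [incident, status, submitEpoch] of rows) {
      console.log(
        incident,
        STATUS_LABELS[status] ?? `unknown (${status})`,
        new Date(submitEpoch * 1000).toISOString()
      );
    }
  } finally {
    await connection.close();
  }
}

readIncidents().catch(console.error);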

Which API should be used for querying Application Insights trace logs?

Our ASP.NET Core app logs trace messages to App Insights. We need to be able to query them and filter by some customDimensions. However, I have found three APIs and am not sure which one to use:
App Insights REST API
Azure Log Analytics REST API
Azure Data Explorer .NET SDK (Preview)
Firstly, I don't understand the relationships between these options. I thought that App Insights persisted its data to Log Analytics; but if that's the case I would expect to only be able to query through Log Analytics.
Regardless, I just need to know which is the best to use and I wish that documentation were clearer. My instinct says to use the App Insights API, since we only need data from App Insights and not from other sources.
The difference between #1 and #2 is mostly historical, and the two are converging.
Application Insights existed as a product before Log Analytics, and the two were based on different underlying database technologies.
Both Application Insights and Log Analytics have since converged on the same underlying database, based on ADX (Azure Data Explorer), and the exact same REST API service to query either. So while your #1 and #2 links are different, they point to effectively the same service backend, run by the same team; the pathing/semantics differ subtly in where the service looks depending on the inbound request.
Both AI and LA introduce the concept of multi-tenancy and a specific set of tables/schema on top of their Azure resources. They effectively hide the entire database from you and make it look like one giant database.
There is now the option (and it is the suggested approach) to have your Application Insights data placed in a Log Analytics workspace:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource
This lets you put the data for multiple AI applications/components into the same Log Analytics workspace, to simplify querying across different apps, etc.
Think of ADX as any other kind of database offering: if you create an ADX cluster instance, you have to create databases, manage schema, manage users, etc. AI and LA do all of that for you. So in your question above, the third link (the ADX SDK) would be used to talk to an ADX cluster/database directly. I don't believe you can use it to talk to AI/LA resources directly, but there are ways to enable an ADX cluster to query AI/LA data:
https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data
And there are ways to have an LA/AI query also join with an ADX cluster, using the adx keyword in your query:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy
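For the original question (querying traces and filtering on a custom dimension), a minimal TypeScript sketch against the Application Insights REST API (#1) could look like the following. The app ID and API key come from the resource's API Access settings, "Category" is just an example dimension name, and the same KQL works against the Log Analytics endpoint too.

// Placeholders: the Application Insights app ID and an API key with read access.
const appId = "<application-insights-app-id>";
const apiKey = "<api-key>";

// Example KQL: traces from the last day where a custom dimension matches a value.
const kql = `
traces
| where timestamp > ago(1d)
| where tostring(customDimensions.Category) == "Payments"
| project timestamp, message, customDimensions
`;

async function queryTraces(): Promise<void> {
  const response = await fetch(
    `https://api.applicationinsights.io/v1/apps/${appId}/query`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", "x-api-key": apiKey },
      body: JSON.stringify({ query: kql }),
    }
  );
  // Results come back as tables of columns + rows rather than as objects.
  const result = (await response.json()) as { tables: { rows: unknown[][] }[] };
  console.table(result.tables[0].rows);
}

queryTraces().catch(console.error);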

How to add Azure custom Policy for Azure Data Factory to only use Azure Key Vault during the Linked Service Creation?

How can I add an Azure custom policy so that Azure Data Factory only uses Azure Key Vault during linked service creation for fetching the data store credentials, instead of credentials being put directly into the ADF linked service? Please suggest ARM or PowerShell methods for implementing the policy.
As of yesterday, the Data Factory Azure Policy integration is available, which means you can now find some built-in policies that can be assigned to ADF.
One of those is exactly what you're asking for. You can find more information here.
Edit: Based on your comment, I'm editing this answer with the info you want. When it comes to custom policies, it's pretty much up to you to come up with them and create what fits your needs. In your particular case, I've created one policy that does what you want; please see here.
This policy will audit your Data Factory linked services and check whether they're using a self-hosted integration runtime. Currently, that check is only done for a few types of linked services (if you look at the policy, you can see five of them), which means that if you want to check more types of linked services, you'll need to add them to the list of allowed values and select them when assigning the policy definition.
Bear in mind that for some linked service types, such as Key Vault, the check won't make sense, since that service can't use a self-hosted IR.
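If you do end up authoring your own definition, here is a heavily simplified TypeScript sketch of deploying one with the @azure/arm-policy SDK (the same definition body could be dropped into an ARM template or New-AzPolicyDefinition). The rule shown only audits Data Factory linked services as a placeholder; real conditions would depend on which linked-service aliases are available in your environment, so verify those before relying on them.

import { DefaultAzureCredential } from "@azure/identity";
import { PolicyClient } from "@azure/arm-policy";

const subscriptionId = "<subscription-id>"; // placeholder

// Placeholder rule: flags every Data Factory linked service for audit. A real rule
// would add conditions (e.g. whether credentials come from an Azure Key Vault
// reference) using whatever linked-service aliases exist in your environment.
const policyRule = {
  if: {
    field: "type",
    equals: "Microsoft.DataFactory/factories/linkedservices",
  },
  then: { effect: "audit" },
};

async function createDefinition(): Promise<void> {
  const client = new PolicyClient(new DefaultAzureCredential(), subscriptionId);
  const definition = await client.policyDefinitions.createOrUpdate(
    "audit-adf-linked-service-credentials",
    {
      displayName: "Audit ADF linked services for Key Vault based credentials",
      policyType: "Custom",
      mode: "All",
      policyRule,
    }
  );
  console.log(`Created policy definition ${definition.id}`);
}

createDefinition().catch(console.error);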

Firebase Cloud Functions Secure HTTPS Endpoints with API key

I've looked at a few places, including this post and the Firebase panel.
Is there no way to use these APIs to secure these endpoints with an API key that you create per client who uses your cloud functions?
I'm able to block everyone by putting a restriction on the browser key, but I would like to create a new API key and use that as a way to authenticate my endpoint for various clients.
Creating a new API key and using it as a parameter in my query doesn't work (I don't know if I'm doing something wrong).
Is there a way to do this?
Option 1: Handle authentication within the function
https://github.com/firebase/functions-samples/tree/master/authorized-https-endpoint
Adapt the above to use clients/keys stored in Firestore
Option 2: Use an API gateway
Google Cloud Endpoints (no direct support for functions yet, need to implement a proxy)
Apigee (higher cost, perhaps more than you need)
Azure API Management (lower entry cost + easy to implement as a facade for services hosted outside Azure)
There are more...
The above gateways are probably best for your use case, in that the first two would let you keep everything within Google, albeit with more complexity/cost; hopefully Endpoints will get support for functions soon. Azure would mean having part of your architecture outside Google, but it looks like an easy way to achieve what you're after (an API key per client for your Google Cloud / Firebase functions).
Here's a good walkthrough of implementing Azure API Management:
https://koukia.ca/a-microservices-implementation-journey-part-4-9c19a16385e9
There is no built-in way to achieve what you are after; as far as Firebase and GCP are concerned, your clients are your specific business problem.
One way you could tackle this (with the little information that is provided):
You need somewhere to store a list of clients and their API keys (I would use Firestore).
For the endpoints you want to secure with a client-specific API key, include a check that the header exists and that its value matches a client record in Firestore, as in the sketch below.
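A rough sketch of that check inside an HTTPS function might look like the following; the header name, collection name, and field name are just examples.

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Clients live in an "apiClients" collection, each document holding an apiKey field
// (collection and field names are examples; store/rotate keys however suits you).
export const secureEndpoint = functions.https.onRequest(async (req, res) => {
  const apiKey = req.get("x-api-key");
  if (!apiKey) {
    res.status(401).send("Missing API key");
    return;
  }

  // One Firestore read per request to validate the key against the client list.
  const match = await admin
    .firestore()
    .collection("apiClients")
    .where("apiKey", "==", apiKey)
    .limit(1)
    .get();

  if (match.empty) {
    res.status(403).send("Invalid API key");
    return;
  }

  // Authenticated: handle the request for this client.
  res.json({ message: `Hello, ${match.docs[0].id}` });
});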
Considerations:
Depending on your expected traffic load and the number of Firestore reads you'll be adding, you might want to double-check that this kind of solution will work for your budget.
Is an API-key-style solution the only option you can go with? You could probably get pretty far using https://github.com/firebase/firebaseui-web and doing user checks in your function with no extra DB read required. If you go down this path, most of the user signup / email / account creation logic is ready to go.
https://firebase.google.com/docs/auth/web/password-auth#before_you_begin
Curious to see what some other Firebase users suggest.
