I have a GKE cluster where my application is running. The application is built with Spring Boot, and it uses a Google Datastore database that lives in a separate GCP project.
It throws "DatastoreException: Unauthenticated" when it tries to connect to the Datastore database during application start-up.
The connection relies on service account permissions, and the application's service account has the necessary role (Datastore User) on the Datastore database, but it still fails.
The following are two similar scenarios where it does work:
1. With a similar set-up in a lower environment, it works.
2. Another application running inside the same GKE cluster uses the same service account but connects to a different Datastore database (with a similar role granted to the service account), and it works fine there.
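For anyone debugging a similar cross-project set-up, a minimal sketch of how the pod's effective identity and the binding on the Datastore project could be checked with kubectl/gcloud; the project IDs (app-project, datastore-project), service account name (my-app-sa) and pod name (my-app-pod) below are placeholders:

# Check which identity the pod actually receives from the metadata server (placeholder pod name).
$ kubectl exec -it my-app-pod -- curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"

# Grant (or confirm) the Datastore User role on the project that hosts Datastore, not the GKE project.
$ gcloud projects add-iam-policy-binding datastore-project \
    --member="serviceAccount:my-app-sa@app-project.iam.gserviceaccount.com" \
    --role="roles/datastore.user"

# List the roles that service account currently holds on the Datastore project.
$ gcloud projects get-iam-policy datastore-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:my-app-sa@app-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"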
Related
I'm trying to use the Google.cloud.logging.log4net library in a .NET web application to send logs to Google Stackdriver. I've followed the steps from Option 2 of the Stackdriver Logging how-to guide: https://cloud.google.com/logging/docs/integrate/dotnet
It seems to work from my local machine when I provide the service account credentials, but it does not send logs when I run it from a GCE instance.
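One thing that often differs between a local run with explicit service account credentials and a run on GCE is the instance's access scopes, which can cap what the default credentials are allowed to call. A sketch of how the scopes could be inspected and, if needed, widened with gcloud; the instance name, zone and service account email are placeholders, and this assumes scopes are in fact the culprit:

# Inspect the scopes attached to the instance (placeholder name/zone).
$ gcloud compute instances describe my-instance --zone us-central1-a \
    --format="yaml(serviceAccounts)"

# Scopes can only be changed while the instance is stopped; the email below is a placeholder
# for the default Compute Engine service account.
$ gcloud compute instances stop my-instance --zone us-central1-a
$ gcloud compute instances set-service-account my-instance --zone us-central1-a \
    --service-account 123456789-compute@developer.gserviceaccount.com \
    --scopes https://www.googleapis.com/auth/logging.write
$ gcloud compute instances start my-instance --zone us-central1-a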
I have an Azure Web App hosting an API (ASP.NET MVC project) that interacts with a CosmosDB database and collections to get subscriptions and other information.
The CosmosDB database is accessed read/write by the Web App's middleware through the NuGet package "Microsoft.Azure.DocumentDB" (SDK v1.19.1).
I am trying to set up the CosmosDB IP firewall through the Azure Portal. I allowed the Azure Portal to have access to the database, and then I also needed to allow the web app (also hosted on Azure) to have access. To do this, I copied the Virtual IP Address of the Web App from the Properties tab in the Azure Portal.
But this was not enough. I waited more than 10 minutes while retrying my web app, but all calls to CosmosDB were rejected with error 404, which, as the documentation states, is the expected behavior for SDK calls (for security reasons).
Then I added all of the Outbound IP Addresses listed on the same Properties tab of the Web App. I waited more than 20 minutes and still got the 404 error.
What are the correct steps to achieve the requested task?
For example, IP filtering for SQL on Azure offers an option to allow access from any Azure app/VM/service. How can we achieve the equivalent in CosmosDB?
Thanks in advance
Since Azure App Service is a PaaS, and following this article, please try adding the IP 0.0.0.0 to the allowed list.
On the Azure Portal, this can also be set by switching on Allow access to Azure Services.
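If you would rather script it than use the portal toggle, the same 0.0.0.0 entry can be pushed from the Azure CLI; a rough sketch, assuming the placeholder account/resource group names below and the --ip-range-filter parameter that az cosmosdb exposed around this SDK version:

# Placeholder names; 0.0.0.0 is the sentinel entry for "connections from within Azure".
$ az cosmosdb update --name my-cosmos-account --resource-group my-rg \
    --ip-range-filter "0.0.0.0,<web-app-outbound-ip-1>,<web-app-outbound-ip-2>"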
I am trying to test-run a basic .NET web application on Pivotal Cloud Foundry. The web application uses a MongoDB server hosted on my local machine as its database. At the moment I am limited to using the cloud infrastructure through the Apps Manager only.
I have read the Pivotal Cloud Foundry docs about user-provided services, but I cannot figure out how the connection is actually made. I have also come across other options, such as MongoDB as a service (beta version), but at the moment I am not allowed access to the Operations Manager. I am looking for an explanation of user-provided services, or specifically how to implement the Service Broker API.
I am new to Mongo as well, so any suggestion about making the connection work by tweaking Mongo would help too. Thanks
The use case you describe (a web app in PCF connecting to a resource on your local machine) is not recommended.
You can create a MongoDB instance for development purposes in PCF.
$ cf marketplace
...
mlab sandbox Fully managed MongoDB-as-a-Service
...
You can create an mlab service and bind it to your application. You will then have a MongoDB instance in PCF that you can use for development purposes.
Edit:
In that case, a user-provided service might help: you pass in your remote MongoDB instance's configuration, which your application can then read. For example:
cf.exe cups my-mongodb -p '{"key1":"value1","key2":"value2"}'
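For MongoDB specifically, the JSON payload would usually carry the connection details the application later reads back from VCAP_SERVICES once the service is bound. A sketch with placeholder credentials, hostname and app name, assuming the MongoDB host is reachable from the PCF environment:

# Placeholder credentials/host; the host must be reachable from the PCF environment.
$ cf cups my-mongodb -p '{"uri":"mongodb://appuser:secret@mongo.example.com:27017/mydb"}'
$ cf bind-service my-dotnet-app my-mongodb
$ cf restage my-dotnet-app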
You can add your local MongoDB as a CUPS (user-provided) service to your PCF Dev.
Check out the following post:
How to create a CUPS service for mongoDB?
I have created a subscription with Azure Germany, and now I am attempting to deploy my application topology there using the Azure Management API and a service principal.
Deployment works fine against the "regular" Azure cloud; however, when I attempt to deploy to my subscription in Azure Germany, I get the following error message: The subscription '[...]' could not be found.
I am able to successfully acquire an authentication token using AuthenticationContext.AcquireTokenAsync(), and I am using "https://login.microsoftonline.de/[directoryId]" as authority and "https://management.core.cloudapi.de/" as resource. Additionally, I am using "Germany Northeast" as location/region.
The error occurs as soon as I attempt to perform a typical management task, such as creating a resource group.
I have checked the following things:
- App registration settings
- App permissions (Windows Azure Active Directory + Windows Azure Service Management API)
- Correctness of subscription id, app id, and app secret/key
I am at a loss at what could be wrong. What could be causing this error message?
You should point your app to the correct subscription first.
Try setting the subscription using this link.
Add Microsoft Graph permissions to your Azure AD app.
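If the endpoint side needs ruling out as well, the sovereign-cloud analogue of "pointing at the right subscription" can be reproduced from the Azure CLI; a sketch with placeholder IDs, separate from whatever the Management API client does in code (where the Resource Manager base URI likewise has to be the Azure Germany endpoint rather than the global one):

# Placeholder IDs; the AzureGermanCloud environment must be selected before logging in.
$ az cloud set --name AzureGermanCloud
$ az login --service-principal -u <app-id> -p <app-secret> --tenant <directory-id>
$ az account set --subscription <subscription-id>
$ az account show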
I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that specific EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
Is there something similar for OpenStack? If you would like a program running on an OpenStack server to be able to programmatically make changes through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long running compute job. It would be great if the compute job by itself would be able to copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.
Rackspace's Role-Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities, for example a read-only Cloud Files user or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Your job would then use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language, but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
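As a concrete illustration of the "API key to token" step, a minimal curl sketch against the Rackspace Identity v2.0 endpoint, with placeholder username and key:

# Placeholder username and API key.
$ curl -s https://identity.api.rackspacecloud.com/v2.0/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"compute-job-user","apiKey":"0123456789abcdef"}}}'
# The response carries access.token.id plus a service catalog with the Cloud Files and
# Cloud Block Storage endpoints; pass the token as X-Auth-Token on subsequent API calls.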