I am working on code that interacts with AWS S3 to perform various operations such as creating buckets, deleting buckets, uploading and downloading files, and so on.
An "Access Denied" error occurs while trying to delete a bucket.
At present I am using root user credentials to create and delete the bucket. Versioning is not enabled, and I cannot see any bucket policy attached to this bucket in the AWS console.
The behaviour is strange: sometimes deleting the empty bucket gives an Access Denied error, and sometimes it deletes without any problem.
I am able to delete the bucket via the AWS S3 console without any trouble; it only behaves randomly through the code.
Can somebody please explain what could be the reason?
Here is my code:
public string DeleteBucket(string bucketName, string S3Region)
{
    string sts = "";
    Chilkat.Http http = new Chilkat.Http();

    // Insert your access key here:
    http.AwsAccessKey = "AccessKey";
    http.AwsSecretKey = "SecretKey"; // root user
    http.AwsRegion = S3Region;

    bool success = http.S3_DeleteBucket(bucketName);
    if (success != true)
    {
        return sts = "{\"Status\":\"Failed\",\"Message\":\"" + http.LastErrorText + "\"}";
    }
    else
    {
        return sts = "{\"Status\":\"Success\",\"Message\":\"Bucket deleted!\"}";
    }
}
You should examine the HTTP response body to see the error message from AWS.
For example:
http.KeepResponseBody = true;
bool success = http.S3_DeleteBucket(bucketName);
if (success != true) {
    Debug.WriteLine(http.LastErrorText);
    // Also examine the error response body from AWS:
    Debug.WriteLine(http.LastResponseBody);
}
else {
    Debug.WriteLine("Bucket deleted.");
}
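When the failure comes from S3 itself, LastResponseBody usually contains the standard S3 XML error document, so you can pull the error code out of it and react accordingly (for example, distinguish AccessDenied from BucketNotEmpty). A minimal sketch; GetS3ErrorCode is a hypothetical helper, and it assumes the body really is that XML format:

using System.Xml.Linq;

// Hypothetical helper (not part of Chilkat): extract the <Code> element from the
// standard S3 error document, e.g. <Error><Code>AccessDenied</Code>...</Error>.
static string GetS3ErrorCode(string responseBody)
{
    if (string.IsNullOrEmpty(responseBody))
        return null;
    XDocument doc = XDocument.Parse(responseBody);   // assumes the body is XML
    XElement code = doc.Root.Element("Code");
    return code != null ? code.Value : null;
}

// Usage after a failed delete:
// string errorCode = GetS3ErrorCode(http.LastResponseBody); // "AccessDenied", "BucketNotEmpty", ...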
I'm not sure if this is a bug. It worked last month and ran into issues a couple of weeks later. I will post a bug report if this issue cannot be resolved.
I have an Android app that allows users to share files with another person via email address. When a file has been uploaded to Firebase Storage successfully, the app pops up a dialog where the user can type in the recipient's email address for file sharing. That email address is then written into the file's custom metadata as a key.
In Firebase Storage, each user uploads files to their own folder (the email address is used as the folder name). The Storage rules are listed below. The idea is that users can only access the files in their own folders, and have read permission for files shared with them.
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // read and write permission for owners
    match /users/{userEmail}/{allPaths=**} {
      allow read, write: if request.auth.token.email == userEmail && request.auth.token.email_verified;
    }
    // read permission for shared files
    match /users/{userEmail}/{allPaths=**} {
      allow read: if request.auth != null && request.auth.token.email != userEmail && request.auth.token.email in resource.metadata.keys() && request.auth.token.email_verified;
    }
    // samples are public to read
    match /samples/{allPaths=**} {
      allow read;
    }
  }
}
The rules were modified from this thread.
Firebase rules: dynamically give access to a specific user
To share a file, the app writes the recipient's email address to the file as a custom metadata key. The Android code for updating the metadata is listed below.
private void updateMetadataForSharing(String fileLocation, String documentId, String recipientEmail) {
    // write file metadata
    StorageMetadata metadata = new StorageMetadata.Builder()
            .setCustomMetadata(recipientEmail, "")
            .build();

    // Update metadata properties
    StorageReference storageRef = storage.getReference();
    StorageReference fileRef = storageRef.child(fileLocation);
    fileRef.updateMetadata(metadata)
            .addOnSuccessListener(new OnSuccessListener<StorageMetadata>() {
                @Override
                public void onSuccess(StorageMetadata storageMetadata) {
                    // Updated metadata is in storageMetadata
                    Toast.makeText(ReviewActivity.this, "The file has been shared to " + recipientEmail + ", please paste the sharable link from clipboard.", Toast.LENGTH_LONG).show();
                    String sharableLink = "https://web.app.com/?u=" + documentId;
                    ClipboardManager clipboard = (ClipboardManager) getSystemService(Context.CLIPBOARD_SERVICE);
                    ClipData clip = ClipData.newPlainText("sharable link", sharableLink);
                    clipboard.setPrimaryClip(clip);
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception exception) {
                    // Uh-oh, an error occurred!
                    Toast.makeText(ReviewActivity.this, "Error occurred attempting to share the file to " + recipientEmail, Toast.LENGTH_LONG).show();
                }
            });
}
But the file is not accessible after the metadata is updated. It was fine when no metadata was written to the file. The web app showed the errors in the screenshot:
(Screenshot: web app error message for failing to download the file.)
I assume it may be related to the file's access token. It has nothing to do with the rules, because it still does not work when I temporarily grant all permissions.
Please advise. Thanks.
I ran into the same problem today with an uploaded file not being accessible after the metadata was updated. It seems the file becomes inaccessible if a metadata key contains the @ character. For some reason the key cannot contain that character, but it is fine in the value.
I have a .NET Framework application where I try to read data from the AWS Parameter Store using AmazonSimpleSystemsManagementClient in my local environment. I also have credentials generated by the AWS CLI, located in the
Users/MyUser/.aws
folder. When I connect to the Parameter Store from CMD using these creds, it works fine. However, when the application uses AmazonSimpleSystemsManagementClient with the default constructor, it throws the exception "Unable to get IAM security credentials from EC2 Instance Metadata Service." When I tried to pass BasicAWSParameters to the client with hardcoded working keys, I got another exception: "The security token included in the request is invalid".
I also tried installing EC2Config and initializing the AWS SDK Store from the Visual Studio AWS Toolkit, but it didn't change anything.
I would like to avoid using environment variables or hardcoding the keys, since the keys are generated and valid for only 1 hour; regenerating and copying them somewhere every time is not convenient for me.
Please advise how to resolve the issue.
Some code:
_client = new AmazonSimpleSystemsManagementClient();

public string GetValue(string key)
{
    if (_client == null)
        return null;

    var request = new GetParameterRequest
    {
        Name = $"{_baseParameterPath}/{key}",
        WithDecryption = true,
    };

    try
    {
        var response = _client.GetParameterAsync(request).Result;
        return response.Parameter.Value;
    }
    catch (Exception exc)
    {
        return null;
    }
}
The credentials file looks like the following (I removed the key values so as not to expose them):
[default]
aws_access_key_id= KEY VALUE
aws_secret_access_key= KEY VALUE
aws_session_token= KEY VALUE
[MyProfile]
aws_access_key_id= KEY VALUE
aws_secret_access_key= KEY VALUE
aws_session_token= KEY VALUE
As long as you have your creds in .aws/credentials, you can create the service client and the creds will be located and used. There is no need to create a BasicAWSParameters object.
Creds in a file named credentials:
[default]
aws_access_key_id=Axxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key=/zxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This .NET code works.
using System;
using System.Threading.Tasks;
using Amazon.SimpleSystemsManagement;
using Amazon.SimpleSystemsManagement.Model;

namespace ConsoleApp1 {
    class Program {
        static async Task Main(string[] args) {
            var client = new AmazonSimpleSystemsManagementClient();
            var request = new GetParameterRequest()
            {
                Name = "RDSConnection"
            };
            var response = client.GetParameterAsync(request).GetAwaiter().GetResult();
            Console.WriteLine("Parameter value is " + response.Parameter.Value);
        }
    }
}
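Note that the default constructor only picks up the [default] profile. If the credentials you need live under [MyProfile], you can load that profile explicitly and hand its credentials to the client; because the CLI-generated keys include an aws_session_token, building credentials from only the access key and secret key is likely what produced "The security token included in the request is invalid". A minimal sketch, assuming the [MyProfile] section from the question's credentials file; the region is an assumption, so use whichever one your parameters live in:

using Amazon;
using Amazon.Runtime;
using Amazon.Runtime.CredentialManagement;
using Amazon.SimpleSystemsManagement;

// Look up "MyProfile" in ~/.aws/credentials (session token included) and build a client from it.
var chain = new CredentialProfileStoreChain();
AWSCredentials credentials;
if (chain.TryGetAWSCredentials("MyProfile", out credentials))
{
    var client = new AmazonSimpleSystemsManagementClient(credentials, RegionEndpoint.USEast1);
    // use client.GetParameterAsync(...) exactly as in the snippet above
}

For .NET Framework apps there should also be an appSettings key (AWSProfileName, if I recall correctly) that makes the default constructor use a named profile, but the explicit approach above keeps the choice visible in code.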
I am accessing my backend with an access token obtained from Firebase Auth in the following way:
log in via email & password
receive the current user object
obtain the token from the user object
store the token locally to allow further access to my backend (which uses Firebase Admin to validate the token)
This works as long as the access token has not expired.
It may also keep working if the application stays open, because a 403 due to an expired token can be caught (I can just reuse the current user object to obtain a new token). However, if the token expires while the app is closed, opening it again (with no user object anymore) forces the user to re-enter their credentials, doesn't it?
One way that came to my mind was using the custom tokens functionality: I could send the refresh token to the client after a login, which would then store it and use it to log in automatically instead of using the credentials.
But the word "custom" made me think that I am on the wrong track somehow. There surely must be an easy way to do this with the intended functions.
Can anyone help me out with this?
Greetings,
Codehai
Using this listener refreshes the token automatically (note that it won't work in the Unity editor). For my code to work, I somehow have to add TaskScheduler.FromCurrentSynchronizationContext() to all Firebase Tasks.
Firebase.Auth.FirebaseAuth auth;
bool fetchingToken = false;

void Start()
{
    auth = Firebase.Auth.FirebaseAuth.DefaultInstance;
    auth.IdTokenChanged += IdTokenChanged;
}

void IdTokenChanged(object sender, System.EventArgs eventArgs)
{
    Firebase.Auth.FirebaseAuth senderAuth = sender as Firebase.Auth.FirebaseAuth;
    if (senderAuth == auth && senderAuth.CurrentUser != null && !fetchingToken)
    {
        fetchingToken = true;
        senderAuth.CurrentUser.TokenAsync(true).ContinueWith(
            task =>
            {
                if (task.IsCanceled)
                {
                    Debug.Log("TokenAsync was canceled.");
                }
                else if (task.IsFaulted)
                {
                    foreach (var error in task.Exception.InnerExceptions)
                    {
                        Debug.Log(error.Message);
                    }
                }
                else
                {
                    Debug.Log("New Token: " + task.Result);
                    // save task.Result
                }
                fetchingToken = false;
            }, TaskScheduler.FromCurrentSynchronizationContext());
    }
}

private void OnDestroy()
{
    auth.IdTokenChanged -= IdTokenChanged;
}
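On the original concern about the app being closed: the Firebase SDK persists the signed-in user itself (including its refresh token), so on a fresh app start you normally do not need the user's credentials again. A minimal sketch of the startup check, under that assumption about the default persistence behaviour:

void CheckForExistingSession()
{
    var auth = Firebase.Auth.FirebaseAuth.DefaultInstance;
    if (auth.CurrentUser != null)
    {
        // A previous session was restored; the IdTokenChanged listener above will
        // fire and deliver a fresh ID token without asking for credentials again.
        Debug.Log("Restored session for user: " + auth.CurrentUser.UserId);
    }
    else
    {
        // No persisted session: show the email/password login UI.
    }
}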
I am building a Web API (using ASP.NET Web API) that connects via secure WebSockets to an endpoint our client exposed (wss://client-domain:4747/app/engineData). They gave me their certificates, all in .pem format (root.pem and client.pem), and a private key (client_key.pem).
In order to get this done I did the following:
1) Converted client.pem and client_key.pem into a single .pfx file (used this: Convert a CERT/PEM certificate to a PFX certificate)
2) Used the System.Net.WebSockets library and wrote the following code:
private void InitWebSockesClient()
{
    client = new ClientWebSocket();
    client.Options.SetRequestHeader(HEADER_KEY, HEADER_VALUE); // Some headers I need
    AddCertificatesSecurity();
}

private void AddCertificatesSecurity()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls
        | SecurityProtocolType.Tls11
        | SecurityProtocolType.Tls12;

    // I KNOW THIS SHOULDN'T BE USED ON PROD; had to use it to make it work locally.
    ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

    X509Certificate2 x509 = new X509Certificate2();
    // this is the pfx I converted from client.pem and client_key
    byte[] rawData = ReadFile(certificatesPath + @"\cert.pfx");
    x509.Import(rawData, "123456", X509KeyStorageFlags.UserKeySet);

    X509Certificate2Collection certificateCollection = new X509Certificate2Collection(x509);
    client.Options.ClientCertificates = certificateCollection;
}
And when I want to connect I call:
public async Task<bool> Connect()
{
    Uri uriToConnect = new Uri(URL);
    await client.ConnectAsync(uriToConnect, CancellationToken.None);
    return client.State == WebSocketState.Open;
}
This works fine locally. But whenever I deploy my Web API to Azure (App Service) and make an HTTP request to it, it throws:
System.Net.WebSockets.WebSocketException - Unable to connect to the remote server.
And the inner exception:
System.Net.WebException - The request was aborted: Could not create SSL/TLS secure channel.
I enabled WebSockets on the App Service instance.
If I delete the line that always returns true for the certificate validation, it doesn't work even locally, and the message says something like:
The remote certificate is invalid according to the validation procedure.
So I definitely got something wrong with the certificates. Those three .pem files are currently used in a similar node.js app, where they work fine and the WSS connection is established properly. I don't really know what each one is for, so I am kind of lost here.
These are the cipher suites of the domain I want to connect: https://i.stack.imgur.com/ZFbo3.png
Inspired by Tom's comment, I finally made it work by just adding the certificate to the Web App in Azure App Service, instead of trying to use it from the filesystem. First I uploaded the .pfx file in the SSL Certificates section in Azure. Then, in the App settings, I added a setting called WEBSITE_LOAD_CERTIFICATES, with the thumbprint of the certificate I wanted (the .pfx).
After that, I modified my code to work like this:
private void InitWebSockesClient()
{
    client = new ClientWebSocket();
    client.Options.SetRequestHeader(HEADER_KEY, HEADER_VALUE); // Some headers I need
    AddCertificateToWebSocketsClient();
}

private void AddCertificateToWebSocketsClient()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls11
        | SecurityProtocolType.Tls12;

    // this should really validate the cert
    ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

    // reading cert from store
    X509Store certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    certStore.Open(OpenFlags.ReadOnly);
    X509Certificate2Collection certCollection =
        certStore.Certificates.Find(X509FindType.FindByThumbprint,
                                    CERTIFICATES_THUMBPRINT,
                                    false);
    if (certCollection.Count > 0)
    {
        client.Options.ClientCertificates = certCollection;
    }
    else
    {
        // handle error
    }
    certStore.Close();
}
Where CERTIFICATES_THUMBPRINT is a string (the thumbprint of your certificate, the one you saw in Azure).
If you want to make it work locally, you just need to install the certificate on your computer, since otherwise it obviously won't find it in the store.
Reference for all this in the Azure docs: https://learn.microsoft.com/en-us/azure/app-service/app-service-web-ssl-cert-load.
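If you want to drop the delegate { return true; } callback (both here and in the question's code), one option is a callback that only accepts certificates passing normal validation and, failing that, pins the one server certificate you expect by thumbprint. A rough sketch; SERVER_CERT_THUMBPRINT is a placeholder, not something from the original post:

using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

// Placeholder: the thumbprint of the wss endpoint's certificate.
const string SERVER_CERT_THUMBPRINT = "PUT-SERVER-THUMBPRINT-HERE";

ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) =>
    {
        // Accept anything that passes normal chain/name validation...
        if (sslPolicyErrors == SslPolicyErrors.None)
            return true;

        // ...otherwise only accept the single server certificate we expect (pinning).
        var presented = new X509Certificate2(certificate);
        return presented.Thumbprint != null &&
               presented.Thumbprint.Equals(SERVER_CERT_THUMBPRINT, StringComparison.OrdinalIgnoreCase);
    };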
In my ASP.NET Web API, the login flow uses an action filter that generates a token for each user. The front end sends that token back to authenticate the user, and from the token the web server can get the current user's information. Everything works fine so far. However, I have a new requirement: users have a many-to-many relationship with accounts, meaning the same user can exist in more than one account with different roles; for example, in account one he is an admin and in account two he is a normal user. So I have to regenerate the token, which currently requires the user to log in again, and I do not want him to be redirected to the login page. What I thought of is storing the username and password in HTML5 local storage, but I think that is bad practice. Any ideas?
Here is how I generate the token.
public override void OnActionExecuting(HttpActionContext actionContext)
{
    if (!actionContext.Request.Headers
        .Any(header => header.Key == "AuthorizationHeader"))
    {
        if (this.IsAnonymousAllowed(actionContext) == false)
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized, "Unauthorized");
        }
    }
    else
    {
        string token = actionContext.Request.Headers
            .Where(header => header.Key == "AuthorizationHeader")
            .First().Value.First();

        if (this.IsAnonymousAllowed(actionContext) == true)
        {
            return;
        }

        string passPhrase = System.Configuration.ConfigurationSettings.AppSettings["PassPhrase"];
        string ticket_string = Crypto.Decrypt(token, passPhrase);
        TicketData ticket = JsonConvert.DeserializeObject<TicketData>(ticket_string);

        if (ticket == null || ticket.Expiration < DateTime.Now)
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized, "Unauthorized");
        }
        else
        {
            OurIdentity identity = (OurIdentity)ticket.TokenData.OurIdentity;
            System.Threading.Thread.CurrentPrincipal = new OurPrincipal
            {
                OurIdentity = identity,
            };
        }
    }
}
You are right, saving the username and password in local storage is bad. It is bad to save them anywhere on the client.
Usually a token is generated and put in a cookie. That token corresponds to a record on the server, in either a session log or a database.
I strongly suggest using existing methods for this, like the OAuth bearer tokens in this tutorial.
As far as I understand, if you are storing a hash (perhaps with a salt for extra protection), it is not necessarily bad to store the credentials. They would have to be stored somewhere at the end of the day anyway.
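On the original account-switching requirement: since the action filter already trusts a valid, unexpired token, the API can expose an endpoint that exchanges the current token for a new one scoped to the selected account, so the client never needs to store credentials at all. This is only a rough sketch reusing the question's OurIdentity/OurPrincipal/TicketData/Crypto types; Crypto.Encrypt, the TokenData initializer and BuildIdentityForAccount are assumptions, not code from the post:

// Inside an ApiController protected by the token filter above.
[HttpPost]
public IHttpActionResult SwitchAccount(int accountId)
{
    // The filter has already validated the token and set the principal.
    OurIdentity current = ((OurPrincipal)System.Threading.Thread.CurrentPrincipal).OurIdentity;

    // Hypothetical helper: rebuild the identity with the roles this user has in accountId.
    OurIdentity identityForAccount = BuildIdentityForAccount(current, accountId);

    TicketData ticket = new TicketData
    {
        TokenData = new TokenData { OurIdentity = identityForAccount }, // assumed shape
        Expiration = DateTime.Now.AddHours(1)
    };

    string passPhrase = System.Configuration.ConfigurationSettings.AppSettings["PassPhrase"];
    string newToken = Crypto.Encrypt(JsonConvert.SerializeObject(ticket), passPhrase); // assumed Encrypt counterpart of Decrypt

    // The front end replaces its stored token with this one; no redirect to the login page.
    return Ok(new { Token = newToken });
}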