Trying to upload an image to Azure but getting 404 - asp.net

I'm trying to upload an image from the server to Azure:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GLOBAL_AZURE.AZURE_STORAGE_CONNECTION_STRING);
CloudBlobContainer container = storageAccount.CreateCloudBlobClient().GetContainerReference("my-container");
CloudBlockBlob blockBlob = container.GetBlockBlobReference("my-img.jpg");

using (FileStream img = File.Open(@"d:\...\my-img.jpg", FileMode.Open))
{
    blockBlob.UploadFromStream(img);
}
Everything works fine until UploadFromStream throws:
"The remote server returned an error: (404) Not Found."
my-container was created in the Portal and its access level was set to "Public Blob".
Any ideas what might be the problem?

This is caused when the container does not exist.
See this SO question as well: getting 404 error when connecting to azure storage account
You can ensure the container exists by calling container.CreateIfNotExists() prior to uploading the blob.
Personally, I run this as part of some application start-up code rather than on every blob upload.
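For example, a minimal sketch of that start-up step, reusing the connection string and container name from the question (the class and method names here are just placeholders):
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobStartup
{
    // Call once at application start (e.g. Application_Start in Global.asax)
    // so the container exists before any upload is attempted.
    public static void EnsureContainers()
    {
        CloudStorageAccount account =
            CloudStorageAccount.Parse(GLOBAL_AZURE.AZURE_STORAGE_CONNECTION_STRING);
        CloudBlobContainer container = account
            .CreateCloudBlobClient()
            .GetContainerReference("my-container");

        // Creates the container only if it is missing; a no-op otherwise.
        container.CreateIfNotExists();
    }
}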
This article has more background:
https://azure.microsoft.com/en-gb/documentation/articles/storage-monitoring-diagnosing-troubleshooting/#the-client-is-receiving-404-messages
In the scenario where a client is attempting to insert an object, it may not be immediately obvious why this results in an HTTP 404 (Not found) response given that the client is creating a new object. However, if the client is creating a blob it must be able to find the blob container, if the client is creating a message it must be able to find a queue, and if the client is adding a row it must be able to find the table.

Related

How to resolve 'error - InvalidDatasourceError: Datasource URL should use prisma'

So I am using Prisma as an ORM on my project to communicate with the database that I set up with AWS. Not happy with the AWS service, I am now switching my database to railway.app, which is working out well for me. However, I had set up a Prisma Data Proxy on my app with the AWS connection string; now that I no longer need it I removed it, but I'm getting an error:
error - InvalidDatasourceError: Datasource URL should use Prisma:// protocol.
If you are not using the Data Proxy, remove the data proxy from the preview features in your
schema and ensure that PRISMA_CLIENT_ENGINE_TYPE environment variable is not set to data proxy.
Since getting the error I have removed previewFeatures = ["dataProxy"] from the schema.prisma file to make it look like this (back to what it was before configuring the Data Proxy):
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
But the error still persists. How do I fix this?
Running prisma generate after removing the Data Proxy settings regenerates the Prisma Client and fixes this issue.

Use Realm GraphQL Client with a global/shared realm in Realm Cloud

I can successfully use the Realm GraphQL Client with a realm path like myInstance.us1.cloud.realm.io/~/realmName but when trying to use a global path, i.e., myinstance.us1.cloud.realm.io/realmName, I always get a 502 response from the server.
Any thoughts?
TLDR;
I have been fighting with getting data from a global/shared realm, i.e., no /~/ in the realm path with no luck. I always get a 502 Bad Gateway in response to executing a query. If I add the /~/ to the realm path, a connection is established and a new and empty user-specific realm is created (as expected) but then queries fail because the realm is empty (also expected).
Does the GraphQL Service provided by Realm Cloud support connecting to global/shared realms? I’ve skimmed over the source for both the server and client and did not see any specific reason why global/shared would not be supported.
I also tried passing isQueryBasedSync to the GraphQLConfig, which results in a connection and a successfully executed query, but the query responses are always empty.
Any advice is greatly appreciated.
I got past the 502 Bad Gateway error using the undocumented API(s) shown below (I had to find them by reading the current code in the realm-graphql repo):
// Authenticate against the Realm Object Server with username/password credentials.
const credentials = Credentials.usernamePassword(<username>, <password>);
const user = await User.authenticate(credentials, <server>);

// Undocumented signature found in the realm-graphql source; the final false
// argument presumably corresponds to isQueryBasedSync.
const config = await GraphQLConfig.create(user, <realm_name>, undefined, false);
const client = config.createApolloClient();
However, I now frequently receive the following error during GraphQLConfig.create execution:
network timeout at: https://.cloud.realm.io/auth
Additionally, I posted this question on the Realm Forums (you may want to follow it there) and received the following response:
Getting a 502 in the GraphQL service usually means you were trying to open a very large Realm that runs into some resourcing limits.
I am still waiting for more information from the Realm team and will update this answer accordingly.

Exception while migrating Alfresco content: Too many open files

I am getting this error while migrating content from one Alfresco repository to another, and it shows up in the live production server logs. The server also goes down while the migration is running.
Can anyone help me resolve this issue, or suggest a way to avoid it?
Any help or comments will be appreciated. Thanks in advance.
I have written the code snippet below:
ContentStream contentStream = new ContentStreamImpl(
        "content." + FilenameUtils.getExtension(fileName),
        BigInteger.valueOf(fileName.length()),
        new MimetypesFileTypeMap().getContentType(newfile),
        doc.getContentStream().getStream());
I have two repositories. Using the code above, I read the content stream from the source and create a new file with that stream in the target repository. But I haven't found any way to close the content stream.
Please see the error log below for more details.
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
If you are using the DotCMIS method GetContentStream on the client side, make sure you always close the stream - even if you are not reading it. Otherwise, the socket to the server stays open. Depending on your application the client and/or the server can run out of sockets.
Closing the stream works like this:
IContentStream contentStream = document.GetContentStream();
Stream stream = contentStream.Stream;

// ... do something with the stream ...

stream.Close();
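A using block gives the same guarantee even when an exception is thrown mid-read. A minimal sketch, assuming document is an IDocument you have already fetched and a hypothetical target path:
// The using blocks dispose (and therefore close) both streams even if an
// exception is thrown while copying, so the socket to the server is released.
IContentStream contentStream = document.GetContentStream();
using (Stream stream = contentStream.Stream)
using (FileStream target = File.Create(@"d:\export\my-file.bin"))
{
    stream.CopyTo(target);
}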

403 error in production from WindowsAzure.Storage

I have a WebForms app that uses the WindowsAzure.Storage API v3. It works fine in development and in one production environment, but I'm rolling out a new instance and any code that calls out Azure Blob Storage gives me a 403 error.
I've been fiddling with this for a while, and it fails on any call out to Blob Storage, so rather than show my code I'll show my stack trace:
[WebException: The remote server returned an error: (403) Forbidden.]
System.Net.HttpWebRequest.GetResponse() +8525404
Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync(RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext) +1541
[StorageException: The remote server returned an error: (403) Forbidden.]
Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync(RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext) +2996
Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer.CreateIfNotExists(BlobContainerPublicAccessType accessType, BlobRequestOptions requestOptions, OperationContext operationContext) +177
ObsidianData.Azure.Storage.GetContainer(CloudBlobClient client, Containers targetContainer) in D:\Dev\nSource\Obsidian\Source\ObsidianData\Azure\Storage.vb:84
ObsidianWeb.Leads.HandleListenLink(String fileName, HyperLink link) in D:\Dev\nSource\Obsidian\Source\ObsidianWeb\Bdc\Leads.aspx.vb:188
ObsidianWeb.Leads.LoadEntity_ContactDetails(BoLead lead) in D:\Dev\nSource\Obsidian\Source\ObsidianWeb\Bdc\Leads.aspx.vb:147
ObsidianWeb.Leads.LoadEntity(BoLead Lead) in D:\Dev\nSource\Obsidian\Source\ObsidianWeb\Bdc\Leads.aspx.vb:62
EntityPages.EntityPage`1.LoadEntity() +91
EntityPages.EntityPage`1.Page_LoadComplete(Object sender, EventArgs e) +151
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4018
Here's what I've tried...
The AzureStorageConnectionString that fails in this environment definitely works in production
Other connection strings (from the other production environment, which works) also get a 403 here
There seemed to be an issue with timestamps in some old versions of the REST api (which I am not directly using...) so I made certain the times are correct, even tried switching the server to UTC time.
Tried toggling the connection string between http/https.
Upgraded to the latest version of the API (v3.1)
Tried fiddling with the code to ensure that every call out to Azure Storage gets 403. It does.
In desperation, I installed Azure PowerShell on the server just to verify that some type of communication with Azure works. That worked fine.
I also browsed to the Azure management portal, and that works fine.
Any ideas? This should just be using port 80 or 443, right? So there should be no way this is some kind of network issue. Let me know if that's wrong.
The working production machine is an Azure VM (Server 2008 R2 with IIS 7.5)
There are also some differences with the server:
This new machine is physical hardware (Server 2012 and IIS 8)
This IS using a different storage account inside my Azure subscription; however, I've tried a total of 3 connection strings and none of them work here.
UPDATE: someone asked to see the code. Okay, I wrote a class called Azure.Storage, which just abstracts my cloud storage code. We are failing on a call to Storage.Exists, so here's the part of that class that feels relevant:
Public Shared Function Exists(container As Containers, blobName As String) As Boolean
    Dim Dir As CloudBlobContainer = GetContainer(container)
    Dim Blob As CloudBlockBlob = Dir.GetBlockBlobReference(blobName.ToLower())
    Return Blob.Exists()
End Function

Private Shared Function GetContainer(client As CloudBlobClient, targetContainer As Containers)
    Dim Container As CloudBlobContainer = client.GetContainerReference(targetContainer.ToString.ToLower())
    Container.CreateIfNotExists()
    Container.SetPermissions(New BlobContainerPermissions() With {.PublicAccess = BlobContainerPublicAccessType.Blob})
    Return Container
End Function

Private Shared Function GetCloudBlobClient() As CloudBlobClient
    Dim Account As CloudStorageAccount = CloudStorageAccount.Parse(Settings.Cloud.AzureStorageConnectionString())
    Return Account.CreateCloudBlobClient()
End Function
...Containers is just an enum of container names (there are several):
Public Enum Containers
    CallerWavs
    CampaignImports
    Delve
    Exports
    CampaignImages
    Logos
    ReportLogos
    WebLinkImages
End Enum
...Yes, they have upper-case characters, which causes problems. Everything is forced to lowercase before it goes out.
Also, I did verify that the correct AzureStorageConnectionString is coming out of my settings class. Again, I tried a few that work elsewhere. And this one works elsewhere too!
Please check the clock on the servers in question. Apart from an incorrect account key, you can also get a 403 error if the time on the server is not in sync with the time on the storage servers (a deviation of roughly +/- 15 minutes is allowed).
I also ran into this error. My problem was that I had turned ON dynamic IP security restrictions in my web.config, and in some cases (e.g. pages with lots of images) the number of files being downloaded exceeded the maximum thresholds I had defined there.
In my case the access key was not the same as the one in the connection string used by the source code.
So recheck it under Azure -> [Storage Account Name] -> Access Keys -> key1 -> Key & Connection string.
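Whichever of these turns out to be the cause, the storage client reports more detail than the bare 403. A minimal C# sketch, assuming container is one of the CloudBlobContainer references from the code above, that surfaces the service's error code (e.g. AuthenticationFailed for a bad key or clock skew):
try
{
    container.CreateIfNotExists();
}
catch (StorageException ex)
{
    // RequestInformation carries the raw HTTP status plus the storage
    // service's own error code and message for the failed request.
    RequestResult info = ex.RequestInformation;
    Console.WriteLine(info.HttpStatusCode);                          // e.g. 403
    if (info.ExtendedErrorInformation != null)
    {
        Console.WriteLine(info.ExtendedErrorInformation.ErrorCode);  // e.g. AuthenticationFailed
        Console.WriteLine(info.ExtendedErrorInformation.ErrorMessage);
    }
}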

IIS7 website accessed externally downloads files to server instead of local machine

I've a site set up in IIS. It allows users to download files from a remote cloud to their own local desktop. HOWEVER, the context seems to be mixed up, because when I access the website externally via the IP and execute the download, it saves the file to the server hosting the site, and not locally. What's going on??
My relevant lines of code:
using (var sw2 = new FileStream(filePath, FileMode.Create))
{
    try
    {
        var request = new RestRequest("drives/{chunk}");
        RestResponse resp2 = client.Execute(request);
        sw2.Write(resp2.RawBytes, 0, resp2.RawBytes.Length);
    }
}
Your code is writing a file to the local filesystem of the server. If you want to send the file to the client, you need to do something like
Response.BinaryWrite(resp2.RawBytes);
The Response object is what you use to send data back to the client who made the request to your page.
I imagine that code snippet you posted is running in some sort of code-behind somewhere. That runs on the server; it's not going to run on the client. You will need to write those bytes to the Response object, specify the content type, etc., and allow the user to save the file himself.
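For example, a minimal sketch of what that code-behind could look like (the content type and download file name are hypothetical; the RestSharp calls mirror the question's snippet):
// Runs on the server, but hands the bytes to the browser, which then
// shows its own Save dialog on the user's machine.
var request = new RestRequest("drives/{chunk}");
var resp2 = client.Execute(request);

Response.Clear();
Response.ContentType = "application/octet-stream";      // or the file's real MIME type
Response.AddHeader("Content-Disposition", "attachment; filename=\"download.bin\"");
Response.BinaryWrite(resp2.RawBytes);
Response.End();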
