I've come across a very strange issue with Firebase Storage. One of our users manages to create files whose names end with a slash:
He claims that he only uses the web console and doesn't do anything special. If I try to copy such a file using gsutil I (obviously) get the following error:
Copying images/20610/...
Skipping attempt to download to filename ending with slash
(images/20610/). This
typically happens when using gsutil to download from a subdirectory
created by the Cloud Console (https://cloud.google.com/console)
Extended attributes do not show anything unusual, except that this is indeed a file with a slash at the end of its filename:
Creation time: Mon, 27 Apr 2020 16:32:12 GMT
Update time: Mon, 27 Apr 2020 16:32:12 GMT
Storage class: STANDARD
Content-Length: 11
Content-Type: text/plain
Hash (crc32c): XkI+Dw==
Hash (md5): apnFdauH+MfR7R5S5+NJzg==
ETag: CL7wy46EiekCEAE=
Generation: 1588005132499006
Metageneration: 1
My question basically is: how is this possible, and what can I do to prevent it?
Thanks in advance!
You'll get an object with a trailing slash in the name if you create a folder using the Cloud Console.
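If the goal is just to clean such an object up, gsutil can still address it by its exact name even though it refuses to download it. A minimal sketch, assuming a placeholder bucket name:
gsutil ls -l 'gs://your-bucket/images/20610/'
gsutil rm 'gs://your-bucket/images/20610/'
The quotes keep the shell from touching the trailing slash; the ls shows the slash-terminated object as its own entry, and the rm deletes only that exact object, not anything else under the prefix.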
Is there a way to change the timezone in the response body using curl (CEST instead of GMT)?
E.g.:
curl -v http://ip-api.com/line?fields=timezone
Trying 10.247.129.103... connected
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Mon, 12 Oct 2020 10:38:06 GMT
Europe/Rome
The server TZ is:
cat /etc/sysconfig/clock
# The time zone of the system is defined by the contents of /etc/localtime.
# This file is only for evaluation by system-config-date, do not rely on its
# contents elsewhere.
ZONE="Europe/Rome"
Thanks in advance.
You seem to be asking about the Date in the HTTP response. It's a header, not part of the response body.
This header has nothing to do with cURL. It's the standard HTTP Date header, which every HTTP server must include in its response. It's defined in RFC 7231 Section 7.1.1.2, and it must always be in terms of GMT.
This particular website you are calling is using a geolocation technique to resolve an approximate IANA time zone identifier (Europe/Rome in your example) from the caller's IP address. You can take this identifier and use it in your own logic to resolve the current time in that time zone. For example, after your cURL call, assuming you are using a Linux distribution that has a tzdata package installed (which most do), you can set the TZ environment variable and use the date command like this:
TZ=Europe/Rome date
Example output:
Mon Oct 12 18:50:04 CEST 2020
There are plenty of other ways you can use the time zone in different programming languages and environments, so choose an approach that works for your use case.
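One convenient combination, sticking with the same endpoint as above, is to feed the API response straight into TZ:
TZ=$(curl -s 'http://ip-api.com/line?fields=timezone') date
This prints the current local time for whatever zone the geolocation service reports for your IP.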
Novice Artifactory user here, so please bear with me.
I'm trying to use curl on Linux to deploy a file to a repo and failing. I'm running this on the same server that's serving Artifactory.
% curl -u my_idsid -X PUT "http://localhost:8081/artifactory/test_repo/test_artifact_linux_01" -T test_artifact_linux_01
Enter host password for user 'my_idsid': <I entered the password for my_idsid here>
HTTP/1.1 100 Continue
HTTP/1.1 403 Forbidden
Server: Artifactory/5.4.6
X-Artifactory-Id: 3444ab9991d26041:29864758:15e2f4940fa:-8000
Content-Type: application/json;charset=ISO-8859-1
Content-Length: 65
Date: Tue, 05 Sep 2017 21:00:20 GMT
Connection: close
{
"errors" : [ {
"status" : 403,
"message" : ""
} ]
}
I was able to put (deploy) a file (an artifact) in this same repo from windows with the drag-drop method for that same user. So I'm thinking that this confirms several things...
1) the repo exists
2) the permissions to allow that user write access are ok
3) the server is up and running ok
And the pw is OK, because when I intentionally enter something wrong, it comes up with a 401 instead.
I looked in artifactory.log and it has this for my attempt...
2017-09-05 17:15:00,332 [http-nio-8081-exec-7] [WARN ]
(o.a.w.s.RequestUtils:155) - Request /test_repo/test_artifact_linux_01
should be a repo request and does not match any repo key
Does not match a repo key?
I'm using the example here...
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-Example-DeployinganArtifact
and plugging in the repo name right after "artifactory" (replacing "my_repository") in the path.
I have a strong suspicion that I'm goofing up the path name, but I don't know what's wrong. How does one determine the proper path to use?
Thanks in advance for any help.
UPDATE:
My bad. The repo was actually named "test-repo", not "test_repo".
Worked fine once I made that change.
Sorry for the false alarm.
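For reference, the working command only swaps the repo name (credentials and file name as in the question), and the available repository keys can be listed up front to avoid guessing, since listing repositories is part of the standard REST API:
curl -u my_idsid "http://localhost:8081/artifactory/api/repositories"
curl -u my_idsid -X PUT "http://localhost:8081/artifactory/test-repo/test_artifact_linux_01" -T test_artifact_linux_01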
This is the code that produces the error, along with its output.
I'm positive my access keys and tokens are correct. I triple-checked them.
I'm guessing my query may be wrong somehow? My guess was that defaulting since_id=0 for my first run was the problem, but removing it produces the same error.
mentions = GET(final_url, sig)
mentions
Response [https://api.twitter.com/1.1/search/tweets.json?q=#lolhauntzer&until=2016-01-20&since_id=0&result_type=recent&lang=en&count=100]
Date: 2016-01-19 05:09
Status: 401
Content-Type: application/json; charset=utf-8
Size: 64 B
Whoops. Brain lapse. I need to replace the "#" in the URL with "%40". The "#" works on my other workstation though, which is kind of baffling right now.
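More generally, letting the HTTP client do the percent-encoding avoids this class of problem. A rough sketch with curl rather than R, with a placeholder bearer token, just to show the idea:
curl -G 'https://api.twitter.com/1.1/search/tweets.json' -H 'Authorization: Bearer YOUR_TOKEN' --data-urlencode 'q=#lolhauntzer' --data-urlencode 'result_type=recent' --data-urlencode 'count=100'
With -G, curl appends each --data-urlencode value to the query string already encoded, so special characters in the search term never reach the server raw.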
I have an issue with S3 object copying in ASP.NET, using the AWS S3 library.
I'm attempting to run the following code:
CopyObjectRequest copyObjectRequest = new CopyObjectRequest()
{
    SourceBucket = Bucket,
    SourceKey = s3FileKey,
    DestinationBucket = Bucket,
    DestinationKey = archiveS3FileKey
};
CopyObjectResponse copyObjectResponse = s3Client.CopyObject(copyObjectRequest);
The request will just hang on the s3Client.CopyObject(copyObjectRequest); line.
Using Fiddler/Wireshark I extracted the HTTP headers seen below (real bucket and auth keys removed):
PUT https://bucket-name.s3-ap-southeast-2.amazonaws.com/Archive/test.zip HTTP/1.1
x-amz-copy-source: /bucket-name/App/test.zip
x-amz-metadata-directive: COPY
User-Agent: MyDotNetConsoleApp
Content-Type: application/x-amz-json-1.0
x-amz-date: Mon, 30 Jun 2014 05:34:53 GMT
Authorization: AWS <authorization code>
Host: bucket-name.s3-ap-southeast-2.amazonaws.com
Transfer-Encoding: chunked
When I alter the request by removing the Transfer-Encoding: chunked header, the request runs fine and the object is copied to the new key. See the alteration to the AWS S3 library below.
To get around this I added the following to the S3 client library as a temporary measure, though obviously I would like to understand properly whether I'm failing to set something in the client request.
(line 281 in AmazonWebClientService.cs)
if (state.WebRequest.Headers.Get("x-amz-copy-source") != null)
{
    // Copy requests have no body, so don't send them chunked.
    state.WebRequest.SendChunked = false;
}
// This was the existing line; the block above was added before it.
httpResponse = state.WebRequest.GetResponse() as HttpWebResponse;
I can delete objects with the same s3Client object leading up to this CopyObject call, and full permissions have been granted to this access key on this bucket.
Does anyone have any idea what would cause this, or how to get around it without altering the standard library supplied by AWS?
I cannot seem to find an answer for this anywhere. Superuser root has a crontab with a couple of jobs that send the resultant output to root's mailbox addressed from my non-superuser account foo.
It is my understanding that the owner of the cron job is supposed to be the sender of the resultant cron job output. Account foo does not have a crontab, and in fact I have even tried explicitly removing foo's crontab, but root still receives root's cron job output from user foo.
When I edit root's crontab, I log into the system as foo, and then su - to root. Does this have anything to do with it?
When I ls -alF /var/spool/cron/crontabs there is no file for user foo.
Does anyone know why my non-superuser account foo, that does not have a crontab file, seems to be sending mail to superuser root?
It also seems that some of root's cron jobs execute both as root and as foo, and both send email to root's mailbox.
Example:
From foo Sat Oct 30 19:01:01 2010
Received: by XXXXXX (8.8.8/1.1.22.3/15Jan03-1152AM)
id TAA0000027883; Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
Date: Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
From: foo
Message-Id: <201010302301.TAA0000027883#XXXXXX>
redacted
Cron: The previous message is the standard output
and standard error of one of your cron commands.
From root Sat Oct 30 19:01:01 2010
Received: by XXXXXX (8.8.8/1.1.22.3/15Jan03-1152AM)
id TAA0000025999; Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
Date: Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
From: system privileged account
Message-Id: <201010302301.TAA0000025999#XXXXXX>
redacted
Cron: The previous message is the standard output
and standard error of one of your cron commands.
You should show us the actual crontab entry. Some crons allow you to specify a user, not just a command. If that user doesn't have a mailbox, maybe by default cron sends the output to root's inbox with the sender still set to 'foo' (which is easily done by having From: foo in the mail header).
vixie-cron supports a system-wide crontab file in /etc/crontab that allows per-user cron jobs to be specified. The syntax is similar to the usual cron syntax except a username is specified in the 6th column, and the command to be run follows that. For example:
0 22 * * 1-5 foo mail -s "Mail to root from foo" root
So check /etc/crontab for any entries with foo in the 6th column.
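A quick way to do that check, along with the per-package drop-in files that use the same user column (paths may vary by distribution):
grep -n foo /etc/crontab /etc/cron.d/* 2>/dev/null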