DynamoDB local http://localhost:8000/shell - amazon-dynamodb

I'm having trouble opening the AWS DynamoDB Local shell (UI). Has anyone tried this and gotten it to work?
Steps taken:
Download latest - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.DownloadingAndRunning.html
Start DynamoDB Local - no errors
Run aws dynamodb list-tables --endpoint-url http://localhost:8000 - no errors (shows the table)
Error:
When trying to access http://localhost:8000/shell I am getting HTTP 400: Request must contain either a valid (registered) AWS access key ID or X.509 certificate.
Ref for shell (UI) https://aws.amazon.com/blogs/aws/sweet-treats-for-dynamodb-users/
Note: I have the AWS CLI set up with named profiles. I even tried the HTTP request in the browser after exporting AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION, and I still get the HTTP 400 error above.

This appears to be a bug in newer versions of DynamoDB Local. I couldn't find any documentation about it being deliberate, so please try reporting it to Amazon...
I just checked version 1.13.5 from 2020-10-13, and "/shell" works as expected and documented. But on version 1.18.0 from 2022-01-10, it doesn't - and reports the same error you listed:
HTTP/1.1 400 Bad Request
Date: Thu, 13 Jan 2022 08:06:18 GMT
Content-Type: application/x-amz-json-1.0
x-amzn-RequestId: 4f040110-4464-48dc-99c1-9b843c25db5f
Content-Length: 173
Server: Jetty(9.4.18.v20190429)
{"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken","Message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}
The DynamoDB Local release notes don't mention anything about the shell being deliberately disabled.
You are not the first person to notice this problem - see also this question from two weeks ago:
Dynamodb local web shell does not load
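As an aside, the 400 body is DynamoDB's standard JSON error envelope, so when scripting a health check against DynamoDB Local you can pull the error code out of the "__type" field. A sketch using only the exact body shown above:

```python
import json

# The 400 body returned by newer DynamoDB Local builds (copied from above).
body = (
    '{"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",'
    '"Message":"Request must contain either a valid (registered) AWS access key ID '
    'or X.509 certificate."}'
)

error = json.loads(body)
# The short error code is the part after the '#' in "__type".
error_code = error["__type"].split("#", 1)[1]
print(error_code)  # MissingAuthenticationToken
```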

CORS with Delphi MVC Framework

I'm testing TMS WEB Core 2 and DMVC 3.2.2 (latest) on Delphi 11.2, on my local test machine.
I've created a simple DMVC server with all defaults set up through the wizard, nothing fancy, except that I added the CORS option.
I've created a TMS Web Core project with all defaults as well, plus WebHttpRequest and WebMemo components.
I ran the DMVC server and can see the result just fine in the browser.
Then I ran the TMS Web Core project to send a request to the server using WebHttpRequest, like this:
WebHttpRequest1.URL := 'http://localhost:8080/api/test';
WebHttpRequest1.Execute(
  procedure(AResponse: string; AReq: TJSXMLHttpRequest)
  begin
    WebMemo1.Lines.Add(AResponse);
  end
);
However I got this error:
ERROR
HTTP request error #http://localhost:8080/api/test | fMessage::HTTP request error #http://localhost:8080/api/test fHelpContext::0
at http://localhost:8000/Project1/Project1.js [263:50]
and the browser developer console shows:
Access to XMLHttpRequest at 'localhost:8080/api/test' from origin 'localhost:8000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value 'localhost:8080' that is not equal to the supplied origin.
I want to send a request from the client to the server and get the response in the WebMemo.
I've checked online and found that it's a back-end problem, and some say it's related to CORS. So how can I enable CORS on the server side using DMVC?
Your configuration prevents the client from connecting.
The browser compares the Access-Control-Allow-Origin header value against the page's origin exactly: scheme, host, and port all count. Your server sends 'localhost:8080', but the page is served from 'http://localhost:8000'. To fix this, change the CORS header on the server to the client's origin:
Access-Control-Allow-Origin: http://localhost:8000
(or use * if you don't need credentialed requests).
Reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
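The browser's check is essentially an exact string comparison, which a few lines of Python can mimic (a sketch; origin_allowed is a made-up helper name, not part of any framework):

```python
def origin_allowed(request_origin: str, allow_origin_header: str) -> bool:
    """Mimic the browser's CORS check: the Access-Control-Allow-Origin
    value must be '*' or byte-for-byte equal to the request's origin
    (scheme, host, and port all included)."""
    if allow_origin_header == "*":
        return True
    return request_origin == allow_origin_header

# The failing combination from the question: header omits the scheme
# and names the wrong port.
print(origin_allowed("http://localhost:8000", "localhost:8080"))        # False
# The fixed header value, matching the client's origin exactly:
print(origin_allowed("http://localhost:8000", "http://localhost:8000"))  # True
```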

requests to issue auth token from Microsoft Cognitive API return 500 with stack trace

I am posting this as instructed by #AzureSupport on Twitter.
All calls to the Cognitive auth API are returning 500 with a stack trace and this happens for all my subscription keys. This was working correctly and suddenly stopped.
This happens from my application, from curl, and even from the Microsoft test form here.
curl:
curl --header 'Ocp-Apim-Subscription-Key: <my-key>' --data "" 'https://api.cognitive.microsoft.com/sts/v1.0/issueToken'
The pertinent error in the stack trace appears to be:
[AdalServiceException: AADSTS70002: Error validating credentials. AADSTS50012: Client assertion contains an invalid signature. [Reason - The key used is expired., ... redacted ...]
This happens with keys that have been operating correctly for months. It also happens with newly generated keys for testing.

How do I make sure my users are downloading the new version of my S3 File?

This is inside bash file:
s3cmd --add-header='Content-Encoding':'gzip' put /home/media/main.js s3://myproject/media/main.js
This is what I do to upload my compressed Backbone file to Amazon S3.
I run this command every time I make changes to my JavaScript files.
However, when I refresh the page in Chrome, Chrome still uses the cached version.
Request headers:
Accept:*/*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:en-US,en;q=0.8,es;q=0.6
AlexaToolbar-ALX_NS_PH:AlexaToolbar/alxg-3.3
Cache-Control:max-age=0
Connection:keep-alive
Host:myproject.s3.amazonaws.com
If-Modified-Since:Thu, 04 Dec 2014 09:21:46 GMT
If-None-Match:"5ecfa32f291330156189f17b8945a6e3"
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36
Response headers:
Accept-Ranges:bytes
Content-Encoding:gzip
Content-Length:70975
Content-Type:application/javascript
Date:Thu, 04 Dec 2014 09:50:06 GMT
ETag:"85041deb28328883dd88ff761b10ece4"
Last-Modified:Thu, 04 Dec 2014 09:50:01 GMT
Server:AmazonS3
x-amz-id-2:4fGKhKO8ZQowKIIFIMXgUo7OYEusZzSX4gXgp5cPzDyaUGcwY0h7BTAW4Xi4Gci0Pu2KXQ8=
x-amz-request-id:5374BDB48F85796
Notice that the ETag is different. I made changes to the file, but when I refresh the page, this is what I get: Chrome is still using my old file.
It looks like your script has been aggressively cached, either by Chrome itself or some other interim server.
If it's a js file called from an HTML page (which it sounds like it is), one technique I've seen is having the page add a parameter to the file:
<script src="/media/main.js?v=123"></script>
or
<script src="/media/main.js?v=2015-01-03_01"></script>
... which you change whenever the JS is updated (but will be ignored by the server). Neither the browser nor any interim caching servers will recognise it as the same and will therefore not attempt to use the cached version - even though on your S3 server it is still the same filename.
Whenever you do a release you can update this number/date/whatever, ideally automatically if the templating engine has access to the application's release number or id.
It's not the most elegant solution but it is useful to have around if ever you find you have used an optimistically long cache duration.
Obviously, this only works if you have uploaded the new file correctly to S3 and S3 is genuinely sending out the new version of the file. Try using a command-line utility like curl or wget on the url of the javascript to check this is the case if you have any doubts about this.
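The version parameter described above can be generated automatically at release time. A minimal sketch, assuming a Python templating layer; busted_url and content_version are hypothetical helper names, and hashing the file contents makes the tag change exactly when the file does:

```python
import hashlib

def content_version(data: bytes) -> str:
    """Derive a short version tag from the file contents, so it changes
    exactly when the file changes."""
    return hashlib.md5(data).hexdigest()[:8]

def busted_url(path: str, version: str) -> str:
    """Append a cache-busting query parameter; the served file is
    unchanged, but browsers and caches treat each value as a new URL."""
    return f"{path}?v={version}"

js = b"console.log('hello');"
print(busted_url("/media/main.js", content_version(js)))
```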
The Invalidation Method
s3cmd -P --cf-invalidate put /home/media/main.js s3://myproject/media/main.js
--cf-invalidate     Invalidate the uploaded file in CloudFront.
-P, --acl-public    Store objects with ACL allowing read for anyone.
This will invalidate the cache for the file you specify. It's also possible to invalidate your entire site, however, the command above shows what I would imagine you'd want in this scenario.
Note: The first 1000 requests/month are free. After that it's approximately $0.005 per file, so if you do a large number of invalidation requests this might be a concern.
The Query String / Object Key Method
CloudFront includes the query string (on origin) from the given URL when caching the object. What this means is that even if you have the same exact object duplicated, but the query strings are different, then each one will be cached as a different object. In order for this to work properly you'll need to select Yes for Forward Query Strings in the CloudFront console or specify true for the value of the QueryString element in the DistributionConfig complex type when you're using the CloudFront API.
Example:
http://myproject/media/main.js?parameter1=a
Summary:
The most convenient method of ensuring the object being served is the current one would be invalidation, although if you don't mind managing the query string parameters, you should find that just as effective. Adjusting the headers won't be nearly as reliable as either method above, in my opinion; clients handle caching differently in so many ways that it's not easy to pin down where caching issues might arise.
You need the response from S3 to include the Cache-Control header. You can set this when uploading the file:
s3cmd --add-header="cache-control:max-age=0,no-cache" put file s3://your_bucket/
The lack of whitespace and uppercase in my example is due to some odd signature issue with s3cmd. Your mileage may vary.
After updating the file with that command, you should get the Cache-Control header in the S3 response.

Glassfish 3.1.2 with HTTP DIGEST authentication fails occasionally with 401

Has anyone used Glassfish 3.1.2 with HTTP DIGEST authentication in anger?
I got it to work fine, or so I thought... until I discovered that its behavior was erratic:
it works maybe 9 out of 10 times, but fails to authenticate the 10th time.
This is when I test it with wget as a client on the same machine, with the same credentials and the same Java EE application (as it happens, a REST web service, but I also have the problem with other applications).
I ran wget locally.
My Glassfish machine is only servicing those wget requests, it isn't doing much else!
I've no reason to believe wget is misbehaving occasionally. I calculated the request digest by hand (from the wget HTTP debug) on one of the occasions that it failed, just to be sure. It seemed fine.
When I run wget with debug, I can see it failing the first time without credentials, then
succeeding with credentials. However, about 1 time in 10 it fails the 2nd time
too (debug shown here):
[writing POST file request.xml ... done]
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 401 Unauthorized
X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition
3.1.2 Java/Sun Microsystems Inc./1.6)
Server: GlassFish Server Open Source Edition 3.1.2
WWW-Authenticate: Digest realm="jdbc-realm",qop="auth",nonce="1377101691098:d07adb4a1421a265f3aa36bd99df7f6ef8c7a6e7887eb7d876e6b5ce079d1126",
opaque="C26EED99B0A8C0BCA16900215CCD241F"
Content-Type: text/html
Content-Length: 1069
Date: Wed, 21 Aug 2013 16:14:50 GMT
---response end---
401 Unauthorized
Skipping 1069 bytes of body: [<!DOCTYPE html P...
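For reference, the response value the client must compute for the qop="auth" challenge above follows RFC 2617. A minimal sketch in Python; the worked example uses the RFC's own sample credentials, not values from this setup:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    # RFC 2617, qop="auth": response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # MD5(user:realm:password)
    ha2 = md5_hex(f"{method}:{uri}")                 # MD5(method:digest-uri)
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Worked example from RFC 2617 section 3.5:
resp = digest_response("Mufasa", "testrealm@host.com", "Circle Of Life",
                       "GET", "/dir/index.html",
                       "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                       "00000001", "0a4f113b")
print(resp)  # 6629fae49393a05397450978507c4ef1
```

Comparing this calculation against the Authorization header in the wget debug output is a quick way to rule the client out as the source of the 401s.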
I set debug for javax.enterprise.system.core.security.level=FINE
I didn't see any error messages... but I did notice that for a "good" wget call, "hasResourcePermission" was called 3 times, twice returning false and once returning true.
However, for the "bad" wget call, it is only called 2 times, returning false both times.
|FINE|glassfish3.1.2|javax.enterprise.system.core.security|_ThreadID=36;_ThreadName=Thread->2;
ClassName=com.sun.enterprise.security.web.integration.WebSecurityManager;
MethodName=hasResourcePermission;|[Web-Security] hasResource isGranted: false|#]
|FINE|glassfish3.1.2|javax.enterprise.system.core.security|_ThreadID=36;_ThreadName=Thread-
2;ClassName=com.sun.enterprise.security.web.integration.WebSecurityManager;
MethodName=hasResourcePermission;|[Web-Security] hasResource isGranted: false|#]
GOOD CASE ONLY
|FINE|glassfish3.1.2|javax.enterprise.system.core.security|_ThreadID=36;_ThreadName=Thread-
2;ClassName=com.sun.enterprise.security.web.integration.WebSecurityManager;
MethodName=hasResourcePermission;|[Web-Security] hasResource isGranted: true|#]
Any ideas, anyone? Is there more debug I could enable?
Thanks
******************GLASSFISH DIGEST INSTRUCTIONS********
Install a mysql database with yum.
Follow these instructions (with some changes; the blog is for FORM authentication, so stop at step 4):
http://jugojava.blogspot.ie/2011/02/jdbc-security-realm-with-glassfish-and.html
Create the mysql database "realm_db" with the tables in the above blog
Using the Glassfish console UI, I created a JDBC Connection Pool and JDBC Resource for mysql database.
In the Pool Additional Properties, add in your mysql database properties as shown in the blog
On the server-config, Security page, I set "Default Realm" to jdbc-realm
IMPORTANT: When creating the JDBC security realm, use JAAS context of "jdbcDigestRealm" and JNDI of "jdbc/realm_db".
I left these fields blank: Digest Algorithm, Encoding, Charset, Password, Encryption Algorithm, etc., and I put the passwords in the MySQL database in clear text.
By the way, I used an up-to-date version of wget for testing because I read somewhere that older versions don't have proper RFC2617 DIGEST support. The version is 1.14 from Aug 12.
You need a MySQL driver file in $GLASSFISH_HOME/domains/domain1/lib. The file is called mysql-connector-java-3.1.13-bin.jar.

Tomcat, HTTP, OPTIONS

Note: I am new to Tomcat...
I am getting this message in the Tomcat localhost_access_log:
127.0.0.1 - - [09/Oct/2009:09:37:30 -0700] "OPTIONS /stl/foo HTTP/1.1" 200 -
Can anyone explain to me where the OPTIONS comes from? I am using a 3rd party library (DirectJngine) but in perusing the source I can't see any reference to this being set. The docs imply that it will always use GET or POST. Is OPTIONS some kind of default inside Tomcat?
The same log file shows a more normal looking GET when I do the same thing from a browser:
127.0.0.1 - - [09/Oct/2009:09:07:24 -0700] "GET /stl/foo HTTP/1.1" 500 1805
The OPTIONS method is a request from the client to the server asking about available transfer options, but without actually requesting the resource.
From the spec at http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
9.2 OPTIONS
The OPTIONS method represents a
request for information about the
communication options available on the
request/response chain identified by
the Request-URI. This method allows
the client to determine the options
and/or requirements associated with a
resource, or the capabilities of a
server, without implying a resource
action or initiating a resource
retrieval.
It would appear your 3rd-party library is issuing an OPTIONS request prior to fetching the resource.
That's the request coming from the client.
GET and POST aren't the only HTTP methods. You might also see:
OPTIONS
HEAD
PUT
DELETE
TRACE
CONNECT
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
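An OPTIONS round trip is easy to observe with Python's standard library alone. This sketch starts a throwaway local server that answers OPTIONS with an Allow header, roughly the way a servlet container advertises supported methods:

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # Advertise the methods this resource supports; servers commonly
    # answer OPTIONS like this without touching the resource itself.
    def do_OPTIONS(self):
        self.send_response(200)
        self.send_header("Allow", "GET, POST, OPTIONS")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "/stl/foo")
resp = conn.getresponse()
print(resp.status, resp.getheader("Allow"))  # 200 GET, POST, OPTIONS
server.shutdown()
```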
