AWS Deploy Serverless limit? - aws-serverless

I have a serverless project that has quite a few API endpoints, and when I try to deploy them all at once I get this error:
Error: The CloudFormation template is invalid: Template format error: Number of resources, 293, is greater than the maximum allowed, 200
at C:...\AppData\Roaming\npm\node_modules\serverless\lib\plugins\aws\deploy\lib\validateTemplate.js:20:13
My serverless.yaml functions definition looks like this:
functions:
  # Auth: Sign-in
  signIn:
    handler: src/collections/auth/auth.signIn
    events:
      - http:
          path: auth/signIn
          method: post
          cors: true
  # Admin-User: Find Permission By Role
  findPermissionByRole:
    handler: src/collections/permissions/permissions.findPermissionByRole
    events:
      - http:
          path: permissions/findPermissionByRole
          method: get
          cors: true
  # Lookup: FindAll
  lookup:
    handler: src/collections/lookup/lookup.find
    events:
      - http:
          path: lookup/find
          method: post
          cors: true
... (1180 lines in total, 131 handler/event definitions)
There are 131 handler/event definitions, but if I try to deploy more than 20 (twenty) of them I get that error.
So I am confused by the error message reporting 293 resources and a 200 maximum when I only have 131.
Any thoughts on this?

This issue is due to the following limit in the CloudFormation API: a single template may contain at most 200 resources (the maximum quoted in the error message). Keep in mind that Serverless adds far more than one resource per function to that template: for each http event you configure, you end up creating six (!) CloudFormation resources, in addition to shared resources like AWS::ApiGateway::RestApi and AWS::IAM::Role. That is how 131 handlers end up expanding to 293 resources.
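As a rough illustration (the exact set depends on your configuration and on how path segments are shared between endpoints), a single function with one http event typically expands to something like:

# per function / per http event (approximate)
- AWS::Lambda::Function        # the handler itself
- AWS::Lambda::Version         # created unless function versioning is disabled
- AWS::Logs::LogGroup          # the function's log group
- AWS::ApiGateway::Resource    # one per new path segment
- AWS::ApiGateway::Method      # the GET/POST method on that path
- AWS::Lambda::Permission      # lets API Gateway invoke the function
# plus shared, service-wide resources such as:
- AWS::ApiGateway::RestApi
- AWS::ApiGateway::Deployment
- AWS::IAM::Role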
To work around this, serverless suggests one of the following:
Break your API down: opt for smaller deployments and smaller codebases (split by business domain). This may require a lot of rework for existing projects.
Handle routing in your application logic: move some of the heavy lifting currently done by API Gateway into the Lambda function instead.
Use plugins to split your service into multiple or nested stacks: a neat workaround for the 200-resource limit of a single CloudFormation template (e.g. serverless-plugin-split-stacks, serverless-plugin-additional-stacks, etc.); a minimal sketch follows this list.
Ask AWS for a CloudFormation limit increase: this won't fix the root cause; it only postpones the problem until your app grows and hits the same issue again, just with a higher limit and a bigger, more complex codebase.
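For the plugin option, a minimal serverless.yaml sketch using serverless-plugin-split-stacks could look like the following (the splitStacks options shown are just one possible grouping; check the plugin's README for the options that apply to your version):

plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: true   # move each function and its related resources into its own nested stack
    perType: false      # do not additionally group migrated resources by CloudFormation type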

Related

AWS Amplify Build Issue - StackUpdateComplete

When running amplify push -y in the CLI, my project errors with this message:
["Index: 0 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]
How do I resolve this error?
The "Resource is not in the state stackUpdateComplete" is the message that comes from the root CloudFormation stack associated with the Amplify App ID. The Amplify CLI is just surfacing the error message that comes from the update stack operation. This indicates that the Amplify's CloudFormation stack may have been still be in progress or stuck.
Solution 1 – “deployment-state.json”:
To fix this issue, go to the S3 bucket containing the project settings and delete the “deployment-state.json” file in the root folder, as this file holds the app's deployment state. The bucket name should end with, or contain, the word “deployment”.
Solution 2 – “Requested resource not found”:
Check the status of the CloudFormation stack and see whether it failed with a “Requested resource not found” error indicating that a DynamoDB table (the “tableID” named in the error) is missing, and confirm whether you deleted it (possibly accidentally). Manually recreate that DynamoDB table and push again.
Solution 3A - “@auth directive with 'apiKey'”:
You may receive an error stating “@auth directive with 'apiKey' provider found, but the project has no API Key authentication provider configured”. This error appears when you define public authorization in your GraphQL schema without specifying a provider. Public authorization means everyone is allowed to access the API; behind the scenes the API is protected with an API key, so to use public access you must have an API key configured.
The @auth directive allows you to override the default provider for a given authorization mode. To fix the issue, specify “iam” as the provider, which lets you use the "Unauthenticated Role" from Cognito Identity Pools for public access instead of an API key.
Below is sample code for a public authorization rule:
type Todo @model @auth(rules: [{ allow: public, provider: iam, operations: [create, read, update, delete] }]) {
  id: ID!
  name: String!
  description: String
}
After making the above changes, you can run “amplify update api” and add IAM as an auth provider; the CLI generates scoped-down IAM policies for the "UnAuthenticated" role automatically.
Solution 3B - Parameters: [AuthCognitoUserPoolId] must have values:
Another issue can occur here because the default authorization type is API Key when you run “amplify add api” without specifying the API type. To fix this, follow these steps:
Delete the existing API
Recreate it, specifying “Amazon Cognito User Pool” as the authorization mode
Add IAM as an additional authorization type
Re-enable the @auth directive in the newly created API schema
Run “amplify push”
Documentation:
Public Authorisation
Troubleshoot CloudFormation stack issues in my AWS Amplify project

spring cloud gateway ribbon load balancing

Trying to get spring cloud gateway to load balance across a couple of instances of our application, but just can't figure it out. We don't have a service registry at present (no Eureka etc).
I've been trying to use ribbon and have a configuration like so:
spring:
  application:
    name: gateway-service
  cloud:
    discovery:
      locator:
        enabled: true
    gateway:
      routes:
        - id: my-service
          uri: lb://my-load-balanced-service
          predicates:
            - Path=/
          filters:
            - TestFilter

ribbon:
  eureka:
    enabled: false

my-load-balanced-service:
  ribbon:
    listOfServers: localhost:8080, localhost:8081
However when I try a request to the gateway, I get a 200 response with content-length 0, and my stubs have not been hit.
I have a very basic setup, no beans defined.
How can I get ribbon to play nice / or an alternative?
You should check whether the spring-cloud-starter-netflix-ribbon dependency is in your project or not; the lb:// URI scheme and the *.ribbon.listOfServers properties only take effect when Ribbon is on the classpath.

spring cloud gateway, can you exclude paths (do a global !=)

I'm hoping someone can provide some ideas here. I'm playing around with some of the sample apps for Spring Cloud Gateway and going through the docs, but I'm not seeing any way to route to self or do a global ignore. The idea is that some paths ALWAYS need to point to self, like the actuator, and others may need a global block (maybe for security reasons, e.g. you've found a severe vulnerability and need to disable access to a specific resource). Right now, from what I can tell, there is no way to do this, but I hope I'm wrong!
I've set up the app with the actuator running on port 8081 and the server on 8080.
I've got two simple rules:
- id: local_test_1
  uri: http://localhost:80
  order: 9000
  predicates:
    - Path=/echo
# =====================================
- id: local_test_2
  uri: ${test.uri}
  order: 10000
  predicates:
    - Path=/**
But the universal /** makes sure that any call to localhost:8081/actuator/* also gets routed to the uri. How can I exempt the management port from routing rules so the server itself will deal with the request?
I thought a default filter like - Path!=${management.server.port}/* might work, but it seems that != isn't supported.
I ran into this same problem when using a default route, but also needing to serve a custom post-logout page from the classpath. The default route would handle the request instead of the gateway itself. Without the default route the logout.html was served correctly.
I ended up moving the default route to a Java bean and used the fluent API like this:
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
    return builder.routes()
            .route("default", r -> r
                    .order(Ordered.LOWEST_PRECEDENCE)
                    .path("/**")
                    .and().not(p -> p.path("/logout.html", "/logout.css"))
                    .uri("http://localhost:8080")
            )
            .build();
}
If someone knows of a way to do negation in the .yml configuration files that would be ideal, but I have yet to find an example of that in any docs.
You can use no://op as value for uri:.
The only disadvantage I see is that any endpoint which is not supposed to be found (like /actuator/foo) would still return 200 OK.
Try adding two spaces before - Path; the problem may be that your config is not taking effect.
Maybe you can use - Path=/** with a - SetStatus=404 filter, and for the actuator route - Path=/actuator/** with - SetStatus=ACCEPTED; don't forget uri: no://op for both.
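A sketch of what that last suggestion might look like in application.yml (the route ids and order values are illustrative assumptions; the 404/ACCEPTED statuses and the no://op placeholder come from the answers above):

spring:
  cloud:
    gateway:
      routes:
        # actuator paths: matched first, never proxied, answered with 202 ACCEPTED
        - id: actuator_block
          uri: no://op
          order: 9000
          predicates:
            - Path=/actuator/**
          filters:
            - SetStatus=ACCEPTED
        # catch-all: anything no other route matched gets a 404
        - id: catch_all_block
          uri: no://op
          order: 10000
          predicates:
            - Path=/**
          filters:
            - SetStatus=404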

HTTP requests not working on aws ec2

I am building an app in node.js and I’m using AWS EC2 to host it. However, my HTTP requests are not working.
My app is split into two repositories: app-ui and app-server. app-server contains all of my server-side code/APIs. In app-ui, I am making simple POST requests such as:
$.ajax({
  type: "POST",
  url: "http://ec2-xx-xxx-xx/api/users",
  success: function(data) {
    console.log(data);
  },
  error: function(a) {
    console.log(a);
  }
});
However, I keep getting the net::ERR_CONNECTION_TIMED_OUT error.
Does anyone know what might be happening?
Add an inbound rule for the security group attached to your server for the specific port you're using.
I was having the same issue; in my case it was because the Amazon servers were down that day. But take a look at your server to see whether it is actually running. In my case:
/etc/init.d/apache2 status
Response:
Active: active (running) since Wed 2017-03-01 02:21:53 UTC; 2h 3min ago
Docs: man:systemd-sysv-generator(8)
Apparently S3 was one of the services that went down, along with the routing system. If your server was located on the AWS US East side, you would have run into this issue; it affected several apps such as HockeyApp and Trello.
Take a look at the current status: status.aws.amazon.com
Of course, this assumes that you have your security groups and Elastic/static IPs set up and configured, and that you see the issue across your whole site and not just on your API.
I was struggling with the same situation and managed to fix it. Log in to AWS, go to EC2, and select "Security Groups" in the left sidebar. Then select the default security group (the one attached to your instance) listed in the table and click the Actions button at the top of the table; that will show the inbound rules menu.
There, click the "Add rule" button, choose "Custom TCP" as the type, enter port 8080 (or whatever port you use), and save it.
Now go ahead with Postman and it should work.
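For reference, the inbound rule described in these answers could also be expressed in CloudFormation, roughly like this (the logical name, description, and wide-open CIDR are illustrative assumptions; in practice you would usually restrict the source range):

Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound traffic to the Node.js app
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 8080      # the app port used in the answer above
          ToPort: 8080
          CidrIp: 0.0.0.0/0   # open to the world; narrow this for real deployments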

Disable log in Symfony2

This question may have been asked before. I have searched for answers, but I haven't found what I was looking for.
In Symfony 2.3, is there a way to disable the logger for specific requests? I am using a SOAP service in my project, and when I send a login request, the username and password are dumped as plain text straight into the log file. Is there a way to stop logging these specific kinds of requests?
For example, when I send a login request the logger should be disabled, but for all other requests it should work as usual. Is this possible?
It depends on whether you are in the prod or dev environment, but everything is in config.yml or config_dev.yml.
To disable logging entirely, just remove the monolog configuration, like this:
monolog:
    handlers:
        main:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
        console:
            type: console
            bubble: false
        # uncomment to get logging in your browser
        # you may have to allow bigger header sizes in your Web server configuration
        #firephp:
        #    type: firephp
        #    level: info
        #chromephp:
        #    type: chromephp
        #    level: info
But in my opinion you shouldn't do this, because logging helps you significantly improve your code!
Logging everything except a specific service:
You need to create a specific log channel for your service, as described here:
http://symfony.com/doc/current/cookbook/logging/channels_handlers.html
and here:
http://symfony.com/doc/current/reference/dic_tags.html#dic-tags-monolog
You'll be able to separate your SOAP logs from the others and, if you want, send them to a null handler.
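A minimal sketch of that approach, assuming a custom "soap" channel (the channel name and the service id/class below are illustrative assumptions, not part of the original answer):

# config.yml
monolog:
    channels: ["soap"]                # register the extra channel
    handlers:
        main:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
            channels: ["!soap"]       # log everything except the soap channel
        soap_null:
            type: "null"              # silently drop messages on the soap channel
            channels: ["soap"]

# services.yml - tag your SOAP service so its injected logger uses the soap channel
services:
    app.soap_client:
        class: AppBundle\Service\SoapClient
        tags:
            - { name: monolog.logger, channel: soap }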

Resources