Spring Cloud Contract for testing in different environments [closed] - spring-cloud-contract

My client has asked me to come up with a POC on Spring Cloud Contract, as we are going to use this framework across global projects. I have followed the examples available online as well as the documentation. However, I have a few questions.
If the application is hosted in different environments like DEV, STAGE, and PROD, do we need to generate the stubs for all three environments?
Is there a way to publish stubs to a repository using a Gradle script, so that I can download them from the repository to my local test environment and test?
Sorry, I am very new to this framework and very thankful for your answers.
If possible, please provide some samples for the above.
Thanks in advance

If the application is hosted in different environments like DEV, STAGE, and PROD, do we need to generate the stubs for all three environments?
It depends on your deployment strategy, but in general I don't think so. You can check my rationale in the Spring Cloud Pipelines project - http://cloud.spring.io/spring-cloud-pipelines/single/spring-cloud-pipelines.html#_opinionated_implementation . If you want to do continuous delivery, you should just create stubs for the currently built version. That version will then go through a deployment pipeline via stage and prod.
Is there a way to publish stubs to a repository using a Gradle script, so that I can download them from the repository to my local test environment and test?
You can use classpath scanning - http://cloud.spring.io/spring-cloud-static/Dalston.SR4/multi/multi__spring_cloud_contract_stub_runner.html#_classpath_scanning . That way you set up your Gradle build so that the stubs end up on the test classpath. If you're referring to downloading a stub to run it inside a separate process, you can combine this with Stub Runner Boot. An example is available here - https://github.com/spring-cloud-samples/github-analytics-stub-runner-boot-classpath-stubs/blob/master/pom.xml#L77-L88 . It's a Maven build, but I'm sure you get the idea regardless (this example registers stubs in Eureka and sends messages to a real RabbitMQ instance; of course you can remove these features if you don't need them).
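To make the consumer side concrete, here is a minimal sketch, assuming a hypothetical producer with coordinates com.example:producer that publishes its stubs with the usual stubs classifier. In recent Spring Cloud Contract versions the stubsMode attribute switches between picking stubs up from the test classpath and downloading them from an artifact repository:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
// CLASSPATH mode resolves stubs from a test dependency such as
// "com.example:producer:+:stubs"; switch to StubsMode.REMOTE and add a
// repositoryRoot to download them from an artifact repository instead.
@AutoConfigureStubRunner(
        ids = "com.example:producer:+:stubs:8090",
        stubsMode = StubRunnerProperties.StubsMode.CLASSPATH)
public class ProducerContractTest {

    @Test
    public void shouldTalkToStubbedProducer() {
        // Hit http://localhost:8090/... here; the WireMock stub server
        // answers according to the producer's published contracts.
    }
}
```

Because the stubs travel as a versioned artifact rather than per-environment files, the same stub JAR serves DEV, STAGE, and PROD testing alike.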

Related

Calling from a feature to features packaged as a JAR [duplicate]

Problem statement: every service has a separate repository. What is the best way to use a common framework across several service repositories?
We are trying to create an API test automation framework using "Karate".
We want to create a framework (which can be distributed, e.g. as a JAR) such that it can be used across all of the microservice project repositories.
As the creator of Karate, I strongly recommend you don't do this. In the long term this makes all your projects depend on one common framework - and you should try to reduce the creation of "home grown" frameworks. Especially for a testing framework, you should try not to force teams to depend on an additional library which you need to maintain and version-control. Re-use can cause more harm than good especially in the context of testing, see this article at the Google Testing Blog.
That said, since Karate can read files from the classpath: you can "ship" a JAR file with common Java classes and even feature or JS files that all your projects can inherit from or "re use". In fact the karate-base.js has been designed to solve for common bootstrap logic or variables / parameters being supplied from a JAR file.
Short Answer: use normal Java techniques (Maven / Gradle) to create a re-usable JAR file. There are multiple ways to use resources (Java, *.feature, JS) from a JAR file. It is up to you how to structure your Maven (or Gradle) projects to make this happen.
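As a rough sketch of the consuming side, assuming the shared JAR ships a feature file under a hypothetical path com/example/common/, a JUnit 5 runner in any service repository can reference it with the classpath: prefix:

```java
import com.intuit.karate.junit5.Karate;

class SharedFeatureTest {

    // "classpath:" lets Karate resolve the feature from any JAR on the
    // test classpath, so the shared framework only needs to be declared
    // as a regular Maven/Gradle test dependency.
    @Karate.Test
    Karate runSharedSmokeTest() {
        return Karate.run("classpath:com/example/common/smoke.feature");
    }
}
```

The same mechanism applies to shared JS files: package them in the JAR and read them via read('classpath:...'), while a karate-base.js at the classpath root is picked up automatically.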
EDIT: for those looking for how to create a "runnable" JAR, please see https://stackoverflow.com/a/56553194/143475

What is the best way to build instant messaging on a website? [closed]

I am currently building a website on Symfony 4, and I would like to integrate an instant messaging system like Messenger, with the possibility to create discussion groups.
The problem is that I don't know which method to use. Symfony doesn't offer anything for this, and AJAX polling seems far from optimal to me because of the many requests made to the server.
Should I use WebSockets coupled with Node.js?
Or use the Ratchet library? I don't know Node.js, and integrating a new technology into the project may not suit everyone.
So, what would be the most efficient approach to support a large number of users?
Thank you,
You have two options here:
Implement it yourself
In your case you need the following:
Install an XMPP server in your cloud. It could be something like Ejabberd, Prosody, Tigase, or Openfire.
On the client side, use XMPP libraries to connect to the XMPP server and to send/receive messages. On the web panel, use StropheJS.
For any server-side service tasks, there are also XMPP libraries for PHP (see the sketch after this list for the basic flow).
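To make the client flow concrete, here is a minimal sketch using the Smack XMPP library for Java (the browser side would use StropheJS as noted above); the server domain and account names are placeholders:

```java
import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.chat2.Chat;
import org.jivesoftware.smack.chat2.ChatManager;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;

public class XmppQuickStart {
    public static void main(String[] args) throws Exception {
        // Connect to a self-hosted XMPP server (Ejabberd, Prosody, ...).
        XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
                .setXmppDomain("chat.example.com")
                .setUsernameAndPassword("alice", "secret")
                .build();
        AbstractXMPPConnection connection = new XMPPTCPConnection(config);
        connection.connect().login();

        // One-to-one message; discussion groups would go through
        // MultiUserChatManager (XEP-0045) instead.
        ChatManager chatManager = ChatManager.getInstanceFor(connection);
        EntityBareJid bob = JidCreate.entityBareFrom("bob@chat.example.com");
        Chat chat = chatManager.chatWith(bob);
        chat.send("Hello from Smack!");

        connection.disconnect();
    }
}
```

The point is that the heavy lifting (presence, rosters, group chat, offline delivery) is handled by the XMPP server, not by your Symfony code.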
Use a messaging SaaS platform
There are also lots of different messaging platforms, e.g. Pusher, Twilio, Layer, ConnectyCube, Applozic, etc.
I used ConnectyCube some time ago; they support messaging, video calling, and push notification functionality for iOS, Android, and Web. They also have some ready code samples available, so you can save some time at the start. Pricing is competitive. So it can be done in the following way:
JavaScript/Web Chat SDK and code samples https://developers.connectycube.com/js/messaging
Hope this will be helpful for you.
Just use pubnub.com; it's like 5 lines of code.
https://www.pubnub.com/developers/demos/10chat/
These days it would be bizarre to build chat from scratch.

Use an existing VM instance (Bitnami) for an autoscaled group of instances [closed]

I am using Bitnami WordPress for Google Cloud. Now I need to set up an instance template -> group of instances -> load balancer, and with this my system will autoscale :)
But I have a VM instance created using a Bitnami boot image, and I need to put it into an instance group.
Can you help me with this, please?
The answer for creating a highly scalable web application on GCP is very long and could be a blog post of its own. Since writing the whole answer here would make it very long and difficult to read, I have split it into three parts.
As you have mentioned, the steps for creating a highly scalable web application on GCP can be divided into:
Instance template
Managed instance group and autoscaling
Network / HTTP(S) load balancer
1. Instance template: This is the first step in creating this high-scale web app. I have listed out the steps for creating an instance template here. The one change you have to make in the template is to swap the CentOS 6 image for your Bitnami image.
Best practices: From my perspective, it is better to create a custom image with all your software installed than to use a startup script, as the time taken to launch new instances in the group should be as short as possible. This will increase the speed at which you can scale your web app.
2. Managed instance group and autoscaling: I have written about the steps for creating a managed instance group and autoscaling here. As autoscaling and load balancing are independent, either of them can be set up first.
Best practices: Both autoscaling and load balancers offer health checks for the instances. From my perspective, setting up health checks for both services is redundant, and I think a health check for the load balancer alone would do.
3. Load balancer: GCP offers two types of load balancers, namely the network and the HTTP(S) load balancer. I have written about the differences between network and HTTP(S) load balancing here. Since I assume that you will be building a web stack out of the Bitnami image, I have written about the steps for setting up the HTTP load balancer here.
By following these three steps, you should be able to build a highly scalable web app. This answer is based on my perspective; if anything is incorrect or I have missed something, please feel free to comment and I will add it.
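For concreteness, here is a hedged gcloud sketch of the three steps above (names, zone, and machine type are placeholders; the full HTTP(S) load balancer additionally needs a backend service, URL map, and forwarding rule, which are omitted here):

```
# 1. Instance template built from your custom Bitnami-based image
gcloud compute instance-templates create wp-template \
    --image wp-bitnami-image --machine-type n1-standard-1

# 2. Managed instance group using the template, plus autoscaling
gcloud compute instance-groups managed create wp-group \
    --zone us-central1-a --template wp-template --size 2
gcloud compute instance-groups managed set-autoscaling wp-group \
    --zone us-central1-a --max-num-replicas 10 --target-cpu-utilization 0.6

# 3. Health check that the load balancer's backend service will use
gcloud compute health-checks create http wp-health-check --port 80
```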

DataPower service migration to a different region/environment [closed]

I am new to DataPower and have developed/configured a service which is working fine at the moment. I want to take this to production, and for that I need to create artifacts. Could you tell me the standard practice and which files I should include? I have heard about including a manifest file, but I am not sure where to find it.
I have also heard about mkick, but I don't even know what it does.
Thanks in advance!
As Stefan suggests, deployment policies will likely be of interest for changing settings between your development and production environments.
You will want to take a configuration export of your service and use the options to include referenced objects.
Also keep in mind that certificates and keys are not included in the export, so if any are referenced in the configuration, you will need to update those settings in your prod environment before this service can be active.
As answered earlier by Jimb, we can export the service from the DEV/STG environments and import it into the production environment.
You can use deployment policies. Be sure you first import the deployment policy and then the service (because you have to select the deployment policy when importing the service).
You also have to export the keys, certificates, and other necessary artifacts from the previous environment.
Hope this helps.
Thank You!
Deployment is an integral part of any development architecture. Code deployment is the process of moving code from your development environment to the QA (quality assurance) environment, from QA to pre-production, and so on.
In DataPower, code deployment means bundling all your code and dependent resources in one environment and moving them to the target environment. However, moving from one environment to another brings some key challenges in practice:
For instance, when moving code from dev to QA, the structure remains the same but the details differ. Why? Because the IP address and port number used in the dev environment may not work in the QA environment, so you have to change them. Likewise, the backend server details of the dev environment differ from those of the QA environment and also need to change. To address these challenges, DataPower has a tool: the so-called deployment policy.
Generally, whenever we do a deployment or migration, we need to keep the following in mind:
Identify which application domains the migration goes from and to. If the migration goes from a higher-level DataPower appliance to a lower-level one, the process will definitely fail.
If the migration takes place between appliances of the same level, say from an XI50 to an XI52, we need to take care: code built for the lower-level firmware may not work on the new appliance, because the new one may have more advanced features.
Migration works with environment-specific values, and we need to check those. How? Use the deployment policy. However, the deployment policy has one weakness: it cannot look inside your SSL files and make changes there. You have to handle those yourself.

Using the same cloudControl MySQLd add-on with multiple apps [closed]

It is unclear to me how the cloudControl MySQLd add-on works.
My understanding of MySQLd is that it is a MySQL server that can/will work with unlimited apps.
But since all add-ons are app-based, this could also mean that I cannot use the same MySQLd server with multiple apps.
Could anyone please help me understand if one MySQLd instance can be used with multiple apps hosted on cloudControl?
There are two concepts on the cloudControl PaaS: applications and deployments. An application basically just groups developers and deployments together. Each deployment is a distinct running version of the app from a branch matching the deployment name. More details on this can be found in the Apps, Users and Deployments documentation.
All add-ons are always per deployment. We do this because this way we can provide all credentials as part of the runtime environment. This means you don't have to keep credentials in version-controlled files, which is a huge benefit when merging between branches, because you don't risk accidentally talking to e.g. the live database from a dev deployment. Also, add-on credentials can change at any time at the add-on provider's discretion.
For this reason, separation between deployments makes a lot of sense. Usually your dev deployments also don't need the same database power as the production deployment, for example, so you can easily use a smaller plan or even a shared database (e.g. MySQLs) for development. You can read more about how to use this feature inside your code in the Add-on documentation.
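As an illustrative sketch (the variable names here are hypothetical; the actual keys come from the deployment's add-on credentials), reading the database credentials from the runtime environment instead of a version-controlled file looks like this in Java:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class Database {

    // The deployment's environment supplies the add-on credentials, so no
    // secrets live in the repository, and credential rotations are picked
    // up when the platform restarts the app processes.
    public static Connection connect() throws Exception {
        String url = String.format("jdbc:mysql://%s:%s/%s",
                System.getenv("MYSQL_HOSTNAME"),
                System.getenv("MYSQL_PORT"),
                System.getenv("MYSQL_DATABASE"));
        return DriverManager.getConnection(url,
                System.getenv("MYSQL_USERNAME"),
                System.getenv("MYSQL_PASSWORD"));
    }
}
```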
Also, as explained earlier, add-on credentials are always provided as part of the runtime environment. Credentials can change at any time at the add-on provider's discretion. These changes are automatically provided in the environment and the app processes restarted. If you had hard-coded the credentials, as would be required for the second app, the app would probably experience downtime.
Last but not least, it's usually very bad practice to connect to the same database from two different code bases in different repositories, which would be the reason to have a second app. This causes all kinds of potential conflicts and dependencies that make code changes and database migrations extremely hard to maintain over time. The recommended way would be to have the data owned by one code base only and provide an API to access that data from the second code base.
All this being said, it is technically possible to connect multiple deployments or even apps to the same add-on (database or anything else) but highly advised against.
If you have a good reason to connect two apps/deployments to the same database I would suggest you manually launch an RDS instance at Amazon (MySQLd is based on RDS) and provide credentials for that through the custom config add-on to both of your apps/deployments.
I hope this answers your question and also explains the reasons.
