I've just started working with Kafka and C# (.NET Core). I have spun up a Kafka environment with Schema Registry using Docker (Confluent images).
I have a hobby project where I am trying to implement a microservice architecture. I will use Kafka to handle my IntegrationEvents between services.
Right now I have created my Kafka topics through the Confluent UI, but I would really like to have configuration as code: I already have database migrations with EF Core Migrations and my cloud environment with Terraform.
What would be the best practice for creating topics? Right now I am thinking of creating topics when my application starts up (if the topic already exists, nothing happens). The application responsible for creating a topic would be the one that produces to it.
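Roughly what I have in mind, sketched with Confluent.Kafka's AdminClient (topic name and settings are placeholders):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Confluent.Kafka;
using Confluent.Kafka.Admin;

public static class TopicBootstrapper
{
    // Called once on startup by the service that produces to the topic.
    public static async Task EnsureTopicAsync()
    {
        using var admin = new AdminClientBuilder(
            new AdminClientConfig { BootstrapServers = "localhost:9092" }).Build();

        try
        {
            await admin.CreateTopicsAsync(new[]
            {
                new TopicSpecification
                {
                    Name = "order-integration-events", // placeholder topic name
                    NumPartitions = 3,
                    ReplicationFactor = 1
                }
            });
        }
        catch (CreateTopicsException e)
            when (e.Results.All(r => r.Error.Code == ErrorCode.TopicAlreadyExists))
        {
            // Topic is already there: nothing to do.
        }
    }
}
```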
Any input or ideas on what I can improve, or have I missed something that could potentially cause me a lot of trouble?
Best regards Martin
I have decided to just use a command in my Docker Compose file. I still haven't decided what my hosting environment will be: whether to go with Kubernetes all the way, or to use only Azure services or Confluent Cloud.
So I will postpone that decision, and when the time comes I will adjust the code or Terraform accordingly.
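For reference, the kind of compose service I ended up with looks roughly like this (service names, image tag, and topic settings are placeholders):

```yaml
# one-shot container that creates topics next to the broker, then exits
create-topics:
  image: confluentinc/cp-kafka:7.3.0
  depends_on:
    - broker
  command: >
    kafka-topics --bootstrap-server broker:9092
                 --create --if-not-exists
                 --topic order-integration-events
                 --partitions 3 --replication-factor 1
```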
Please advise on a modern architecture for an ASP.NET Web API application (preferably .NET Framework, not Core) with the latest practices. For example: the unit-of-work pattern, controller -> manager -> repository, AutoMapper, xUnit, Serilog or similar, a reliable migration mechanism for Oracle (Liquibase or similar), and asynchronous execution with async/await.
Requirements: authentication via AD/Windows, database: Oracle. Code examples would be ideal.
UPDATE 1
What about the ORM and migration system? In my opinion, a code-first approach with EF migrations is good because the model matches the DB completely, but it is risky. Previously I made a Database Project for MS SQL Server, and all change scripts were generated by comparing the schema of the original DB with the schema of the DB changed by the EF migration, like this:
create the models
apply the migration to the local (dev) database
compare the original DB and the changed DB, extract/generate SQL change scripts
create the deployment migration for CI/CD (DB project, Liquibase, or similar; see the sketch below)
roll back the local DB and test the deployment migration
commit and push
It looks strange, but practice shows that using EF's migrations directly leads to data loss.
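For the compare/extract step, something like Liquibase's diff-changelog command can generate the change script (connection details are placeholders, and flag spelling varies between Liquibase versions):

```sh
liquibase diff-changelog \
  --changelog-file=changes/001-diff.xml \
  --reference-url="jdbc:sqlserver://localhost;databaseName=AppDb_Original" \
  --url="jdbc:sqlserver://localhost;databaseName=AppDb_Changed"
```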
For me, an API (which your app exposes to consumers) can be thought of as just another type of presentation layer.
If you assume that's the case, then any sensible .NET architecture would suit. Personally, I follow a logical approach like this: 5-Layer Architecture - https://morphological.files.wordpress.com/2011/08/5-layer-architecture-draft.pdf
Not sure if that answers your question, though. I think if you get the logical architecture sorted out, then deciding which technologies to implement it with isn't so hard.
The 5-Layer architecture referenced above uses dependency injection, so you can use whatever database technology you like.
Use of sync vs async depends on the nature of the actual functional and technical problems you're trying to solve.
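To make that concrete, a minimal sketch of constructor injection plus async in a controller -> manager -> repository stack (all names here are hypothetical, not from the linked paper):

```csharp
using System.Threading.Tasks;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The manager depends on an abstraction, so an Oracle-backed (or any other)
// repository implementation can be swapped in via the DI container.
public interface ICustomerRepository
{
    Task<Customer> GetByIdAsync(int id);
}

public class CustomerManager
{
    private readonly ICustomerRepository _repository;

    public CustomerManager(ICustomerRepository repository)
    {
        _repository = repository;
    }

    // async/await flows from the repository up through the manager
    // to the controller action that awaits it.
    public async Task<Customer> GetCustomerAsync(int id)
    {
        return await _repository.GetByIdAsync(id);
    }
}
```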
I am looking for options to execute recurring background tasks. The background task would call an external REST GET endpoint and update the status accordingly in the application database.
Which of the following would be appropriate, considering that we would rather not maintain a separate web.config between the application and the scheduler/task app? I am looking for a simple option in a .NET/ASP.NET Web API context - not a separate installation or third-party product.
A scheduled task - I believe we would need to create as many scheduled tasks on the server as there are databases to point at, so maintainability is a concern?
A Windows service
ASP.NET background task options
Any other, better option?
Please share your insights on this question.
I highly recommend looking at Hangfire to implement background tasks.
It works better than a Windows service in a cloud environment, supports fire-and-forget and recurring tasks/processing, and the integration is really seamless.
I just noticed your comment about not wanting third-party components - not sure if you meant commercial components, but Hangfire is free, via NuGet, if that helps.
See: https://www.hangfire.io
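A rough sketch of what the recurring job could look like (class name, endpoint, and connection string are placeholders; a BackgroundJobServer must also be running to execute it):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Hangfire;

public class StatusPoller
{
    private static readonly HttpClient Client = new HttpClient();

    // Calls the external REST endpoint and updates the app database.
    public async Task PollAsync()
    {
        var status = await Client.GetStringAsync("https://example.com/api/status");
        // ... write the status to the application database ...
    }
}

public static class JobSetup
{
    public static void Configure()
    {
        // Job storage lives in your existing application database,
        // so there is no separate config to maintain.
        GlobalConfiguration.Configuration.UseSqlServerStorage("<connection string>");

        // Hangfire persists the schedule and retries failures for you.
        RecurringJob.AddOrUpdate<StatusPoller>(
            "poll-status", p => p.PollAsync(), Cron.Hourly());
    }
}
```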
Maybe this is not possible to do generically in a test framework, but I would like to be able to deploy the microservice I am testing from within the test itself. I have looked at Citrus, REST Assured, and Karate, and have listened to countless talks and read countless blogs, but I never see how to do this first stage. There always seems to be an assumption that the microservice is pre-deployed.
Honestly, it depends on how your microservice is deployed and which infrastructure you are targeting. I prefer to integrate the deployment into the Maven build, as Maven provides pre- and post-integration-test phases.
If you can use Kubernetes or Docker, I would recommend integrating the deployment with the fabric8 Maven plugins (fabric8-maven-plugin, docker-maven-plugin). They automatically create/start/stop the Docker container deployment within the Maven build.
If you use Spring Boot, the official Maven plugin can do the same, along the lines of the sketch below.
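For example, a rough pom.xml fragment (plugin version omitted) that boots the app before the integration-test phase and stops it afterwards, with the tests themselves run by Failsafe in between:

```xml
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <executions>
    <!-- start the application before the integration tests -->
    <execution>
      <id>start-app</id>
      <phase>pre-integration-test</phase>
      <goals><goal>start</goal></goals>
    </execution>
    <!-- stop it again once they have finished -->
    <execution>
      <id>stop-app</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```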
Another possibility would be to use build pipelines, where a continuous build with Jenkins, for example, would deploy the system under test first and then execute the tests.
I personally prefer to always keep deployment and testing tasks separate. If you really want to do a deployment within your test, Citrus as a framework is able to start/stop Docker containers and/or Kubernetes pods from within the test. Citrus can also hook these deployment tasks into before/after test-suite phases.
I found a way to do it using docker-compose.
https://github.com/djangofan/karate-test-prime-example
Basically, make a docker-compose.yml that runs your service container and then also runs e2e tests after a call to wait-for-it.sh.
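Not the repository's file verbatim, but the shape of it is roughly this (image and service names are placeholders):

```yaml
version: "3"
services:
  my-service:
    image: my-service:latest        # the microservice under test
  e2e-tests:
    image: my-karate-tests:latest   # test image containing wait-for-it.sh
    depends_on:
      - my-service
    # block until the service port answers, then run the suite
    command: ["./wait-for-it.sh", "my-service:8080", "--", "mvn", "test"]
```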
2 points:
The karate-demo is a Spring Boot example that is deployed by the JUnit test-runner. Here is the code that starts the server.
The karate-mock-servlet takes things one step further: you can run HTTP integration tests within your project without booting an app server. This saves time, and code-coverage reports are easier.
If you have any requirements beyond these, I'd be happy to hear them. One of the things we would like to implement is a built-in server-side mocking framework - think embedded WireMock, but with the ease of Karate's DSL. No concrete timeline yet, though.
EDIT: Karate has mocking now: https://github.com/intuit/karate/tree/master/karate-netty
I recently started a side project. It is meant to be a virtual recipe book with the capability to store and retrieve recipes (CRUD), rate them, and search through them. This is nothing new, but I wanted to build it as a desktop application to learn more about databases, unit testing, UIs, and so on. Now that the core domain is pretty much done (I use a DDD approach) and I have implemented most of the CRUD repositories, I want to make it more extensible by hosting the core functionality online, so that I am able to write multiple applications on top of it (desktop application, web application, web API, etc.).
Service-oriented architecture (or microservices) sounds like a good approach to me for doing that. The problem I am facing is how to decide which parts of my project belong in a separate service, and how to name them.
Take the following parts of the project:
Core domain (Aggregates, Entities, Value Objects, Logic) -> Java
Persistence (DAOs, Repositories, multiple Database backend implementations) -> Java
Search (Search Services which use SQL queries on the persistence DB for searching) -> Java
Desktop Application -> JS (Electron) or JavaFX
Web Application -> Flask or Rails
Web API (Manage, Rate, Search for recipes using REST) -> ?
My initial approach would be to put the core domain, persistence, search, and the web API into a single sub-project and host that whole stack on Heroku or something similar; something like the layout sketched below. That way my clients could consume the web interface. The desktop and web apps would be separate projects of their own. The desktop app could share the core domain if both are written in Java.
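Concretely, that first service might be laid out like this (module names are just placeholders):

```
recipe-service/        single deployable, hosted on Heroku or similar
├── core-domain/       aggregates, entities, value objects, logic
├── persistence/       DAOs, repositories, database implementations
├── search/            SQL-based search services
└── web-api/           REST endpoints for managing, rating, and searching
```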
Is this a valid approach, or should I split the first service into smaller parts? And how would you name these services?
Eric Evans answered your question at the GOTO 2015 conference (https://youtu.be/yPvef9R3k-M), and I agree with him 100%: a microservice's scope should be one (or maybe more) bounded context(s), including its supporting classes for persistence, the REST/HTTP API, and so on.
As I understand it, a microservice is a deployment wrapper around a bounded context, adding isolation, scaling, and resilience.
From what you wrote, you haven't applied strategic design to define your bounded contexts. So it's time to check that before tearing the app into parts.
I'm thinking of using Flyway for my database migrations. It seems simpler than maintaining my own SQL and Java migration scripts. However, looking at the documentation, there seem to be several ways to use it.
What should I consider when deciding between migrating with (a) application integration, (b) a Maven task, or (c) the command line?
Currently I deploy to Heroku with a simple git push. This builds my app and starts it as specified in the Procfile.
So in this regard it seems like application integration (migrating on startup) would be the simplest, but it also seems like overhead I don't need. I suppose if I went with the Maven task, I would need to ensure that Heroku calls Maven correctly to make this happen.
What are the trade-offs? Is anyone currently using Spring + JPA + Flyway together with a Heroku-hosted application?
You are correct, the application integration is the simplest. Code and DB can never get out of sync.
The overhead is absolutely minimal, especially compared to JPA. The few milliseconds it costs you on startup are well worth the development and deployment convenience.
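As a sketch of that integration with Spring Boot (assuming Spring Boot's Flyway auto-configuration, version omitted): adding flyway-core to the pom is usually all it takes, and any V1__init.sql-style scripts under src/main/resources/db/migration then run automatically at startup, on Heroku or anywhere else.

```xml
<!-- Spring Boot detects Flyway on the classpath and runs pending
     migrations from classpath:db/migration before the app serves traffic -->
<dependency>
  <groupId>org.flywaydb</groupId>
  <artifactId>flyway-core</artifactId>
</dependency>
```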