Using pyglet, twisted, pygtk together in an application - networking

I am making an app that lets you play music synchronously on different systems. For the project, I have decided to use twisted, PyGTK2, and pyglet. I am confused about how the main loop should be run. Should I run pyglet's loop in a separate thread, or should I implement a new reactor integrating twisted, PyGTK2, and pyglet? Will performance suffer if I try to integrate three loops together?

I used https://github.com/padraigkitterick/pyglet-twisted when playing with pyglet and twisted, and it worked for my toy cases. It's a good starting point, anyway.
The above is a new reactor based on ThreadedSelectReactor.
It's not clear to me what the composition of all three would look like, though.

Twisted already has a solution for integrating with gtk:
http://twistedmatrix.com/documents/current/core/howto/choosing-reactor.html#core-howto-choosing-reactor-gtk
I'm not familiar with pyglet, but if it has a main loop like GTK's, then both of your ideas seem feasible. You could also look at how twisted implements the GTK integration explained in the link above and try to replicate that for pyglet.
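One common way to avoid either a second thread or a custom three-way reactor is to let twisted's reactor own the only main loop and pump pyglet from a timed call. The sketch below is untested scaffolding under that assumption (function and window names are mine), not a definitive integration:

```python
# Sketch: let twisted's reactor be the only main loop and pump pyglet
# from a timed call, instead of running pyglet.app.run() or a thread.
# Untested scaffolding; assumes pyglet and twisted are installed.
import pyglet
from twisted.internet import reactor, task

def pump_pyglet():
    # Advance pyglet's scheduled callbacks and process window events.
    pyglet.clock.tick()
    for window in pyglet.app.windows:
        window.switch_to()
        window.dispatch_events()
        window.dispatch_event('on_draw')
        window.flip()

if __name__ == '__main__':
    window = pyglet.window.Window(caption='player')
    # Drive pyglet roughly 60 times per second from inside twisted's loop.
    task.LoopingCall(pump_pyglet).start(1 / 60)
    reactor.run()
```

If PyGTK must share the same process, twisted's gtk2reactor (installed via `gtk2reactor.install()` before `reactor` is first imported) merges GTK's loop into the reactor, and the pyglet pump above should stay unchanged.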

Related

Is it better to use gst_parse_launch() to generate a GStreamer pipeline automatically, or to build it manually?

While developing an application with GStreamer on an embedded device, I asked myself whether there is a significant difference between using the gst_parse_launch() function to generate my pipeline and building it manually. Following this link, I got a partial answer for one use case: Limitations of gst_parse_launch()?.
However, am I right in thinking that we can still access the individual elements through the gst_bin_get_by_name() function, which would let us link their pads manually after the pipeline has been generated automatically? Does gst_parse_launch() have any drawbacks that I'm not thinking of?
As I am aiming for the best possible performance (memory consumption / processing time), I find this automation interesting, as it might do exactly the same thing while shortening the code.
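For reference, gst_parse_launch() and gst_bin_get_by_name() do compose exactly as the question suggests. A minimal sketch using the Python bindings (the pipeline and element names are illustrative, not from the original project):

```python
# Sketch: parse a pipeline description, then fetch a named element to
# tweak properties or link pads manually afterwards.
# Assumes GStreamer 1.x with PyGObject installed.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# "name=src" is what makes the element retrievable by name later.
pipeline = Gst.parse_launch(
    "audiotestsrc name=src ! audioconvert ! autoaudiosink")

# The Python equivalent of gst_bin_get_by_name() in C.
src = pipeline.get_by_name("src")
src.set_property("freq", 440.0)

if __name__ == '__main__':
    pipeline.set_state(Gst.State.PLAYING)
```

On the performance question: parse_launch() builds the same element graph you would otherwise assemble with gst_element_factory_make() and gst_element_link(), and the parsing cost is paid once at construction, so steady-state memory use and processing should be essentially identical.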

Which state management library should I use for an Angular 8 application?

I want to use state management in my Angular 8 application. Before doing so, I researched state-management libraries, namely NgRx, NGXS, and Akita.
But I'm confused as to which one to choose!
NgRx is the most used.
NGXS offers more possibilities and is easy to learn.
Akita is the least used, with fewer downloads according to npm's download history and fewer GitHub forks and issues, but it is object-oriented and easy to learn.
What is your choice? Please state your reason!
NgRx: functional approach, well maintained, harder to adopt due to heavy boilerplate.
Akita: newer to the community; one good thing about Akita is that it is framework-independent and can be used with other frameworks such as Vue or React.
NGXS: OOP approach, easy to adopt due to less boilerplate.
I have been working with Angular since its birth; I started with NgRx, then switched to NGXS because it was easier to adopt.

Qt for UI and Kotlin for application logic

I would like to use Kotlin for a Linux desktop application, but it does not have a good UI library. I decided Qt would work well, so I thought I would combine the two. I do not want to use a bindings library, since there seems to be no stable, maintained language binding. The way I would like to connect the two is through ZeroMQ. I would like two-way communication with the application (the UI needs to react to back-end events too).
Has anyone tried such an architecture, or something similar? Will there be problems, for example with validation, or with not being able to bind to the data? I would like to minimize the use of C++ and use Kotlin for the application logic, database, and HTTP communication with the web server.
I am looking to build a medium-complexity embedded touch-based interface (buttons, text fields, data rows).
Has anyone tried that? Is there a design error?
Communication between ZeroMQ and the UI would resemble the EventBus pattern.
Q : Has anyone tried such architecture or similar?
Yes.
Q : Is there a design error?
No.
Given you apply it to a right-sized problem, the best production-grade results are to be expected from extending the industry-proven Model-View-Controller pattern (promoted as early as PARCplace Systems' Smalltalk evangelisation in the early 1980s; it has had some time to prove itself best in class, hasn't it?).
I have implemented the MVC architecture pattern in the shape of a distributed system, integrated on top of a smart ZeroMQ communication infrastructure. A remote keyboard was one of the remoted C(ontroller) inputs (with a dumb CLI V(iew)); another host (backed by a computing grid) consolidated and operated the global M(odel) and all the MVC state transitions; and another remote platform provided the V(iew) for the GUI and some other MMI interactions, which were collected from there back into the central M(odel).
Indeed a lovely way to design arbitrarily complex systems!
It was a robust, smart, scalable, and maintainable architecture, and I would recommend following this path forwards.
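The event-bus dispatch that would sit on each side of the ZeroMQ link (Kotlin back end, Qt front end) is itself simple. A minimal, transport-agnostic sketch in Python (all names are mine; in the real system publish() would serialize the event and write it to a ZeroMQ PAIR or PUB socket, and the peer's reader loop would run the same dispatch on its side):

```python
# Sketch of the EventBus pattern the question describes: topic-keyed
# handlers, two-way traffic. The transport (ZeroMQ) is abstracted away.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        # UI widgets and back-end services both register interest here.
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        # In the real system this is where the event would be framed
        # (e.g. as JSON) and sent over the ZeroMQ socket instead of
        # being dispatched in-process.
        for handler in self._handlers[topic]:
            handler(payload)

# The UI side reacts to back-end events, and vice versa.
bus = EventBus()
seen = []
bus.subscribe("track.changed", lambda payload: seen.append(payload))
bus.publish("track.changed", {"title": "Song A"})
```

With one bus instance per process and the socket as the bridge between them, validation and data binding become ordinary handler code on each side rather than a cross-language binding problem.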

Best Pattern for ASP.NET MVC 5 With EF 6.2?

I am working on a large-scale enterprise application using the .NET stack; at the moment I am using a layered approach. My main problem is in the data layer, because I am using a static connection context, which causes the following issues:
Can't make parallel calls, because of the static behavior.
Can't use async methods.
Caching may introduce further issues.
So we have decided to change our data layer. There are a couple of options under consideration, such as the repository and unit-of-work patterns, but we are not sure what kinds of problems we will face with them, as this is an enterprise-scale application with more than 600 tables.
Before committing to such a big rewrite, I would like the community's advice on which approach to follow.
Please point me to any suitable links or share your thoughts.
First of all, using a static database connection is madness :) the static approach was a very bad design decision. I think an N-tier architecture would be a good way to start. A further problem in your future may be the scalability of your database system: you are probably using a vertical-scaling approach, and if so, you will eventually have to apply distributed approaches to the database, such as replication and sharding. A NoSQL solution may be another option.
I researched this topic last year and wrote a document; you can read it on Medium.
Another issue is monolithic vs. microservices; you can find many articles about it on Medium and elsewhere. Here is one of them.
Additionally, I published a sample project on GitHub:
N-Tier-Architecture-with-Generic-Repository--Dependency-Injection-And-Ninject
which you may want to examine.
Good luck ;)
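The repository and unit-of-work combination the question is considering replaces the static context with one short-lived context per request. The pattern is language-agnostic; here is a rough Python sketch of its shape (all names are mine; in C# the unit of work would wrap a per-request EF DbContext and commit() would call SaveChanges()):

```python
# Sketch: generic repository + unit of work over an in-memory stand-in
# for a database context. Each unit of work owns its own context, so
# parallel and async calls no longer share static state.
class Repository:
    def __init__(self, table):
        self._table = table          # one table inside this unit's context

    def add(self, entity_id, entity):
        self._table[entity_id] = entity

    def get(self, entity_id):
        return self._table.get(entity_id)

class UnitOfWork:
    def __init__(self):
        # Stand-in for a freshly created DbContext per request.
        self._context = {"teams": {}, "users": {}}
        self.teams = Repository(self._context["teams"])
        self.users = Repository(self._context["users"])

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.commit() if exc_type is None else self.rollback()

    def commit(self):
        pass   # real implementation: flush changes (SaveChanges)

    def rollback(self):
        pass   # real implementation: discard the context

# One unit of work per request; nothing static is shared between calls.
with UnitOfWork() as uow:
    uow.teams.add(1, {"name": "Core"})
    team = uow.teams.get(1)
```

With 600+ tables, a generic repository (one class parameterized by entity type) keeps this from turning into 600 hand-written repository classes.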

How to structure a proper 3-tier (no ORM) web project

I'm working on a legacy web project, so no ORM (EF, NHibernate) is available.
The problem is that the structure feels tedious whenever I implement a new function.
Let's say I have a business object, Team.
Now, if I want to add GetTeamDetailsByOrganisation, following the project's current coding style, I need to:
In Team's DAL, create a method GetTeamDetailsByOrganisation.
Create a method GetTeamDetailsByOrganisation on the business object Team that calls the DAL method I just created.
In Team's BAL, wrap the business object's method in another method, perhaps with the same name, GetTeamDetailsByOrganisation.
Have the page controller class call the BAL method.
It just feels wrong. Is there a good practice or pattern that can solve my problem here?
I know the tedium you speak of from similarly (probably worse) structured projects. Obviously there are multiple sensible answers to this problem, but it all depends on your constraints and goals.
If the project were primarily in maintenance mode with almost no new features being added, I might accept that this is the way things are. It sounds, though, like you are adding at least some new features.
Is it possible to use a code generator? A project I worked on had a lot of tedium like this, apparently because the code base was originally produced by a code generator that was lost to the sands of time. I ended up recreating the template, which saved me a lot of time, sanity, and defects.
If the project is still under active development, it may make sense to perform some sort of large architectural change. My current project is in this category: we are decoupling code and adding repositories as we go. It is a slow process that takes diligence and discipline from the whole dev team. Each time a team takes on a story, they tax that story with rewriting some of the legacy code in that area. To help facilitate this, we gave a presentation to the rest of the team to get buy-in and understanding, and we created documentation for our dev team that lists the steps to take and the things to watch out for. In the past six months we have made a ton of progress. We do not have the duplication you speak of, but we have tight-coupling issues that make unit testing impossible without this refactor.
This is less likely to fit your scenario, but it may also be possible to take a subset of features and separate them out into services that can be rewritten on a better platform with better patterns. The old code base can interoperate at the service layer if needed. You likely change certain areas more than others, so the areas of heavy change might be the top priority to move to a dedicated service. This lets you create a modern code base without rewriting the entire application from scratch all at once; it is the strategy Netflix used to rewrite their platform as they moved it to the cloud.
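One concrete way to cut the per-method duplication the question describes is to make the DAL and BAL generic, so adding GetTeamDetailsByOrganisation no longer requires three hand-written pass-through methods. A rough Python sketch of the idea (all names and the in-memory "database" are mine; a legacy project would express the same shape in its own language):

```python
# Sketch: a generic data-access layer keyed by entity name, plus one
# thin service layer, instead of per-entity wrapper methods at every tier.
class GenericDal:
    def __init__(self, db):
        self._db = db   # stand-in for the raw connection / SQL layer

    def get_by(self, entity, **filters):
        # One parameterized query path replaces N near-identical methods.
        rows = self._db[entity]
        return [row for row in rows
                if all(row.get(key) == value
                       for key, value in filters.items())]

class GenericService:
    """BAL: cross-cutting concerns (auth, logging, caching) live here once."""
    def __init__(self, dal):
        self._dal = dal

    def get(self, entity, **filters):
        return self._dal.get_by(entity, **filters)

db = {"teams": [{"name": "A", "org": "Acme"},
                {"name": "B", "org": "Beta"}]}
service = GenericService(GenericDal(db))

# The page controller now calls one generic method with intent expressed
# as parameters, instead of a GetTeamDetailsByOrganisation at each tier.
teams = service.get("teams", org="Acme")
```

Entity-specific methods are then only written where the logic genuinely differs, which is exactly what a recreated code-generator template or a repository refactor buys you.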
