What about using a microkernel for Node.js + Nginx?

I'm not even sure this would easily work, but for an upcoming project I may need to set up a WebSockets-only server. It would not have a database or memcache, or even serve static files; all it would need to do is run some logic and push updates to other clients.
The server may need to support anywhere from 1 to 300,000 clients simultaneously, so Node.js + Nginx makes sense, but maybe not all the other features of a traditional web server (Apache, for example) are necessary...
Something like Minix sounds like it would work...

This may be exactly what you're looking for:
https://github.com/tmpvar/cluster-socket.io
It allows you to handle large numbers of requests across multiple Node processes.
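For a rough sense of what spreading socket.io across multiple Node processes looks like, here is a minimal sketch using Node's built-in cluster module (not the cluster-socket.io API itself); a production setup would also need sticky sessions or a shared adapter such as Redis so that events reach clients connected to other workers:
```typescript
// Sketch only: fan socket.io across CPU cores with Node's cluster module.
import cluster from "cluster";
import { cpus } from "os";
import { createServer } from "http";
import { Server } from "socket.io";

if (cluster.isPrimary) {
  // Fork one worker per core; the OS distributes incoming connections.
  for (let i = 0; i < cpus().length; i++) cluster.fork();
} else {
  const httpServer = createServer();
  const io = new Server(httpServer);

  io.on("connection", (socket) => {
    socket.on("update", (payload) => {
      // Work some logic, then push the result to the other connected clients.
      socket.broadcast.emit("update", payload);
    });
  });

  httpServer.listen(3000);
}
```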
Remember, you can always stop into #node.js and ask questions! Make sure to report back with your findings.

Related

LDAP Proxy with inspection/modification of requests and responses

I need to build an LDAP proxy that I can program to inspect and modify the LDAP requests and responses - some of the LDAP requests/responses will simply be passed through, but for others I might want to send two different requests to the server and then combine the results (that's just one example - there will be other use cases).
I've looked at the proxying options documented for OpenLDAP's slapd, and I see that it has quite flexible configuration and 'overlays', but no capability to insert custom code.
So I think that's not a solution, unless slapd's source code is easy enough to modify that I could insert my own modules plus hooks to/from the existing code (?)
An alternative would be to start with a friendly TCP/IP framework library (or even a complete TCP/IP proxy). Then I can link to an ASN.1 decoding/encoding library, and write the rest myself.
I'd prefer to avoid having to write (& learn) all the TCP/IP connection/message handling and event loop myself.
So I'm looking for the most complete starting point that does the hard work and gives me the flexibility to write what I need. Typical lazy/greedy approach :-)
Must be open source, ideally in C or C++, and I'll probably be targeting RHEL/CentOS 8 in a container.
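To illustrate the pass-through proxy shape being described (not in the C/C++ the question asks for), here is a minimal sketch using Node's net module; the inspect() hook and the upstream host are hypothetical placeholders for where the ASN.1/BER decoding and request rewriting would go:
```typescript
// Illustration only: a transparent TCP proxy with a hook point for LDAP inspection.
// Real code must buffer and frame complete LDAP PDUs; TCP chunks do not align with
// message boundaries, and the hook below just passes bytes through.
import net from "net";

const UPSTREAM = { host: "ldap.example.com", port: 389 }; // assumed upstream server

function inspect(direction: "request" | "response", data: Buffer): Buffer {
  // Decode the LDAP message here (ASN.1/BER), modify it or fan it out as needed,
  // then return the bytes to forward. Pass-through by default.
  return data;
}

net
  .createServer((client) => {
    const upstream = net.connect(UPSTREAM);
    client.on("data", (d) => upstream.write(inspect("request", d)));
    upstream.on("data", (d) => client.write(inspect("response", d)));
    client.on("close", () => upstream.end());
    upstream.on("close", () => client.end());
    client.on("error", () => upstream.destroy());
    upstream.on("error", () => client.destroy());
  })
  .listen(1389); // the proxy listens on 1389 and forwards to the real LDAP server
```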

About Firebase Remote Configuration limitations

We're about to start a new project using Firebase.
First of all, I'll set some static values (like the app version) in Remote Config so the app can check whether it is below the minimum supported version.
So I tried to find the limitations/quotas of Firebase Remote Config, such as traffic, connections, concurrent connections per month and so on, but I can't find any documentation about them.
Can anybody help me?
There aren't really any official public numbers, partly because the team reserves the right to change them in the future to better suit the needs of the service.
That said, Remote Config is designed to be free for your apps, no matter how popular they become. You shouldn't ever have to worry about concurrent connections or connections per month or anything like that, as long as your client behaves reasonably.
What does that mean? Well, personally, my recommendation would be to not set your cache for anything less than 3 hours. If you need something faster than that, then you should really start looking into the Realtime Database. Otherwise, you should be fine with Remote Config.
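That 3-hour recommendation maps directly onto the SDK's minimum fetch interval. A minimal sketch with the Firebase Web SDK (assuming a web client; the mobile SDKs have equivalent settings), using a hypothetical min_app_version parameter for the version check described in the question:
```typescript
import { initializeApp } from "firebase/app";
import { getRemoteConfig, fetchAndActivate, getString } from "firebase/remote-config";

const app = initializeApp({ /* your Firebase project config */ });
const remoteConfig = getRemoteConfig(app);

// Honour the "no less than 3 hours" caching advice above.
remoteConfig.settings.minimumFetchIntervalMillis = 3 * 60 * 60 * 1000;

// "min_app_version" is a hypothetical parameter; define it in the Remote Config console.
remoteConfig.defaultConfig = { min_app_version: "1.0.0" };

async function isSupportedVersion(currentVersion: string): Promise<boolean> {
  await fetchAndActivate(remoteConfig);
  const minVersion = getString(remoteConfig, "min_app_version");
  // Naive string comparison for illustration; use a proper semver compare in practice.
  return currentVersion >= minVersion;
}
```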

Local APIs bypassing HTTP, in .NET and IIS

I have an in-memory list of objects (really just strings) that I use in a .NET web application. It is about 10 MB of data, so I just keep it in RAM and don't bother with a database, etc.
However, now I need multiple web apps to access this same data. My first thought was to add a web API on top of it and give the additional apps access through that API. This should be better than having each app keep the same 10 MB of data loaded in RAM.
But this made me wonder if there's a more performant way to do this in .NET on a single server: allow multiple web apps to access the same in-memory data, without the overhead of a web API, and without resorting to having every request hit a database. I realize the performance benefits may not make it worthwhile, but I'm just curious whether such a thing is possible.
If you use WCF with a transport optimized for communication between processes on the same machine (e.g. Named Pipe binding), you will have all the convenience of a web-API-like programming model without the overhead. And if you ever need to use multiple machines in the future, changing to a different transport (e.g. TCP or even HTTP) will be as simple as changing a config file. Take a look at http://msdn.microsoft.com/en-us/library/ms752247(v=vs.110).aspx (and http://msdn.microsoft.com/en-us/library/ms752250(v=vs.110).aspx for TCP)
You could look at something like AppFabric or possibly a document or key/value database. I've used both for various clients to access the same information with great success.

How do I use 100 Continue in a REST web service?

Some background
I am planning to write a REST service which helps facilitate collaboration between multiple client systems. Similar to how git or hg handle things, I want the client to perform all merging locally and the server to reject new changes unless they have been merged with existing changes.
How I want to handle it
I don't want clients to have to upload all of their change sets before being told they need to merge first. I would like to do this by performing a POST with the Expect: 100-continue header. The server can then verify that it can accept the change sets based on the header information (not hard for me in this case) and either reject the request or send the 100 Continue status through to the client, who will then upload the changes.
My problem
As far as I have been able to figure out so far, ASP.NET doesn't support this scenario; by the time you see the request in your controller actions, the POST body has normally already been completely uploaded. I've had a brief look at WCF REST, but I haven't been able to see a way to do it there either; their conditional PUT example has the full request body before rejecting the request.
I'm happy to use any alternative framework that runs on .net or can easily be made to run on Windows Azure.
I can't recommend WcfRestContrib enough. It's free, and it has a lot of abilities.
But I think you need to use OpenRasta instead of WCF in order to do what you're wanting. There's a lot of stuff out there on it, like the wiki, blog post 1, and blog post 2. It might be a lot to take in, but it's a .NET framework that's truly focused on being RESTful, and not RPC-like as WCF is. And it has the ability to work with headers, like you asked about. It even has PipelineContributors, which have access to the whole context of a call and can halt execution, handle redirections, or even render something different than what was expected.
EDIT:
As far as I can tell, this isn't possible in OpenRasta after all, because "100 continue is usually handled by the hosting environment, not by OR, so there’s no support for it as such, because we don’t get a chance to respond in the asp.net pipeline"
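For comparison, where the hosting layer does expose the handshake, the flow described in the question looks roughly like the sketch below. It uses Node's http module rather than the .NET stack the question is constrained to, purely to illustrate the Expect: 100-continue exchange; the x-base-revision header and the revision check are hypothetical:
```typescript
import http from "http";

const server = http.createServer();

// Fired only for requests carrying "Expect: 100-continue"; the body has NOT been
// uploaded yet, so the request can be rejected cheaply based on headers alone.
server.on("checkContinue", (req, res) => {
  // "x-base-revision" is a hypothetical header carrying the client's merge base.
  if (req.headers["x-base-revision"] !== "latest") {
    res.writeHead(409, { "Content-Type": "text/plain" });
    res.end("Out of date: merge with the latest change sets and retry.");
    return;
  }

  res.writeContinue(); // tell the client to go ahead and upload the body
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    // Apply the uploaded change sets here.
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("accepted");
  });
});

server.listen(8080);
```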

Architecture question: caching solution for a client of REST APIs

I'm implementing a high-traffic web application that uses a lot of REST APIs as its data access layer against a cloud database. I say 'client' because it consumes the REST APIs rather than providing them.
The REST APIs are implemented on the server side as well as the client side, and I need to figure out a good solution for caching. The application runs on a web farm, so I'm leaning toward a distributed cache like memcached. This caching solution will need to act as a proxy layer between my application and the REST APIs, and support both the client side and the server side.
For example, if I make a call to update a record, I would update it through REST and keep the updated record in the cache, so subsequent calls for that record won't need an extra call to the outside REST services.
I want to minimize REST calls as much as possible and keep the data as accurate as I can, but it doesn't need to be 100% accurate.
What is the best solution for this caching proxy? Is it a standalone application that runs on one of the servers with a local cache, or something built into the current solution using distributed caching? What are your ideas, suggestions, or concerns?
Thank you,
You hit the nail on the head. You need a caching layer that acts as a proxy to your data.
I suggest that you create a layer that abstracts the concept of the cloud away a bit. Your client shouldn't care where the data comes from. I would create a repository layer that communicates with the cloud and all other data sources. Then you can put a service layer on top of that, which your client would actually call into. Inside this service layer is where you would implement things like your caching layer.
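As one possible shape for that layering (framework-agnostic, with hypothetical names), here is a minimal sketch: a repository that talks to the remote REST API, wrapped by a caching decorator that the service layer uses, so the cache sits behind one seam and can later be swapped from an in-process map to memcached or similar:
```typescript
// Hypothetical sketch of the repository + caching layering described above.
interface Repository<T> {
  get(id: string): Promise<T>;
  update(id: string, value: T): Promise<T>;
}

// Talks to the outside REST service.
class RestRepository<T> implements Repository<T> {
  constructor(private baseUrl: string) {}
  async get(id: string): Promise<T> {
    const res = await fetch(`${this.baseUrl}/${id}`);
    return res.json() as Promise<T>;
  }
  async update(id: string, value: T): Promise<T> {
    const res = await fetch(`${this.baseUrl}/${id}`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(value),
    });
    return res.json() as Promise<T>;
  }
}

// Caching decorator: an in-process Map with a TTL here; swapping it for a
// distributed cache (memcached, etc.) only changes this class, not the callers.
class CachingRepository<T> implements Repository<T> {
  private cache = new Map<string, { value: T; expires: number }>();
  constructor(private inner: Repository<T>, private ttlMs: number) {}

  async get(id: string): Promise<T> {
    const hit = this.cache.get(id);
    if (hit && hit.expires > Date.now()) return hit.value; // serve from cache
    const value = await this.inner.get(id);
    this.cache.set(id, { value, expires: Date.now() + this.ttlMs });
    return value;
  }

  async update(id: string, value: T): Promise<T> {
    const updated = await this.inner.update(id, value);
    // Write-through: keep the freshly updated record so the next read is a hit.
    this.cache.set(id, { value: updated, expires: Date.now() + this.ttlMs });
    return updated;
  }
}

// Service layer / client code only ever sees this seam (URL is a placeholder).
const records = new CachingRepository(
  new RestRepository<{ name: string }>("https://api.example.com/records"),
  60_000
);
```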
I used to always suggest using memcached or memcached Win32, depending on your environment. Memcached Win32 works really well if you are in a Windows world! Look to the Enyim client for memcached Win32... it is the least problematic of all the ports.
If you are open to it, though, and you are in a .NET world, then you might try Velocity. MS finally got the clue that there was a hole in their caching framework, in that they needed to support the farm concept. Velocity, last time I checked, was not out of beta yet... but it is still worth a look.
I generally suggest using the repository and service layer concepts from day one, even though you don't need them yet. The flexibility they provide for your application is worth having, as you never know which direction your application will need to be pulled in. Needing to scale is usually the best reason to want this flexibility. But usually when you need to scale, you need to scale now, and retrofitting a repository layer and service layer, while not impossible, is usually semi-complex to do down the road.
