Sharing Logic Between the Browser and the Server - apache-flex

I'm working on an app which will, like most apps, have a whole boatload of business logic, almost all of which will need to be executed both on the server and in the Flash-based client… And I'm trying to figure out the best (read: least complex) way to implement the rules engine.
These are the parameters of the problem:
The rules engine must run both in a web browser (i.e., in Flash Player) and on the server. Duplicating the logic (e.g., by writing a "server" version and a "client" version) would be an unacceptable risk.
The input/output data is fairly complex, so serialization is a nontrivial problem. We are currently using AMF for all of our serialization needs, and using another protocol would add significant complexity… So it should probably be avoided.
It is infeasible to implement a "rules description language". Experimentation has shown that rules are sufficiently complex that any such language would need to be Turing complete… Which would also add a significant amount of complexity.
The rules engine will need to make some, but not very many, service calls.
Currently, the best contenders are:
Writing the code in ActionScript, then running it on the server. In theory it's possible to start up an AVM instance, get it long-polling a gateway, then pass data back and forth that way… But that seems less than ideal. Is there a "good" way of doing this?
Writing the code in Haxe. I don't know anything about Haxe's AMF support, so that could be a deal-breaker.
Something involving Tamarin. Seems like a viable option, but I haven't done enough research to tell either way.
So, what do you think? Are any of these options clearly better than the others? Is there something I haven't thought of that's worth considering?
Finally, thanks for reading this wall of text :)

How much data are you talking about? You could use AIR if you want to run it on the server and have it access a queue or something.

Related

Apache NiFi vs Gobblin

I am assessing a big-data project. We would need to pull lots of big data sets from various internet sources (FTP, API, etc.), do light transformations and light data quality / sanity checking (e.g., row and columnar inspections), and push them downstream. The immediate focus is batch, but we anticipate supporting streaming down the line. Ease of support at scale is an important requirement.
We are looking at Apache NiFi and Gobblin, which seem to overlap in intent. What sort of use cases fit which platform best? How would they fit the use case above?
Thanks!
My experience is with NiFi, and I've only just had a look at Gobblin, but the main difference is that NiFi is an application in itself, whereas Gobblin is a framework.
In NiFi you get a GUI with very granular authorizations, which allows several users to intervene on different parts of the flow, monitor it, and so on.
Another point is that NiFi is 'always on' and 'always in production': you can potentially make your modifications directly on the target, and as such there are a few safeguards in place to avoid losing data by mistake.
So, while I think both solutions can do more or less the same thing, if you have a workflow that you deploy once from time to time, Gobblin might be a better fit; but if you want something where you give some users permission to intervene on parts of the flow directly in production, NiFi will be the better choice.
In the end, to keep the question oriented on programming:
NiFi allows you to program graphically, to give very granular permissions to your 'developers', as well as to update the 'program' (the NiFi flow) while it is running.
Gobblin seems (from what little I've looked up) to work by defining jobs in text files, which is more of a 'classical' development workflow, but that may fit your usage better.

Streaming stdout to a web page

This seems like it should be a really simple thing to achieve; unfortunately, web development was never my strong point.
I have a bunch of scripts, and I would like to launch them from a webpage and see the realtime stdout text on the page. Some of the scripts take a long time to run so the normal single response isn't good enough (I have this working already).
As far as I can see, my options are:
Write stdout to a file, then periodically (every couple of seconds) send a request from the client and respond with the contents of that file (a rough sketch of this appears just after this question).
Chunked HTTP responses? I'm not sure if this is what they are meant for; I tried to implement this already, but I think I may be misunderstanding their purpose.
Websockets (I'm using a Luvit server so this isn't an option).
... Something else?
I'm sure there must be a standard way of achieving this; I see other sites doing it all the time. TeamCity, for example, or chat rooms (vanilla TCP sockets?).
Any pointers in the right direction appreciated. The simplest method possible; if that just means sending lots of scheduled requests from the client, then so be it.
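A minimal sketch of the first option (write stdout to a file and have the client poll it). Python is used purely for illustration, since the actual server here is Luvit; the script name, log path, port, and URL layout are all invented for the example.

```python
# Sketch of "stdout to a file, poll it from the client".
# Everything named here (script, log path, endpoints) is hypothetical.
import os
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse, parse_qs

LOG_PATH = "/tmp/job.log"  # where the script's stdout is captured

class PollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/start":
            # Launch the long-running script with stdout redirected to the log.
            with open(LOG_PATH, "wb") as log:
                subprocess.Popen(["./long_running_script.sh"],
                                 stdout=log, stderr=log)
            self.send_response(202)
            self.end_headers()
        elif url.path == "/poll":
            # The client sends the byte offset it has already seen;
            # we answer with only the output produced since then.
            offset = int(parse_qs(url.query).get("offset", ["0"])[0])
            chunk = b""
            if os.path.exists(LOG_PATH):
                with open(LOG_PATH, "rb") as log:
                    log.seek(offset)
                    chunk = log.read()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(chunk)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    ThreadingHTTPServer(("localhost", 8080), PollHandler).serve_forever()
```

The page would then request /poll?offset=N every couple of seconds, append whatever comes back, and remember the new offset.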
That heavily reminds me of the Common Gateway Interface (CGI).
Your own ideas all sound like they're going in the right direction. As you are using a shell script, with some potentially nontrivial interactions with the web server, I feel it could make sense to point out where to dig for examples of this kind of code, which was common a long time ago and, basically always, very error-prone.
Practically, your script is a CGI script, doing typical things.
In the earlier days and years of the internet, that was the "normal" way to implement web pages that are not just static files (HTML or otherwise).
The page is basically implemented as a shell script (or any other program reading from stdin and writing to stdout).
Part of what you are doing/proposing is very similar, and I think there are useful lessons to learn from old CGI code.
For example, getting the buffering right all the way from inside the script, over stdout, through the web server, and onto the client's page can of course be tricky.
So digging out old examples could help a lot.
(Much of this may be obvious to you, the OP, personally, so take the "you" as addressing a potential reader.)
The tricky part in general will be the buffering, I expect. If you are used to explicitly handling stdin/stdout buffers in the shell for programs that do not support it themselves, you can imagine the kinds of things to expect; if you are not used to it, be warned that CGI is worse, as you have to get the buffering of the HTTP server in sync too (let's hope it is handled automatically), so maybe start asking questions and digging for examples early.
The CGI-style way would be exactly what you have implemented now, and if the buffering is right, that should be as real-time as it can get. But I understand that you get timeouts because of the long runtime? Or do you have strongly varying runtimes?
In terms of getting it as real-time as possible, there is nothing better than writing stdout straight to the HTTP stream.
(I assume we accept the overhead of going through an HTTP server.)
Also, I'm thinking of line buffering, so not flushing every character: is that good enough for the use case? (i.e., no animated progress-indicator lines or ANSI escapes that you want to see in real time)
Then maybe the best approach is to work around issues like timeouts but keep the concept. If real time is not that important, other approaches may of course be better in many ways; for one thing, other methods might be required for any kind of scalability.
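To make the buffering discussion concrete, here is a minimal sketch of the CGI-style idea: run the script and stream its stdout into the HTTP response line by line, flushing after each line. Python and the script name are illustrative only; the same shape applies to a classic CGI script or a Luvit handler.

```python
# Sketch of streaming a script's stdout straight into the HTTP response.
# The script name and port are hypothetical.
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        # No Content-Length: with HTTP/1.0 the client reads until the
        # connection closes, so the body can be streamed as it appears.
        self.end_headers()

        # text=True plus bufsize=1 gives line buffering on our side of the pipe.
        proc = subprocess.Popen(["./long_running_script.sh"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                text=True, bufsize=1)
        for line in proc.stdout:
            self.wfile.write(line.encode("utf-8"))
            self.wfile.flush()  # push each line to the client immediately
        proc.wait()

if __name__ == "__main__":
    ThreadingHTTPServer(("localhost", 8080), StreamHandler).serve_forever()
```

The remaining pitfall is exactly the one described above: if the script itself block-buffers its stdout once it is attached to a pipe, lines arrive in bursts no matter what the server does, so forcing line buffering inside the script (or wrapping it with something like stdbuf) is part of getting the end-to-end behaviour right.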

Bogarting the Data Access Layer

Situation: The DBA is an offsite contractor who keeps the entire DAL code checked out in TFS. It would be nice, as the front-end developer, to be able to add columns, tweak procs and whatnot, without having to wait for this dude to respond to your emails before the work gets done.
Question: What would be a recommended solution/process that would allow for more rapid/agile development, while maintaining data integrity as well as peace, love, and happiness among the team?
I'm getting some good feedback on Programmers HERE
There is no general technical answer to your question (unless you can define a very limited kind of needed access, which can be supplied via an API he provides for you in the DAL, etc.).
Assuming you have already tried to talk with him, and perhaps even escalated the issue, there is probably a valid reason for limiting access (security, data model integrity, performance tuning, version control, etc.).
Try to understand the reasoning behind his approach and to better define your actual needs; it is possible that you can then formulate an improvement to your architecture (such as the aforementioned API) or to your development process. Most importantly, talk frankly about your concerns; communication can go a long way, as long as you are willing to understand the other side.

ASP.Net webservice/asmx/ashx/whatever programming

I need to build a proxy (maybe a bad description) that receives an XML file from a 3rd party, saves it, sends it on to another 3rd party, gets the response back and passes that back to the original 3rd party. Let's call that entire process a "unit".
Should I use a webservice? A Generic Handler? Something else?
I might have to do 20 "units" per second, but I know that each "unit" may span 30 seconds to a minute each, so really, I mean that I need to be able to have 1200 of these "units" running at the same time, in all varying stages of the process that I described above.
As far as the file saving goes, I eventually want to put this into a database, but I would imagine that writing the file is quicker than actually saving the data into a database, so I'll just have another, far less time-critical process grab the files and insert them into the DB at its own convenience.
The "app" will only consist of 1 page and it will be running under SSL. This will likely be the only thing on this server at any given time to ensure that this little process is not a bottleneck.
What in .Net would be a good (fast and scalable) way to go about this? I don't have any effective limit on what I would need as far as hardware goes -- so I can get a screaming machine if it would guarantee no bottlenecks.
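For concreteness, here is a rough sketch of one "unit" end to end: receive the XML, write it to disk for the later database sweep, forward it upstream, and relay the answer. It is written in Python purely for illustration (the stack in question is ASP.NET); the upstream URL, file naming, and port are assumptions.

```python
# Language-agnostic sketch of one proxy "unit"; all names are hypothetical.
import uuid
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM_URL = "https://third-party.example.com/endpoint"  # hypothetical

class UnitHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # 1. Receive the XML from the first third party.
        length = int(self.headers.get("Content-Length", 0))
        xml_in = self.rfile.read(length)

        # 2. Save it to disk; a separate, non-time-critical process can
        #    sweep these files into the database later.
        with open("incoming-%s.xml" % uuid.uuid4(), "wb") as f:
            f.write(xml_in)

        # 3. Forward it to the second third party and wait for the answer
        #    (the step that may take 30-60 seconds per unit).
        req = urllib.request.Request(UPSTREAM_URL, data=xml_in,
                                     headers={"Content-Type": "text/xml"})
        with urllib.request.urlopen(req) as resp:
            xml_out = resp.read()

        # 4. Relay the response back to the original caller.
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(xml_out)))
        self.end_headers()
        self.wfile.write(xml_out)

if __name__ == "__main__":
    # A threading server keeps many mostly-idle units in flight at once; the
    # equivalent concern in .NET is using async I/O so request threads are
    # not pinned for the full 30-60 seconds of each unit.
    ThreadingHTTPServer(("0.0.0.0", 8443), UnitHandler).serve_forever()
```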
Since web services are based on XML, you need to consider the fact that you could end up with "XML inside XML". But apart from that, I'd say using web services is a good way to go, mostly because it is compatible, easy to use, and easy to understand (for future maintainers).
There are, however, alternatives that use less CPU/memory/bandwidth. WCF provides several models to address this, both in terms of running under IIS versus as a stand-alone process, and in terms of transfer type.
Personally I'm a fan of plain old binary transfer through TCP. REST could be one way to go as it is compatible (frontend proxy/caching for instance) and essentially gives you a binary transfer with little overhead.
I also like to leave the dirty work to IIS, so I avoid stand-alone WCF apps. I assume IIS is faster and more stable than what I can do easily.
Maybe my question on high concurrent load can be of help.
I would write a WCF service, use REST to simplify its URLs, and set the WCF service to run as a singleton so that your memory doesn't get out of control.
Good article on WCF: http://www.c-sharpcorner.com/UploadFile/sridhar_subra/116/

How do I ensure that SOAP requests from a flash client to my ASP server are coming from the flash client?

I have a flash based game that has a high score system implemented with a SOAP service. There are prizes involved and I want to prevent someone from using FireBug or similar to discover the webservice path and submit fake scores.
I considered using some kind of encryption on the data but am aware that someone could decompile the swf and work out how I did it.
I also considered using an IP whitelist, but since the incoming data will come from the user's IP and not the server's, that won't work. (I'm sure I'm missing something obvious here...)
I know that there is a tried and tested solution for this, but I don't seem to be asking google the right questions to get to it.
Any help and suggestions will be appreciated, thank you
What you want to achieve is impossible; you can only make it harder for people to do. The best you can do is to use encryption and encrypt the SWF itself, which usually results in a larger file size and poorer performance.
The safest method is to evaluate, or even run, the whole game on the server. You can then try to determine whether what the client sends you is possible at all. Rather than making sure people use your client, you're making sure people play the game according to your rules.
greetz
back2dos
All security is based on making things hard; it never makes things impossible. How about having your game register with a separate service when it starts up? It could use client information to build some kind of special code that would be unique for each run of the game. The game could morph the code in a way that would be hard to emulate. Then, when the game is over, the score gets submitted with the morphed code and validated on the server side.
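A hedged sketch of that idea: the server hands out a per-game session token at startup, and the finished score must arrive with a signature over the token and score, plus a basic plausibility check on playing time. Python is used purely for illustration, and the secret/token scheme is an assumption; anything shipped inside the SWF can still be extracted, so this only raises the bar, as noted above.

```python
# Illustrative "signed score submission" scheme; names and values are made up.
import hashlib
import hmac
import secrets
import time

SHARED_SECRET = b"not-really-secret-once-it-ships-inside-the-swf"

# --- server side ------------------------------------------------------------
_sessions = {}  # token -> start time, kept only on the server

def start_session():
    """Called when the game starts; returns a one-time session token."""
    token = secrets.token_hex(16)
    _sessions[token] = time.time()
    return token

def submit_score(token, score, signature):
    """Validate a score submission signed by the client."""
    started = _sessions.pop(token, None)
    if started is None:
        return False  # unknown or already-used token
    expected = hmac.new(SHARED_SECRET, f"{token}:{score}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # score or token was tampered with
    if time.time() - started < 30:
        return False  # plausibility check: the game ended suspiciously fast
    return True

# --- client side (what the SWF would compute) -------------------------------
def sign_score(token, score):
    return hmac.new(SHARED_SECRET, f"{token}:{score}".encode(),
                    hashlib.sha256).hexdigest()

if __name__ == "__main__":
    t = start_session()
    # Imagine the game being played here...
    print(submit_score(t, 4200, sign_score(t, 4200)))  # False: finished too fast
```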
