Unity3D 5 networking: UNET HLAPI vs LLAPI

I was working on a racing game, but a year ago I took a break (the project was built on Unity 4). Since restarting the project I have had to reimplement the networking, because it no longer works well with the old Unity 4 networking. I now see that there are two ways to implement it using UNET: the HLAPI and the LLAPI. Since synchronization is very important to me, the LLAPI seems better and more flexible, but I'm here to ask some experts whether the effort of using it makes sense, or whether the HLAPI gives good enough results.

I don't think it's "LLAPI vs HLAPI".
The LLAPI is simply the lowest level of the stack the HLAPI is built on, and at that level everything is flexible.
For example, I'm not using the high-level (Engine Integration) NetworkTransform class, because it's crap, but I am using NetworkServer, at a lower level (Connection Management), because it's well done and you can override everything.
So it's not HLAPI vs LLAPI: the HLAPI is a ladder, and you just choose which levels can be used "as is".
I know some devs write their own serialization but still use the higher levels. Overall it works really well, EXCEPT for the "Engine Integration" level, which to me is just a badly written example (or useful only for card games and the like).
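To make that concrete, here is a minimal sketch of what using the Connection Management level directly can look like, assuming Unity 5.x with UnityEngine.Networking (UNET): NetworkServer/NetworkClient plus a custom message, with no NetworkTransform involved. The CarStateMsg type, the message id and the port number are made up for illustration, and the server just relays state; how you interpolate or correct the received state stays entirely in your hands.

using UnityEngine;
using UnityEngine.Networking;

// Custom message carrying one car's state; fields of a MessageBase are serialized automatically.
public class CarStateMsg : MessageBase
{
    public int carId;
    public Vector3 position;
    public Quaternion rotation;
    public Vector3 velocity;
}

public class RaceNetwork : MonoBehaviour
{
    // Custom message ids live above the built-in ones.
    const short CarStateMsgId = MsgType.Highest + 1;
    NetworkClient client;

    // Server side: listen and relay every received car state to all clients.
    public void StartServer()
    {
        NetworkServer.RegisterHandler(CarStateMsgId, OnServerCarState);
        NetworkServer.Listen(7777);
    }

    void OnServerCarState(NetworkMessage netMsg)
    {
        CarStateMsg state = netMsg.ReadMessage<CarStateMsg>();
        NetworkServer.SendToAll(CarStateMsgId, state); // naive relay; filtering/validation is up to you
    }

    // Client side: connect, then push this car's state as often as the game decides.
    public void StartClient(string serverAddress)
    {
        client = new NetworkClient();
        client.RegisterHandler(MsgType.Connect, OnConnected);
        client.RegisterHandler(CarStateMsgId, OnClientCarState);
        client.Connect(serverAddress, 7777);
    }

    void OnConnected(NetworkMessage netMsg)
    {
        Debug.Log("Connected to server");
    }

    public void SendCarState(int carId, Transform car, Vector3 velocity)
    {
        var msg = new CarStateMsg { carId = carId, position = car.position, rotation = car.rotation, velocity = velocity };
        client.Send(CarStateMsgId, msg);
    }

    void OnClientCarState(NetworkMessage netMsg)
    {
        CarStateMsg state = netMsg.ReadMessage<CarStateMsg>();
        // Apply or interpolate the remote car's state here.
    }
}

The same idea applies to the custom serialization mentioned above: on a NetworkBehaviour you can override OnSerialize/OnDeserialize instead of relying on SyncVars, while still letting the higher levels handle connections and spawning.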

How much unity across different teams?

Our company builds several (Java) applications that loosely communicate with each other via web services, remote EJB, and occasionally via shared data in a DB.
Each of those applications is built and maintained by its own team: one or two people for the smaller apps, and almost ten for the largest one. In total we have approximately 25 FTE of developers.
One problem we're facing is that there are some big egos among the teams. Historically, the team behind the largest app has set up a code convention and general guidelines. For instance, our IDE is NetBeans, we use Hg for SCM, we build with Ant, and the guidelines emphasize first using as much of Java EE as possible; if that doesn't suffice, use an external library, and only resort to writing something yourself as a last resort. Writing things like yet another logging framework, ORM, CMS, or web framework is pretty much not allowed under these guidelines.
Now some of the smaller teams go against this: they start using Eclipse, Git, and Maven, and take the approach of writing as much as possible themselves, only looking at existing solutions if time is short or they 'just don't feel like writing it themselves'. Where the main team uses log4j, one of the smaller teams just started writing their own logging framework.
There have been talks about all teams adhering to the same standards, but these have been 'troublesome' at best.
Now the big question I'd like to ask: does it actually matter that different teams do things differently? As long as each separate app implements its requirements and provides the agreed-upon interfaces, should we really force everyone to use Hg, Ant, the same code conventions, and so on?
There is not much harm in letting each team use the technologies that work best for them. In fact, if you restrict teams to the "standard" way of doing things, you'll stifle innovation and hurt morale.
But you don't want things to diverge too much. There are a few things you can do to prevent libraries and tools from getting out of hand. The first is to regularly rotate members through the teams to cross-pollinate ideas. That way the best ideas will spread through the teams.
You can also enforce a "rule of 3", which simply says it is ok to introduce a second library, tool, logging approach, whatever. But as soon as you want to introduce a 3rd one, you have to remove one of the first two. In other words it is ok to have 2 competing logging frameworks but if there are 3 logging frameworks, choose one to kill.
A third idea is to have developers give regular presentations to the entire developer group to demonstrate the pros and cons of each idea or approach. Encourage lots of discussion and constructive criticism. The purpose is to try many things and let everyone, as a group, find the best way.
Finally, Management 3.0 talks a lot more in depth about how teams make decisions. Well worth the read.

Regarding Automation Approach

I am supposed to automate testing of an application developed in PowerBuilder. To test this application we are using Rational Robot as our functional testing tool. We expect at least 40-50% of the application to change with each release, and releases are scheduled at least three times a year.
The product has a different setup for each client, and the test scenarios have been derived accordingly. Whenever a change occurs, it tends to affect both functional features and the interface, and the automation needs to proceed with that in mind. I have identified a few areas that are stable (i.e., where no major changes occur) as candidates for automation. Is it feasible to proceed with automation on that basis?
Could you please suggest how to go about this?
I've seen the answer to your last question involve a consultant spending two days interviewing the development team, then three days developing a report. And, in some cases, I'd say the report was preliminary, introductory, and rushed. However, let me throw out a few ideas that may help manage expectations on your team.
Testing automation is great for checking for regressions in functionality that allegedly isn't being touched. Things like framework changes or database changes can cause untouched code to crash. For risk-averse environments (e.g. banking, pharmaceutical prescriptions), the investment in automation is well worth the effort.
However, what I've often seen is an underestimation of the effort. To really test all the functionality in a unit (let's say a window, for example), you need to review the specifications, design tests that cover each functional point, plan your data (what data goes into the window, how you will make sure this data is as expected each time you start the test, what the data is when you've finished your tests, how you will ensure the data, including non-visible data, is correct), then script and debug all your tests. I'm not sure what professional testers say (I'm a developer by trade, but have taken a course in an automated testing tool), but if you aren't planning for the same amount of effort to be spent developing the automated tests as you're spending on application development for the same functionality, I think you'll quickly become frustrated. Add to that that changing functionality means changing test scripts, and automated testing can become a significant cost. (So, tell your manager that automated testing doesn't mean you push a button and things get tested. < grin > )
That's not to say that you can't expend less effort on testing and get some results, but you get what you pay for. Having a script that opens and closes all the windows in the app provides some value, but it won't tell you that a new behaviour implemented in the framework is being overridden on window X, or that a database change has screwed up the sequencing of items in a drop-down DataWindow, or that a report's completion time has gone from five seconds to five hours. However, again, don't underestimate the effort. This is a new tool with a new language and idiosyncrasies that need to be figured out and mastered.
Automated testing can be a great investment. If the cost of failure is significant, like a badly prescribed drug causing a death, then the investment is worth it. However, for cases where there is a high turnover of functionality (like I think you're describing) and the consequences of a failure are less critical, you might want to consider comparing the cost/benefit of test automation with additional manual testing resources.
Good luck,
Terry
Adding on to what Terry said: It sounds like you are both new to automation in general and to Rational Robot in particular.
One thing to keep in mind is that test automation is software development and needs to be treated as such. That means you need personnel dedicated to the automation effort who are solid programmers and have expertise in the tool being used (Robot in this case).
If your team does not have the general programming/automation skills and the specific Robot skills, you are going to need to hire that personnel or get existing staff trained in those skill sets.

Data Binding in 3 layered architecture?

Does data binding fit in a 3-layered architecture? Is dropping a GridView on a web form and binding it to a LinqDataSource or SqlDataSource considered bad? The way I see it, that's the presentation layer talking to the data access layer. I once heard a "professional developer" say never ever do this, so what's the alternative if you shouldn't?
The way you are doing it is OK if it is a small project, but if you want your app to have the flexibility to support both Windows and Web front ends in the future, then you must use layers.
Please follow this link http://www.dotnetspider.com/resources/1566-n-Tier-Architecture-Asp.aspx
You should have a middle tier between the presentation and data access layers. The middle tier is pulled out of the presentation tier and, as its own layer, controls the application's functionality by performing detailed processing.
The main tasks of the business layer are business validation and business workflow.
When you build your business logic components into an SDK, you effectively disconnect them from your web application and any input validation it performs. Your business logic components are therefore the last line of defense to make sure that only valid values make it into your database.
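As a rough illustration of that layering (one way of many to do it), here is a minimal C# sketch; the Product class, ProductRepository, ProductService and the "Shop" connection string name are all made-up examples. The page never talks to the database directly: it binds to whatever the business layer returns.

using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlClient;

// Entity passed between the layers.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Data access layer: the only place that knows about the database.
public class ProductRepository
{
    readonly string connectionString =
        ConfigurationManager.ConnectionStrings["Shop"].ConnectionString;

    public List<Product> GetAll()
    {
        var products = new List<Product>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name, Price FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    products.Add(new Product
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1),
                        Price = reader.GetDecimal(2)
                    });
                }
            }
        }
        return products;
    }
}

// Business layer: validation and workflow live here, the "last line of defense".
public class ProductService
{
    readonly ProductRepository repository = new ProductRepository();

    public List<Product> GetCatalog()
    {
        // Business rules (filtering, pricing, validation, ...) would go here.
        return repository.GetAll();
    }
}

The presentation layer then still uses data binding, only against the business layer's output instead of a data source control declared in markup, e.g. in the page code-behind: ProductsGrid.DataSource = new ProductService().GetCatalog(); ProductsGrid.DataBind(); (ProductsGrid being whatever your GridView is called).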
Data binding is, of course, necessary to effectively display data.
Tooling is great and can boost productivity. It is equally important to understand what the tooling is generating, even at a basic level, in order to be able to effectively utilize the generated code.
The reaction you describe seems a bit extreme. If a wizard can generate some code that works for ya, then use it. If you don't understand the generated code then that is the next priority; learn about what it is doing and why. In the meantime, you have a page that people can put eyes on regardless of how it got there.
I am a bit pragmatic when it comes to tools. You do what you have to do. But, if after [insert appropriate internship length] you are still using code gen and cannot customize or fix it then you (as in the royal you, not the you you) are being lazy or stupid or both. ;-)
OT:(almost) Never say never unless you want to lessen the impact of what you are trying to communicate.
my 2 pesos.
When you're doing a small project or a prototype, go with the LinqDataSource or SqlDataSource. However, the downsides of those data sources are serious enough that you should think hard about whether they are appropriate. If you're doing a multi-layered or multi-tier architecture, they simply don't fit. But even if your architecture isn't that strict, you should ask yourself how big this application is going to be and how likely it is that you will be making changes to the system in the future. How much time is it going to take you when you want to make a change to the database?
I've seen projects where the developers used those data sources because those were the constructs shown in those nice ASP.NET videos. However, when the projects grew from prototypes into big production applications (yes, I've seen it happen; the prototype seemed good enough), the lack of compile-time support (your queries are defined in markup!) made it very hard to make any change to the system.
When you need to make a change, that will be the moment you see that the cost of the change is an order of magnitude bigger than the time you saved by flattening your architecture.

Upgrading from Drupal 6 to Drupal 7: best programmer's practices?

Although I have been using Drupal since the D4 series, I only started developing for it professionally with D6, so - even though I have done various site upgrades - I have never faced the task of having to port my own code to a new version.
I know the Drupal community will come up with a lot of technical support about changed APIs and architectural changes (see the deadwood module for D5-D6, or even these stubs of D6-D7 how-tos for modules and themes).
However, what I am looking for with my question is more along the lines of strategic thinking; in other words, I am looking for input and advice on how to plan / implement / review the process of porting my own code, in light of what fellow developers have learned from previous experience. Some examples:
Would you advise starting to port my modules as soon as I have time for it, and maintaining a concurrent D7 version for a while (so I am "prepared" for D-day), or would you rather advise waiting until the port is actually imminent and then upgrading the modules to D7 and dropping the D6 version?
Only some of my modules have full test coverage. Would you advise completing test coverage for the D6 version, so that all tests are in place to check the D7 port, or would you advise writing the tests at porting time, directly against the D7 version?
Did you find that being an early adopter gives you an edge in terms of new features and better APIs, or did you rather find it more convenient to delay the conversion so as to take advantage of the larger number of readily available contrib modules?
Did you set quality standards / evaluation criteria for yourself, or did you just set the bar at "if it works, I'm happy"? Why? If you set certain standards or goals, what were they / what will they be? How did they help you?
Are there common pitfalls that you experienced in the past and that you think are applicable to the D6-D7 porting process?
Is porting a good moment to do some refactoring, or is that just going to make everything more complex to put back together?
...
These questions are not an exhaustive list, but I hope they give an idea of the kind of information I am looking for. I would rather say: whatever you think is relevant that I did not list above gets a "plus"! :)
If I did not manage to express myself clearly enough, please post a comment with the info you think I should add to the question. Thank you in advance for your time!
PS: Yes, I know... D7 is not out yet and it will take months before the important contrib modules are upgraded... but it's never too early to start thinking! :)
Good questions, so let's see:
(when to start porting)
This certainly depends on the complexity of the modules to port. If there are really complex/large ones, it might be useful to start early in order to find tricky spots while not being under pressure. For smaller/standard ones, I'd try to find a bigger time slot later on where I can port many of them in a row in order to get the routine stuff memorized quickly (and benefit from the probably improved documentation).
(test coverage)
Normally I'd say that having good test coverage before starting to refactor/port would certainly be advisable. But given that Drupal-7 introduces a major change to the testing framework by moving it into core, I'd expect to have to rewrite a significant number of tests anyway. So if there is no need to maintain the Drupal-6 versions after the migration, I'd save the time/trouble and aim for increased coverage after the porting.
(early adopter vs. wait and see)
Having used Drupal since version 4.7, we have always waited for at least the first official release of a new major version before even thinking about porting. With Drupal 6, we waited for the views module before porting our first site, and we still have some smaller projects on Drupal-5, as they are working just fine and it would be hard to justify the extra bill to our clients as long as there are still maintenance/security fixes for it. There are only so many hours in a day, and there is always a backlog of bugs to fix, features to add, etc., so there is no point playing with unfinished technology while there are more imminent things to do that would immediately benefit our clients. Now, this would certainly be different if we had to maintain one or more 'official' contributed modules, as offering an early port would be a good thing.
I'm a bit in a bind here - being an early adopter certainly benefits the community, as someone has to find the bugs before they can get fixed, but on the other hand, it makes little business sense to fight hour after hour with bugs others might have found/fixed had you just waited a bit longer. As long as I have to do this for a living, I need to watch my resources, trying to strike an acceptable balance between serving the community and benefiting from it :-/
(quality standards)
"If it works, I'm happy" just doesn't cut it, as I don't want to be happy momentarily only, but tomorrow as well. So one of my quality standards is that I need to be (somewhat) certain that I 'grokked' new concepts well enough in order to not just makes things work, but make them work like they should. Now this is hard to define more precisely, as it is obviously impossible to know if one 'got it' before 'getting it', so it boils down to a gut feeling/distinction of 'yeah, it kinda works' vs. 'yup, that looks right', and one has to accept that he will quite regularly be wrong about this.
That said, one particular point I'm looking out for is 'intervene as early as possible'. As a beginner, I often tweaked stuff 'after the fact' during the theming stage, while it would have been much easier to apply the 'fix' earlier in the processing chain by means of one hook or the other. So right now, whenever I'm about to 'adjust' something in the theme layer, I deliberately take a small time out to check if this can not be done more cleanly/compatible within a hook earlier on. As I expect Drupal-7 to add even more hooking options, this is something I will pay extra attention to, as it usually reduces conflicts and sudden 'breaking of stuff' when adding new modules.
(common pitfalls)
Well - mainly porting too early, and finding out afterwards/in between that one or more needed modules were not available for the new version at all, or only in a dev/alpha/early beta state. So I'd make sure to compile a complete list of the used/needed modules first, listing their porting state, along with a quick inspection of their issue queues.
Besides that, I have so far always been very pleased with the new versions and their improvements, and I'm looking forward to Drupal-7 again.
(refactoring while porting)
One could say that porting is a rather large refactoring in itself, so there is no need to add to the complexity by restructuring non porting related stuff. On the other hand, if you already have to shred your modules to pieces anyway, why not use the opportunity to make it a major overhaul? I'd try to draw a line based on complexity - for big/complex modules, I'd do the port as straight as possible, and refactor more later on, if need be. For smaller modules, it shouldn't really matter, as the likelihood of introducing subtle bugs should be rather small.
(other stuff)
... need to think about it ...
Ok, other stuff:
Resource needs - given some of the Drupal-7 threads, it looks like they are likely to go up, so this should be evaluated before porting smaller sites that sit on a shared/restricted hosting account.
Latest versions first - This one is rather obvious and always stressed in the upgrade guides, but nevertheless worth mentioning: upgrade core and all modules to their latest current versions before doing a major upgrade, as the upgrade code is very likely to depend on the latest table/data structures in order to work correctly. Given Drupal's 'piecemeal', one-step-at-a-time update strategy, it would be very hard to implement upgrade code that detects different pre-upgrade states and acts accordingly.

Productivity gains of using CASE tools for development

I was using a CASE tool called MAGIC for a system I'm developing. I had never used this kind of tool before, and at first sight I liked it; a month later I had generated a large part of the application, I felt very productive and ... I would say ... satisfied.
In some ways I felt uncomfortable, because there is no code and none of what I was used to, but on the other hand I could speed up my development. The fact is that eventually I went back to C#, because I find it more flexible to develop in: I can do unit testing, use CVS, I have access to more resources, and basically I have "all the control". I felt that this tool didn't give me confidence, and I thought that later in the project I would not be able to manage it because of its rigidly established rules of development. Also, a lot of things like sending emails, using my own controls, and so on had their complications; it seemed that at some point it was not going to be as easy as I initially thought and as the product initially claims. This reminds me of a very nice article called "No Silver Bullet".
This CASE tool has its advantages, but on the other hand there are few resources you can consult, and the license and certification are very expensive. Another disappointing thing for me is its simplistic approach to development: I felt scared, first because of my inexperience with these kinds of tools, and second because I thought that if I kept using it, it might turn into a complex monster that I could not manage later in the project.
I think it's good to use these kinds of solutions to speed things up, but I wonder: why aren't these programs as popular as VS.Net, J2EE, Ruby, Python, etc., if they claim to enhance productivity more than the tools I've just mentioned?
We use a CASE tool at my current company for code generation and we are trying to move away from it.
The benefits that it brings - a graphical representation of the code that makes components 'easier' for new developers to pick up - are outweighed by the disadvantages, in my opinion.
Those main disadvantages are:
We cannot do automatic merges, which makes parallel development on one component close to impossible.
Developers become dependent on the tool and 'forget' how to hand-code.
Just a couple questions for you:
How much productivity do you gain compared to the control that you give up?
How testable and reliable is the code you create?
How well can you implement a new pattern into your design?
I can't imagine that there is a CASE tool out there with which I could write a test first and then generate the code I need. I'd rather stick to ReSharper, which can easily do my mundane tasks while letting me retain full control of my code.
The project I'm on originally went w/ the Oracle Development Suite to put together a web application.
Over time (5+ years), customer requirements became more complex than originally anticipated, and the screens were not easily maintainable. So, the team informally decided to start doing custom (hand coded) screens in web PL/SQL, instead of generating them using the Oracle Development Suite CASE tools (Oracle Designer).
The Oracle Report Builder component of the Development Suite is still being used by the team, as it seems to "get the job done" in a timely fashion. In general, the developers using the Report Builder tool are not very comfortable coding.
In this case, it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background.
Unfortunately the Magic tool doesn't generate code, and it can't implement a design pattern either. I don't have control over the code because, as I stated before, there is no code to modify. The bottom line is that it can speed up productivity in some ways, but it makes it impossible to use CVS or patterns, and I can't control all the details.
I agree with gary when he says "it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background", but I also can't agree more with Klelky:
Those main disadvantages are:
1. We cannot do automatic merges, which makes parallel development on one component close to impossible.
2. Developers become dependent on the tool and 'forget' how to hand-code.
Thanks
