How to compress QEvents without using Qt internal headers?

Since the headers needed to iterate the posted event list in QCoreApplication::compressEvent are considered private, is there a way of getting equivalent functionality without depending on Qt's internal headers, but only on documented semantics of Qt?
Note that this is a different question from the other one concerning signals and slots!

AFAIK, there is not, as per my other post.
The only API for this is internal, as you write, and it can change at any time without notice. Therefore, unless you are writing code that is part of the Qt release itself, this should be avoided, since it can suddenly break for end users.
I even discussed it with a couple of developers on IRC (peppe and suy, I think), but we left the topic at the conclusion that there is no public API. This may change in the future, as noted in the post.
My personal suspicion, without having discussed it with the maintainer, is that it has not been a common enough use case, and hence no one has bothered to push it through yet. Personally, I can live without this feature, since its absence has not caused me any serious defect so far, even in large-scale, heavily multi-threaded Qt-based software.
It is also quite possible that there are technical reasons behind this that I am simply not aware of.
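That said, you can often get the practical effect of event compression through documented API alone. QCoreApplication::removePostedEvents() is public and discards any still-queued events of a given type for a receiver, so calling it just before posting leaves at most one such event in the queue. A minimal sketch, assuming a custom event type (the class and names here are hypothetical):

    #include <QCoreApplication>
    #include <QEvent>
    #include <QObject>

    // Hypothetical custom event type, used only for illustration.
    static const int UpdateEventType = QEvent::registerEventType();

    class Worker : public QObject
    {
    public:
        // Post an update, compressing any still-pending updates first.
        void postCompressedUpdate()
        {
            // Documented public API: drop all queued events of this type
            // for this receiver, then post the fresh one.
            QCoreApplication::removePostedEvents(this, UpdateEventType);
            QCoreApplication::postEvent(
                this, new QEvent(static_cast<QEvent::Type>(UpdateEventType)));
        }

    protected:
        bool event(QEvent *e) override
        {
            if (e->type() == UpdateEventType) {
                // ... handle the single, most recent update ...
                return true;
            }
            return QObject::event(e);
        }
    };

Unlike a real compressEvent() override, this drops older events wholesale instead of merging their payloads, so it only fits cases where the latest event supersedes the earlier ones.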

Related

How can I make emacs libraries use request.el instead of url.el?

Some libraries, e.g. xml-rpc, directly use url-retrieve. I want them to instead use request.el, so that I can choose curl as my backend. Is there an easy shim-layer I can install?
I'm looking for something like curl-for-url, which transparently rebinds url-http with a compatible implementation. (curl-for-url itself doesn't actually work very well, though.)
You could do this using advice, but you will need to use the ad-get-arg/ad-get-args functions to extract the arguments url-retrieve was called with, decide how you want to process them, and pass them on to the request function. The part most likely to be problematic is the callback function. However, provided you can set up the buffer containing the downloaded data in the same way, and with the same name, as url-retrieve does, you should be able to apply the callback manually after the call to request, once you have set up the buffers as necessary.
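A minimal sketch of the shape this could take, untested, and written against the newer advice-add API (where the arguments arrive directly rather than via ad-get-args); my/url-retrieve-via-request is a hypothetical name, and the hard part, reproducing url.el's response-buffer layout for the callback, is only stubbed out:

    (require 'cl-lib)
    (require 'request)

    (defun my/url-retrieve-via-request (_orig-fn url callback &optional cbargs &rest _)
      "Around advice routing `url-retrieve' through `request'."
      (request url
        :parser 'buffer-string
        :complete (cl-function
                   (lambda (&key data &allow-other-keys)
                     ;; url-retrieve runs CALLBACK inside a buffer holding
                     ;; the response; a faithful shim must recreate that
                     ;; buffer layout and build a real STATUS plist (nil
                     ;; here just approximates "no error").
                     (with-temp-buffer
                       (insert (or data ""))
                       (apply callback nil cbargs))))))

    (advice-add 'url-retrieve :around #'my/url-retrieve-via-request)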
It will be a fair bit of work and you will need to dig deep into both the url.el
and request.el libraries. It is also likely to be a bit fragile.
One concern I would have is request.el's use of monkey patching. From the project page, it looks like that code has not been updated since Emacs 25.1, while the current official Emacs is 25.2. This is one of the problems with monkey patching: you need to keep versions in sync to avoid incompatibility issues.
It also seems odd to me for someone to maintain patches that fix known bugs when those patches have not been applied to the mainstream version, especially when there has been a more recent release of the mainstream version.
The first thing I would do is upgrade to Emacs 25.2 and then determine whether using request.el is still justified. I would also verify that the problems you experience are actually due to url-retrieve and not to the callbacks being passed to that function. If the problem is with the callbacks, you may be better off using advice to fix those callbacks rather than replacing the underlying function.
If you only have issues in some situations where url-retrieve is used, it may also be easier to go up one level, look at the libraries which call it, and use something like advice to replace the call to url-retrieve with request at that level.
Someone might be able to provide more specific recommendations if you provide
more detail on the precise reasons you cannot or do not want to use the
url.el library.

final methods in JavaFX source

Problem
I need to override the method
@Override protected final void layoutChartChildren(double top, double left, double width, double height)
of the XYChart class. Obviously I'm not allowed to.
Question
Why do people declare methods as "final"? Is there any benefit in that?
This answer is just a verbatim quote of text by Richard Bair, one of the JavaFX API designers, which was posted on a mailing list in response to the question: "Why is almost everything in the [JavaFX] API final?"
Subclassing breaks encapsulation. That's the fundamental reason why
you must design with care to allow for subclassing, or prohibit it.
Making all the fields of a class public would give developers
increased power -- but of course this breaks encapsulation, so we
avoid it.
We broke people all the time in Swing. It was very difficult to make
even modest bug fixes in Swing without breaking somebody. Changing the
order of calls in a method, broke people. When your framework or API
is being used by millions of programs and the program authors have no
way of knowing which version of your framework they might be running
on (the curse of a shared install of the JRE!), then you find an awful
lot of wisdom in making everything final you possibly can. It isn't
just to protect your own freedom, it actually creates a better product
for everybody. You think you want to subclass and override, but this
comes with a significant downside. The framework author isn't going to
be able to make things better for you in the future.
There's more to it though. When you design an API, you have to think
about the combinations of all things allowed by a developer. When you
allow subclassing, you open up a tremendous number of additional
possible failure modes, so you need to do so with care. Allowing a
subclass but limiting what a superclass allows for redefinition
reduces failure modes. One of my ideals in API design is to create an
API with as much power as possible while reducing the number of
failure modes. It is challenging to do so while also providing enough
flexibility for developers to do what they need to do, and if I have
to choose, I will always err on the side of giving less API in a
release, because you can always add more API later, but once you've
released an API you're stuck with it, or you will break people. And in
this case, API doesn't just mean the method signature, it means the
behavior when certain methods are invoked (as Josh points out in
Effective Java).
The getter / setter method problem Jonathan described is a perfect
example. If we make those methods non-final, then indeed it allows a
subclass to override and log calls. But that's about all it is good
for. If the subclass were to never call super, then we will be broken
(and their app as well!). They think they're disallowing a certain
input value, but they're not. Or the getter returns a value other than
what the property object holds. Or listener notification doesn't
happen right or at the right time. Or the wrong instance of the
property object is returned.
Two things I really like: final, and immutability. GUI's however tend
to favor big class hierarchies and mutable state :-). But we use final
and immutability as much as we can.
Some information:
Best practice since JavaFX setters/getters are final?
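To make the getter/setter point above concrete, here is a hedged sketch (the classes are hypothetical) of why overriding a non-final setter in the JavaFX property pattern gives only the illusion of control:

    import javafx.beans.property.DoubleProperty;
    import javafx.beans.property.SimpleDoubleProperty;

    // Hypothetical bean following the JavaFX property pattern.
    class Gauge {
        private final DoubleProperty value = new SimpleDoubleProperty(this, "value");

        public DoubleProperty valueProperty() { return value; }
        public double getValue() { return value.get(); }
        public void setValue(double v) { value.set(v); } // imagine this were non-final
    }

    class PositiveGauge extends Gauge {
        @Override
        public void setValue(double v) {
            if (v < 0) throw new IllegalArgumentException("must be non-negative");
            super.setValue(v);
        }
    }

    // The "protection" is illusory: bindings and the framework write through
    // the property object, bypassing the overridden setter entirely.
    //   PositiveGauge g = new PositiveGauge();
    //   g.valueProperty().set(-5); // no exception; the invariant silently breaks

This is exactly the failure mode described in the quote: the subclass thinks it is disallowing a certain input value, but it is not.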

Reducing the layer complexity of a Cairngorm-based application

Could tools like SWFAddress be used in some clever way to simplify an existing client-server architecture? I see possibilities to even introduce REST-like pattern mapping or something like that.
What I am currently doing is following all the Cairngorm guidelines, which has already led to a bunch of commands which all make sense; but including the business delegates and all that stuff, I am having a hard time extending and refactoring the application (and layers were actually supposed to help, right... maybe I am not doing it quite right, I admit).
Anyway, what I thought of was somehow reducing the number of application events flying around, and the number of commands responding to them. Actually, I am quite OK even coupling the view with some logic, if I can get rid of some layer complexity.
What I mean by that: perhaps I could bind a button click to a URL pattern (or use SWFAddress to change the URL globally). On the other end, I will be waiting for changes of the URL, reformat it, and pass it on to a service delegate, which has the necessary mappings in mind, so it knows what method to call, or it could even pass the URL directly to an HTTPService. The delegate will then deal with the server response and update the model, which through the bindings will update the view.
I am not going to completely ditch commands. I think they are good for scheduling the internal interactions (within the client itself), but I'd like to abstain from using them for communication with the server.
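To make that concrete, here is a rough sketch of the flow I have in mind (assuming SWFAddress 2.x; UserDelegate and its mapping are hypothetical, and this only illustrates the wiring, not working application code):

    import com.asual.swfaddress.SWFAddress;
    import com.asual.swfaddress.SWFAddressEvent;
    import flash.events.MouseEvent;

    // A button click only changes the URL...
    private function onShowUserClick(event:MouseEvent):void {
        SWFAddress.setValue("/users/42");
    }

    // ...and a single listener maps URL patterns to delegate calls.
    // Registered once during view initialization:
    //   SWFAddress.addEventListener(SWFAddressEvent.CHANGE, onAddressChange);
    private function onAddressChange(event:SWFAddressEvent):void {
        var parts:Array = SWFAddress.getValue().split("/"); // e.g. "/users/42"
        if (parts[1] == "users") {
            new UserDelegate().getUser(parts[2]); // delegate updates the model
        }
    }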
Am I on the right path?
Are you opposed to switching to an alternate framework from Cairngorm? You just described perfectly what most people's complaints about it are. I think it mostly exists as a throwback to the early days of Flex development...
Most of the developers I know use a more "modern" framework, usually one focused on Dependency Injection (DI).
Here is a good starting point for analyzing the various frameworks in use today:
http://www.adobe.com/devnet/flex/articles/flex_framework.html
and for further reading...
I personally prefer Swiz, and use it in all my projects. It still focuses on the command pattern, but alleviates a lot of the layer complexity, as you described.
If your question was how to make Cairngorm less like... well, Cairngorm... then I'm afraid I can't help you there. :)
Cheers and good luck!

Is obfuscation the best answer [duplicate]

Possible Duplicates:
How effective is obfuscation?
Protect ASP.NET Source code
(Why) should I use obfuscation?
Is obfuscation the best answer for protecting our code?
Especially in web projects, when you want to deliver your web project as libraries of code to your customer (the person who ordered it).
Edited
My first priority is server-side code, and my second is client-side code. But the main goal is this: when you deliver a complete web project, with every piece of your code packaged as components and DLLs, how effectively can you protect them and prevent others from reconstructing your code from them?
Edited
The problem is that I want to protect the code I have written for the company that ordered it. All my code is inside some DLLs, and they can reverse engineer those and get my code. I want to prevent them from doing so.
Is there any way to do that?
I think this question is distinct: I didn't ask what obfuscation is, nor for tools that perform it; beyond that, I think this is separate from client-server security.
Sorry if my question wasn't clear at first, but if it is really a case for deletion, no problem for me.
Also
I wanted a comparative look at this problem and its solutions, because I don't think obfuscation is the only possible answer here; maybe there are some logical workarounds for this problem.
Maybe not the best. If you are really ambitious, you can write your own web server (plugin).
But is it worth the effort?
Software is similar to a bike in the Netherlands: there is no known protection that is 100% safe. You either use better protection than the other bikes have (thieves are lazy), or you obfuscate the bike so they won't want to take it.
Another way to increase the level of protection is to use custom-made ActiveX code to store mission-critical algorithms. Of course, that can be reverse engineered too, but JavaScript is easier to reverse engineer.
What exactly are you trying to protect your code from?
Does your client-side code contain valuable business logic?
If not: you shouldn't bother obfuscating something that doesn't have much value. Personally, I think client-side code theft is something people are far too concerned about. 99% of web apps don't really have anything special in terms of implementation on the client side. What you need to worry about more is someone ripping off the idea or the visual look, which you obviously can't obfuscate.
If it does: you need to consider refactoring that logic out of the client side, as even with heavy obfuscation, a determined party will always be able to untangle it relatively easily. The code that adds real value to your app should ideally be running on your servers where it's considerably more difficult to get access to.
Even if people stealing your HTML markup or JavaScript were something to worry about (and it probably isn't), obfuscation doesn't really solve the problem. In my opinion it is a waste of effort and money.
Hosting a critical function as a web service is probably the most sure way to protect it. It keeps the code out of the user's hands entirely. But then you're stuck hosting a service, and your users have to be on line to use your functionality.
Obfuscators help by hiding useful names and replacing control flow with weird but logically equivalent alternatives. They might thwart an amateur, but they'll only slow down a skilled reverse engineer for a few minutes, and they won't stop someone who is determined to penetrate your secrets.
If you really want to protect your code, you should write native code using a native-code compiler (C++, Delphi). This still does not guarantee that your code is 100% safe, because any experienced developer can read assembler and essentially disassemble the native-code program.
A determined hacker will always find a way to get to what they want.
The best we can do is to make it hard or painful for the would-be hacker to get at our code and the following options can help us:
Customize the CLR engine
Run an obfuscation tool over your code and use name and control flow obfuscation and string encryption
Make the application a Web-based application where all your proprietary code sits on a server somewhere
Watermark your code using your own custom techniques to "throw off" the would-be hacker
Implement techniques to prevent debugging (this is a very advanced topic!)
I really like a comment made by one of the head developers of the .NET framework where he said that he does not feel it's really the fact that others can get at our code that should be a concern to us, but rather, we should concern ourselves with the level of support we provide with our products.
So if we provide a good support base, it does not matter what the hackers do with our code, because the clients will trust us and our ability to support them using our product and not some cheap hacker-hacked program.
NO, obfuscation is not the best way to protect your code.
The tool you need to use is "copyright".
There is no (technological) way you can protect your code from someone determined enough (provided they have access to the binaries / scripts).
What you can do is prevent them from legally modifying/distributing your code.
The normal server-side code in web projects should under no circumstances be visible to the outside world, so there is no point in obfuscating it.
Besides that, two minor points:
JavaScript code is visible to the user and can be obfuscated. Minifying JavaScript to save bandwidth is recommended anyway, and minified JS is also obfuscated to a degree.
Also important: on production systems, the configuration setting customErrors should be set to RemoteOnly or On to avoid showing a stack trace with too many code details (see the snippet below).
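A minimal sketch of the relevant web.config fragment (the defaultRedirect page is a hypothetical example):

    <configuration>
      <system.web>
        <!-- RemoteOnly: full errors on the local machine, a generic page for remote users -->
        <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
      </system.web>
    </configuration>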
If your client-side code has any broad value to others, it will get reverse engineered regardless of any obfuscation.
The reality is that it's likely not going to be broadly useful to many people, and there is a lot of other code out there to look at, so it's probably not worth doing more than minifying the code, which is plenty of obfuscation and, if your code is large, will improve download speed as well.
Have you considered the alternative? That it's a good thing to give something back to the community? I'm sure you've looked at the code of more than one site, no?

ASP.NET and System.Diagnostics tracing - have I missed something, or is this a bad idea?

For various common reasons I wanted to use tracing for my ASP.NET application. Especially since I found out about the possibility to use the Service Trace Viewer tool which allows you to examine your traces in a powerful way.
Since I had never used this trace thing before, I started studying it. After a while of Google, SO and MSDN I finally have a good idea of how things work. But I also found one very disturbing thing.
When using tracing in ASP.NET applications it makes a lot of sense to group the trace messages together by web request, especially since one of the reasons I want to use it is for studying performance problems. The above-mentioned tool also supports this, by using <Correlation> tags in the generated XML files, which in turn come from System.Diagnostics.Trace.CorrelationManager. It also allows other nice features like activity starting/stopping, which provides an even better grouping of trace messages. Cool, right?
I thought so too, until I started inspecting where the CorrelationManager actually lives. After all, it is a static property. After some playing around with Reflector I found out something horrifying: it's stored in CallContext! Which is the kind of thing we shouldn't be using in ASP.NET, right?
So... am I missing something here? Is tracing really fundamentally flawed in ASP.NET?
Added: Emm, I'm kinda on the verge of rewriting this stuff myself. I still want to use the neat tool for exploring the traces. Any reason I shouldn't do this? Perhaps there is something better yet? It would be really nice if I got some answers soon. :)
Added 2: A colleague of mine confirmed that this is not just a theoretical issue. He has observed this in the system he's working on. So it's settled. I'm going to build a new little system that does things just the way I want it to. :)
Added 3: Wow, cool... the guys at Microsoft couldn't find anything wrong with using CorrelationManager in ASP.NET. So apparently we're not getting a fix for this bug after all...
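For reference, the pattern in question looks roughly like this (a minimal sketch; the TraceSource name is hypothetical):

    using System;
    using System.Diagnostics;

    class TracingDemo
    {
        static readonly TraceSource Source = new TraceSource("MyApp");

        static void HandleRequest()
        {
            // Tag everything traced during this request with one activity id.
            // CorrelationManager keeps this id in CallContext, which is
            // exactly the concern raised above for ASP.NET.
            Guid previous = Trace.CorrelationManager.ActivityId;
            Trace.CorrelationManager.ActivityId = Guid.NewGuid();
            Source.TraceEvent(TraceEventType.Start, 0, "Request started");
            try
            {
                Source.TraceInformation("Doing work...");
            }
            finally
            {
                Source.TraceEvent(TraceEventType.Stop, 0, "Request finished");
                Trace.CorrelationManager.ActivityId = previous;
            }
        }
    }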
You raise a very interesting question. After looking at Reflector, I also see that CorrelationManager is using the CallContext to store the activity id. I have not worked with tracing much, so I can't really speak to what types of activities it tracks, but if it tracks a single activity across the entire life cycle of a page request, then per the article you referenced above, there is a possibility that the activity id could become disassociated from the actual activity. The activity would appear to die halfway through.
HttpContext would seem ideal for tracking an entire page request from beginning to finish, since it will be carried over even if the execution changes to a different thread. However, the HttpContext will not be transferred to your business objects, whereas the CallContext would. On a side note, I saw that CallContext can also be transferred when using remoting between client and server apps, which is pretty nifty, but in the case of tracking the website this would not really be all that useful.
If you haven't already, check out this guy's site. The issue described in this article isn't specifically the same issue that the Cup(Of T) article mentioned, but it's still pretty interesting. He also provides several very informative links on the page that describe components of the CorrelationManager.
Unfortunately, I don't really have an answer to your question, but I definitely find the topic interesting and will continue looking into it. So please update this post as you learn more. I'm curious to see what you or others (hopefully someone out there can shed some light on the topic) find while looking into this.
Anyway, good luck. I'll talk to some of the peeps at my work about this and post more later if I find anything.
Chris
OK, so this is how it ended.
My colleague called Microsoft and reported this bug to them. Being certified partners means we get access to some more prioritized fixing queue or something... don't know that stuff. Anyway, they're working on it. Hopefully we'll see a patch soon. :)
In the mean time I've created my own little tracing class. It doesn't support all the bells and whistles that the default trace framework does, but it's just what I need. :) More specifically:
It writes to the same XML format as the default XmlWriterTraceListener so I can use the tool to analyze the logs.
It has built-in log rotation, something my colleague had to implement himself on top of XmlWriterTraceListener.
The actual logging is deferred to another thread so performance can be measured more accurately.
Correlations are now stored in HttpContext.Items, so ASP.NET threading peculiarities don't affect them.
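Roughly, that last piece looks like this (a minimal sketch; the class and key names are hypothetical):

    using System;
    using System.Web;

    // The correlation id lives in HttpContext.Items, which belongs to the
    // request itself rather than to whichever thread happens to run it.
    public static class RequestCorrelation
    {
        private const string Key = "__CorrelationId";

        public static Guid Current
        {
            get
            {
                HttpContext ctx = HttpContext.Current;
                if (ctx == null)
                    return Guid.Empty; // not inside a web request
                if (ctx.Items[Key] == null)
                    ctx.Items[Key] = Guid.NewGuid();
                return (Guid)ctx.Items[Key];
            }
        }
    }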
Happy end, I hope. :)
