I have two Qt-based apps that are set up almost identically. They both display various dates and times using localization, and they both call QLocale::setDefault(QLocale(QLocale::Norwegian, QLocale::Norway)) (or the equivalent for various other languages) during startup. But for the same locale (Norwegian / Norway), the following line gives two different formats in the two apps:
someDate.toLocaleTimeString(Qt.locale(), Locale.ShortFormat)
With this line, I get 16:32 in one app and 16.32 in the other (colon vs. period). And this line:
Qt.locale().timeFormat(Locale.ShortFormat)
returns HH:mm in one, and HH.mm in the other.
I can't find any difference in the way the locale is set or used in the two apps, and I have verified that both apps have the same locale set. Does anyone have any idea as to why I get two different formats?
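In case it helps others reproduce this, here is a minimal way to compare what each app's default locale actually reports, sketched in Python with PyQt5 (an assumption on my part; the same check could equally be done from C++ or QML):

    from PyQt5.QtCore import QLocale

    # Set the default locale the same way both apps do at startup.
    QLocale.setDefault(QLocale(QLocale.Norwegian, QLocale.Norway))

    # Print the short time format the default locale actually reports.
    # Comparing this output in both environments should show whether the
    # underlying locale data itself differs (HH:mm vs HH.mm).
    print(QLocale().timeFormat(QLocale.ShortFormat))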
I'm wondering about the difference between R environment variables and R options. Specifically, what is the difference between the settings held in this place:
Sys.getenv()
vs. this place
getOption()
When calling each of them, I receive quite different settings, but I can't figure out what the difference is on a conceptual level. What is each of these intended for? When deciding where to let the user store their API keys, which one is the better (safer, faster, more logical) place to store them? Does either of these settings persist after closing R?
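As a loose analogy in Python terms (not R; the names below are only illustrative), environment variables belong to the operating-system process and are inherited by child processes, while runtime options are in-process settings that disappear when the session ends:

    import os
    import sys

    # Environment variables live at the OS/process level: they are strings,
    # come from the shell or parent process, and are inherited by child
    # processes -- roughly what Sys.getenv() reads in R.
    os.environ["API_KEY"] = "not-a-real-key"
    print(os.environ.get("API_KEY"))

    # Runtime "options" are in-process settings owned by the interpreter or
    # a library; they can hold any type and vanish when the process exits --
    # roughly what options()/getOption() manage in R.
    sys.setrecursionlimit(5000)
    print(sys.getrecursionlimit())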
I would like to grab satellite positions from the page(s) below, but I'm not sure if scraping is appropriate because the page appears to be updating itself every second using some internal code (it keeps updating after I disconnect from the internet). Background information can be found in my question at Space Stackexchange: A nicer way to download the positions of the Orbcomm-2 satellites.
I need a "snapshot" of four items simultaneously:
UTC time
latitude
longitude
altitude
Right now I use screenshots and manual typing. Since these values are being updated by the page, is conventional web scraping going to work here? I found a "screen-scraping" tag; should I try to learn about that instead?
I'm looking for the simplest solution to get those four values; I wonder if I can just use urllib or urllib2 and avoid installing something new.
Example page: http://www.satview.org/?sat_id=41186U. I need to do 41179U through 41189U (the eleven Orbcomm-2 satellites that SpaceX just put in orbit).
Those values are calculated with a little math in JavaScript. The calculations are detailed here: https://www.satview.org/track.js
So, I guess one option is to port that script (plus any dependencies) to the language of your choice and use it to return your desired values.
There is one major function, track(), which takes an argument $modo that can be one of two values: tic or plot.
There may be other source files (dependencies) referenced as well.
The easier way would probably be to use something that allows the JavaScript to run, e.g. automating a browser, and extract the calculated values as they are generated.
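For instance, a minimal sketch of that approach with Selenium in Python (the element IDs below are assumptions; inspect the page to find the real ones):

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://www.satview.org/?sat_id=41186U")
    time.sleep(5)  # give track.js time to run and populate the fields

    # The element IDs here are guesses -- inspect the page for the real ones.
    snapshot = {
        "utc": driver.find_element(By.ID, "utc_time").text,
        "lat": driver.find_element(By.ID, "sat_lat").text,
        "lon": driver.find_element(By.ID, "sat_lon").text,
        "alt": driver.find_element(By.ID, "sat_alt").text,
    }
    print(snapshot)
    driver.quit()

This does mean installing Selenium and a browser driver, but it avoids reimplementing the orbital math by hand.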
I wanted to modify my problem, so I broke it down into some groups.
None of the groups has a solver attached. They are just groups (each composed of a few components) that make it easy for the user to differentiate the various parts of the product.
I am confused about IndepVarComp() and metadata (declared in the initialization).
So far I have always used a single group and a single IndepVarComp() where the outputs were always the design variables.
If I continue this approach I could use metadata and pass large dictionaries, e.g. self.options.declare('aa', types=dict).
But looking at other codes based on OpenMDAO, they seem to use IndepVarComp as if it were the metadata (i.e. the values don't change during iterations, and the IndepVarComp is used as a component inside that group).
Could you guide me on whether one or the other is the right way to go?
IndepVarComp outputs should be used for any output that could conceivably be a design variable or a parameter that you might wish to change in your run script.
Metadata should be used for anything that is explicitly intended to be constant. It is intended to be set once during instantiation and then not changed after the fact.
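A minimal sketch of that split (the component, option, and variable names here are made up for illustration):

    import openmdao.api as om

    class Aero(om.ExplicitComponent):
        def initialize(self):
            # Option/metadata: fixed at instantiation, never varied by a driver.
            self.options.declare('num_panels', types=int, default=10)

        def setup(self):
            self.add_input('span', val=1.0)
            self.add_output('lift', val=0.0)

        def compute(self, inputs, outputs):
            outputs['lift'] = self.options['num_panels'] * inputs['span']

    prob = om.Problem()
    # IndepVarComp output: something you might vary in a run script
    # or hand to a driver as a design variable.
    ivc = prob.model.add_subsystem('ivc', om.IndepVarComp())
    ivc.add_output('span', val=2.0)
    prob.model.add_subsystem('aero', Aero(num_panels=20))
    prob.model.connect('ivc.span', 'aero.span')
    prob.setup()
    prob.run_model()
    print(prob['aero.lift'])

Here num_panels is baked in when the component is built, while span can be changed between runs (prob['ivc.span'] = 3.0) without touching the model definition.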
I'm setting up Graphite, and hit a problem with how data is represented on the screen when there aren't enough pixels.
I found this post whose first answer is very close to what I'm looking for:
No, what is probably happening is that you're looking at a graph with more datapoints than pixels, which forces Graphite to aggregate the datapoints. The default aggregation method is averaging, but you can change it to summing by applying the cumulative() function to your metrics.
Is there any way to get this cumulative() behavior by default?
I've modified my storage-aggregation.conf to use 'aggregationMethod = sum', but I believe this is for historical data and not for data that's displayed in the UI.
When I apply cumulative(), everything is perfect; I'm just wondering if there's a way to get this behavior by default.
I'm guessing that even though you've modified your storage-aggregation.conf to use 'aggregationMethod = sum', the metrics you've already created have not changed their aggregationMethod. The rules in storage-aggregation.conf only affect newly created metrics.
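For reference, a stanza that applies sum to new metrics looks something like this (the pattern here is an assumption; adjust it to your own namespace):

    [sum_counts]
    pattern = ^stats_counts\.
    xFilesFactor = 0
    aggregationMethod = sum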
To change your existing metrics to be summed instead of averaged, you'll need to use whisper-resize.py. Or you can delete your existing metrics and they'll be recreated with sum.
Here's an example of what you might need to run:
whisper-resize.py --xFilesFactor=0.0 --aggregationMethod=sum /opt/graphite/storage/whisper/stats_counts/path/to/your/metric.wsp 10s:28d 1m:84d 10m:1y 1h:3y
Make sure to run that as the same user who owns the file, or at least make sure the files have the same ownership when you're done, otherwise they won't be writeable for new data.
Another possibility if you're using statsd is that you're just using metrics under stats instead of stats_counts. From the statsd README:
In the legacy setting rates were recorded under stats.counter_name directly, whereas the absolute count could be found under stats_counts.counter_name. With the legacy namespacing disabled those values can be found (with default prefixing) under stats.counters.counter_name.rate and stats.counters.counter_name.count now.
Basically, metrics are aggregated differently under the different namespaces when using statsd, and you want stuff under stats_counts or stats.counters for things that should be summed.
I have a stored procedure that produces a number, let's say 50, which is rendered as an anchor with the number as its text. When the user clicks the number, a popup opens and calls a different stored procedure, showing 50 rows in an HTML table. The 50 rows are the disaggregation of the number the user clicked. In summary, two different aspx pages and two different stored procedures need to show the same amount: one is the aggregate, the other the disaggregation of that aggregate.
Question: how do I test this code so I know that if the numbers do not match, there is an error somewhere?
Note: This is a simplified example, in reality there are 100s of anchor tags on the page.
This kind of testing falls outside of the standard, code-level testing paradigm. Here you are explicitly validating the data, and it sounds like you need a utility to achieve this.
There are plenty of environments to do this in and approaches you can take, but here are two possible candidates:
SQL Management Studio: here you can generate a simple script that runs through the various combinations from the two stored procedures, ensuring that the number and the rows match up. This will involve some inventive T-SQL but nothing particularly taxing. The main advantage of this approach is that you'll have bare-metal access to the data (see the sketch after this list for the same idea in script form).
Unit Testing: as mentioned, your problem is somewhat outside of the typical testing scenario, where you would ordinarily mock the data and test your business logic. However, that doesn't mean you cannot write the tests (especially if you are doing any DataSet manipulation prior to this processing). Check out this link and this one for various approaches (note: if you're using VS2008 or above, you get Testing Projects built in from the Professional version up).
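Here is a minimal sketch of such a consistency check in Python (the procedure names, parameter, and connection string are assumptions, not your actual schema):

    import pyodbc

    # Hypothetical connection string and stored-procedure names.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=yourserver;DATABASE=yourdb;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    for report_id in range(1, 101):  # one check per anchor on the page
        # Aggregate number shown as the anchor text.
        cur.execute("EXEC dbo.GetAggregate @ReportId = ?", report_id)
        aggregate = cur.fetchone()[0]

        # Detail rows shown in the popup.
        cur.execute("EXEC dbo.GetDetailRows @ReportId = ?", report_id)
        detail_count = len(cur.fetchall())

        if aggregate != detail_count:
            print(f"Mismatch for {report_id}: {aggregate} vs {detail_count}")

Run something like this on a schedule or as part of a test suite, so a mismatch surfaces before a user notices it.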
In order to test what happens when the numbers do not match, I would simply (temporarily) change one of the stored procedures to return the correct amount + 1, or always return zero, etc.