Is there a limit on the number of blocks that can be randomized in Qualtrics?

Does anyone know whether:
1) there is a limit on the number of blocks that can be added to a survey in Qualtrics?
2) there is a limit on the number of blocks that can be included in the Randomizer in Qualtrics?
Thank you,

No, there are no limits on either of the above.

Related

Different size items in simmer queue

I'm starting out learning the simmer package for discrete event simulation, but I can't figure out how to have different-sized items in a queue.
For example, if we have vehicles entering a queue, a bus is going to take up a lot more space than a motorbike. Is there a way of specifying how many queue spaces an item occupies?
Thanks in advance!
There is a vignette that discusses this kind of thing. TL;DR: among other options, you can seize more than one unit of the resource at a time.
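simmer itself is an R package, so the vignette is the right reference for the actual API; as a language-agnostic sketch of the underlying idea (in C, with made-up sizes and capacity), each arriving vehicle simply seizes as many queue spaces as its size:

    #include <stdbool.h>
    #include <stdio.h>

    /* Sketch of the "seize more than one unit" idea: the queue has a
       fixed number of spaces, and each vehicle seizes as many spaces
       as its size (say a bus takes 4, a motorbike takes 1). */
    typedef struct {
        int capacity;   /* total queue spaces */
        int used;       /* spaces currently occupied */
    } queue_t;

    static bool try_enter(queue_t *q, int size) {
        if (q->used + size > q->capacity)
            return false;            /* not enough spaces left: balk */
        q->used += size;
        return true;
    }

    static void depart(queue_t *q, int size) {
        q->used -= size;             /* free the spaces on departure */
    }

    int main(void) {
        queue_t q = { .capacity = 10, .used = 0 };
        printf("bus (4 spaces) admitted: %d\n", try_enter(&q, 4));
        printf("motorbike (1 space) admitted: %d\n", try_enter(&q, 1));
        depart(&q, 4);
        return 0;
    }

In simmer terms, this roughly corresponds to giving the resource a queue size equal to the total spaces and passing the per-vehicle amount to seize.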

How to determine that a page (or file) is least used? Calculating the Least Recently Used order

I know about the LRU algorithm, but how to do the calculation is the key point. If there is not enough space, I want to find the files whose weight is lowest, delete them, and bring in files with higher weight. Has anyone done this before?
Here are a few implementations:
1) http://www.careercup.com/question?id=14113740
2) How would you implement an LRU cache in Java?
3) http://www.geeksforgeeks.org/implement-lru-cache/
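For a feel of what those implementations track, here is a minimal C sketch of just the LRU bookkeeping, assuming a small fixed number of slots (names are illustrative). Real caches pair a hash map with a doubly linked list so hits and evictions are O(1); the linear scan below only demonstrates the "least recently used" calculation itself:

    #include <stdio.h>
    #include <string.h>

    #define SLOTS 4

    typedef struct {
        char key[32];
        long last_used;   /* logical timestamp of the latest access */
        int  occupied;
    } slot_t;

    static slot_t cache[SLOTS];
    static long now = 0;

    static void access_key(const char *key) {
        int free_idx = -1, lru_idx = 0;
        for (int i = 0; i < SLOTS; i++) {
            if (cache[i].occupied && strcmp(cache[i].key, key) == 0) {
                cache[i].last_used = ++now;   /* hit: refresh the stamp */
                return;
            }
            if (!cache[i].occupied)
                free_idx = i;
            else if (cache[i].last_used < cache[lru_idx].last_used)
                lru_idx = i;                  /* oldest stamp seen so far */
        }
        /* miss: take a free slot, or evict the least recently used one */
        int idx = (free_idx >= 0) ? free_idx : lru_idx;
        snprintf(cache[idx].key, sizeof cache[idx].key, "%s", key);
        cache[idx].last_used = ++now;
        cache[idx].occupied = 1;
    }

    int main(void) {
        access_key("a"); access_key("b"); access_key("c"); access_key("d");
        access_key("a");   /* refresh "a" */
        access_key("e");   /* cache full: evicts "b", the LRU entry */
        for (int i = 0; i < SLOTS; i++)
            printf("slot %d: %s\n", i, cache[i].key);
        return 0;
    }

The same idea extends to files: stamp each file on access and evict the smallest stamp (optionally scaled by a weight) when space runs out.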

Comparison of FDD and TDD on the basis of throughput

I have studied frequency-division duplex (FDD) and time-division duplex (TDD), but I can't find the difference between them in terms of throughput. Does anyone have adequate knowledge about this? Thanks.
There is no inherent throughput difference between the two methods.
If you have a 100 Mbps link and split it equally among 10 users, both FDM and TDM will give each user a 10 Mbps link.
There are too many free variables in your question to give a specific answer. In FDD, paired frequencies are used to accomplish simultaneous upload and download. In TDD, the same frequency is used for both upload and download, and by adjusting the time slots assigned to each direction, their ratio can be changed dynamically based on how much data there is to send or receive.
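To make the TDD point concrete, a toy calculation in C (all figures invented for illustration): with a 3:1 downlink:uplink timeslot split on a single 100 Mbps carrier, the same total capacity is simply reapportioned.

    #include <stdio.h>

    int main(void) {
        double link_mbps = 100.0;
        int dl_slots = 3, ul_slots = 1;   /* TDD timeslot ratio, adjustable */
        int total = dl_slots + ul_slots;
        printf("downlink: %.0f Mbps\n", link_mbps * dl_slots / total);
        printf("uplink:   %.0f Mbps\n", link_mbps * ul_slots / total);
        return 0;
    }

Change the ratio to 1:1 and each direction gets 50 Mbps; the aggregate throughput stays the same either way.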

What does X-Accel-Limit-Rate really do in NginX?

These are the docs for X-Accel-Limit-Rate:
Sets the rate limit for this single request. Off means unlimited.
Not much there. Most of the examples I've seen (I've found only two or three) set the value of X-Accel-Limit-Rate to 1024. This is obviously 1024 bytes, but per what? Or is it a total of some sort?
Without knowing what the value means it's difficult to know what it's really doing.
Apparently it is bytes per second. Source
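Assuming that reading, here is a hedged sketch of a CGI-style backend running behind nginx: the backend asks nginx to throttle just this one response by emitting the header, and a value of 102400 would then cap the transfer at 100 KiB/s. (The payload line is a stand-in.)

    #include <stdio.h>

    int main(void) {
        printf("Content-Type: application/octet-stream\r\n");
        printf("X-Accel-Limit-Rate: 102400\r\n");   /* bytes per second */
        printf("\r\n");
        fputs("...file body would be streamed here...", stdout);
        return 0;
    }

So the common example value of 1024 would mean a very slow 1 KiB per second, which fits examples that want to demonstrate throttling visibly.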

A couple of questions about Hash Tables

I've been reading a lot about hash tables and how to implement one in C, and I think I have almost all the concepts in my head, so I can start to code my own. I just have a couple of questions that I have yet to properly understand.
As a reference, I've been reading this:
http://eternallyconfuzzled.com/jsw_home.aspx
1) As I've read on the site above, a power of two or a prime number is recommended for the hash table size. It is basically an array, and an array has a fixed size, so I can quickly look up the value I'm looking for. But I can't declare a small array if I have a large input, as it won't fit, and I can't declare a very large array if my input data is not that large, because that would waste memory.
What is the optimum size for the Hash Table? What should I base my decision on?
2) Also, on that site there are a couple of hashing functions, which I have yet to read through. It also states that it's always best to use a good, known algorithm rather than roll my own. And I might do just that: I'll pick one from that site, test it in my code, and see whether it minimizes collisions on my input data.
What's bugging me is how to control the hash range. The hash function can't return an integer larger than the hash table size, or we'll have a serious problem. How do I deal with this?
1) What you are referring to is the load factor of the hash table - the percentage of buckets that are expected to be filled. Wikipedia has this to say:
With a good hash function, the average lookup cost is nearly constant as the load factor increases from 0 up to 0.7 or so. Beyond that point, the probability of collisions and the cost of handling them increases.
I believe the Java implementation (and probably others) resizes automatically once the load factor passes a threshold (0.75 by default for Java's HashMap), to keep it within an acceptable range.
2) Just use the modulo operator (%) to keep the bucket index legal. The second operand should be the size of your bucket array.
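A minimal illustration in C; the table size is an assumption for the example, and the toy hash just sums bytes (a real one would come from the article's list):

    #include <stdint.h>
    #include <stdio.h>

    #define TABLE_SIZE 1024   /* fixed bucket-array size for the example */

    /* A hash function may return any 32-bit value; reducing it modulo
       the table size yields a bucket index in [0, TABLE_SIZE). */
    static uint32_t toy_hash(const char *key) {
        uint32_t h = 0;
        while (*key)
            h += (unsigned char)*key++;
        return h;
    }

    int main(void) {
        printf("bucket = %u\n", toy_hash("example") % TABLE_SIZE);
        return 0;
    }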
Pick a small size for your hash table. As you add items, check what percentage of the table is in use; when it is more than 70% full, make the table bigger. The same applies as you remove elements: make the table smaller when it drops below, say, 60% full. Wikipedia has a good description of some strategies for dynamic resizing, but that's the general idea.
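A sketch of that policy in C, with illustrative names and a stubbed-out rehash (a real rehash allocates the new bucket array and reinserts every element). Note that the shrink threshold in this sketch is kept well below half the grow threshold, so halving the table does not immediately push the load factor back over the grow limit:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        size_t buckets;   /* current bucket-array size */
        size_t entries;   /* elements currently stored */
    } hash_table;

    static void rehash(hash_table *t, size_t new_buckets) {
        t->buckets = new_buckets;   /* stub: real code reinserts all entries */
    }

    static void maybe_resize(hash_table *t) {
        double load = (double)t->entries / (double)t->buckets;
        if (load > 0.70)
            rehash(t, t->buckets * 2);   /* grow once past ~70% full */
        else if (load < 0.25 && t->buckets > 8)
            rehash(t, t->buckets / 2);   /* shrink when mostly empty */
    }

    int main(void) {
        hash_table t = { .buckets = 8, .entries = 6 };
        maybe_resize(&t);   /* 6/8 = 0.75 > 0.70, so the table doubles */
        printf("buckets after resize: %zu\n", t.buckets);
        return 0;
    }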
I only say this because you seem to have known input data:
If you know the rough order of magnitude of the amount of data you will be storing in the hash table, it's generally good enough to just create a table about that big. (You shouldn't worry about whether everything will fit. Instead, the right thing to think about is how many collisions you will have and how you will handle them.)
As for the right hash function, the structure of your input may suggest which one will work best. For instance, what aspects of your input are likely to be evenly distributed?
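If a concrete starting point helps, one widely used string hash of the kind that article surveys is Bernstein's djb2; here is a minimal C version (whether it distributes your particular keys well is something to measure rather than assume):

    #include <stdint.h>
    #include <stdio.h>

    /* djb2: h = h * 33 + c, seeded with 5381; unsigned overflow wraps,
       which is well defined for uint32_t. */
    static uint32_t djb2(const char *key) {
        uint32_t h = 5381;
        while (*key)
            h = h * 33 + (unsigned char)*key++;
        return h;
    }

    int main(void) {
        printf("djb2(\"hello\") = %u\n", djb2("hello"));
        return 0;
    }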
