I can type my language-specific characters into the mysql console, but when I backspace or move the cursor left and right on the line, the display gets garbled and I have to press Ctrl+C because the console becomes unusable after that. Otherwise the character set is properly set up. My platform (Linux) charset is set to UTF-8. As far as I can tell this is a bug I can't find any information about. I hope somebody has dealt with this issue and can tell me how to fix it.
mysql 5.0.84
mysql> SHOW VARIABLES LIKE 'character\_set\_%';
+--------------------------+--------+
| Variable_name            | Value  |
+--------------------------+--------+
| character_set_client     | utf8   |
| character_set_connection | utf8   |
| character_set_database   | utf8   |
| character_set_filesystem | binary |
| character_set_results    | utf8   |
| character_set_server     | utf8   |
| character_set_system     | utf8   |
+--------------------------+--------+
7 rows in set (0.00 sec)
While preparing for my exam, I came across the following statement:
If file1 and file2 are hard linked, and two processes open file1 and file2,
their read/write pointer keeps the same.
According to the answer key (which gives no explanation), this is wrong. So I searched Google and found conflicting information.
This link, https://www.usna.edu/Users/cs/wcbrown/courses/IC221/classes/L09/Class.html, says the read/write pointer is in the (system-wide) open file table.
But this link, http://www.cs.kent.edu/~walker/classes/os.f08/lectures/Walker-11.pdf,
says the pointer is in the per-process file table.
Which one is true?
IMHO, the read/write offset clearly has to be a per-process property. You could easily crash other processes if this were a system-wide, per-file property. This is my understanding, but I'd rather have this confirmed by an informed source.
I took a look at the 1986 AT&T book "The Design of the UNIX Operating System" by Maurice J. Bach, which I consider an informed source.
In section 2.2.1, An Overview of the File Subsystem, it says:
... Inodes are stored in the file system ... The kernel reads them
into an in-core inode table when manipulating files ... The kernel
contains two other data structures, the file table and the user
file descriptor table. The file table is a global kernel
structure, but the user file descriptor table is allocated per
process ... The file table keeps track of the (read/write) byte
offset ...
This would contradict my statement. But then, clarification can be found in section 5.1, OPEN, pages 92ff. Figure 5.3 shows an example of a process having done three opens, two of them being for the same file, /x/y/z (I simplify the naming here, and in the illustration below).
User File
Descriptor Table         File Table            inode Table
 +--------------+      +------------+        +------------+
0|              |      |            |        |            |
 +--------------+      |     .      |        |     .      |
1|              |      |     .      |        |     .      |
 +--------------+      |     .      |        |     .      |
2|              |      +------------+        |            |
 +--------------+  +-->| read offset|----+   |            |
3|              |  |   +------------+    |   |            |
 +--------------+  |   |            |    |   +------------+
4|              |--+   |     .      |    +-->| inode of   |
 +--------------+      |     .      |  +---->| /x/y/z     |
5|              |----+ |     .      |  |     +------------+
 +--------------+    | +------------+  |     |     .      |
6|              |-+  +>| read       |--|--+  |     .      |
 +--------------+ |    +------------+  |  |  |     .      |
 |      .       | |    |            |  |  |  +------------+
 |      .       | |    |     .      |  |  +->| inode of   |
 |              | |    |     .      |  |     | /a/b       |
 +--------------+ |    +------------+  |     +------------+
                  +--->|write offset|--+     |     .      |
                       +------------+        |     .      |
                       |     .      |        |     .      |
                       |     .      |        |     .      |
                       +------------+        +------------+
The final answer is in the text following figure 5.3 on page 94:
Figure 5.3 shows the relationship between the inode table, file
table, and user file descriptor table structures. Each open
returns a file descriptor to the process, and the corresponding entry
in the user file descriptor table points to a unique entry in the
kernel file table even though one file (/x/y/z) is opened twice.
To answer your question: the read/write offset is kept in the kernel file table, not in a per-process table, but a unique entry is allocated upon each open().
But why is there a kernel file table at all? The read/write offsets could have been stored in the per-process user file descriptor table instead of in a kernel table, couldn't they?
To understand why there is a kernel file table, think of what the dup() and fork() functions do with respect to file descriptors: they duplicate the state of an open file, either under a new file descriptor in the same process (dup()) or under the same file descriptor number in a duplicated user file descriptor table in the new child process (fork()). In both cases, duplicating the state of an open file includes the read/write offset. So in these cases, more than one file descriptor will point to a single file table entry.
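A small C sketch of the difference (the file name testfile is a stand-in; assume it holds at least 10 bytes): the descriptor returned by dup() shares the original's file table entry and therefore its offset, while a second open() of the same file gets a fresh entry with an independent offset.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd1 = open("testfile", O_RDONLY);   /* first file table entry       */
    int fd2 = dup(fd1);                     /* shares fd1's file table entry */
    int fd3 = open("testfile", O_RDONLY);   /* second, independent entry     */
    char buf[10];

    if (fd1 < 0 || fd2 < 0 || fd3 < 0)
        return 1;
    if (read(fd1, buf, sizeof buf) != sizeof buf)   /* advances the shared offset */
        return 1;

    /* fd2 sees the offset moved by the read on fd1 ... */
    printf("fd2 offset: %ld\n", (long)lseek(fd2, 0, SEEK_CUR));  /* prints 10 */
    /* ... but fd3 still sits at the beginning of the file. */
    printf("fd3 offset: %ld\n", (long)lseek(fd3, 0, SEEK_CUR));  /* prints 0  */

    close(fd1);
    close(fd2);
    close(fd3);
    return 0;
}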
I have a test file which checks for the presence of all key elements on every page of the app (one scenario per page). However, the app is fairly complex and has different types of users (admin, regular, etc.), and I want to be able to go through the same pages.robot file with every type of user (and maybe have some if statements in that pages.robot file for every type of user), but I'm not sure how I should do it. I'm guessing I should use a resource file, set a global userType variable to admin, regular, etc., and run the pages.robot file multiple times (once per user type), but I'm not sure how to set up the resource file and the userType variable.
Any ideas on what the resource file should look like, and how to run the same file for every type of user?
You can store your test user configuration/properties in a resource file (say test_properties.txt) as below:
=== test_properties.txt ===
| *** Variables *** |
| ${tp_app_url} | http://yourapp.com |
| ${tp_browser} | firefox |
| ###### user roles to test with - admin, non-admin, regular |
| ${tp_user_type} | admin |
| ###### test users |
| ${tp_admin_user} | admin#app.com |
| ${tp_admin_password} | admin#123 |
| ${tp_regular_user} | regular#app.com |
| ${tp_regular_password} | regular#123 |
Here, the user role/type with which you want to test your application is defined as:
| ###### user roles to test with - admin, regular |
| ${tp_user_type} | admin |
Your test suite file can then import the above resource file as below:
=== testsuite.txt ===
| *** settings *** |
| Library | Selenium2Library |
| Resource | test_properties.txt |
| *** test cases *** |
| Validate Page Controls |
| | Open Browser To Login Page | ${tp_user_type} |
| | Page Controls Should be Visible | ${tp_user_type} |
| *** keywords *** |
| Open Browser To Login Page |
| | [Arguments] | ${user_type} |
| | Open Browser | ${tp_app_url} | ${tp_browser} |
| | Input Username | ${tp_${user_type}_user} |
| | Input Password | ${tp_${user_type}_password} |
| | Submit Credentials |
| | Title Should Be | Welcome Page |
| Input Username |
| | [Arguments] | ${username} |
| | Input Text | username_field | ${username} |
| Input Password |
| | [Arguments] | ${password} |
| | Input Text | password_field | ${password} |
| Submit Credentials |
| | Click Button | login_button |
| Page Controls Should be Visible |
| | [Arguments] | ${user_type} |
Your code related to validating the page controls could reside in the keyword Page Controls Should be Visible which will perform the checks based on the user type argument.
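For instance, the body of that keyword might branch per user type with BuiltIn's Run Keyword If (the locators header_logo and admin_panel are placeholders):

| Page Controls Should be Visible |
| | [Arguments] | ${user_type} |
| | Element Should Be Visible | header_logo |
| | Run Keyword If | '${user_type}' == 'admin' | Element Should Be Visible | admin_panel |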
Note: the test user's userId and password variables are formed here by embedding the user-type variable, as in ${tp_${user_type}_user}, which in turn gets evaluated as ${tp_admin_user} in our case.
During execution, you can pass the value of ${tp_user_type} on the command line, and it will override the value set in the resource file:
pybot --variable tp_user_type:non-admin path/to/your/testfile
If you want to run the same tests with multiple user types, you can create a batch file like:
pybot --variable tp_user_type:non-admin path/to/your/testfile
pybot --variable tp_user_type:admin path/to/your/testfile
pybot --variable tp_user_type:regular path/to/your/testfile
I'm sure there will be better solutions than this for your problem. Ideally, the keywords defined above in the test suite file should reside in a resource file. You can also create a data-driven test that runs a template keyword (which validates the page controls) once per user type, as sketched below.
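For example, a data-driven sketch using the built-in [Template] setting (it reuses the keywords from the suite above; the user-type values are the ones defined in the resource file):

| *** test cases *** |
| Validate Page Controls For Each User Type |
| | [Template] | Validate Page Controls As |
| | admin |
| | regular |
| *** keywords *** |
| Validate Page Controls As |
| | [Arguments] | ${user_type} |
| | Open Browser To Login Page | ${user_type} |
| | Page Controls Should be Visible | ${user_type} |
| | [Teardown] | Close Browser |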
I am running openstack icehouse and trying to upload a 16 GB image from horizon.
Horizon and Glance are running on separate machines.
The image is fully copied to the Glance node (checked by verifying the image file size under /var/lib/glance/images/), but the image status stays in the saving state forever; it never becomes active.
The output of nova image-show displays:
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size | 16947544064                          |
| created              | 2014-11-06T13:26:25Z                 |
| id                   | 46f6218d-2f17-493a-8bc9-a46562cbefff |
| minDisk              | 0                                    |
| minRam               | 0                                    |
| name                 | very_huge_image                      |
| progress             | 50                                   |
| status               | SAVING                               |
| updated              | 2014-11-06T13:26:25Z                 |
+----------------------+--------------------------------------+
But when I upload the same image with the Glance CLI from the Horizon machine, the image status becomes active after some time and the image is ready to use.
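For reference, the CLI upload that works looks like this (a sketch; the disk/container formats and the path are placeholders for whatever the image actually is):

glance image-create --name "very_huge_image" --disk-format qcow2 --container-format bare --file /path/to/very_huge_image.img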
Any help/suggestion would be greatly appreciated.
Thanks in advance.
I would like to see the data on the Demographics -> Language grid grouped by ISO 639 language code, without the differences introduced by the (optional) ISO 3166 country-code national variant.
For example, instead of seeing:
| Language | Visits |
|----------|--------|
| it       | 56,027 |
| it-it    | 35,130 |
| en-us    |  5,878 |
| en       |  1,211 |
| es       |    897 |
| es-es    |    576 |
| ...      |    ... |
I would like to see something like this:
| Language | Visits |
|----------|--------|
| it       | 91,157 |
| en       |  7,089 |
| es       |  1,473 |
Is it possible?
You can create an advanced filter at the profile (view) level that searches the language field and replaces it with just the first two letters.
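Roughly, the filter settings would look like this (a sketch of a Custom Advanced filter; the exact field label and the regex are one possible way to grab the two-letter primary code):

Filter Type: Custom > Advanced
Field A -> Extract A: Language Settings = ^([a-z]{2})
Output To -> Constructor: Language Settings = $A1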
If you want to keep the original language variations in your reports, you'll need to define your own custom dimension and use the tracking code (or Google Tag Manager) to fill it from the browser's language setting:
// this will extract 'it' from 'it-IT' or 'it-CH'
var primaryLanguage = navigator.language.match(/[^-]+/)[0];
ga('set', 'dimension1', primaryLanguage);
https://developers.google.com/analytics/devguides/collection/analyticsjs/custom-dims-mets
HTTP uses IETF language tags, so the primary language subtag can be provided in different standards and can consist of up to 8 letters. This usually won't be the case, but if you need to account for it, you need some extra logic to group languages from different standards.
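A possible extension of the snippet above (a sketch) that copes with longer primary subtags and mixed case by splitting on the hyphen and lowercasing:

// Take the primary subtag whatever its length, and lowercase it
// so that variants like 'IT-it' and 'it-IT' group together.
var primaryLanguage = (navigator.language || '').split('-')[0].toLowerCase();
ga('set', 'dimension1', primaryLanguage);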
From the get-go: sorry if I'm not using the proper Emacs terminology; I'm relatively wet behind the ears in the Emacs world.
Most of my work in Emacs is programming in R, and I'm using ESS and ECB to do so quite happily. I'd like to build a custom ECB layout that uses the entire bottom of the frame as my R console, while putting some ECB-specific buffers on the left.
Using ECB-esque layout diagrams, I'd like my layout to look pretty much exactly like "left13", except I'd like the entirety of the "compilation" buffer to be my running R console (or any shell, for that matter):
-------------------------------------------------------
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
|  Directories  |                Edit                 |
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
|               |                                     |
-------------------------------------------------------
|                                                     |
|                      R Console                      |
|                                                     |
-------------------------------------------------------
If I could just split my frame in two (vertically, in the Emacs sense of one window above the other), then call ecb-activate from the top window (and not allow it to touch my bottom window), I imagine it could work (hence the subject of my question).
That doesn't work, though, and I don't know how to get an entire "bottom pane" out of a layout to behave the way I'd like using ECB's customize-layout functionality.
Does anybody know if/how I can do this?
Short answer: No.
Longer answer: unfortunately, ECB completely takes over Emacs window management at a very low level, so it's all or nothing; you can't comfortably combine it with regular window splitting. What you might be able to do is adjust the layout ECB gives you, or program a custom layout. (Some assembly required.)
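If you go the custom-layout route, layouts are defined with ECB's ecb-layout-define macro (there is also the interactive ecb-create-new-layout command, which generates such a definition for you). A rough, untested sketch:

;; A minimal "left"-type layout: one ECB window (the directories
;; tree) in the left column, the edit area to its right. This is a
;; sketch of the ecb-layout-define form; see the ECB info manual
;; for the exact conventions the layout body must follow.
(ecb-layout-define "left-dirs-only" left
  "Directories tree on the left, edit area on the right."
  ;; ECB has already split the frame; point is in the left column.
  (ecb-set-directories-buffer)
  (select-window (next-window)))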