I cannot run Sahi [duplicate] - symfony

This question already has an answer here:
sahi and symfony2
Closed 10 years ago.
I would like to test Behat, Mink, and Sahi with Symfony2.
In my config_test.yml file I have:
mink:
    base_url: http://localhost/Symfony_Standard_2.0.15_2/symfony/web/app_test.php
    default_session: symfony
    sahi: ~
In my test.feature, when I use
Scenario: Open page with products list and check it
    Given I am on "/hello"
    Then the response should contain "hello"
the result is fine:
1 scenario (1 passed)
2 steps (2 passed)
0m5.112s
But when I add the #mink:sahi tag I get
#mink:sahi
Scenario: Open page with products list and check it   # src\Acme\DemoBundle\Features\test.feature:6
    Given I am on "/hello"                            # Acme\DemoBundle\Features\Context\FeatureContext::visit()
      Operation timed out after 5000 milliseconds with 0 bytes received
    Then the response should contain "hello"          # Acme\DemoBundle\Features\Context\FeatureContext::assertResponseContains()
1 scenario (1 failed)
2 steps (1 skipped, 1 failed)
0m5.112s
Do you have any idea?

I have solved the problem: the Sahi driver (the Sahi proxy server) must be running before starting the test.
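For reference, a minimal sketch of the order of operations, assuming a default Sahi installation; the install path, the start-script name, and the Behat entry point are assumptions, so adjust them to your setup:

# Start the Sahi proxy first (it listens on port 9999 by default).
# The script name and path assume a stock Sahi install; some versions
# ship a differently named start script.
cd /path/to/sahi/bin
./sahi.sh &

# Then run the Behat suite from the project root:
php bin/behat src/Acme/DemoBundle/Features/test.feature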

Related

Turning a large character string into a dataframe [duplicate]

This question already has answers here:
R read.csv data from inline string with a csv file content (3 answers)
Is there a way to use read.csv to read from a string value rather than a file in R? (6 answers)
Closed 7 months ago.
I'm getting data from an API in a very raw, messy text format. For the purposes of reproducibility, the dput output looks like this:
people <- "Registrant UID,Status,Tracking Source,Tracking ID,Open Tracking ID,State API Submission Result,Language,Date of birth,Email address,US citizen?,Salutation,First name,Middle name,Last name,Name suffix,Home address,Home unit,Home city,Home County,Home state,Home zip code,Has mailing address?,Mailing address,Mailing unit,Mailing city,Mailing County,Mailing state,Mailing zip code,Party,Race,Phone,Phone type,Opt-in to RTV email?,Opt-in to RTV sms?,Opt-in to Partner email?,Opt-in to Partner SMS/robocall,Survey question 1,Survey answer 1,Survey question 2,Survey answer 2,Volunteer for RTV,Volunteer for partner,Ineligible reason,Pre-Registered,Started registration,Finish with State,Built via API,Has State License,Has SSN,VR Application Submission Modifications,VR Application Submission Errors,VR Application Status,VR Application Status Details,VR Application Status Imported DateTime,Submitted Via State API,Submitted Signature to State API,utm_source,utm_medium,utm_campaign,utm_term,utm_content,other_parameters,Change of Name,Prev Name Title,Prev First Name,Prev Middle Name,Prev Last Name,Prev Name Suffix,Registration Source,Registration Medium,Shift ID,Blocks Shift ID,Over 18 Affirmation,Preferred Language,State Flow Status,State API Transaction ID,Requested Assistance,Viewed Steps\n111,Complete,\"\",\"\",,Success: 333,English,4/13/sample#email.com,Yes,Ms.,FirstName,\"\",LastName,\"\",Street,\"\",City, State, Zip,No,\"\",,\"\",,,\"\",Political Party,\"\",111,,Yes,No,No,No,,,,,No,No,,No,DoB and Time,No,No,Yes,Yes,\"\",[],Approved,APPR - CHANGE APPLICATION,Date ,Yes,false,,,,,,amp=,No,,\"\",\"\",\"\",,Web,Submitted Via State API,,,Yes,\"\",complete,333,,\"state_registrants-edit,state_registrants-edit,state_registrants-edit,state_registrants-pending,state_registrants-complete\"\n111,Complete,\"\",\"\",,Success: 333,English,4/13/sample#email.com,Yes,Ms.,FirstName,\"\",LastName,\"\",Street,\"\",City, State, Zip,No,\"\",,\"\",,,\"\",Political Party,\"\",111,,Yes,No,No,No,,,,,No,No,,No,DoB and Time,No,No,Yes,Yes,\"\",[],Approved,APPR - CHANGE APPLICATION,Date ,Yes,false,,,,,,amp=,No,,\"\",\"\",\"\",,Web,Submitted Via State API,,,Yes,\"\",complete,333,,\"state_registrants-edit,state_registrants-edit,state_registrants-edit,state_registrants-pending,state_registrants-
You can probably tell, but everything from Registrant UID to Viewed Steps will be the header/column names. Some of the fields are empty, and that's fine. Some of the fields also contain commas, which is also fine.
How exactly would I go about putting this into a neatly structured data frame?
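As the linked duplicates point out, read.csv() can parse a string directly through its text argument; a minimal sketch (the argument choices beyond text are assumptions about what you want):

# read.csv(text = ...) treats the string itself as the file contents.
# check.names = FALSE keeps headers like "Registrant UID" intact.
df <- read.csv(text = people, check.names = FALSE, stringsAsFactors = FALSE)
str(df)

The default quote = "\"" handles the quoted fields, so commas embedded inside quotes do not split columns.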

How to know the network traffic my test (using JMeter) is going to generate?

I am going to run a load test using JMeter on Amazon AWS, and I need to know, before starting the test, how much network traffic it is going to generate.
The criterion in Amazon's policy is:
sustains, in aggregate, for more than 1 minute, over 1 Gbps (1 billion bits per second) or 1 Gpps (1 billion packets per second)
If the test is going to exceed these limits, we need to submit a form before starting it.
So how can I know whether the test will exceed this number or not?
Run your test with 1 virtual user and 1 iteration in command-line non-GUI mode like:
jmeter -n -t test.jmx -l result.csv
To get an approximate figure, open the result.csv file using the Aggregate Report listener; there you will have 2 columns: Received KB/sec and Sent KB/sec. Multiply their sum by the duration of your test in seconds and you will get the number you're looking for.
Alternatively, you can open the result.csv file using MS Excel, LibreOffice Calc, or an equivalent, where you can sum the bytes and sentBytes columns and get the traffic with 1-byte precision.
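The same sum can be scripted; a rough sketch with awk, assuming the default JTL/CSV field names bytes and sentBytes, and that no label fields contain embedded commas:

# Locate the bytes/sentBytes columns from the header row, then total them.
awk -F',' 'NR==1 { for (i=1; i<=NF; i++) { if ($i=="bytes") b=i; if ($i=="sentBytes") s=i }; next }
           { recv += $b; sent += $s }
           END { printf "received=%d bytes, sent=%d bytes\n", recv, sent }' result.csv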

Reading MarkLogic logs from Query Console using XQuery

I want to read MarkLogic logs (e.g. ErrorLog.txt) from Query Console using XQuery. I have the code below, but the output is not properly formatted; the result looks like the sample further below.
xquery version "1.0-ml";
for $hid in xdmp:hosts()
let $h := xdmp:host-name($hid)
return
xdmp:filesystem-file("file://" || $h || "/" || xdmp:data-directory($hid) || "/Logs/ErrorLog.txt")
The problem is that the results come back host by host: first all the log entries of one host, then the entries starting at 00:00:01 for host 2, then 00:00:01 for host 3. For example:
2019-07-02 00:00:35.668 Info: Merging 2 MB from /cams/q06data02/testQA2/Forests/testQA2-2.2/0002b4cd to /cams/q06data02/testQA2/Forests/testQA2-2.2/0002b4ce, timestamp=15620394303480170
2019-07-02 00:00:36.007 Info: Merged 3 MB at 9 MB/sec to /cams/q06data02/testQA2/Forests/test2-2.2/0002b4ce
2019-07-02 00:00:38.161 Info: Deleted 3 MB at 399 MB/sec /cams/q06data02/test2/Forests/test2-2.2/0002b4cd
Is it possible to include the hostname with each log entry, and to sort the combined output by timestamp, something like:
host 1 : 2019-07-02 00:00:01 : Info Merging ....
host 2 : 2019-07-02 00:00:02 : Info Deleted 3 MB at 399 MB/sec ...
Log files are text files. You can parse and sort them like any other text file.
However, they can get very large (GB+), so simple methods may not be performant.
You also need to be able to parse the text into fields in order to sort by a field.
Since the first 20 bytes of every line are the timestamp, and that timestamp is in ISO format (which sorts lexically the same as chronologically), you can split the files into lines and sort using basic collation to get time-ordered output across multiple files.
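A minimal sketch of that approach, assuming every entry line begins with the 20-character timestamp (continuation lines such as stack traces are skipped):

xquery version "1.0-ml";
(: Read ErrorLog.txt from each host, prefix lines with the host name,
   then sort on the leading ISO timestamp. :)
let $entries :=
  for $hid in xdmp:hosts()
  let $h := xdmp:host-name($hid)
  let $log := xdmp:filesystem-file(
                "file://" || $h || "/" || xdmp:data-directory($hid) || "/Logs/ErrorLog.txt")
  for $line in fn:tokenize($log, "\n")
  where fn:matches($line, "^\d{4}-\d{2}-\d{2} ")
  return $h || " : " || $line
for $entry in $entries
order by fn:substring-after($entry, " : ")
return $entry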
In V9 one can use the pair xdmp:logfile-scan and xdmp:logmessage-parse to efficiently search over log files (remote as well as local) and then transform the results into text, XML (attribute or element format), or JSON.
One can also use the REST Management API for the same;
see: https://docs.marklogic.com/REST/GET/manage/v2/logs
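For instance, a hedged curl sketch against the endpoint documented above; the host, port, and credentials assume a default local install:

# Fetch ErrorLog.txt through the Management API (port 8002 by default).
curl --anyauth -u admin:password \
  "http://localhost:8002/manage/v2/logs?filename=ErrorLog.txt&format=json"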
Once log files (ideally a selected subset of log messages that is small enough to manage) are converted to a structured format (XML, JSON, or text lines), then sorting, searching, enriching, etc. are easily performed.
For something much better, take a look at Ops Director: https://docs.marklogic.com/guide/opsdir/intro

R source without stopping rest of script from running [duplicate]

This question already has an answer here:
R - Run source() in background
(1 answer)
Closed 4 years ago.
Is there a way to source a different R script and continue executing the remainder of the current script without stopping to wait for the sourced script to finish?
e.g.
Script 1 - run 00:00
source(Script2) - run 00:01
Script 1 - end 00:05
Script 2 - end 01:00
I hope this makes sense
I believe you can accomplish this with doParallel like so:
library(doParallel)

# Register a parallel backend first; without it, %dopar% falls back
# to sequential execution (with a warning).
cl <- makeCluster(2)
registerDoParallel(cl)

scripts <- c('script1.r', 'script2.r')
foreach(x = 1:length(scripts)) %dopar% {
  source(scripts[x])
}

stopCluster(cl)
This runs each script simultaneously on two different workers.
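Note that the foreach call itself still blocks until all workers finish; if the goal is to fire off Script 2 and keep executing Script 1 (as the linked duplicate discusses), a non-blocking sketch in base R (the script name is assumed):

# wait = FALSE returns control immediately; script2.r runs in its own
# R process while the rest of this script continues.
system("Rscript script2.r", wait = FALSE)
# ... remainder of script 1 executes here while script2.r is still running ...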

Apache Nutch NoSuchElementException with bin/nutch inject , readdb, generate options

I am new to Apache Nutch 2.3 and Solr, and I am trying to get my first crawl working. I installed Apache Nutch and Solr as described in the official documentation, and both are working fine. However, when I run the following steps I get errors:
bin/nutch inject examples/dmoz/ - works correctly:
InjectorJob: total number of urls rejected by filters: 2
InjectorJob: total number of urls injected after normalization and filtering: 130
Error - $ bin/nutch generate -topN 5
GeneratorJob: starting at 2015-06-25 17:51:50
GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: true
GeneratorJob: normalizing: true
GeneratorJob: topN: 5
java.util.NoSuchElementException
at java.util.TreeMap.key(TreeMap.java:1323)
at java.util.TreeMap.firstKey(TreeMap.java:290)
at org.apache.gora.memory.store.MemStore.execute(MemStore.java:125)
at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:73) ...
GeneratorJob: generated batch id: 1435279910-1190400607 containing 0 URLs
Same errors if I do $ bin/nutch readdb -stats:
Error - java.util.NoSuchElementException ...
Statistics for WebTable:
jobs: {db_stats-job_local970586387_0001={jobName=db_stats, jobID=job_local970586387_0001, counters={Map-Reduce Framework={MAP_OUTPUT_MATERIALIZED_BYTES=6, REDUCE_INPUT_RECORDS=0, SPILLED_RECORDS=0, MAP_INPUT_RECORDS=0, SPLIT_RAW_BYTES=653, MAP_OUTPUT_BYTES=0, REDUCE_SHUFFLE_BYTES=0, REDUCE_INPUT_GROUPS=0, COMBINE_OUTPUT_RECORDS=0, REDUCE_OUTPUT_RECORDS=0, MAP_OUTPUT_RECORDS=0, COMBINE_INPUT_RECORDS=0, COMMITTED_HEAP_BYTES=514850816}, File Input Format Counters ={BYTES_READ=0}, File Output Format Counters ={BYTES_WRITTEN=98}, FileSystemCounters={FILE_BYTES_WRITTEN=1389120, FILE_BYTES_READ=1216494}}}}
TOTAL urls: 0
I am also not able to use the generate or crawl commands.
Can anyone tell me what I am doing wrong?
Thanks.
I too am new to Nutch. However, I think the problem is that you haven't configured a data store: your stack trace shows the in-memory org.apache.gora.memory.store.MemStore, which doesn't persist anything between runs. I got the same error and got a bit further. You need to follow this: https://wiki.apache.org/nutch/Nutch2Tutorial, or this: https://wiki.apache.org/nutch/Nutch2Cassandra. Then rebuild: ant runtime
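For reference, a hedged sketch of the storage wiring those tutorials set up; the property names are the standard Nutch 2.x/Gora ones, but the HBase choice is an assumption (Cassandra works analogously):

# In conf/gora.properties, pick a real Gora backend, e.g.:
#   gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
#
# In conf/nutch-site.xml, point Nutch at the same store:
#   <property>
#     <name>storage.data.store.class</name>
#     <value>org.apache.gora.hbase.store.HBaseStore</value>
#   </property>
#
# Then rebuild so runtime/local picks up the configuration:
ant runtime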
