RegDBGetKeyValueEx returns -1 - InstallScript

I am using the InstallScript code below to identify whether SharePoint is installed, but it is not working: the function returns -1 and I am not sure what the issue is. Can someone please help? I want to do the following:
Go to the registry location "SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0"
Read the value name "SharePoint" and its data "Installed"
See the attached screenshot of the registry key.
function IsSharePointInstalled()
    STRING szKey, svValue, szName;
    NUMBER nvType, nvSize;
begin
    RegDBSetDefaultRoot(HKEY_LOCAL_MACHINE);
    szKey = "SOFTWARE\\Microsoft\\Shared Tools\\Web Server Extensions\\15.0";
    szName = "SharePoint";
    if (RegDBKeyExist(szKey) >= 1) then
        MessageBox("Key found", INFORMATION);
        // RegDBGetKeyValueEx returns < 0 on failure.
        if (RegDBGetKeyValueEx(szKey, szName, nvType, svValue, nvSize) < 0) then
            MessageBox("Failed to get value", INFORMATION);
        else
            MessageBox("Successfully got value", INFORMATION);
        endif;
    endif;
    RegDBSetDefaultRoot(HKEY_CLASSES_ROOT);
end;

InstallScript: Where is your value located? Have you accounted for the 64-bit versus 32-bit sections of the registry?
HKEY_LOCAL_MACHINE\SOFTWARE
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node
Perhaps investigate the 64-bit option here (REGDB_OPTION_WOW64_64KEY) if you need the 64-bit section of the registry.
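A minimal sketch of that, reusing the variables from the question (this assumes your InstallShield version provides the REGDB_OPTIONS system variable and the REGDB_OPTION_WOW64_64KEY constant):
// Sketch: switch the registry functions to the 64-bit view before the lookup.
REGDB_OPTIONS = REGDB_OPTIONS | REGDB_OPTION_WOW64_64KEY;
RegDBSetDefaultRoot(HKEY_LOCAL_MACHINE);
szKey = "SOFTWARE\\Microsoft\\Shared Tools\\Web Server Extensions\\15.0";
if (RegDBGetKeyValueEx(szKey, "SharePoint", nvType, svValue, nvSize) >= 0) then
    MessageBox("SharePoint value: " + svValue, INFORMATION);
endif;
// Optionally restore the default view afterwards:
// REGDB_OPTIONS = REGDB_OPTIONS & ~REGDB_OPTION_WOW64_64KEY;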
AppSearch: for a simple registry retrieval like this, you could use AppSearch instead (the System Search View / Wizard). I don't have time to put together a sample for that right now.

Related

How to analyze a MariaDB crash log?

When I created a partition for an existing table, I got the exception stack below:
Thread pointer: 0x1ce1ecc6ab8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
mysqld.exe!ha_partition::read_par_file()[ha_partition.cc:3021]
mysqld.exe!ha_partition::get_from_handler_file()[ha_partition.cc:3161]
mysqld.exe!ha_partition::initialize_partition()[ha_partition.cc:512]
mysqld.exe!partition_create_handler()[ha_partition.cc:185]
mysqld.exe!get_new_handler()[handler.cc:302]
mysqld.exe!TABLE_SHARE::init_from_binary_frm_image()[table.cc:2025]
mysqld.exe!open_table_def()[table.cc:698]
mysqld.exe!tdc_acquire_share()[table_cache.cc:842]
mysqld.exe!open_table()[sql_base.cc:1905]
mysqld.exe!open_and_process_table()[sql_base.cc:3802]
mysqld.exe!open_tables()[sql_base.cc:4300]
mysqld.exe!mysql_alter_table()[sql_table.cc:9353]
mysqld.exe!Sql_cmd_alter_table::execute()[sql_alter.cc:510]
mysqld.exe!mysql_execute_command()[sql_parse.cc:6087]
mysqld.exe!Prepared_statement::execute()[sql_prepare.cc:4760]
mysqld.exe!Prepared_statement::execute_loop()[sql_prepare.cc:4246]
mysqld.exe!mysql_sql_stmt_execute()[sql_prepare.cc:3364]
mysqld.exe!mysql_execute_command()[sql_parse.cc:3901]
mysqld.exe!sp_instr_stmt::exec_core()[sp_head.cc:3652]
mysqld.exe!sp_lex_keeper::reset_lex_and_exec_core()[sp_head.cc:3335]
mysqld.exe!sp_instr_stmt::execute()[sp_head.cc:3513]
mysqld.exe!sp_head::execute()[sp_head.cc:1346]
mysqld.exe!sp_head::execute_procedure()[sp_head.cc:2288]
mysqld.exe!do_execute_sp()[sql_parse.cc:3005]
mysqld.exe!Sql_cmd_call::execute()[sql_parse.cc:3247]
mysqld.exe!mysql_execute_command()[sql_parse.cc:6087]
mysqld.exe!sp_instr_stmt::exec_core()[sp_head.cc:3652]
mysqld.exe!sp_lex_keeper::reset_lex_and_exec_core()[sp_head.cc:3335]
mysqld.exe!sp_instr_stmt::execute()[sp_head.cc:3513]
mysqld.exe!sp_head::execute()[sp_head.cc:1346]
mysqld.exe!sp_head::execute_procedure()[sp_head.cc:2288]
mysqld.exe!Event_job_data::execute()[event_data_objects.cc:1459]
mysqld.exe!Event_worker_thread::run()[event_scheduler.cc:312]
mysqld.exe!event_worker_thread()[event_scheduler.cc:268]
mysqld.exe!pthread_start()[my_winthread.c:62]
ucrtbase.dll!_configthreadlocale()
KERNEL32.DLL!BaseThreadInitThunk()
ntdll.dll!RtlUserThreadStart()
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x1ce21cf66b8): alter table mytable add PARTITION (PARTITION t20221103 VALUES LESS THAN (TO_DAYS("2022-11-03")+1))
Connection ID (thread ID): 1649
Status: NOT_KILLED
The platform is Windows, so I cannot debug a core file. I have therefore read the source code, but I still do not fully understand the cause.
The database version is 10.4.6, so here is the source code link:
https://github.com/MariaDB/server/blob/mariadb-10.4.6/sql/ha_partition.cc#L3021
chksum= 0;
for (i= 0; i < len_words; i++)
  chksum ^= uint4korr((file_buffer) + PAR_WORD_SIZE * i);
if (chksum)
  goto err2;
m_tot_parts= uint4korr((file_buffer) + PAR_NUM_PARTS_OFFSET);
DBUG_PRINT("info", ("No of parts: %u", m_tot_parts));
tot_partition_words= (m_tot_parts + PAR_WORD_SIZE - 1) / PAR_WORD_SIZE;
tot_name_len_offset= file_buffer + PAR_ENGINES_OFFSET +
                     PAR_WORD_SIZE * tot_partition_words;
tot_name_words= (uint4korr(tot_name_len_offset) + PAR_WORD_SIZE - 1) /
                PAR_WORD_SIZE;   // <--- crashed here
There are some macros involved, so I am not sure the line number is exact.
It looks like a bug in loading the .par file: the code above already validates the file's checksum, yet MariaDB still raises the exception afterwards.
So I wonder whether my analysis is right and how to work around the exception. Thanks a lot.
First, search the existing bug reports for 'read_par_file'; there is no obvious match.
Second, looking at the line where it crashed, we can assume it is reading from memory that was not allocated. The file_buffer was allocated earlier in the function based on a word length read from the beginning of the file.
Third, if you look at the git blame of the function, you see nothing has changed in quite a while.
So it is likely a new bug. Please report it, including the SHOW CREATE TABLE output and the .par file.
To confirm, take the output of SHOW CREATE TABLE mytable, paste it into a running latest 10.4 version (containers are good for this), and then issue your ALTER TABLE statement again. It is likely to crash, but it is good to confirm.
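For example, a throwaway container makes that easy (container name and password below are just placeholders):
# sketch: disposable MariaDB 10.4 instance to reproduce the crash
docker run --name mdb-repro -e MYSQL_ROOT_PASSWORD=test -d mariadb:10.4
docker exec -it mdb-repro mysql -uroot -ptest
# inside the client: paste the SHOW CREATE TABLE output, then re-run the ALTER TABLE ... ADD PARTITION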
There does seem to be a lack of checking of some of these offsets against the allocated space.
Given that you appear to be creating daily partitions, have you exceeded the maximum of 8,192 partitions per table? Either way, an error message should occur rather than a crash.
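If you want to check how many partitions the table already has before hitting that limit, a query against information_schema like the following should do (table name taken from the query in the crash log):
SELECT COUNT(*) AS partition_count
  FROM information_schema.PARTITIONS
 WHERE TABLE_SCHEMA = DATABASE()
   AND TABLE_NAME = 'mytable'
   AND PARTITION_NAME IS NOT NULL;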

Sage TypeError: positive characteristics not allowed in symbolic computations

I am new to Sage and have some code (link to code) which should run.
I am still getting an error message in the decoding part. The error trace looks like this:
in decode(y)
--> sigma[i+1+1] = sigma[i+1]*(z)\
-(delta[i+1]/delta[mu+1])*z^(i-mu)*sigma[mu+1]*(z);
in sage.structure.element.Element.__mul__
if BOTH_ARE_ELEMENT(cl):
--> return coercion_model.bin_op(left, right, mul)
in sage.structure.coerce.CoercionModel_cache_maps.bin_op
--> action = self.get_action(xp,yp,op,x,y)
... some more frames (I don't know whether they are important)
TypeError: positive characteristics not allowed in symbolic computations
Does anybody know if there is something wrong in this code snippet? Due to previous errors, I changed the following to get to where I am at the moment:
.coeffs() changed to .coefficients(sparse=False) because of a warning message.
In the line sigma[i+1+1] = sigma[i+1](z) - (delta[i+1]/delta[mu+1])*z^(i-mu)*sigma[mu+1](z); where the error occurs, I needed to insert a *, e.g. sigma[i+1]*(z).
I would be grateful for any guess at what could be wrong!
Your issue is that you are multiplying things not of characteristic zero (like elements related to Phi.<x> = GF(2^m)) with symbolic elements such as z, which you have explicitly defined as a symbolic variable:
Phi.<x> = GF(2^m)
PR = PolynomialRing(Phi,'z')
z = var('z')
Basically, the z you get from PR is not the same one as from var('z'). I recommend naming it something else. You should be able to access the polynomial generator with PR.gen() or maybe PR(z).
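For illustration, a minimal sketch of the distinction, reusing the names from the question (the value of m is chosen arbitrarily here):
m = 4
Phi.<x> = GF(2^m)
PR = PolynomialRing(Phi, 'z')
zp = PR.gen()            # generator of PR, lives in GF(2^m)['z']
zs = var('z')            # symbolic variable, lives in the Symbolic Ring
sigma = PR([1, x, x^2])  # a sample polynomial over GF(2^m)
print(sigma * zp)        # fine: multiplication stays inside GF(2^m)['z']
# sigma * zs             # TypeError: positive characteristics not allowed in symbolic computations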
I'd be able to be more detailed, but I encourage you next time to post a fully minimal (non-)working example; trying to slog through a big worksheet makes it hard to track all this down. Finally, good luck, and I hope Sage ends up being useful for you!

Silencing ChromeDriver.exe logging

I am running Ruby unit tests against Chrome using watir-webdriver. Whenever a test is run and chromedriver.exe is launched, output similar to the following appears:
Started ChromeDriver
port=9515
version=26.0.1383.0
log=C:\Home\Server\Test\Watir\web\chromedriver.log
[5468:8796:0404/150755:ERROR:accelerated_surface_win.cc(208)] Reseting D3D device
[5468:8996:0404/150758:ERROR:textfield.h(156)] NOT IMPLEMENTED
[WARNING:..\..\..\..\flash\platform\pepper\pep_module.cpp(63)] SANDBOXED
None of this impacts the correct functioning of the tests, but as one might imagine, the appearance of "ERROR" and "WARNING" can be rather confusing to, for example, parsing rules in Jenkins that look for failures. Sure, I could get really fancy with regular expressions in the parsing rules, but it would be much nicer to turn off this verbose and unnecessary logging on the part of chromedriver.exe. I have seen many mentions of this while searching for an answer, but no one has come up with a solution. Yes, chromedriver possibly has a "--silent" option, but there seems to be no way to pass it to the executable. Code similar to the code below is supposed to work, but has zero effect as far as I can see. Any ideas?
profile = Selenium::WebDriver::Chrome::Profile.new
profile['--cant-make-any-switches-work-here-how-about-you'] = true
browser = Watir::Browser.new :chrome, :profile => profile, :switches => %w[--ignore-certificate-errors --disable-extensions --disable-popup-blocking --disable-translate --allow-file-access]
Here's some help for anyone else searching.
Find ...selenium\webdriver\chrome\service.rb (the start of the path may differ on your system) and add "-silent" to the passed parameters. However, this silenced everything except the error/warning messages.
def initialize(executable_path, port)
  @uri = URI.parse "http://#{Platform.localhost}:#{port}"
  server_command = [executable_path, " -silent", "--port=#{port}"]
  @process = ChildProcess.build(*server_command)
  @socket_poller = SocketPoller.new Platform.localhost, port, START_TIMEOUT
  @process.io.inherit! if $DEBUG == true
end
Set chromeOptions with the key --log-level=3; this should shut it up.
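For example, with the :switches argument already used in the question, that could look like this (a sketch; whether the flag is honoured depends on your Chrome/chromedriver versions):
browser = Watir::Browser.new :chrome, :switches => %w[--log-level=3]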
I was able to divert the hundreds, yes hundreds, of chromedriver log messages that were showing up in cucumber stdout by using the :service_log_path argument:
@browser = Watir::Browser.new :chrome, :service_log_path => 'chromedriver.out'
The '-silent', '--silent', ' -silent', or ' --silent' parameter suggested above did nothing when I added it to ...selenium\webdriver\chrome\service.rb, and having to tweak the gem itself is not a particularly viable solution anyway.
I couldn't find a place to capture the chromedriver stderr and divert it to null (not to mention having to handle doing that both on Windows and on *nix/OSX).
The driver should default to something far less verbose. In this case INFO is way too verbose: hundreds of log entries pop out as INFO, and more than 90% of them are identical.
At least the :service_log_path argument diverts most of them.
You can try passing -Dwebdriver.chrome.logfile="/dev/null" and/or -Dwebdriver.chrome.args="--disable-logging" to the options of the java command that runs selenium-server-standalone-what.ever.jar.
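Assembled into a full command line (jar name kept as in the answer above), that would be something like:
java -Dwebdriver.chrome.logfile="/dev/null" -Dwebdriver.chrome.args="--disable-logging" -jar selenium-server-standalone-what.ever.jar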

IDA assembly patching fails with "cannot reach destination from current location"

I'm a newbie to IDA (and to reverse engineering).
I'm trying to use the "Assemble" option in the Edit -> Patch program menu, but it fails with an error I cannot understand.
My current line is "jnz short func" (where func is a label I renamed from loc_xxxx), and I am trying to change it to "jmp short func", but when I click OK I get a message box saying: "cannot reach destination from current location".
Can anyone explain what that means and why it doesn't work? I have searched all over and cannot find an answer!
I should also add that I'm doing this as part of an IDA tutorial I found (on tut4you.com).
Thanks again for your help; I'm really stuck on this!
"jnz short func" and "jmp short func" instructions doesn't have the same number of bytes..Try to keep code alignment with the original code..
Also jnz, jz, ja,..(conditional jumps) don't work with far pointers (i.e. intersegment).. but only jmp..

Solution for "Fatal error: Maximum function nesting level of '100' reached, aborting!" in PHP

I have made a function that finds all the URLs within an HTML file and repeats the same process for each HTML document linked from the discovered URLs. The function is recursive and can go on endlessly, but I have put a limit on the recursion by setting a global variable that causes the recursion to stop after 100 recursions.
However, PHP returns this error:
Fatal error: Maximum function nesting level of '100' reached, aborting! in D:\wamp\www\crawler1\simplehtmldom_1_5\simple_html_dom.php on line 1355
I found a solution here: Increasing nesting function calls limit, but it is not working in my case.
I am quoting one of the answers from the link mentioned above; please do consider it:
"Do you have Zend, IonCube, or xDebug installed? If so, that is probably where you are getting this error from.
I ran into this a few years ago, and it ended up being Zend putting that limit there, not PHP. Of course removing it will let you go past the 100 iterations, but you will eventually hit the memory limits."
Is there a way to increase the maximum function nesting level in PHP?
Increase the value of xdebug.max_nesting_level in your php.ini.
A simple solution solved my problem. I just commented out this line:
zend_extension = "d:/wamp/bin/php/php5.3.8/zend_ext/php_xdebug-2.1.2-5.3-vc9.dll"
in my php.ini file. This extension was limiting the stack to 100, so I disabled it. The recursive function now works as anticipated.
Another solution is to add xdebug.max_nesting_level = 200 to your php.ini.
Rather than going for recursive function calls, work with a queue model to flatten the structure.
$queue = array('http://example.com/first/url');
while (count($queue)) {
    $url = array_shift($queue);
    $queue = array_merge($queue, find_urls($url));
}

function find_urls($url)
{
    $urls = array();
    // Some logic filling the variable
    return $urls;
}
There are different ways to handle it. You can keep track of more information if you need insight into the origin or the paths traversed. There are also distributed queues that can work off a similar model.
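For instance, a small extension of the sketch above that also remembers visited URLs, so the crawl terminates even when pages link back to each other (find_urls() as defined above):
$queue   = array('http://example.com/first/url');
$visited = array();
while (count($queue)) {
    $url = array_shift($queue);
    if (isset($visited[$url])) {
        continue;                // already crawled, skip it
    }
    $visited[$url] = true;       // this doubles as a record of the paths traversed
    foreach (find_urls($url) as $found) {
        if (!isset($visited[$found])) {
            $queue[] = $found;
        }
    }
}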
Rather than disabling Xdebug, you can set a higher limit, like
xdebug.max_nesting_level=500
It's also possible to fix this directly in PHP, for example in the config file of your project:
ini_set('xdebug.max_nesting_level', 200);
Go into your php.ini configuration file and change the following line:
xdebug.max_nesting_level=100
to something like:
xdebug.max_nesting_level=200
On Ubuntu using PHP 5.5.9:
go to /etc/php5/cli/conf.d and find your xdebug.ini in that dir (in my case it is 20-xdebug.ini), then add this line:
xdebug.max_nesting_level = 200
or this:
xdebug.max_nesting_level = -1
Set it to -1 and you don't have to worry about changing the value of the nesting level again.
This probably happened because of Xdebug.
Try commenting out the following line in your php.ini and restart your server to reload PHP:
;xdebug.max_nesting_level
Try looking in /etc/php5/conf.d/ to see if there is a file called xdebug.ini.
max_nesting_level is 100 by default.
If it is not set in that file, add:
xdebug.max_nesting_level=300
to the end of the list so it looks like this:
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_host=localhost
xdebug.remote_port=9000
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=1
xdebug.profiler_output_dir=/home/drupalpro/websites/logs/profiler
xdebug.max_nesting_level=300
You can then use @Andrey's test before and after making this change to see if it worked:
php -r 'function foo() { static $x = 1; echo "foo ", $x++, "\n"; foo(); } foo();'
php.ini:
xdebug.max_nesting_level = -1
I'm not entirely sure whether the value will ever overflow and reach -1, but it will either never reach -1 or effectively set max_nesting_level very high.
You could convert your recursive code into iterative code that simulates the recursion. This means that you push the current status (URL, document, position in the document, etc.) onto an array when you reach a link, and pop it off the array when that link has been processed.
Check the recursion from the command line:
php -r 'function foo() { static $x = 1; echo "foo ", $x++, "\n"; foo(); } foo();'
If that test goes past 100 iterations, the nesting limit is not what is stopping you; check the memory limit instead.
You could try to reduce the nesting by implementing parallel workers (as in cluster computing) instead of increasing the number of nested function calls.
For example: you define a limited number of slots (e.g. 100) and monitor the number of "workers" assigned to each of them. Whenever slots become free, you put the waiting workers into them.
<?php
ini_set('xdebug.max_nesting_level', 9999);
... your code ...
P.S. Change 9999 to any number you want.
I stumbled upon this error as well during development.
However, in my case it was caused by an underlying loop of functions calling each other, the result of continuous iterations during development.
For future reference by search engines, the exact error my logs provided me with was:
Exception: Maximum function nesting level of '256' reached, aborting!
If, like in my case, the given answers do not solve your problem, make sure you're not accidentally doing something along the lines of the following simplified situation:
function foo(){
    // Do something
    bar();
}

function bar(){
    // Do something else
    foo();
}
In this case, even if you set ini_set('xdebug.max_nesting_level', 9999); it will still print out the same error message in your logs.
If you're using Laravel, do
composer update
This should work.
In your case it is most likely the crawler instance that is exceeding the Xdebug limit while tracing errors and debug info.
But in other cases, errors in PHP or in core files such as CodeIgniter libraries can create the same situation, and even if you increase the Xdebug nesting-level setting the error will not vanish.
So look into your code carefully :).
Here was the issue in my case.
I had a service class, a library in CodeIgniter, with a function inside like this:
class PaymentService {
    private $CI;

    public function __construct() {
        $this->CI =& get_instance();
    }

    public function process() {
        // lots of CI referencing here...
    }
}
My controller was as follows:
$this->load->library('PaymentService');
$this->process_(); // see, I got this wrong
The function call on the last line was wrong because of a typo; instead it should have been:
$this->Payment_service->process(); // the library class name
I kept getting the nesting-exceeded error message, and disabling Xdebug did not help at all. So please check your class name and your code for proper function calls.
I got this error when I was installing many plugins, and the error showed the location of the last plugin I had installed, C:\wamp\www\mysite\wp-content\plugins\"...", so I deleted that plugin folder and everything was back to normal. I think I have to limit the number of plugins I install or keep activated. Good luck, I hope it helps.
I had this issue with WordPress on cloud9. It turns out it was the W3 Caching plugin. I disabled the plugin and it worked fine.
Another solution if you are running a PHP script in the CLI (cmd):
The php.ini file that needs editing is different in this case. In my WAMP installation the php.ini loaded on the command line is:
\wamp\bin\php\php5.5.12\php.ini
instead of \wamp\bin\apache\apache2.4.9\bin\php.ini, which is loaded when PHP is run from the browser.
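To double-check which php.ini the command line actually loads, you can ask PHP itself; the "Loaded Configuration File" line in the output is the file to edit for CLI runs:
php --ini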
You can also modify the {debug} function in modifier.debug_print_var.php, in order to limit its recursion into objects.
Around line 45, before:
$results .= '<br>' . str_repeat(' ', $depth * 2)
. '<b> ->' . strtr($curr_key, $_replace) . '</b> = '
. smarty_modifier_debug_print_var($curr_val, ++$depth, $length);
After:
$max_depth = 10;
$results .= '<br>' . str_repeat(' ', $depth * 2)
. '<b> ->' . strtr($curr_key, $_replace) . '</b> = '
. ($depth > $max_depth ? 'Max recursion depth:'.(++$depth) : smarty_modifier_debug_print_var($curr_val, ++$depth, $length));
This way, Xdebug will still behave normally: limiting recursion depth in var_dump and so on.
This is a Smarty problem, not an Xdebug one!
I had the same problem and I resolved it like this:
1. Open the MySQL my.ini file.
2. In the [mysqld] section, add the following line: innodb_force_recovery = 1
3. Save the file and try starting MySQL.
4. Remove the line you just added and save again.
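In other words, the temporary change from step 2 looks like this in my.ini:
[mysqld]
innodb_force_recovery = 1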
