I use QScreen::grabWindow(nullptr), and sometimes it crashes. Its stack looks like this:
RaiseException() : [KERNELBASE] (File: N/A #Line N/A)
CxxThrowException() : [VCRUNTIME140] (File: N/A #Line N/A)
QGridLayoutEngine::rowCount() : [Qt5Gui] (File: N/A #Line N/A)
QGridLayoutEngine::rowCount() : [Qt5Gui] (File: N/A #Line N/A)
QMovie::updated() : [Qt5Gui] (File: N/A #Line N/A)
qt_pixmapFromWinHBITMAP() : [Qt5Gui] (File: N/A #Line N/A)
N/A() : [qwindows] (File: N/A #Line N/A)
QScreen::grabWindow() : [Qt5Gui] (File: N/A #Line N/A)
How stable is this API? Or should I use GDI directly?
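For context, here is a minimal sketch of the grabWindow() call pattern in question, grabbing the whole screen from the primary QScreen and checking for a null pixmap; the file name and the explicit checks are illustrative assumptions, not taken from the original code:
#include <QGuiApplication>
#include <QScreen>
#include <QPixmap>

// Minimal sketch: grab the entire screen and verify the result before use.
// A null pixmap is the documented failure mode of grabWindow().
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QScreen *screen = QGuiApplication::primaryScreen();
    if (!screen)
        return 1;

    const QPixmap shot = screen->grabWindow(0); // 0 = entire screen
    if (shot.isNull())
        return 1;                               // grab failed

    shot.save("screenshot.png", "PNG");         // placeholder file name
    return 0;
}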
A section of my R Shiny script is giving me a headache. In short, the script receives two data tables: data_tran and contam_datafile. data_tran is queried against contam_datafile for any matching IDs under the "Feature.ID" column. The user is then given the option to either keep the matching IDs, remove the matching IDs (by turning them to zeros), or do nothing.
data_tran_contam_filt_react = reactive({
  if(input$contam_filter == "Remove"){
    data_tran = data_tran_react()
    contam_datafile = contam_datafile()
    rownames(data_tran) = data_tran$Feature.ID
    # data_tran = data_tran[ ! data_tran$Feature.ID %in% contam_datafile$Feature.ID,]
    data_tran[contam_datafile$Feature.ID, !names(data_tran) %in% c("Consensus.Lineage","Feature.ID","ReprSequence","rowID")] = 0
    rownames(data_tran) = 1:nrow(data_tran)
  }
  if(input$contam_filter == "Analyze"){
    data_tran = data_tran_react()
    contam_datafile = contam_datafile()
    data_tran = data_tran[data_tran$Feature.ID %in% contam_datafile$Feature.ID,]
  }
  if(input$contam_filter == "No action"){
    data_tran = data_tran_react()
  }
})
This works when I run the R Shiny app locally within a combined ui/server script. I've tested it with several datasets and it's all good. However, when I transfer it to an Ubuntu LTS-based server and select the "Remove" option, the app fails with "Error in Summary.factor: 'max' not meaningful for factors" in the log files. All other aspects of the R Shiny script work locally and remotely; only the "Remove" branch fails. I'll include the full error log below.
I don't get it. Any help would be appreciated.
Thanks
Warning in Ops.factor(i, 0L) : ‘>=’ not meaningful for factors
Warning: Error in Summary.factor: ‘max’ not meaningful for factors
86: stop
85: Summary.factor
84: [<-.data.frame
82: <reactive:data_tran_contam_filt_react> [DELETED]
66: data_tran_contam_filt_react
65: <reactive:data_long_react> [DELETED]
49: data_long_react
45: observe [DELETED]
44: <observer>
1: runApp
Warning: Error in Summary.factor: ‘max’ not meaningful for factors
107: <Anonymous>
106: stop
105: data_long_react
104: exprFunc [DELETED]
103: widgetFunc
102: htmlwidgets::shinyRenderWidget
101: func
88: renderFunc
87: renderFunc
83: renderFunc
82: output$proc_main_alt
1: runApp
Warning: Error in Summary.factor: ‘max’ not meaningful for factors
107: <Anonymous>
106: stop
105: data_tran_contam_filt_react
104: exprFunc [DELETED]
103: widgetFunc
102: htmlwidgets::shinyRenderWidget
101: func
88: renderFunc
87: renderFunc
83: renderFunc
82: output$proc_new_data
1: runApp
Execution halted
Ignore the "DELETED" entries; they're just a server address that contains some identifying information.
Thank you to Ric Villalba for the suggestion.
This turned out to be a simple R version issue, likely caused by the change in the stringsAsFactors default between R 3.x and 4.x. It sent me down a whole 8-hour path of updating R and untangling package issues, but I have it all sorted out now.
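For anyone hitting the same thing, here is a minimal sketch of the kind of guard that sidesteps the version-dependent behaviour; the column names are the ones from the question, while the read.csv calls and file names are assumptions for illustration:
# Before R 4.0, data.frame()/read.* defaulted to stringsAsFactors = TRUE, so
# count columns could silently become factors, and assignments like
# df[rows, cols] = 0 then fail with "'max' not meaningful for factors".

# Be explicit when reading the tables (hypothetical file names):
data_tran       <- read.csv("data_tran.csv", stringsAsFactors = FALSE)
contam_datafile <- read.csv("contam_datafile.csv", stringsAsFactors = FALSE)

# Or defensively coerce the abundance columns back to numeric before zeroing:
meta_cols  <- c("Consensus.Lineage", "Feature.ID", "ReprSequence", "rowID")
count_cols <- setdiff(names(data_tran), meta_cols)
data_tran[count_cols] <- lapply(data_tran[count_cols],
                                function(x) as.numeric(as.character(x)))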
Cheers
I have a date column in the format 1/1/15 (month/day/year), without leading zeros and with 15 instead of 2015.
I tried
data = data.withColumn('date' , to_date(unix_timestamp(data['date'], 'MM-dd-yyyy').cast("timestamp")))
data.orderBy('date').show()
It gives NULL in the date column (maybe because of the year format; I tried MM-dd-yy and M-d-yy too).
So I tried
data = data.withColumn('date' , regexp_replace('date', '15', '2015'))
data = data.withColumn('date' , regexp_replace('date', '/2015/', '-15-'))
data = data.withColumn('date' , regexp_replace('date' , '/' , '-'))
Now I have the date as 1-1-2015, but when I try the code from above it shows the following error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-39-470368900f3b> in <module>
----> 1 data.orderBy('date').show()
C:\Users\Admin\Anaconda3\lib\site-packages\pyspark\sql\dataframe.py in show(self, n, truncate, vertical)
438 """
439 if isinstance(truncate, bool) and truncate:
--> 440 print(self._jdf.showString(n, 20, vertical))
441 else:
442 print(self._jdf.showString(n, int(truncate), vertical))
C:\Users\Admin\Anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
1303 answer = self.gateway_client.send_command(command)
1304 return_value = get_return_value(
-> 1305 answer, self.gateway_client, self.target_id, self.name)
1306
1307 for temp_arg in temp_args:
C:\Users\Admin\Anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
126 def deco(*a, **kw):
127 try:
--> 128 return f(*a, **kw)
129 except py4j.protocol.Py4JJavaError as e:
130 converted = convert_exception(e.java_exception)
C:\Users\Admin\Anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o140.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 34.0 failed 1 times, most recent failure: Lost task 1.0 in stage 34.0 (TID 54, DESKTOP-IQ36PJF, executor driver): org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '5-17-2015' in the new parser. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:150)
at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:141)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:86)
at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.parse(TimestampFormatter.scala:77)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:31)
at org.sparkproject.guava.collect.Ordering.leastOf(Ordering.java:628)
at org.apache.spark.util.collection.Utils$.takeOrdered(Utils.scala:37)
at org.apache.spark.rdd.RDD.$anonfun$takeOrdered$2(RDD.scala:1492)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:837)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:837)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.time.format.DateTimeParseException: Text '5-17-2015' could not be parsed at index 0
at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2046)
at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1874)
at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:78)
... 24 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2194)
at org.apache.spark.rdd.RDD.$anonfun$reduce$1(RDD.scala:1094)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:388)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:1076)
at org.apache.spark.rdd.RDD.$anonfun$takeOrdered$1(RDD.scala:1498)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:388)
at org.apache.spark.rdd.RDD.takeOrdered(RDD.scala:1486)
at org.apache.spark.sql.execution.TakeOrderedAndProjectExec.executeCollect(limit.scala:183)
at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3627)
at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2697)
at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2697)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2904)
at org.apache.spark.sql.Dataset.getRows(Dataset.scala:300)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:337)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '5-17-2015' in the new parser. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:150)
at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:141)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:86)
at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.parse(TimestampFormatter.scala:77)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:31)
at org.sparkproject.guava.collect.Ordering.leastOf(Ordering.java:628)
at org.apache.spark.util.collection.Utils$.takeOrdered(Utils.scala:37)
at org.apache.spark.rdd.RDD.$anonfun$takeOrdered$2(RDD.scala:1492)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:837)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:837)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
... 1 more
Caused by: java.time.format.DateTimeParseException: Text '5-17-2015' could not be parsed at index 0
at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2046)
at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1874)
at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:78)
... 24 more
Any help regarding this would be appreciated!
Thanks!
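For what it's worth, here is a sketch in the direction the exception itself suggests: parse the original 1/1/15 strings directly with a single-digit-tolerant pattern and, if needed, the legacy parser policy. The column name comes from the question; the sample data, the M/d/yy pattern, and the LEGACY setting are assumptions to verify against your Spark version:
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date, col

spark = SparkSession.builder.getOrCreate()

# Spark 3's parser is strict about field widths; either use a pattern that
# matches the data or fall back to the pre-3.0 parser as the error suggests.
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")

# Hypothetical sample in the question's format: month/day/two-digit year.
data = spark.createDataFrame([("1/1/15",), ("5/17/15",)], ["date"])

# Parse the original strings directly instead of regex-rewriting them first.
data = data.withColumn("date", to_date(col("date"), "M/d/yy"))
data.orderBy("date").show()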
I'm working on a fairly complicated dashboard written in R using Shiny and ggplot2.
At the time it was written everything worked fine, but after some time (the data is updated daily) it started to segfault while rendering specific pages.
I can reproduce the problem by opening a specific tab with 12 ggplot graphs.
Console output at the moment of failure looks like this:
dashboard_1 | address 0x194b1, cause 'memory not mapped'
dashboard_1 |
dashboard_1 | Traceback:
dashboard_1 | 1: mapply(FUN = f, ..., SIMPLIFY = FALSE)
dashboard_1 | 2: Map(build_grob, plot$layer, data)
dashboard_1 | 3: ggplot_gtable(data)
dashboard_1 | 4: print.ggplot(result$value)
dashboard_1 | 5: print(result$value)
dashboard_1 | 6: eval(expr, envir, enclos)
dashboard_1 | 7: eval(expr, pf)
dashboard_1 | 8: withVisible(eval(expr, pf))
dashboard_1 | 9: evalVis(expr)
dashboard_1 | 10: capture.output(print(result$value))
...
dashboard_1 | 65: tryCatchOne(expr, names, parentenv, handlers[[1L]])
dashboard_1 | 66: tryCatchList(expr, classes, parentenv, handlers)
dashboard_1 | 67: tryCatch(evalq((function (handle, binary, message) { for (handler in .wsconns[[as.character(handle)]]$.messageCallbacks) { result <- try(handler(binary, message)) if (inherits(result, "try-error")) { .wsconns[[as.character(handle)]]$close() return() } }})("62978544", FALSE, "{\"method\":\"update\",\"data\":{\"daterange:shiny.date\":[\"2014-04-26\",\"2015-04-26\"],\"group_by\":\"weeks\"}}"), <environment>), error = .rcpp_error_recorder)
dashboard_1 | 68: withCallingHandlers(tryCatch(evalq((function (handle, binary, message) { for (handler in .wsconns[[as.character(handle)]]$.messageCallbacks) { result <- try(handler(binary, message)) if (inherits(result, "try-error")) { .wsconns[[as.character(handle)]]$close() return() } }})("62978544", FALSE, "{\"method\":\"update\",\"data\":{\"daterange:shiny.date\":[\"2014-04-26\",\"2015-04-26\"],\"group_by\":\"weeks\"}}"), <environment>), error = .rcpp_error_recorder), warning = .rcpp_warning_recorder)
dashboard_1 | aborting ...
dashboard_1 | Segmentation fault
rmreports_dashboard_1 exited with code 139
I would like to trace the root cause and introduce changes to prevent the segfault from happening. What is the best way to approach this problem?
I was searching for your problem and was able to find this thread: https://github.com/hadley/dplyr/issues/322
It suggests updating dplyr from GitHub with devtools:
devtools::install_github("hadley/dplyr", build_vignettes = FALSE)
The example examples\activemq\activemq-camel-blueprint fails after being updated. My own blueprint bundles also fail under the same conditions.
karaf#root> features:install examples-activemq-camel-blueprint
karaf#root> >>>> ActiveMQ-Blueprint-Example set body: Fri Sep 12 18:11:16 CEST 2014
>>>> ActiveMQ-Blueprint-Example set body: Fri Sep 12 18:11:18 CEST 2014
>>>> ActiveMQ-Blueprint-Example set body: Fri Sep 12 18:11:20 CEST 2014
karaf#root> stop 381
[ 381] [Resolved ] [ ] [ ] [ 80] Apache ServiceMix :: Examples :: ActiveMQ Camel Blueprint (5.1.0)
karaf#root> update 381
karaf#root> start 381
18:13:53,156 | ERROR | Thread-65 | BlueprintContainerImpl | 7 - org.apache.aries.blueprint.core - 1.4.0 | Unable to start blueprint container for bundle activemq-camel-blueprint
org.osgi.service.blueprint.container.ComponentDefinitionException: java.lang.NullPointerException
at org.apache.aries.blueprint.container.ReferenceRecipe.internalCreate(ReferenceRecipe.java:122)
at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_45]
at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)
at org.apache.aries.blueprint.container.BeanRecipe.setProperty(BeanRecipe.java:933)
at org.apache.aries.blueprint.container.BeanRecipe.setProperties(BeanRecipe.java:907)
at org.apache.aries.blueprint.container.BeanRecipe.setProperties(BeanRecipe.java:888)
at org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:820)
at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:787)
at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_45]
at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)
at org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:245)
at org.apache.aries.blueprint.container.BlueprintRepository.createInstance(BlueprintRepository.java:230)
at org.apache.aries.blueprint.container.BlueprintRepository.create(BlueprintRepository.java:155)
at org.apache.aries.blueprint.container.BlueprintContainerImpl.processProcessors(BlueprintContainerImpl.java:527)
at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:361)
at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:269)
at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:276)
at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:245)
at org.apache.aries.blueprint.container.BlueprintExtender.modifiedBundle(BlueprintExtender.java:235)
at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.customizerModified(BundleHookBundleTracker.java:500)
at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.customizerModified(BundleHookBundleTracker.java:433)
at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$AbstractTracked.track(BundleHookBundleTracker.java:725)
at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.bundleChanged(BundleHookBundleTracker.java:463)
at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$BundleEventHook.event(BundleHookBundleTracker.java:422)
at org.apache.felix.framework.util.SecureAction.invokeBundleEventHook(SecureAction.java:1103)
at org.apache.felix.framework.util.EventDispatcher.createWhitelistFromHooks(EventDispatcher.java:695)
at org.apache.felix.framework.util.EventDispatcher.fireBundleEvent(EventDispatcher.java:483)
at org.apache.felix.framework.Felix.fireBundleEvent(Felix.java:4244)
at org.apache.felix.framework.Felix.startBundle(Felix.java:1923)
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:944)
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:931)
at org.apache.karaf.shell.osgi.StartBundle.doExecute(StartBundle.java:37)
at org.apache.karaf.shell.osgi.BundlesCommand.doExecute(BundlesCommand.java:37)
at org.apache.karaf.shell.console.OsgiCommandSupport.execute(OsgiCommandSupport.java:38)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.commands.basic.AbstractCommand.execute(AbstractCommand.java:35)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:78)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:474)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:400)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)[15:org.apache.karaf.shell.console:2.3.4]
at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:89)
at org.apache.karaf.shell.console.jline.Console.run(Console.java:183)
at java.lang.Thread.run(Thread.java:744)[:1.7.0_45]
at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$4.doRun(ShellFactoryImpl.java:144)[47:org.apache.karaf.shell.ssh:2.3.4]
at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$4$1.run(ShellFactoryImpl.java:135)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_45]
at javax.security.auth.Subject.doAs(Subject.java:356)[:1.7.0_45]
at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$4.run(ShellFactoryImpl.java:133)[47:org.apache.karaf.shell.ssh:2.3.4]
Caused by: java.lang.NullPointerException
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1432)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:72)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1843)
at java.lang.ClassLoader.loadClass(ClassLoader.java:412)[:1.7.0_45]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)[:1.7.0_45]
at java.lang.Class.forName0(Native Method)[:1.7.0_45]
at java.lang.Class.forName(Class.java:270)[:1.7.0_45]
at org.apache.aries.proxy.impl.interfaces.ProxyClassLoader.isInvalid(ProxyClassLoader.java:109)
at org.apache.aries.proxy.impl.interfaces.InterfaceProxyGenerator.getProxyInstance(InterfaceProxyGenerator.java:84)
at org.apache.aries.proxy.impl.AsmProxyManager.createNewProxy(AsmProxyManager.java:72)
at org.apache.aries.proxy.impl.AbstractProxyManager.createDelegatingInterceptingProxy(AbstractProxyManager.java:75)
at org.apache.aries.proxy.impl.AbstractProxyManager.createDelegatingProxy(AbstractProxyManager.java:40)
at org.apache.aries.blueprint.container.AbstractServiceReferenceRecipe.createProxy(AbstractServiceReferenceRecipe.java:338)
at org.apache.aries.blueprint.container.ReferenceRecipe.internalCreate(ReferenceRecipe.java:106)
... 50 more
I'm trying to figure out what is stored at a certain place on the stack with GDB. I have a statement:
cmpl $0x176,-0x10(%ebp)
In this function I'm comparing 0x176 to the value at -0x10(%ebp), and I am wondering if there is a way to see what is stored at -0x10(%ebp).
I am wondering if there is a way to see what is stored at -0x10(%ebp).
Assuming you have compiled with debug info, info locals will tell you about all the local variables in the current frame. After that, print (char*)&a_local - (char*)$ebp will tell you the offset from the start of a_local to %ebp, and you can usually work out which local sits at -0x10(%ebp).
Also, if your locals have initializers, you can do info line NN to figure out which assembly instruction range corresponds to the initialization of a given local, then disas ADDR0,ADDR1 to see the disassembly, and again understand which local is located at what offset.
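As a quick illustration of those commands (the variable name a_local, its value, and the addresses below are hypothetical; x/wx works even without debug info because it just dumps raw memory):
(gdb) info locals
a_local = 374
(gdb) print (char*)&a_local - (char*)$ebp
$1 = -16
(gdb) x/wx $ebp - 0x10
0xbffff3a8:     0x00000176
(gdb) x/wd $ebp - 0x10
0xbffff3a8:     374
Here the pointer difference of -16 (-0x10) confirms that a_local is the slot being compared, and x/wx or x/wd prints the raw contents at -0x10(%ebp) (0x176 is 374 in decimal).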
Another alternative is to run readelf -w a.out and look for entries like the ones below. For this example source,
int foo(int x) { int a = x; int b = x + 1; return b - a; }
the relevant DWARF entries are:
<1><25>: Abbrev Number: 2 (DW_TAG_subprogram)
<26> DW_AT_external : 1
<27> DW_AT_name : foo
<2b> DW_AT_decl_file : 1
<2c> DW_AT_decl_line : 1
<2d> DW_AT_prototyped : 1
<2e> DW_AT_type : <0x67>
<32> DW_AT_low_pc : 0x0
<36> DW_AT_high_pc : 0x23
<3a> DW_AT_frame_base : 0x0 (location list)
<3e> DW_AT_sibling : <0x67>
<2><42>: Abbrev Number: 3 (DW_TAG_formal_parameter)
<43> DW_AT_name : x
<45> DW_AT_decl_file : 1
<46> DW_AT_decl_line : 1
<47> DW_AT_type : <0x67>
<4b> DW_AT_location : 2 byte block: 91 0 (DW_OP_fbreg: 0)
<2><4e>: Abbrev Number: 4 (DW_TAG_variable)
<4f> DW_AT_name : a
<51> DW_AT_decl_file : 1
<52> DW_AT_decl_line : 1
<53> DW_AT_type : <0x67>
<57> DW_AT_location : 2 byte block: 91 74 (DW_OP_fbreg: -12)
<2><5a>: Abbrev Number: 4 (DW_TAG_variable)
<5b> DW_AT_name : b
<5d> DW_AT_decl_file : 1
<5e> DW_AT_decl_line : 1
<5f> DW_AT_type : <0x67>
<63> DW_AT_location : 2 byte block: 91 70 (DW_OP_fbreg: -16)
This tells you that x is stored at fbreg+0, a at fbreg-12, and b at fbreg-16. Now you just need to examine the location list to figure out how to derive fbreg from %ebp. The list for the above code looks like this:
Contents of the .debug_loc section:
Offset Begin End Expression
00000000 00000000 00000001 (DW_OP_breg4: 4)
00000000 00000001 00000003 (DW_OP_breg4: 8)
00000000 00000003 00000023 (DW_OP_breg5: 8)
00000000 <End of list>
So for most of the body, fbreg is %ebp+8, which means that a is at %ebp-4. Disassembly confirms:
00000000 <foo>:
0: 55 push %ebp
1: 89 e5 mov %esp,%ebp
3: 83 ec 10 sub $0x10,%esp
6: 8b 45 08 mov 0x8(%ebp),%eax # 'x' => %eax
9: 89 45 fc mov %eax,-0x4(%ebp) # '%eax' => 'a'
...