When running perf/simpleperf with the stat command, the
report_option_string, report_sample_options and run_report_sample
configuration parameters don't make sense. Instead of silently ignoring
them, raise a ConfigError so that the user can fix the agenda.
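A minimal sketch of such a validation check, assuming the parameter and
exception names above (the actual WA implementation may differ):

```python
# Hypothetical sketch: reject report-sample parameters when the command
# is "stat", where they would otherwise be silently ignored.

class ConfigError(Exception):
    """Raised when the agenda contains an invalid configuration."""

class PerfInstrument:
    def __init__(self, command, report_option_string=None,
                 report_sample_options=None, run_report_sample=False):
        self.command = command
        self.report_option_string = report_option_string
        self.report_sample_options = report_sample_options
        self.run_report_sample = run_report_sample

    def validate(self):
        # report-sample only applies to "record"; fail loudly for "stat".
        if self.command == 'stat' and (self.report_option_string
                                       or self.report_sample_options
                                       or self.run_report_sample):
            raise ConfigError(
                'report_option_string, report_sample_options and '
                'run_report_sample are not valid for the "stat" command')
```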
devlib learnt to use report-sample in
ARM-software/devlib@fe2fe3ae04 ("collector/perf: run simpleperf
report-sample in the target if requested"). Adapt the perf instrument
to use the new parameters of the PerfCollector.
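The wiring could look roughly like this; the PerfCollector signature
after that devlib commit is an assumption here, and the stand-in class
only exists to make the sketch self-contained:

```python
# Hypothetical sketch of passing the new report-sample parameters
# through to devlib's PerfCollector; names are illustrative.

class PerfCollector:  # stand-in for devlib.collector.perf.PerfCollector
    def __init__(self, target, perf_type='perf', command='stat',
                 optionstring=None, report_options=None,
                 run_report_sample=False, report_sample_options=None):
        self.run_report_sample = run_report_sample
        self.report_sample_options = report_sample_options

def make_collector(instrument, target):
    # Forward the instrument's configuration to the collector.
    return PerfCollector(
        target,
        perf_type=instrument.perf_type,
        command=instrument.command,
        optionstring=instrument.optionstring,
        report_options=instrument.report_option_string,
        run_report_sample=instrument.run_report_sample,
        report_sample_options=instrument.report_sample_options)
```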
With the perf instrument configured as:
    perf:
        perf_type: simpleperf
        command: stat
        optionstring: '-a --interval-only-values --csv'
WA fails to parse simpleperf's output:
INFO Extracting reports from target...
ERROR Error in instrument perf
ERROR File "/work/workload_automation/workload-automation/wa/framework/instrument.py", line 272, in __call__
ERROR self.callback(context)
ERROR File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 142, in update_output
ERROR self._process_simpleperf_output(context)
ERROR File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 155, in _process_simpleperf_output
ERROR self._process_simpleperf_stat_output(context)
ERROR File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 233, in _process_simpleperf_stat_output
ERROR self._process_simpleperf_stat_from_csv(stat_file, context, label)
ERROR File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 245, in _process_simpleperf_stat_from_csv
ERROR context.add_metric('{}_{}'.format(label, row[1]), row[0], 'count', classifiers=classifiers)
ERROR File "/work/workload_automation/workload-automation/wa/framework/execution.py", line 222, in add_metric
ERROR self.output.add_metric(name, value, units, lower_is_better, classifiers)
ERROR File "/work/workload_automation/workload-automation/wa/framework/output.py", line 142, in add_metric
ERROR self.result.add_metric(name, value, units, lower_is_better, classifiers)
ERROR File "/work/workload_automation/workload-automation/wa/framework/output.py", line 390, in add_metric
ERROR metric = Metric(name, value, units, lower_is_better, classifiers)
ERROR File "/work/workload_automation/workload-automation/wa/framework/output.py", line 653, in __init__
ERROR self.value = numeric(value)
ERROR File "/work/workload_automation/devlib/devlib/utils/types.py", line 88, in numeric
ERROR raise ValueError('Not numeric: {}'.format(value))
ERROR
ERROR ValueError(Not numeric: Performance counter statistics)
With the above options, the CSV that simpleperf produces looks like
this:
    Performance counter statistics,
    123456789,raw-l1-dtlb,,(60%),
    42424242,raw-l1-itlb,,(60%),
    Total test time,1.001079,seconds,
    Performance counter statistics,
    123456789,raw-l1-dtlb,,(60%),
    42424242,raw-l1-itlb,,(60%),
    Total test time,2.001178,seconds,
    Performance counter statistics,
    [...]
That is, with "--interval-only-values", the "Performance counter
statistics," header is repeated every interval. WA currently expects
it only on the first line. Modify the condition so that the header is
ignored every time it appears in the file, not just the first time.
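A minimal sketch of that parsing change (function and tuple layout are
illustrative, not WA's actual code):

```python
import csv
from io import StringIO

def parse_simpleperf_stat_csv(text):
    """Parse simpleperf stat CSV, tolerating the repeated header that
    --interval-only-values emits at the start of every interval."""
    metrics = []
    for row in csv.reader(StringIO(text)):
        if not row or row[0] == 'Performance counter statistics':
            # Header line: skip every occurrence, not only the first.
            continue
        if row[0] == 'Total test time':
            metrics.append(('total_test_time', float(row[1]), row[2]))
            continue
        value, event = row[0], row[1]
        metrics.append((event, float(value), 'count'))
    return metrics
```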
* Added simpleperf type to perf instrument as it's more stable
on Android devices.
* Added record command to instrument
* Added output processing for simpleperf