mirror of https://github.com/ARM-software/workload-automation.git synced 2025-08-14 01:59:50 +01:00

119 Commits

Author SHA1 Message Date
Sebastian Goscik
a826b661f4 Version bump 2016-06-10 14:26:32 +01:00
setrofim
43f4e52995 Merge pull request from ep1cman/release-notes
Documentation changes & Removing apk_version
2016-06-10 13:22:11 +01:00
Sebastian Goscik
23b3b165d5 docs: Change log & updates 2016-06-10 13:17:10 +01:00
Sebastian Goscik
2f87e126f0 apk_version: Removed instrument
APK versions are now added as result classifiers:
48259d872b
2016-06-09 13:55:27 +01:00
setrofim
59d74b6273 Merge pull request from ep1cman/release-notes
servo_power: Added check for device platform.
2016-06-08 11:16:14 +01:00
Sebastian Goscik
7b92f355c8 netstat: Changed exception type & typo fix 2016-06-08 11:13:35 +01:00
Sebastian Goscik
982069be32 servo_power: Added check for device platform.
Now checks to see if the device is running Chrome OS.
2016-06-08 11:10:53 +01:00
setrofim
63ff8987ea Merge pull request from ep1cman/cpustates
cpustates
2016-06-06 17:12:12 +01:00
Sebastian Goscik
f276d4e39f cpustates: Added the ability to configure how a missing start marker is handled.
cpustates can now handle the lack of a start marker in three ways:

 - try: If the start marker is present only the correct section of the trace
        will be used, if its not the whole trace will be used.
 - error: An error will be raised if the start marker is missing
 - ignore: The markers are ignored and the whole trace is always used.
2016-06-06 17:09:48 +01:00
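The three policies can be sketched in a few lines of Python (a minimal illustration only; `slice_trace` and the plain-string marker are hypothetical, the real cpustates processor operates on parsed trace events):

```python
# Minimal sketch of the three start-marker policies described above.
# `slice_trace` and the string marker are hypothetical stand-ins.

def slice_trace(events, policy="try", start_marker="START"):
    if policy == "ignore":
        return list(events)          # markers ignored, whole trace used
    try:
        idx = events.index(start_marker)
    except ValueError:
        if policy == "error":        # a missing marker is fatal
            raise ValueError("start marker not found in trace")
        return list(events)          # "try": fall back to the whole trace
    return events[idx + 1:]          # only the section after the marker
```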
Sebastian Goscik
1811a8b733 PowerStateProcessor: Added a warning when no stop marker is encountered
PowerStateProcessor will now stop iterating over events when it finds
a stop marker. If it does not find a stop marker it will log a warning.
2016-06-06 17:03:56 +01:00
Sebastian Goscik
0ae03e2c54 PowerStateProcessor: Exceptions no longer stop processing
If an exception is raised inside a generator it cannot be continued.
To get around this exceptions are now caught and later output via the
logger.

Also added logger setup when running cpustates as a standalone script
2016-06-06 16:28:07 +01:00
Sebastian Goscik
c423a8b4bc Utils.misc: Added memoised function decorator
This allows the return value of a function to be cached so that
when it is called in the future the function does not need to
run.

Borrowed from: https://github.com/ARM-software/devlib
2016-06-06 16:28:07 +01:00
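The idea can be sketched in a few lines (illustrative only; the decorator borrowed from devlib handles more cases, such as methods and unhashable arguments):

```python
from functools import wraps

def memoised(func):
    """Cache a function's return values, keyed on its positional args."""
    cache = {}

    @wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)   # computed once per distinct args
        return cache[args]
    return wrapper

calls = []

@memoised
def square(x):
    calls.append(x)                     # track real invocations
    return x * x
```

Calling `square(3)` twice runs the body only once; the second call is served from the cache.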
Sebastian Goscik
c207a34872 cpustates: Now shows a warning when it fails to nudge a core.
Before, WA would raise an error message that wasn't very clear.
Now when cpustates tries to nudge cores and an error occurs it
will only show a warning (which prompts users to check if the cpu has
been hot plugged out) and keep going with the rest of the run without
causing errors in other WA extensions.
2016-06-02 15:14:03 +01:00
setrofim
2cb40d3da6 Merge pull request from ep1cman/master
Revent fixes
2016-06-01 17:04:46 +01:00
Sebastian Goscik
18d1f9f649 ReventWorkload: Now kills all revent instances on teardown
Previously revent would be left running if a run was aborted.
2016-06-01 16:47:01 +01:00
Sebastian Goscik
17ce8d0fe9 Revent: Device model name is now used when searching for revent files
Previously the WA device name was used when searching for revent files.
Since most were `generic_android`, this made it difficult to keep revent
files for multiple android devices. Now the device model is used instead.

If a file with the device model is not found it will fall back to the WA
device name.
2016-06-01 16:47:01 +01:00
setrofim
ac03c9bab4 Merge pull request from ep1cman/master
LinuxDevice fixes
2016-06-01 14:14:13 +01:00
Sebastian Goscik
8bdffe6f9c LinuxDevice: Removed has_root method
Was not used anywhere and is_rooted should be used instead
2016-06-01 14:13:37 +01:00
Sebastian Goscik
2ff13089fd LinuxDevice: kick_off & killall will now run as root on rooted devices by default
kick_off has been changed to behave the same as AndroidDevice.

Said changes caused killall to fail on rooted devices. Killall will now
behave in the same way as kick_off: if specifically told to (or not to)
run as root, it will obey; otherwise it will run as root if the device is rooted.
2016-06-01 13:50:59 +01:00
setrofim
772346507c Merge pull request from ep1cman/servo
servo_power: Added support for chromebook servo boards
2016-05-27 16:16:49 +01:00
Sebastian Goscik
0fc88a84be servo_power: Added support for chromebook servo boards
Servo is a debug board used for Chromium OS test and development. Among other uses, it allows
access to the built in power monitors (if present) of a Chrome OS device. More information on
Servo board can be found in the link below:

 https://www.chromium.org/chromium-os/servo

based on: 03ede10739
and: 9a0dc55b55
2016-05-27 16:09:08 +01:00
setrofim
6e4f6af942 Merge pull request from ep1cman/poller
Poller: Added an instrument to poll files and output a csv of their v…
2016-05-26 16:33:59 +01:00
Sebastian Goscik
c87daa510e Poller: Added an instrument to poll files and output a csv of their values 2016-05-26 16:32:58 +01:00
Sebastian Goscik
5e1c9694e7 Merge pull request from setrofim/master
list_or_string: ensure that elements of a list are always strings
2016-05-26 16:07:22 +01:00
Sergei Trofimov
a9a42164a3 list_or_string: ensure that elements of a list are always strings 2016-05-26 16:05:43 +01:00
Sebastian Goscik
0d50fe9b77 AndroidDevice: kick-off no longer requires root
kick_off will now use root if the device is rooted or if manually
specified; otherwise it is run without root.
2016-05-26 10:29:21 +01:00
setrofim
e5c228bab2 Merge pull request from ep1cman/camera_update
cameracapture & camerarecord: Fixed parameters
2016-05-25 09:49:58 +01:00
Sebastian Goscik
7ccac87b93 cameracapture & camerarecord: Fixed parameters
Parameters were not being passed to the UI automation properly
2016-05-25 09:49:21 +01:00
setrofim
24a2afb5b9 Merge pull request from ep1cman/vellamo-update
Vellamo update
2016-05-24 13:01:31 +01:00
Sebastian Goscik
9652801cce vellamo: Fixed getting values from logcat
The previous method of getting results out of logcat does not work
if the format of logcat changes.
2016-05-24 13:00:10 +01:00
setrofim
881b7514e2 Merge pull request from ep1cman/buildprop
AndroidDevice: Improved gathering of build props
2016-05-24 12:56:22 +01:00
Sebastian Goscik
17fe6c9a5b AndroidDevice: Improved gathering of build props
These are now gathered via `getprop` rather than trying to parse the
build.prop file directly.

This fixes issues with build.prop files that have imports.
2016-05-24 12:55:33 +01:00
Sebastian Goscik
f02b6d5fd9 vellamo: Added support for v3.2.4 2016-05-24 09:57:38 +01:00
Sebastian Goscik
eaf4d02aea Merge pull request from chase-qi/add-blogbench-workload
workloads: add blogbench workload
2016-05-24 09:55:37 +01:00
Chase Qi
56a4d52995 workloads: add blogbench workload
Blogbench is a portable filesystem benchmark that tries to reproduce the
load of a real-world busy file server.

Signed-off-by: Chase Qi <chase.qi@linaro.org>
2016-05-24 16:49:19 +08:00
Sebastian Goscik
ec5c149df5 Merge pull request from chase-qi/add-stress-ng-workload
workloads: add stress_ng workload
2016-05-24 09:45:35 +01:00
setrofim
c0f32237e3 Merge pull request from ep1cman/camera_update
cameracapture & camerarecord: Updated workloads to work with Android M+
2016-05-16 17:28:39 +01:00
Sebastian Goscik
5a1c8c7a7e cameracapture & camerarecord: Updated workloads to work with Android M+
The stock camera app as of Android M has changed. This commit updates
the ui automation to work with this new app. As part of this change
it was required to bump the API level of the ui automation to 18.

Also made the teardown of the capture workload close the app like the
record workload.
2016-05-16 17:25:50 +01:00
Sebastian Goscik
46cd26e774 BaseUiAutomation: Added functions for checking version strings
Added splitVersion and compareVersions functions allow versions strings
like "3.2.045" to be compared.

Also fixed the build script to now copy to the correct folder
2016-05-16 17:22:09 +01:00
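The Java helpers themselves aren't shown in this log, but the comparison they enable can be sketched in Python (function names mirror the commit message; this is not the actual uiautomation code):

```python
def split_version(version):
    # "3.2.045" -> (3, 2, 45); int() drops leading zeros, so components
    # compare numerically rather than lexically
    return tuple(int(part) for part in version.split('.'))

def compare_versions(a, b):
    # negative if a < b, 0 if equal, positive if a > b
    va, vb = split_version(a), split_version(b)
    return (va > vb) - (va < vb)
```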
Sebastian Goscik
544c498eb6 UiAutomatorWorkload: Added quotes around uiautomator parameters
Some characters would be interpreted by the shell, thus breaking the
command. Adding quotes around the parameters solved this.

N.B. Spaces still need to be replaced.
2016-05-16 16:19:57 +01:00
Chase Qi
5ad75dd0b8 workloads: add stress_ng workload
stress-ng will stress test a computer system in various selectable ways.
It was designed to exercise various physical subsystems of a computer as
well as the various operating system kernel interfaces.

Signed-off-by: Chase Qi <chase.qi@linaro.org>
2016-05-13 19:35:26 +08:00
setrofim
b2248413b7 Merge pull request from ep1cman/master
cpustates: Fix for error when trying to use cpustates with hotplugged…
2016-05-13 11:35:45 +01:00
setrofim
9296bafbd9 Merge pull request from ep1cman/juno-fixes
hwmon & adb fixes
2016-05-10 09:49:33 +01:00
Sebastian Goscik
8abf39762d hwmon: Fixed sensor naming
Previously the sensor name was just appended to the end of the
previous sensor's name.

Now the hwmon name is added as a classifier of the metric.
If the hwmon sensor has a label, the metric will use this for its name,
if it does not then the sensors kind and ID will be used e.g. temp3
2016-05-10 09:27:42 +01:00
Sebastian Goscik
87cbce4244 hwmon: Added allowed values to sensors parameter
Previously the sensor name was just appended to the end of the
previous sensor's name.
2016-05-10 09:27:42 +01:00
Sebastian Goscik
ef61f16896 AndroidDevice: Fixed screen lock disable
Due to the previous commits, this command no longer works properly.

It turns out there is an issue with using multiple levels of escaping.
It seems that bash handles the backslashes and single quotes separately,
incorrectly processing our escaping. To get around this we are writing the
sqlite command to a shell script file and running that.

This seems to be the only case in WA at the moment that requires this,
if more show up/when WA moves to devlib it should use the devlib shutil
mechanism.
2016-05-10 09:27:42 +01:00
Sebastian Goscik
e96450d226 adb_shell: Fixed getting return codes
The way we were attempting to get return codes before always gave
us the return code of the previous echo, therefore always `0`.

This commit adds the newline into the last echo.
2016-05-10 09:12:54 +01:00
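The technique being fixed here can be illustrated as follows (a hedged sketch on the host side, not WA's adb_shell code; the helper names are made up):

```python
def wrap_command(cmd):
    # Run the command, then echo a newline followed by the exit code, so
    # the code lands on its own line even when the command's output does
    # not end in a newline. Without that separation, the last line could
    # reflect a preceding echo's status instead -- always 0.
    return '({}); echo; echo $?'.format(cmd)

def split_output(raw):
    # Last non-empty line is the exit code; the rest is command output.
    lines = raw.rstrip('\n').split('\n')
    return '\n'.join(lines[:-1]).rstrip('\n'), int(lines[-1])
```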
setrofim
2cf08cf448 Merge pull request from ep1cman/fixes
Added sqlite3 binary & changed kick_off signature
2016-05-09 17:36:04 +01:00
Sebastian Goscik
59cfd7c757 AndroidDevice: WA now pushes its own sqlite3 binary
Some devices have the sqlite3 binary removed. WA will now check for
this and push its own binary if necessary.
2016-05-09 17:31:09 +01:00
Sebastian Goscik
d3c7f11f2d AndroidDevice: Changed kick_off signature to match BaseLinuxExamples 2016-05-09 17:06:08 +01:00
Sebastian Goscik
187fd70077 Merge pull request from setrofim/master
report_power_stats: number of entries returned always matches number of reporters
2016-05-09 10:23:05 +01:00
Sergei Trofimov
fe7f98a98b report_power_stats: number of entries returned always matches number of reporters
Previously, only reports that were generated were returned. With this
commit, there will be an entry for each active reporter in the returned
list. If a reporter did not produce a valid report, the entry will be
None.

This ensures consistent output, even if a run time issue causes a
reporter not to produce a report  (e.g. if cpufreq events were not
enabled).
2016-05-09 10:20:25 +01:00
Sebastian Goscik
66c18fcd31 cpustates: Fix for error when trying to use cpustates with hotplugged cores
It is not possible to read frequencies from a core that has been hotplugged.
The code will now set the current and max frequencies of hotplugged cores
to None.

This still doesn't work for devices that have dynamic hotplug enabled
2016-05-06 15:00:32 +01:00
Sebastian Goscik
5773da0d08 Merge pull request from setrofim/master
sysfile_getter/cpufreq: fix tarball name
2016-05-06 13:54:53 +01:00
Sergei Trofimov
d581f1f329 sysfile_getter/cpufreq: fix tarball name
Commit 724f6e590e changed sysfile_getter
behavior to first tar up copied files and then gzip them. Tarball name
needs to be updated to not include '.gz' extension.
2016-05-06 13:51:09 +01:00
setrofim
f165969d61 Merge pull request from ep1cman/juno-fixes
Juno fixes
2016-05-04 11:57:56 +01:00
Sebastian Goscik
8dc24bd327 uboot: Now detects the U-Boot version to use correct line endings
Previously, Linaro U-Boot releases had a bug where they used \n\r
as the line ending. This has since been fixed, which in turn caused
issues with WA. WA now detects the U-Boot version and uses the
corresponding line ending.
2016-05-04 11:54:29 +01:00
Sebastian Goscik
59066cb46d juno: Removed default bootargs
The default boot args have been removed since these cause issues with
the latest Linaro builds, which boot correctly without any bootargs.

Also made a regex string a raw-string.
2016-05-03 15:24:35 +01:00
setrofim
6c4d88ff57 Merge pull request from setrofim/master
create command: fix example parameter name in templates
2016-04-20 14:45:16 +01:00
Sergei Trofimov
a40542d57b create command: fix example parameter name in templates
Parameter name in workload templates updated to be a valid identifier.
2016-04-20 14:43:07 +01:00
Sergei Trofimov
697aefc7bb ApkWorkload: clear app data on failed uninstall.
If uninstall fails, "pm clear" should be called to make sure that the
next time the app is launched it starts from a known state (which would
normally be ensured by the uninstall).
2016-04-19 16:43:42 +01:00
Sergei Trofimov
8bc71bb810 ApkWorkload: report correct apk version on failed install
It's possible that there is already a version of an app on target that
differs from the version of the apk on the host. In such cases, WA will
usually try to uninstall the target version and install the host
version.

It's possible that the uninstall may fail. If that happens, it will be
reported as a warning but workload execution will proceed with the
target version. In this case, apk_version would have already been set to
that of the host apk. This change ensures that the APK version is
correctly set to the target version (the one that actually ran).
2016-04-19 16:33:37 +01:00
Sebastian Goscik
91210f26e9 RunCommand: WA no longer runs with no workload specs
Previously if no workload specs were loaded, WA would still start instruments
and then go immediately to the teardown stage. This no longer happens.
2016-04-19 16:32:53 +01:00
Sergei Trofimov
44a49db04d glbcorp: pep8 fix
Added a missing blank line between method declaration and class
attribute definitions.
2016-04-15 16:39:24 +01:00
setrofim
0bfa4bff3c Merge pull request from ep1cman/master
glbench updates
2016-04-14 16:41:26 +01:00
Sebastian Goscik
73aa590056 glbench: renamed start_activity to launch_package
To match changes made in: ff5f48b7e7
2016-04-14 16:36:37 +01:00
Sebastian Goscik
985b249a24 glbench: Fixed ending regex
Updated the regex that detected the end of the benchmark to match the new
logcat format.
2016-04-14 16:36:37 +01:00
Sebastian Goscik
f5e138bed0 Merge pull request from setrofim/master
bootstrap: nicer error messages on config parsing.
2016-04-14 16:22:10 +01:00
Sergei Trofimov
b6c0e2e4fd bootstrap: nicer error messages on config parsing.
- handle ValueError as well as SyntaxError from config parser
- Report source file in the error message
2016-04-14 16:18:31 +01:00
Sebastian Goscik
df8ef6be6b Merge pull request from mcgeagh/uxperf
CpuUtilisationTimeline added. This now will generate cpu utilisation …
2016-04-14 14:05:58 +01:00
Michael McGeagh
8a3186e1c8 CpuUtilisationTimeline added. This will now generate cpu utilisation based on frequencies and a number of samples
Fixed error in percentage when frequency is 'None'. Now defaults to 0 in these cases

cpu_utilisation is now a separate parameter in cpustate. Now generates a floating point number representing the utilisation based on the maximum frequency of the capture. No longer performs averaging of values, this can be done as a post-processing step

cpu utilisation now based on the max cpu freq per core, not max captured freq overall
2016-04-14 14:03:28 +01:00
Sebastian Goscik
68043f2a52 Merge pull request from mcgeagh/fps-allviews
fps: Can now process multiple 'view' attributes
2016-04-14 13:57:28 +01:00
Michael McGeagh
95bbce77a2 fps: Can now process multiple 'view' attributes 2016-04-14 13:12:39 +01:00
Sebastian Goscik
ec85f9f8a0 Merge pull request from setrofim/master
ApkWorkload: add package version to the result as a classifier.
2016-04-14 11:35:49 +01:00
Sergei Trofimov
82e4998092 Deprecating apk_version instrument. 2016-04-14 11:33:54 +01:00
Sergei Trofimov
48259d872b ApkWorkload: add package version to the result as a classifier. 2016-04-14 11:23:39 +01:00
setrofim
8d13e1f341 Merge pull request from ep1cman/glbench_logcat_fix
glbench: Fixed updated logcat format
2016-04-13 16:46:09 +01:00
Sebastian Goscik
33ef949507 Merge pull request from mcgeagh/fps-fix
Only check for crashed content if crash_check is true.
2016-04-11 13:38:18 +01:00
Michael McGeagh
68714e0e55 fps: Only check for crashed content if crash_check is true. 2016-04-11 12:01:12 +01:00
setrofim
9ee1666a76 Merge pull request from ep1cman/master
SysfsExtractor & Busybox fixes
2016-04-07 10:31:31 +01:00
Sebastian Goscik
8dcdc9afe1 busybox: Rebuilt busybox binaries to prefer applets over system binaries
Busybox will now prefer to use its own built in applets before it tries
using the system binaries so that we are always running commands as expected.
2016-04-07 10:29:13 +01:00
Sebastian Goscik
724f6e590e SysfsExtractor: Now performs tar and gzip separately
On some devices there were permissions issues when trying to tar and gzip
the temp-fs in one command. These two steps are now done separately.
2016-04-07 10:29:13 +01:00
Sebastian Goscik
507090515b Merge pull request from jimboatarm/master
Fix to install APKs with whitespace in their path name
2016-04-06 10:56:58 +01:00
James Hartley
1dfbe9e44c Fix to install APKs with whitespace in their path name 2016-04-06 10:53:08 +01:00
setrofim
d303ab2b50 Merge pull request from ep1cman/artem
ADB 1.0.35 support
2016-04-05 16:05:16 +01:00
Sebastian Goscik
b17ae78d6b adb_shell: Now handles return codes from ADB
As of ADB 1.0.35/Android N, adb will return the exit code of the command that it runs.
This code handles this scenario; previously, WA treated any return code from ADB as an
error with ADB itself.
2016-04-05 15:53:41 +01:00
Sergei Trofimov
391b0b01fc pylint/pep8 fixes
- android/workload: removed an extra blank line between methods
- trace_cmd: define member attribute inside __init__
- adb_shell: ignore pylint warning about too many branches in this case
2016-04-05 11:36:39 +01:00
setrofim
20861f0ee4 Merge pull request from jimboatarm/master
Fix for packages without launch activities
2016-04-05 11:00:50 +01:00
James Hartley
ff5f48b7e7 Fix for packages without launch activities
If the package has no defined launch activity you must call the
activity manager in a different way.
2016-04-05 10:24:42 +01:00
Sebastian Goscik
9a301175b0 glbench: Fixed updated logcat format
The old results looked like:
I/TfwActivity(30824):    "description": "",
I/TfwActivity(30824):    "elapsed_time": 62070,
I/TfwActivity(30824):    "error": "NOERROR",

The new format is:
04-04 11:38:04.144  1410  1410 I TfwActivity:    "description": "",
04-04 11:38:04.144  1410  1410 I TfwActivity:    "elapsed_time": 62009,
04-04 11:38:04.144  1410  1410 I TfwActivity:    "error": "NOERROR",
2016-04-04 17:33:48 +01:00
setrofim
712c79020d Merge pull request from ep1cman/master
ResourceResolver: Show version number when resource wasn't found.
2016-03-30 11:05:21 +01:00
Sebastian Goscik
12dfbef76b ResourceResolver: Show version number when resource wasn't found.
If the ResourceResolver was looking for a specific version of a
resource and could not find it, this version number is now shown
in the error message.
2016-03-30 11:01:35 +01:00
Sebastian Goscik
b1f607ef70 Merge pull request from setrofim/master
trace-cmd fixes
2016-03-24 18:13:16 +00:00
Sergei Trofimov
107e8414bb trace-cmd: set a minimum bound on trace pull timeout
The timeout for pulling the trace file after the run is set
based on the time for which the trace was collected. For workloads with
a short execution time, but a large number of events, the resulting timeout
might be too short. To deal with this, do not let the timeout be shorter
than 1 minute.
2016-03-24 16:49:42 +00:00
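The rule amounts to a one-line clamp (a sketch; the real code derives the base timeout from collection time differently, and the scaling factor here is assumed):

```python
def pull_timeout(collection_time, factor=2, minimum=60):
    # Scale the timeout with how long the trace was collected, but never
    # let it drop below one minute: short runs can still emit many events.
    return max(minimum, collection_time * factor)
```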
Sergei Trofimov
4f8b7e9f59 trace-cmd: updating sched_switch parser to handle both formats.
Depending on the kernel, sched_switch events may be formatted one of two
different ways in the text output. Previously, we've only handled the
"old" format. This commit updates the parser to handle the new format as
well.
2016-03-24 16:33:29 +00:00
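Handling both encodings can be sketched like this (the regexes below are illustrative approximations, not WA's actual parser):

```python
import re

# Old textual format:
#   prev_comm=bash prev_pid=123 ... ==> next_comm=swapper next_pid=0 ...
# New textual format:
#   bash:123 [120] S ==> swapper:0 [120]
OLD_FORMAT = re.compile(
    r'prev_comm=(?P<prev_comm>.+?) prev_pid=(?P<prev_pid>\d+).*'
    r'==> next_comm=(?P<next_comm>.+?) next_pid=(?P<next_pid>\d+)')
NEW_FORMAT = re.compile(
    r'(?P<prev_comm>.+):(?P<prev_pid>\d+) \[\d+\] \S+ ==> '
    r'(?P<next_comm>.+?):(?P<next_pid>\d+) \[\d+\]')

def parse_sched_switch(line):
    # Try each known format in turn; return the fields or None.
    for pattern in (OLD_FORMAT, NEW_FORMAT):
        match = pattern.search(line)
        if match:
            return match.groupdict()
    return None
```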
setrofim
a077e7df3c Merge pull request from ep1cman/master
BaseLinuxDevice: gzipped property files are now zcat'ed
2016-03-24 16:30:32 +00:00
Sebastian Goscik
a2257fe1e2 BaseLinuxDevice: gzipped property files are now zcat'ed
Before, they were cat'ed, which gave garbage output for compressed files.
Cat-ing is necessary since not all properties are normal files (sysfs).
2016-03-24 16:28:19 +00:00
Sebastian Goscik
50353d0b8f Merge pull request from Sticklyman1936/lmbench_update
lmbench: Tidied up the code and improved stability
2016-03-24 16:26:52 +00:00
setrofim
0f5621ff66 Merge pull request from Sticklyman1936/sysbench_fix
sysbench: use device busybox binary
2016-03-24 16:24:38 +00:00
Sascha Bischoff
2eca77fb02 sysbench: use device busybox binary
Use the full path to busybox on the target device as opposed to
assuming it is found on the path.
2016-03-24 16:21:01 +00:00
Sascha Bischoff
3de5b5fe0b lmbench: Tidied up the code and improved stability
This patch tidies up the benchmark code to bring it in line with the
style used in Workload Automation in general. Additionally, the
results from sub-benchmarks are now directly written to a file on the
device as opposed to processing the standard output/error from the
benchmark, which was error prone.
2016-03-24 10:20:32 +00:00
Sebastian Goscik
499a9f4082 Merge pull request from setrofim/master
applaunch: pass the location of busybox into the script
2016-03-23 16:32:50 +00:00
Sergei Trofimov
3043506d86 applaunch: pass the location of busybox into the script
applaunch creates and deploys an auxiliary script in order to collect
precise timings. This script invoked busybox with the assumption that it
is in PATH.

Since recent changes mean that it is no longer deployed to /system/bin,
busybox is not found. With this commit, the full path to busybox
will be passed into the script's template.
2016-03-23 16:28:18 +00:00
Sebastian Goscik
7db904b359 Merge pull request from ep1cman/master
adb_shell: Fixed checking exit codes on Android N
2016-03-23 13:51:17 +00:00
Sebastian Goscik
5abeb7aac2 adb_shell: Fixed checking exit codes on Android N
As of Android N, '\n' is used as the line separator, not '\r\n'.
This fix makes the function detect which is being used by the device.
2016-03-23 13:43:07 +00:00
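The detection can be sketched as follows (illustrative only; WA's adb_shell probes the device output differently):

```python
def detect_line_separator(sample):
    # Android N and later end lines with '\n'; earlier adb output used '\r\n'.
    return '\r\n' if sample.endswith('\r\n') else '\n'

def strip_trailing_separator(output, separator):
    # Remove the final separator so exit-code parsing sees a clean last line.
    if output.endswith(separator):
        return output[:-len(separator)]
    return output
```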
setrofim
e04691afb9 Merge pull request from ep1cman/master
daq: Fixed channel merging
2016-03-21 11:22:10 +00:00
Sebastian Goscik
15ced50640 daq: Fixed channel merging
Fixed channel merging when setting merge to True.
Channel merges done by setting a mapping manually were not affected by this bug.
2016-03-21 11:15:30 +00:00
setrofim
1a2e1fdf75 Merge pull request from ep1cman/master
dhrystone: Fixed arm64 binary
2016-03-15 14:40:47 +00:00
Sebastian Goscik
3531dd6d07 dhrystone: Fixed arm64 binary
It was dynamically linked; it is now statically linked.
2016-03-15 14:38:18 +00:00
setrofim
cf55f317f8 Merge pull request from ep1cman/master
freq_sweep: Improved documentation
2016-03-09 16:52:04 +00:00
Sebastian Goscik
79554a2dbc freq_sweep: Improved documentation
- Added explanation that this instrument does not taskset workloads
 - Fixed formatting issue with the agenda example
2016-03-09 16:37:15 +00:00
setrofim
06c232545a Merge pull request from ep1cman/master
dhrystone: Updated executable resolution
2016-03-09 14:57:49 +00:00
Sebastian Goscik
11184750ec dhrystone: Updated executable resolution
Previously it was just using the binary in the dhrystone folder.
Now it uses WA's resource resolution to use the correct ABI.
2016-03-09 14:54:39 +00:00
setrofim
77b221fc5a Merge pull request from ep1cman/master
daq: Added check for duplicate channel labels
2016-03-08 12:54:33 +00:00
Sebastian Goscik
20cd6a9c18 daq: Added check for duplicate channel labels
The daq instrument will no longer accept duplicate channel names.
This caused issues where files sent from the daq server were being
overwritten.
2016-03-07 13:21:40 +00:00
Sebastian Goscik
34d7e7055a Merge pull request from setrofim/master
run command: more useful error message when specifying non-existing agenda path
2016-02-29 17:28:29 +00:00
Sergei Trofimov
0c1e01cad4 run command: more useful error message when specifying non-existing agenda path
If the specified agenda argument is not found in the file system, WA
assumes it is the name of a workload and would then raise an "extension
not found" error, which may be confusing if the user's intention was to
specify a path.

Now, WA will first check that neither a path separator nor a '.' is
present in the agenda argument before assuming it is a workload name, and
will provide a less confusing error in that case.
2016-02-29 17:26:29 +00:00
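The heuristic is simple to sketch (hypothetical helper name; WA's run command embeds this check inline):

```python
import os

def looks_like_workload_name(agenda_arg):
    # A path separator or a '.' (as in 'agenda.yaml') suggests the user
    # meant a file path, so don't treat the argument as a workload name.
    return os.sep not in agenda_arg and '.' not in agenda_arg
```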
Sebastian Goscik
a68e46eb0a Merge pull request from setrofim/master
LinuxDevice: fixed reboot.
2016-02-22 10:00:51 +00:00
Sergei Trofimov
203a3f7d07 LinuxDevice: fixed reboot.
- Deal with the dropped connection on issuing "reboot"
- Introduced a fixed initial delay before polling for connection to
  avoid re-connecting to a device that is still in the process of
  shutting down.
2016-02-22 09:45:42 +00:00
79 changed files with 2372 additions and 362 deletions

=================================
What's New in Workload Automation
=================================
-------------
Version 2.5.0
-------------
Additions:
##########
Instruments
~~~~~~~~~~~
- ``servo_power``: Added support for chromebook servo boards.
- ``file_poller``: polls files and outputs a CSV of their values over time.
- ``systrace``: The Systrace tool helps analyze the performance of your
application by capturing and displaying execution times of your applications
processes and other Android system processes.
Workloads
~~~~~~~~~
- ``blogbench``: Blogbench is a portable filesystem benchmark that tries to
reproduce the load of a real-world busy file server.
- ``stress-ng``: Designed to exercise various physical subsystems of a computer
as well as the various operating system kernel interfaces.
- ``hwuitest``: Uses hwuitest from AOSP to test rendering latency on Android
devices.
- ``recentfling``: Tests UI jank on android devices.
- ``apklaunch``: installs and runs an arbitrary apk file.
- ``googlemap``: Launches Google Maps and replays previously recorded
interactions.
Framework
~~~~~~~~~
- ``wlauto.utils.misc``: Added ``memoised`` function decorator that allows
caching of previous function/method call results.
- Added new ``Device`` APIs:
- ``lsmod``: lists kernel modules
- ``insmod``: inserts a kernel module from a ``.ko`` file on the host.
- ``get_binary_path``: Checks ``binary_directory`` for the wanted binary,
if it is not found there it will try to use ``which``
- ``install_if_needed``: Will only install a binary if it is not already
on the target.
- ``get_device_model``: Gets the model of the device.
- ``wlauto.core.execution.ExecutionContext``:
- ``add_classfiers``: Allows adding a classfier to all metrics for the
current result.
Other
~~~~~
- Commands:
- ``record``: Simplifies recording revent files.
- ``replay``: Plays back revent files.
Fixes/Improvements:
###################
Devices
~~~~~~~
- ``juno``:
- Fixed ``bootargs`` parameter not being passed to ``_boot_via_uboot``.
- Removed default ``bootargs``
- ``gem5_linux``:
- Added ``login_prompt`` and ``login_password_prompt`` parameters.
- ``generic_linux``: ABI is now read from the target device.
Instruments
~~~~~~~~~~~
- ``trace-cmd``:
- Added the ability to report the binary trace on the target device,
removing the need for ``trace-cmd`` binary to be present on the host.
- Updated to handle messages that the trace for a CPU is empty.
- Made timeout for pulling trace 1 minute at minimum.
- ``perf``: per-cpu statistics now get added as metrics to the results (with a
classifier used to identify the cpu).
- ``daq``:
- Fixed bug where an exception would be raised if ``merge_channels=False``
- No longer allows duplicate channel labels
- ``juno_energy``:
- Summary metrics are now calculated from the contents of ``energy.csv`` and
added to the overall results.
- Added a ``strict`` parameter. When this is set to ``False`` the device
check during validation is omitted.
- ``sysfs_extractor``: tar and gzip are now performed separately to solve
permission issues.
- ``fps``:
- Now only checks for crashed content if ``crash_check`` is ``True``.
- Can now process multiple ``view`` attributes.
- ``hwmon``: Fixed sensor naming; sensor names are now also added as result classifiers
Resource Getters
~~~~~~~~~~~~~~~~
- ``extension_asset``: Now picks up the path to the mounted filer from the
``remote_assets_path`` global setting.
Result Processors
~~~~~~~~~~~~~~~~~
- ``cpustates``:
- Added the ability to configure how a missing ``START`` marker in the trace
is handled.
- Now raises a warning when there is a ``START`` marker in the trace but no
``STOP`` marker.
- Exceptions in PowerStateProcessor no longer stop the processing of the
rest of the trace.
- Now ensures a known initial state by nudging each CPU to bring it out of
idle and writing starting CPU frequencies to the trace.
- Added the ability to create a CPU utilisation timeline.
- Fixed issues with getting frequencies of hotplugged CPUs
- ``csv``: Zero-value classifiers are no longer converted to an empty entry.
- ``ipynb_exporter``: Default template no longer shows a blank plot for
workloads without ``summary_metrics``
Workloads
~~~~~~~~~
- ``vellamo``:
- Added support for v3.2.4.
- Fixed getting values from logcat.
- ``cameracapture``: Updated to work with Android M+.
- ``camerarecord``: Updated to work with Android M+.
- ``lmbench``:
- Added the output file as an artifact.
- Added taskset support
- ``antutu`` - Added support for v6.0.1
- ``ebizzy``: Changed use of ``os.path`` to ``self.device.path``.
- ``bbench``: Fixed browser crashes & permissions issues on android M+.
- ``geekbench``:
- Added check whether device is rooted.
- ``manual``: Now only uses logcat on Android devices.
- ``applaunch``:
- Fixed ``cleanup`` not getting forwarded to script.
- Added the ability to stress IO during app launch.
- ``dhrystone``: Now uses WA's resource resolution to find its binary so it
uses the correct ABI.
- ``glbench``: Updated for new logcat formatting.
Framework
~~~~~~~~~
- ``ReventWorkload``:
- Now kills all revent instances on teardown.
- Device model name is now used when searching for revent files, falling back
to WA device name.
- ``BaseLinuxDevice``:
- ``killall`` will now run as root by default if the device
is rooted.
- ``list_file_systems`` now handles blank lines.
- All binaries are now installed into ``binaries_directory`` this allows..
- Busybox is now deployed on non-root devices.
- gzipped property files are now zcat'ed
- ``LinuxDevice``:
- ``kick_off`` no longer requires root.
- ``kick_off`` will now run as root by default if the device is rooted.
- No longer raises an exception if a connection was dropped during a reboot.
- Added a delay before polling for a connection to avoid re-connecting to a
device that is still in the process of rebooting.
- ``wlauto.utils.types``: ``list_or_string`` now ensures that elements of a list
are strings.
- ``AndroidDevice``:
- ``kick_off`` no longer requires root.
- Build props are now gathered via ``getprop`` rather than trying to parse
build.prop directly.
- WA now pushes its own ``sqlite3`` binary.
- Now uses ``content`` instead of ``settings`` to get ``ANDROID_ID``
- ``swipe_to_unlock`` parameter is now actually used. It has been changed to
take a direction to accommodate various devices.
- ``ensure_screen_is_on`` will now also unlock the screen if swipe_to_unlock
is set.
- Fixed use of variables in as_root=True commands.
- ``get_pids_of`` now uses ``busybox grep`` since as of Android M+ ps cannot
filter by process name anymore.
- Fixed installing APK files with whitespace in their path/name.
- ``adb_shell``:
- Fixed handling of line breaks at the end of command output.
- Newline separator is now detected from the target.
- As of ADB v1.0.35, ADB returns the return code of the command run. WA now
handles this correctly.
- ``ApkWorkload``:
- Now attempts to grant all runtime permissions for devices on Android M+.
- Can now launch packages that don't have a launch activity defined.
- Package version is now added to results as a classifier.
- Now clears app data if an uninstall failed to ensure it starts from a known
state.
- ``wlauto.utils.ipython``: Updated to work with ipython v5.
- ``Gem5Device``:
- Added support for deploying the ``m5`` binary.
- No longer waits for the boot animation to finish if it has been disabled.
- Fixed runtime error caused by lack of kwargs.
- No longer depends on ``busybox``.
- Split out commands to resize shell to ``resize_shell``.
- Now tries to connect to the shell up to 10 times.
- No longer renames gzipped files.
- Agendas:
- Now errors when an agenda key is empty.
- ``wlauto.core.execution.RunInfo``: ``run_name`` will now default to
``{output_folder}_{date}_{time}``.
- Extensions:
- Two different parameters can now have the same global alias as long as
their types match.
- You can no longer ``override`` parameters that are defined at the same
level.
- ``wlauto.core.entry_point``: Now gives a better error when a config file
doesn't exist.
- ``wlauto.utils.misc``: Added ``aarch64`` to list for arm64 ABI.
- ``wlauto.core.resolver``: Now shows what version was being searched for when a
resource is not found.
- Will no longer start instruments etc. if a run has no workload specs.
- ``wlauto.utils.uboot``: Now detects uboot version to use correct line endings.
- ``wlauto.utils.trace_cmd``: Added a parser for sched_switch events.
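The ``list_or_string`` behaviour noted above can be illustrated with a small sketch (a simplified stand-in, not the actual ``wlauto.utils.types`` implementation):

```python
def list_or_string(value):
    # simplified stand-in for wlauto.utils.types.list_or_string:
    # strings pass through unchanged, any other iterable becomes a
    # list whose elements are coerced to strings
    if isinstance(value, str):
        return value
    return [str(v) for v in value]

print(list_or_string([1, 'two', 3.0]))
```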
Other
~~~~~
- Updated to pylint v1.5.1
- Rebuilt ``busybox`` binaries to prefer built-in applets over system binaries.
- ``BaseUiAutomation``: Added functions for checking version strings.
Incompatible changes
####################
Instruments
~~~~~~~~~~~
- ``apk_version``: Removed, use result classifiers instead.
Framework
~~~~~~~~~
- ``BaseLinuxDevice``: Removed ``is_installed`` use ``install_if_needed`` and
``get_binary_path`` instead.
- ``LinuxDevice``: Removed ``has_root`` method, use ``is_rooted`` instead.
- ``AndroidDevice``: ``swipe_to_unlock`` method replaced with
``perform_unlock_swipe``.
-------------
Version 2.4.0
-------------

@@ -59,6 +59,11 @@ usually the best bet.
Optionally (but recommended), you should also set ``ANDROID_HOME`` to point to
the install location of the SDK (i.e. ``<path_to_android_sdk>/sdk``).
.. note:: You may need to install 32-bit compatibility libraries for the SDK
to work properly. On Ubuntu you need to run::
sudo apt-get install lib32stdc++6 lib32z1
Python
------
@@ -87,7 +92,7 @@ similar distributions, this may be done with APT::
If you do run into this issue after already installing some packages,
you can resolve it by running ::
sudo chmod -R a+r /usr/local/lib/python2.7/dist-packages
sudo find /usr/local/lib/python2.7/dist-packages -type d -exec chmod a+x {} \;
(The paths above will work for Ubuntu; they may need to be adjusted
@@ -307,7 +312,7 @@ that location.
If you have installed Workload Automation via ``pip`` and wish to remove it, run this command to
uninstall it::
sudo -H pip uninstall wlauto
.. Note:: This will *not* remove any user configuration (e.g. the ~/.workload_automation directory)
@@ -317,5 +322,5 @@ uninstall it::
====================
To upgrade Workload Automation to the latest version via ``pip``, run::
sudo -H pip install --upgrade --no-deps wlauto

@@ -5,7 +5,7 @@ Commands
========
Installing the wlauto package will add ``wa`` command to your system,
which you can run from anywhere. This has a number of sub-commands, which can
be viewed by executing ::
wa -h
@@ -131,5 +131,63 @@ will produce something like ::
- Results displayed in Iterations per second
- Detailed log file for comprehensive engineering analysis
.. _record-command:
record
------
This command simplifies the process of recording a revent file. It
will automatically deploy revent and even has the option of automatically
opening apps. WA uses two parts to the names of revent recordings in the
format ``{device_name}.{suffix}.revent``:

 - ``device_name`` can either be specified manually with the ``-d`` argument or
   it can be automatically determined. On Android devices it will be obtained
   from ``build.prop``, on Linux devices it is obtained from
   ``/proc/device-tree/model``.
 - ``suffix`` is used by WA to determine which part of the app execution the
   recording is for, currently these are either ``setup`` or ``run``. This
   should be specified with the ``-s`` argument.

The full set of options for this command are::
usage: wa record [-h] [-c CONFIG] [-v] [--debug] [--version] [-d DEVICE]
[-s SUFFIX] [-o OUTPUT] [-p PACKAGE] [-C]
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
specify an additional config.py
-v, --verbose The scripts will produce verbose output.
--debug Enable debug mode. Note: this implies --verbose.
--version show program's version number and exit
-d DEVICE, --device DEVICE
The name of the device
-s SUFFIX, --suffix SUFFIX
The suffix of the revent file, e.g. ``setup``
-o OUTPUT, --output OUTPUT
Directory to save the recording in
-p PACKAGE, --package PACKAGE
Package to launch before recording
-C, --clear Clear app cache before launching it
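The naming scheme described above can be sketched as follows (the helper and the device name are hypothetical, for illustration only):

```python
def revent_recording_name(device_name, suffix):
    # WA names revent recordings as {device_name}.{suffix}.revent
    return '{}.{}.revent'.format(device_name, suffix)

# e.g. a setup recording for a device whose model resolves to 'juno'
print(revent_recording_name('juno', 'setup'))
```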
.. _replay-command:
replay
------
Alongside ``record``, WA also has a command to play back recorded revent files.
It behaves very similarly to the ``record`` command, taking many of the same options::
usage: wa replay [-h] [-c CONFIG] [-v] [--debug] [--version] [-p PACKAGE] [-C]
revent
positional arguments:
revent The name of the file to replay
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
specify an additional config.py
-v, --verbose The scripts will produce verbose output.
--debug Enable debug mode. Note: this implies --verbose.
--version show program's version number and exit
-p PACKAGE, --package PACKAGE
Package to launch before recording
-C, --clear Clear app cache before launching it

@@ -17,37 +17,43 @@ to Android UI Automator for providing automation for workloads. ::
info:shows info about each event char device
any additional parameters make it verbose
.. note:: There are now also WA commands that perform the below steps.
Please see ``wa show record/replay`` and ``wa record/replay --help``
for details.
Recording
---------
To record, transfer the revent binary to the device, then invoke ``revent
record``, giving it the time (in seconds) you want to record for, and the
file you want to record to (WA expects these files to have .revent
extension)::
WA features a ``record`` command that will automatically deploy and start
revent on the target device::
host$ adb push revent /data/local/revent
host$ adb shell
device# cd /data/local
device# ./revent record 1000 my_recording.revent
wa record
INFO Connecting to device...
INFO Press Enter when you are ready to record...
[Pressed Enter]
INFO Press Enter when you have finished recording...
[Pressed Enter]
INFO Pulling files from device
Once started, you will need to get the target device ready to record (e.g.
unlock screen, navigate menus and launch an app) then press ``ENTER``.
The recording has now started and button presses, taps, etc. you perform on
the device will go into the .revent file. To stop the recording simply press
``ENTER`` again.
Once you have finished recording the revent file will be pulled from the device
to the current directory. It will be named ``{device_model}.revent``. When
recording revent files for a ``GameWorkload`` you can use the ``-s`` option to
add ``run`` or ``setup`` suffixes.
For more information please read :ref:`record-command`.
The recording has now started and button presses, taps, etc you perform on the
device will go into the .revent file. The recording will stop after the
specified time period, and you can also stop it by hitting return in the adb
shell.
Replaying
---------
To replay a recorded file, run ``revent replay`` on the device, giving it the
file you want to replay::
To replay a recorded file, run ``wa replay``, giving it the file you want to
replay::
device# ./revent replay my_recording.revent
wa replay my_recording.revent
For more information please read :ref:`replay-command`.
Using revent With Workloads
---------------------------

@@ -20,6 +20,7 @@ import shutil
import wlauto
from wlauto import Command, settings
from wlauto.exceptions import ConfigError
from wlauto.core.agenda import Agenda
from wlauto.core.execution import Executor
from wlauto.utils.log import add_log_file
@@ -76,6 +77,11 @@ class RunCommand(Command):
agenda = Agenda(args.agenda)
settings.agenda = args.agenda
shutil.copy(args.agenda, settings.meta_directory)
if len(agenda.workloads) == 0:
raise ConfigError("No workloads specified")
elif '.' in args.agenda or os.sep in args.agenda:
raise ConfigError('Agenda "{}" does not exist.'.format(args.agenda))
else:
self.logger.debug('{} is not a file; assuming workload name.'.format(args.agenda))
agenda = Agenda()

@@ -14,7 +14,7 @@ class ${class_name}(AndroidBenchmark):
parameters = [
# Workload parameters go here e.g.
Parameter('Example parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
Parameter('example_parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
description='This is an example parameter')
]

@@ -14,7 +14,7 @@ class ${class_name}(AndroidUiAutoBenchmark):
parameters = [
# Workload parameters go here e.g.
Parameter('Example parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
Parameter('example_parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
description='This is an example parameter')
]

@@ -8,7 +8,7 @@ class ${class_name}(Workload):
parameters = [
# Workload parameters go here e.g.
Parameter('Example parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
Parameter('example_parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
description='This is an example parameter')
]

@@ -8,7 +8,7 @@ class ${class_name}(UiAutomatorWorkload):
parameters = [
# Workload parameters go here e.g.
Parameter('Example parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
Parameter('example_parameter', kind=int, allowed_values=[1,2,3], default=1, override=True, mandatory=False,
description='This is an example parameter')
]

@@ -21,9 +21,12 @@ import time
import tempfile
import shutil
import threading
import json
from subprocess import CalledProcessError
from wlauto.core.extension import Parameter
from wlauto.common.resources import Executable
from wlauto.core.resource import NO_ONE
from wlauto.common.linux.device import BaseLinuxDevice, PsEntry
from wlauto.exceptions import DeviceError, WorkerThreadError, TimeoutError, DeviceNotRespondingError
from wlauto.utils.misc import convert_new_lines
@@ -193,6 +196,7 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
self._is_ready = True
def initialize(self, context):
self.sqlite = self.deploy_sqlite3(context) # pylint: disable=attribute-defined-outside-init
if self.is_rooted:
self.disable_screen_lock()
self.disable_selinux()
@@ -357,7 +361,7 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
self._check_ready()
ext = os.path.splitext(filepath)[1].lower()
if ext == '.apk':
return adb_command(self.adb_name, "install {}".format(filepath), timeout=timeout)
return adb_command(self.adb_name, "install '{}'".format(filepath), timeout=timeout)
else:
raise DeviceError('Can\'t install {}: unsupported format.'.format(filepath))
@@ -442,22 +446,20 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
else:
return adb_shell(self.adb_name, command, timeout, check_exit_code, as_root)
def kick_off(self, command):
def kick_off(self, command, as_root=None):
"""
Like execute but closes adb session and returns immediately, leaving the command running on the
device (this is different from execute(background=True) which keeps adb connection open and returns
a subprocess object).
.. note:: This relies on busybox's nohup applet and so won't work on unrooted devices.
Added in version 2.1.4
"""
if not self.is_rooted:
raise DeviceError('kick_off uses busybox\'s nohup applet and so can only be run a rooted device.')
if as_root is None:
as_root = self.is_rooted
try:
command = 'cd {} && busybox nohup {}'.format(self.working_directory, command)
output = self.execute(command, timeout=1, as_root=True)
command = 'cd {} && {} nohup {}'.format(self.working_directory, self.busybox, command)
output = self.execute(command, timeout=1, as_root=as_root)
except TimeoutError:
pass
else:
@@ -505,17 +507,18 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
def _get_android_properties(self, context):
props = {}
props['android_id'] = self.get_android_id()
buildprop_file = os.path.join(context.host_working_directory, 'build.prop')
if not os.path.isfile(buildprop_file):
self.pull_file('/system/build.prop', context.host_working_directory)
self._update_build_properties(buildprop_file, props)
context.add_run_artifact('build_properties', buildprop_file, 'export')
self._update_build_properties(props)
dumpsys_target_file = self.path.join(self.working_directory, 'window.dumpsys')
dumpsys_host_file = os.path.join(context.host_working_directory, 'window.dumpsys')
self.execute('{} > {}'.format('dumpsys window', dumpsys_target_file))
self.pull_file(dumpsys_target_file, dumpsys_host_file)
context.add_run_artifact('dumpsys_window', dumpsys_host_file, 'meta')
prop_file = os.path.join(context.host_working_directory, 'android-props.json')
with open(prop_file, 'w') as wfh:
json.dump(props, wfh)
context.add_run_artifact('android_properties', prop_file, 'export')
return props
def getprop(self, prop=None):
@@ -529,6 +532,11 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
return props[prop]
return props
def deploy_sqlite3(self, context):
host_file = context.resolver.get(Executable(NO_ONE, self.abi, 'sqlite3'))
target_file = self.install_if_needed(host_file)
return target_file
# Android-specific methods. These either rely on specifics of adb or other
# Android-only concepts in their interface and/or implementation.
@@ -629,7 +637,15 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
"""
lockdb = '/data/system/locksettings.db'
sqlcommand = "update locksettings set value='0' where name='screenlock.disabled';"
self.execute('sqlite3 {} "{}"'.format(lockdb, sqlcommand), as_root=True)
f = tempfile.NamedTemporaryFile()
try:
f.write('{} {} "{}"'.format(self.sqlite, lockdb, sqlcommand))
f.flush()
on_device_executable = self.install_executable(f.name,
with_name="disable_screen_lock")
finally:
f.close()
self.execute(on_device_executable, as_root=True)
def disable_selinux(self):
# This may be invoked from initialize() so we can't use execute() or the
@@ -651,15 +667,15 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
# Internal methods: do not use outside of the class.
def _update_build_properties(self, filepath, props):
def _update_build_properties(self, props):
try:
with open(filepath) as fh:
for line in fh:
line = re.sub(r'#.*', '', line).strip()
if not line:
continue
key, value = line.split('=', 1)
props[key] = value
def strip(somestring):
return somestring.strip().replace('[', '').replace(']', '')
for line in self.execute("getprop").splitlines():
key, value = line.split(':', 1)
key = strip(key)
value = strip(value)
props[key] = value
except ValueError:
self.logger.warning('Could not parse getprop output.')

@@ -87,7 +87,7 @@ class UiAutomatorWorkload(Workload):
params_dict['workdir'] = self.device.working_directory
params = ''
for k, v in self.uiauto_params.iteritems():
params += ' -e {} {}'.format(k, v)
params += ' -e {} "{}"'.format(k, v)
self.command = 'uiautomator runtest {}{} -c {}'.format(self.device_uiauto_file, params, method_string)
self.device.push_file(self.uiauto_file, self.device_uiauto_file)
self.device.killall('uiautomator')
@@ -122,7 +122,8 @@ class ApkWorkload(Workload):
:package: The package name of the app. This is usually a Java-style name of the form
``com.companyname.appname``.
:activity: This is the initial activity of the app. This will be used to launch the
app during the setup.
app during the setup. Many applications do not specify a launch activity so
this may be left blank if necessary.
:view: The class of the main view pane of the app. This needs to be defined in order
to collect SurfaceFlinger-derived statistics (such as FPS) for the app, but
may otherwise be left as ``None``.
@@ -183,7 +184,7 @@ class ApkWorkload(Workload):
def setup(self, context):
self.initialize_package(context)
self.start_activity()
self.launch_package()
self.device.execute('am kill-all') # kill all *background* activities
self.device.clear_logcat()
@@ -200,6 +201,7 @@ class ApkWorkload(Workload):
self.logger.debug(message.format(installed_version))
self.reset(context)
self.apk_version = installed_version
context.add_classifiers(apk_version=self.apk_version)
def initialize_with_host_apk(self, context, installed_version):
host_version = ApkInfo(self.apk_file).version_name
@@ -220,13 +222,22 @@ class ApkWorkload(Workload):
if self.force_install:
if installed_version:
self.device.uninstall(self.package)
self.install_apk(context)
# It's possible that the uninstall above fails, which will result in
# the install failing and a warning, however execution would then proceed,
# so we need to make sure that the right apk_version is reported in the end.
if self.install_apk(context):
self.apk_version = host_version
else:
self.apk_version = installed_version
else:
self.apk_version = installed_version
self.reset(context)
self.apk_version = host_version
def start_activity(self):
output = self.device.execute('am start -W -n {}/{}'.format(self.package, self.activity))
def launch_package(self):
if not self.activity:
output = self.device.execute('am start -W {}'.format(self.package))
else:
output = self.device.execute('am start -W -n {}/{}'.format(self.package, self.activity))
if 'Error:' in output:
self.device.execute('am force-stop {}'.format(self.package)) # this will dismiss any error dialogs
raise WorkloadError(output)
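The launch logic in ``launch_package`` above can be mirrored in a self-contained sketch (the helper and package names are illustrative, not part of WA):

```python
def build_launch_command(package, activity=None):
    # mirrors launch_package(): when no launch activity is defined,
    # 'am start -W <package>' lets Android pick one; otherwise the
    # component is named explicitly with -n
    if not activity:
        return 'am start -W {}'.format(package)
    return 'am start -W -n {}/{}'.format(package, activity)

print(build_launch_command('com.example.app'))
print(build_launch_command('com.example.app', '.MainActivity'))
```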
@@ -242,15 +253,19 @@ class ApkWorkload(Workload):
self._grant_requested_permissions()
def install_apk(self, context):
success = False
output = self.device.install(self.apk_file, self.install_timeout)
if 'Failure' in output:
if 'ALREADY_EXISTS' in output:
self.logger.warn('Using already installed APK (did not uninstall properly?)')
self.reset(context)
else:
raise WorkloadError(output)
else:
self.logger.debug(output)
success = True
self.do_post_install(context)
return success
def _grant_requested_permissions(self):
dumpsys_output = self.device.execute(command="dumpsys package {}".format(self.package))
@@ -306,19 +321,24 @@ class ReventWorkload(Workload):
super(ReventWorkload, self).__init__(device, **kwargs)
devpath = self.device.path
self.on_device_revent_binary = devpath.join(self.device.binaries_directory, 'revent')
self.on_device_setup_revent = devpath.join(self.device.working_directory, '{}.setup.revent'.format(self.device.name))
self.on_device_run_revent = devpath.join(self.device.working_directory, '{}.run.revent'.format(self.device.name))
self.setup_timeout = kwargs.get('setup_timeout', self.default_setup_timeout)
self.run_timeout = kwargs.get('run_timeout', self.default_run_timeout)
self.revent_setup_file = None
self.revent_run_file = None
self.on_device_setup_revent = None
self.on_device_run_revent = None
def init_resources(self, context):
def initialize(self, context):
self.revent_setup_file = context.resolver.get(wlauto.common.android.resources.ReventFile(self, 'setup'))
self.revent_run_file = context.resolver.get(wlauto.common.android.resources.ReventFile(self, 'run'))
devpath = self.device.path
self.on_device_setup_revent = devpath.join(self.device.working_directory,
os.path.split(self.revent_setup_file)[-1])
self.on_device_run_revent = devpath.join(self.device.working_directory,
os.path.split(self.revent_run_file)[-1])
self._check_revent_files(context)
def setup(self, context):
self._check_revent_files(context)
self.device.killall('revent')
command = '{} replay {}'.format(self.on_device_revent_binary, self.on_device_setup_revent)
self.device.execute(command, timeout=self.setup_timeout)
@@ -333,6 +353,7 @@ class ReventWorkload(Workload):
pass
def teardown(self, context):
self.device.killall('revent')
self.device.delete_file(self.on_device_setup_revent)
self.device.delete_file(self.on_device_run_revent)

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

@@ -214,7 +214,10 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
outfile = os.path.join(context.host_working_directory, normname)
if self.is_file(propfile):
with open(outfile, 'w') as wfh:
wfh.write(self.execute('cat {}'.format(propfile)))
if propfile.endswith(".gz"):
wfh.write(self.execute('{} zcat {}'.format(self.busybox, propfile)))
else:
wfh.write(self.execute('cat {}'.format(propfile)))
elif self.is_directory(propfile):
self.pull_file(propfile, outfile)
else:
@@ -390,7 +393,7 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
signal_string = '-s {}'.format(signal) if signal else ''
self.execute('kill {} {}'.format(signal_string, pid), as_root=as_root)
def killall(self, process_name, signal=None, as_root=False): # pylint: disable=W0221
def killall(self, process_name, signal=None, as_root=None): # pylint: disable=W0221
"""
Kill all processes with the specified name.
@@ -401,6 +404,8 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
Modified in version 2.1.5: added ``as_root`` parameter.
"""
if as_root is None:
as_root = self.is_rooted
for pid in self.get_pids_of(process_name):
self.kill(pid, signal=signal, as_root=as_root)
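The ``as_root=None`` defaulting convention used here (and in ``kick_off``) boils down to a simple rule, sketched below with a hypothetical helper:

```python
def resolve_as_root(as_root, is_rooted):
    # None means "decide from the device": run as root iff the device
    # is rooted; an explicit True/False is honoured as given
    return is_rooted if as_root is None else as_root

print(resolve_as_root(None, True))
```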
@@ -632,7 +637,11 @@ class LinuxDevice(BaseLinuxDevice):
# Power control
def reset(self):
self.execute('reboot', as_root=True)
try:
self.execute('reboot', as_root=True)
except DeviceError as e:
if 'Connection dropped' not in e.message:
raise e
self._is_ready = False
def hard_reset(self):
@@ -644,8 +653,15 @@ class LinuxDevice(BaseLinuxDevice):
else:
self.reset()
self.logger.debug('Waiting for device...')
# Wait a fixed delay before starting polling to give the device time to
# shut down; otherwise we might create the connection while it's still shutting
# down, resulting in subsequent connections failing.
initial_delay = 20
time.sleep(initial_delay)
boot_timeout = max(self.boot_timeout - initial_delay, 10)
start_time = time.time()
while (time.time() - start_time) < self.boot_timeout:
while (time.time() - start_time) < boot_timeout:
try:
s = socket.create_connection((self.host, self.port), timeout=5)
s.close()
@@ -669,15 +685,6 @@ class LinuxDevice(BaseLinuxDevice):
# Execution
def has_root(self):
try:
self.execute('ls /', as_root=True)
return True
except DeviceError as e:
if 'not in the sudoers file' not in e.message:
raise e
return False
def execute(self, command, timeout=default_timeout, check_exit_code=True, background=False,
as_root=False, strip_colors=True, **kwargs):
"""
@@ -720,13 +727,15 @@ class LinuxDevice(BaseLinuxDevice):
except CalledProcessError as e:
raise DeviceError(e)
def kick_off(self, command, as_root=False):
def kick_off(self, command, as_root=None):
"""
Like execute but closes adb session and returns immediately, leaving the command running on the
device (this is different from execute(background=True) which keeps adb connection open and returns
Like execute but closes ssh session and returns immediately, leaving the command running on the
device (this is different from execute(background=True) which keeps ssh connection open and returns
a subprocess object).
"""
if as_root is None:
as_root = self.is_rooted
self._check_ready()
command = 'sh -c "{}" 1>/dev/null 2>/dev/null &'.format(escape_double_quotes(command))
return self.shell.execute(command, as_root=as_root)

@@ -114,8 +114,8 @@ class ConfigLoader(object):
new_config = load_struct_from_yaml(source)
else:
raise ConfigError('Unknown config format: {}'.format(source))
except LoadSyntaxError as e:
raise ConfigError(e)
except (LoadSyntaxError, ValueError) as e:
raise ConfigError('Invalid config "{}":\n\t{}'.format(source, e))
self._config = merge_dicts(self._config, new_config,
list_duplicates='first',

@@ -204,6 +204,9 @@ class ExecutionContext(object):
def add_metric(self, *args, **kwargs):
self.result.add_metric(*args, **kwargs)
def add_classifiers(self, **kwargs):
self.result.classifiers.update(kwargs)
def add_artifact(self, name, path, kind, *args, **kwargs):
if self.current_job is None:
self.add_run_artifact(name, path, kind, *args, **kwargs)

@@ -69,7 +69,11 @@ class ResourceResolver(object):
self.logger.debug('\t{}'.format(result))
return result
if strict:
raise ResourceError('{} could not be found'.format(resource))
if kwargs:
criteria = ', '.join(['{}:{}'.format(k, v) for k, v in kwargs.iteritems()])
raise ResourceError('{} ({}) could not be found'.format(resource, criteria))
else:
raise ResourceError('{} could not be found'.format(resource))
self.logger.debug('Resource {} not found.'.format(resource))
return None

@@ -18,7 +18,7 @@ from collections import namedtuple
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision'])
version = VersionTuple(2, 4, 0)
version = VersionTuple(2, 5, 0)
def get_wa_version():

@@ -199,16 +199,6 @@ class Gem5AndroidDevice(BaseGem5Device, AndroidDevice):
props = self._get_android_properties(context)
return props
def disable_screen_lock(self):
"""
Attempts to disable the screen lock on the device.
Overridden here as otherwise we have issues with too many backslashes.
"""
lockdb = '/data/system/locksettings.db'
sqlcommand = "update locksettings set value=\'0\' where name=\'screenlock.disabled\';"
self.execute('sqlite3 {} "{}"'.format(lockdb, sqlcommand), as_root=True)
def capture_screen(self, filepath):
if BaseGem5Device.capture_screen(self, filepath):
return

@@ -81,9 +81,7 @@ class Juno(BigLittleDevice):
'fdt_support': True,
}
),
Parameter('bootargs', default='console=ttyAMA0,115200 earlyprintk=pl011,0x7ff80000 '
'verbose debug init=/init root=/dev/sda1 rw ip=dhcp '
'rootwait video=DVI-D-1:1920x1080R@60',
Parameter('bootargs',
description='''Default boot arguments to use when boot_arguments were not specified.'''),
]
@@ -158,7 +156,7 @@ class Juno(BigLittleDevice):
target.sendline('ip addr list eth0')
time.sleep(1)
try:
target.expect('inet ([1-9]\d*.\d+.\d+.\d+)', timeout=10)
target.expect(r'inet ([1-9]\d*.\d+.\d+.\d+)', timeout=10)
self.adb_name = target.match.group(1) + ':5555' # pylint: disable=W0201
break
except pexpect.TIMEOUT:

11
wlauto/external/sqlite/README vendored Normal file

@@ -0,0 +1,11 @@
For WA we use a slightly modified version of sqlite3 so that it can
be built statically. We used the amalgamated sqlite3 version 3.12.2,
which is in the public domain.
https://www.sqlite.org/download.html
Build command:
gcc shell.c sqlite3.c -lpthread -ldl -static -O2 -fPIC -DPIC -DSQLITE_OMIT_LOAD_EXTENSION
You will need to apply the diff in static.patch

20
wlauto/external/sqlite/static.patch vendored Normal file

@@ -0,0 +1,20 @@
--- shell.c 2016-05-09 15:35:26.952309563 +0100
+++ shell.c.bak 2016-05-09 15:33:41.991259588 +0100
@@ -4503,7 +4503,7 @@
static char *home_dir = NULL;
if( home_dir ) return home_dir;
+/*#if !defined(_WIN32) && !defined(WIN32) && !defined(_WIN32_WCE) \
-#if !defined(_WIN32) && !defined(WIN32) && !defined(_WIN32_WCE) \
&& !defined(__RTP__) && !defined(_WRS_KERNEL)
{
struct passwd *pwent;
@@ -4512,7 +4512,7 @@
home_dir = pwent->pw_dir;
}
}
+#endif*/
-#endif
#if defined(_WIN32_WCE)
/* Windows CE (arm-wince-mingw32ce-gcc) does not provide getenv()

@@ -18,4 +18,4 @@
ant build
cp bin/classes/com/arm/wlauto/uiauto/BaseUiAutomation.class ../../common
cp bin/classes/com/arm/wlauto/uiauto/BaseUiAutomation.class ../../common/android

@@ -20,6 +20,10 @@ import java.io.File;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.TimeoutException;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import android.app.Activity;
import android.os.Bundle;
@@ -109,5 +113,40 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
throw new TimeoutException("Timed out waiting for Logcat text \"%s\"".format(searchText));
}
}
public Integer[] splitVersion(String versionString) {
String pattern = "(\\d+).(\\d+).(\\d+)";
Pattern r = Pattern.compile(pattern);
ArrayList<Integer> result = new ArrayList<Integer>();
Matcher m = r.matcher(versionString);
if (m.find() && m.groupCount() > 0) {
for(int i=1; i<=m.groupCount(); i++) {
result.add(Integer.parseInt(m.group(i)));
}
} else {
throw new IllegalArgumentException(versionString + " - unknown format");
}
return result.toArray(new Integer[result.size()]);
}
//Return values:
// -1 = a lower than b
// 0 = a and b equal
// 1 = a greater than b
public int compareVersions(Integer[] a, Integer[] b) {
if (a.length != b.length) {
String msg = "Versions do not match format:\n %1$s\n %2$s";
msg = String.format(msg, Arrays.toString(a), Arrays.toString(b));
throw new IllegalArgumentException(msg);
}
for(int i=0; i<a.length; i++) {
if(a[i] > b[i])
return 1;
else if(a[i] < b[i])
return -1;
}
return 0;
}
}

@@ -286,6 +286,10 @@ class Daq(Instrument):
if self.labels:
if len(self.labels) != len(self.resistor_values):
raise ConfigError('Number of DAQ port labels does not match the number of resistor values.')
duplicates = set([x for x in self.labels if self.labels.count(x) > 1])
if len(duplicates) > 0:
raise ConfigError('Duplicate labels: {}'.format(', '.join(duplicates)))
else:
self.labels = ['PORT_{}'.format(i) for i, _ in enumerate(self.resistor_values)]
self.server_config = ServerConfiguration(host=self.server_host,
@@ -306,7 +310,10 @@ class Daq(Instrument):
if isinstance(self.merge_channels, bool):
if self.merge_channels:
# Create a dict of potential prefixes and a list of their suffixes
grouped_suffixes = {label[:-1]: label for label in sorted(self.labels) if len(label) > 1}
grouped_suffixes = defaultdict(list)
for label in sorted(self.labels):
if len(label) > 1:
grouped_suffixes[label[:-1]].append(label)
# Only merge channels if more than one channel has the same prefix and the prefixes
# are consecutive letters starting with 'a'.
self.label_map = {}

13
wlauto/instrumentation/fps/__init__.py Normal file → Executable file

@@ -166,7 +166,7 @@ class FpsInstrument(Instrument):
def slow_update_result(self, context):
result = context.result
if result.has_metric('execution_time'):
if self.crash_check and result.has_metric('execution_time'):
self.logger.debug('Checking for crashed content.')
exec_time = result['execution_time'].value
fps = result['FPS'].value
@@ -230,11 +230,10 @@ class LatencyCollector(threading.Thread):
#command_template = 'while (true); do dumpsys SurfaceFlinger --latency {}; sleep 2; done'
command_template = 'dumpsys SurfaceFlinger --latency {}'
def __init__(self, outfile, device, activity, keep_raw, logger):
def __init__(self, outfile, device, activities, keep_raw, logger):
super(LatencyCollector, self).__init__()
self.outfile = outfile
self.device = device
self.command = self.command_template.format(activity)
self.keep_raw = keep_raw
self.logger = logger
self.stop_signal = threading.Event()
@@ -244,6 +243,9 @@ class LatencyCollector(threading.Thread):
self.drop_threshold = self.refresh_period * 1000
self.exc = None
self.unresponsive_count = 0
if isinstance(activities, basestring):
activities = [activities]
self.activities = activities
def run(self):
try:
@@ -254,7 +256,10 @@ class LatencyCollector(threading.Thread):
wfh = os.fdopen(fd, 'wb')
try:
while not self.stop_signal.is_set():
wfh.write(self.device.execute(self.command))
view_list = self.device.execute('dumpsys SurfaceFlinger --list').split()
for activity in self.activities:
if activity in view_list:
wfh.write(self.device.execute(self.command_template.format(activity)))
time.sleep(2)
finally:
wfh.close()

@@ -33,6 +33,9 @@ class FreqSweep(Instrument):
- Setting the runner to 'by_spec' increases the chance of successfully
completing an agenda without encountering hotplug issues
- If possible disable dynamic hotplug on the target device
- This instrument does not automatically pin workloads to the cores
being swept since it is not aware of what the workloads do.
To achieve this use the workload's taskset parameter (if it has one).
"""
parameters = [
@@ -44,24 +47,23 @@ class FreqSweep(Instrument):
can do so by specifying this parameter.
Sweeps should be a lists of dictionaries that can contain:
- Cluster (mandatory): The name of the cluster this sweep will be
performed on. E.g A7
- Frequencies: A list of frequencies (in KHz) to use. If this is
not provided all frequencies available for this
cluster will be used.
E.g: [800000, 900000, 100000]
- label: Workload specs will be named '{spec id}_{label}_{frequency}'.
If a label is not provided it will be named 'sweep{sweep No.}'
- Cluster (mandatory): The name of the cluster this sweep
will be performed on. E.g `A7`
- Frequencies: A list of frequencies (in KHz) to use. If
this is not provided all frequencies available for this
cluster will be used. E.g: `[800000, 900000, 100000]`
- label: Workload specs will be named
`{spec id}_{label}_{frequency}`. If a label is not
provided it will be named `sweep{sweep No.}`
Example sweep specification: ::
Example sweep specification:
freq_sweep:
sweeps:
- cluster: A53
label: littles
frequencies: [800000, 900000, 100000]
- cluster: A57
label: bigs
freq_sweep:
sweeps:
- cluster: A53
label: littles
frequencies: [800000, 900000, 100000]
- cluster: A57
label: bigs
"""),
]

@@ -64,6 +64,7 @@ class HwmonInstrument(Instrument):
parameters = [
Parameter('sensors', kind=list_of_strs, default=['energy', 'temp'],
global_alias='hwmon_sensors',
allowed_values=HWMON_SENSORS.keys(),
description='The kinds of sensors hwmon instrument will look for')
]
@@ -73,11 +74,7 @@ class HwmonInstrument(Instrument):
if self.sensors:
self.sensor_kinds = {}
for kind in self.sensors:
if kind in HWMON_SENSORS:
self.sensor_kinds[kind] = HWMON_SENSORS[kind]
else:
message = 'Unexpected sensor type: {}; must be in {}'.format(kind, HWMON_SENSORS.keys())
raise ConfigError(message)
self.sensor_kinds[kind] = HWMON_SENSORS[kind]
else:
self.sensor_kinds = HWMON_SENSORS
@@ -110,13 +107,17 @@ class HwmonInstrument(Instrument):
if report_type == 'diff':
before, after = sensor.readings
diff = conversion(after - before)
context.result.add_metric(sensor.label, diff, units)
context.result.add_metric(sensor.label, diff, units,
classifiers={"hwmon_device": sensor.device_name})
elif report_type == 'before/after':
before, after = sensor.readings
mean = conversion((after + before) / 2)
context.result.add_metric(sensor.label, mean, units)
context.result.add_metric(sensor.label + ' before', conversion(before), units)
context.result.add_metric(sensor.label + ' after', conversion(after), units)
context.result.add_metric(sensor.label, mean, units,
classifiers={"hwmon_device": sensor.device_name})
context.result.add_metric(sensor.label + ' before', conversion(before), units,
classifiers={"hwmon_device": sensor.device_name})
context.result.add_metric(sensor.label + ' after', conversion(after), units,
classifiers={"hwmon_device": sensor.device_name})
else:
raise InstrumentError('Unexpected report_type: {}'.format(report_type))
except ValueError, e:

@@ -57,7 +57,7 @@ class SysfsExtractor(Instrument):
mount_command = 'mount -t tmpfs -o size={} tmpfs {}'
extract_timeout = 30
tarname = 'sysfs.tar.gz'
tarname = 'sysfs.tar'
DEVICE_PATH = 0
BEFORE_PATH = 1
AFTER_PATH = 2
@@ -151,16 +151,18 @@ class SysfsExtractor(Instrument):
def update_result(self, context):
if self.use_tmpfs:
on_device_tarball = self.device.path.join(self.device.working_directory, self.tarname)
on_host_tarball = self.device.path.join(context.output_directory, self.tarname)
self.device.execute('{} tar czf {} -C {} .'.format(self.device.busybox,
on_device_tarball,
self.tmpfs_mount_point),
on_host_tarball = self.device.path.join(context.output_directory, self.tarname + ".gz")
self.device.execute('{} tar cf {} -C {} .'.format(self.device.busybox,
on_device_tarball,
self.tmpfs_mount_point),
as_root=True)
self.device.execute('chmod 0777 {}'.format(on_device_tarball), as_root=True)
self.device.pull_file(on_device_tarball, on_host_tarball)
self.device.execute('{} gzip {}'.format(self.device.busybox,
on_device_tarball))
self.device.pull_file(on_device_tarball + ".gz", on_host_tarball)
with tarfile.open(on_host_tarball, 'r:gz') as tf:
tf.extractall(context.output_directory)
self.device.delete_file(on_device_tarball)
self.device.delete_file(on_device_tarball + ".gz")
os.remove(on_host_tarball)
for paths in self.device_and_host_paths:
@@ -225,29 +227,6 @@ class ExecutionTimeInstrument(Instrument):
context.result.add_metric('execution_time', execution_time, 'seconds')
class ApkVersion(Instrument):
name = 'apk_version'
description = """
Extracts APK versions for workloads that have them.
"""
def __init__(self, device, **kwargs):
super(ApkVersion, self).__init__(device, **kwargs)
self.apk_info = None
def setup(self, context):
if hasattr(context.workload, 'apk_file'):
self.apk_info = ApkInfo(context.workload.apk_file)
else:
self.apk_info = None
def update_result(self, context):
if self.apk_info:
context.result.add_metric(self.name, self.apk_info.version_name)
class InterruptStatsInstrument(Instrument):
name = 'interrupts'
@@ -290,7 +269,7 @@ class DynamicFrequencyInstrument(SysfsExtractor):
"""
tarname = 'cpufreq.tar.gz'
tarname = 'cpufreq.tar'
parameters = [
Parameter('paths', mandatory=False, override=True),
@@ -386,4 +365,3 @@ def _diff_sysfs_dirs(before, after, result): # pylint: disable=R0914
else:
dchunks = [diff_tokens(b, a) for b, a in zip(bchunks, achunks)]
dfh.write(''.join(dchunks))

@@ -9,7 +9,7 @@ from itertools import izip_longest
from wlauto import Instrument, Parameter
from wlauto import ApkFile
from wlauto.exceptions import DeviceError, HostError
from wlauto.exceptions import InstrumentError, HostError
from wlauto.utils.android import ApkInfo
from wlauto.utils.types import list_of_strings
@@ -160,7 +160,7 @@ class NetstatsInstrument(Instrument):
def initialize(self, context):
if self.device.platform != 'android':
raise DeviceError('nestats instrument only supports on Android devices.')
raise InstrumentError('netstats instrument is only supported on Android devices.')
apk = context.resolver.get(ApkFile(self))
self.collector = NetstatsCollector(self.device, apk) # pylint: disable=attribute-defined-outside-init
self.collector.setup(force=self.force_reinstall)

@@ -0,0 +1,105 @@
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=access-member-before-definition,attribute-defined-outside-init,unused-argument
from wlauto import Instrument, Parameter, Executable
from wlauto.exceptions import ConfigError
from wlauto.utils.types import list_or_string
class FilePoller(Instrument):
name = 'file_poller'
description = """
Polls the given files at a set sample interval. The values are output in CSV format.
This instrument places a file called poller.csv in each iteration's result directory.
This file will contain a timestamp column (in microseconds); the rest of the columns
will be the contents of the polled files at that time.
This instrument can poll any file whose contents do not contain a new line, since a
new line would break the CSV formatting.
"""
parameters = [
Parameter('sample_interval', kind=int, default=1000,
description="""The interval between samples in milliseconds."""),
Parameter('files', kind=list_or_string,
description="""A list of paths to the files to be polled"""),
Parameter('labels', kind=list_or_string,
description="""A list of labels to be used in the CSV output for
the corresponding files. This cannot be used if
a `*` wildcard is used in a path."""),
Parameter('as_root', kind=bool, default=False,
description="""
Whether or not the poller will be run as root. This should be
used when the file you need to poll can only be accessed by root.
"""),
]
def validate(self):
if not self.device.is_rooted and self.as_root:
raise ConfigError('The device is not rooted, cannot run poller as root.')
if self.labels and any(['*' in f for f in self.files]):
raise ConfigError('You cannot use manual labels with `*` wildcards')
def initialize(self, context):
host_poller = context.resolver.get(Executable(self, self.device.abi,
"poller"))
target_poller = self.device.install_if_needed(host_poller)
expanded_paths = []
for path in self.files:
if "*" in path:
for p in self.device.listdir(path):
expanded_paths.append(p)
else:
expanded_paths.append(path)
self.files = expanded_paths
if not self.labels:
self.labels = self._generate_labels()
self.target_output_path = self.device.path.join(self.device.working_directory, 'poller.csv')
self.command = '{} -t {} -l {} {} > {}'.format(target_poller,
self.sample_interval * 1000,
','.join(self.labels),
' '.join(self.files),
self.target_output_path)
def start(self, context):
self.device.kick_off(self.command, as_root=self.as_root)
def stop(self, context):
self.device.killall('poller', signal='TERM', as_root=self.as_root)
def update_result(self, context):
self.device.pull_file(self.target_output_path, context.output_directory)
def teardown(self, context):
self.device.delete_file(self.target_output_path)
def _generate_labels(self):
# Split paths into their parts
path_parts = [f.split(self.device.path.sep) for f in self.files]
# Identify which parts differ between at least two of the paths
differ_map = [len(set(x)) > 1 for x in zip(*path_parts)]
# compose labels from path parts that differ
labels = []
for pp in path_parts:
label_parts = [p for i, p in enumerate(pp[:-1])
if i >= len(differ_map) or differ_map[i]]
label_parts.append(pp[-1]) # always use file name even if same for all
labels.append('-'.join(label_parts))
return labels
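The label-generation logic above derives labels from the path components that differ between the polled files; a self-contained sketch of the same approach, using `'/'` in place of `device.path.sep`:

```python
def generate_labels(files, sep='/'):
    """Build labels from the path parts that differ between files."""
    path_parts = [f.split(sep) for f in files]
    # Identify which parts differ between at least two of the paths
    differ_map = [len(set(x)) > 1 for x in zip(*path_parts)]
    labels = []
    for pp in path_parts:
        label_parts = [p for i, p in enumerate(pp[:-1])
                       if i >= len(differ_map) or differ_map[i]]
        label_parts.append(pp[-1])  # always keep the file name
        labels.append('-'.join(label_parts))
    return labels
```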

Binary file not shown.

Binary file not shown.

@@ -0,0 +1,132 @@
#include <fcntl.h>
#include <stdio.h>
#include <sys/poll.h>
#include <sys/time.h>
#include <unistd.h>
#include <errno.h>
#include <signal.h>
#include <string.h>
#include <stdlib.h>
volatile sig_atomic_t done = 0;
void term(int signum)
{
done = 1;
}
int main(int argc, char ** argv) {
extern char *optarg;
extern int optind;
int c = 0;
int show_help = 0;
useconds_t interval = 1000000;
char buf[1024];
memset(buf, 0, sizeof(buf));
struct timeval current_time;
double time_float;
int files_to_poll[argc-optind];
char *labels;
int labelCount = 0;
static char usage[] = "usage: %s [-h] [-t INTERVAL] FILE [FILE ...]\n"
"polls FILE(s) every INTERVAL microseconds and outputs\n"
"the results in CSV format including a timestamp to STDOUT\n"
"\n"
" -h Display this message\n"
" -t The polling sample interval in microseconds\n"
" Defaults to 1000000 (1 second)\n"
" -l Comma separated list of labels to use in the CSV\n"
" output. This should match the number of files\n";
//Handling command line arguments
while ((c = getopt(argc, argv, "ht:l:")) != -1)
{
switch(c) {
case 'h':
case '?':
default:
show_help = 1;
break;
case 't':
interval = (useconds_t)atoi(optarg);
break;
case 'l':
labels = optarg;
labelCount = 1;
int i;
for (i=0; labels[i]; i++)
labelCount += (labels[i] == ',');
}
}
if (show_help) {
fprintf(stderr, usage, argv[0]);
exit(1);
}
if (optind >= argc) {
fprintf(stderr, "%s: missing file path(s)\n", argv[0]);
fprintf(stderr, usage, argv[0]);
exit(1);
}
if (labelCount && labelCount != argc-optind)
{
fprintf(stderr, "%s: %d labels specified but %d files specified\n", argv[0],
labelCount,
argc-optind);
fprintf(stderr, usage, argv[0]);
exit(1);
}
//Print headers and open files to poll
printf("time");
if(labelCount)
{
printf(",%s", labels);
}
int i;
for (i = 0; i < (argc - optind); i++)
{
files_to_poll[i] = open(argv[optind + i], O_RDONLY);
if(!labelCount) {
printf(",%s", argv[optind + i]);
}
}
printf("\n");
//Setup SIGTERM handler
struct sigaction action;
memset(&action, 0, sizeof(struct sigaction));
action.sa_handler = term;
sigaction(SIGTERM, &action, NULL);
//Poll files
while (!done) {
gettimeofday(&current_time, NULL);
time_float = (double)current_time.tv_sec;
time_float += ((double)current_time.tv_usec)/1000/1000;
printf("%f", time_float);
for (i = 0; i < (argc - optind); i++) {
ssize_t bytes_read = read(files_to_poll[i], buf, sizeof(buf) - 1);
if (bytes_read < 0)
bytes_read = 0;
buf[bytes_read] = '\0'; // read() does not null-terminate
lseek(files_to_poll[i], 0, SEEK_SET);
//Remove trailing "\n"
if (bytes_read > 0 && buf[bytes_read - 1] == '\n')
buf[bytes_read - 1] = '\0';
printf(",%s", buf);
}
printf("\n");
usleep(interval);
}
//Close files
for (i = 0; i < (argc - optind); i++)
{
close(files_to_poll[i]);
}
exit(0);
}

@@ -0,0 +1,232 @@
# Copyright 2016 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=W0613,E1101,attribute-defined-outside-init
from __future__ import division
import os
import subprocess
import signal
import csv
import threading
import time
import getpass
import logging
import xmlrpclib
from datetime import datetime
from wlauto import Instrument, Parameter, Executable
from wlauto.exceptions import InstrumentError, ConfigError
from wlauto.utils.types import list_of_strings
from wlauto.utils.misc import check_output
from wlauto.utils.cros_sdk import CrosSdkSession
from wlauto.utils.misc import which
class ServoPowerMonitor(Instrument):
name = 'servo_power'
description = """
Collects power traces using the Chromium OS Servo Board.
Servo is a debug board used for Chromium OS test and development. Among other uses, it allows
access to the built-in power monitors (if present) of a Chrome OS device. More information on
the Servo board can be found at the link below:
https://www.chromium.org/chromium-os/servo
In order to use this instrument you need to be a sudoer and you need a chroot environment. More
information on the chroot environment can be found at the link below:
https://www.chromium.org/chromium-os/developer-guide
If you wish to run servod on a remote machine you will need to allow it to accept external connections
using the `--host` command line option, like so:
`sudo servod -b some_board -c some_board.xml --host=''`
"""
parameters = [
Parameter('power_domains', kind=list_of_strings, default=[],
description="""The names of power domains to be monitored by the
instrument using servod."""),
Parameter('labels', kind=list_of_strings, default=[],
description="""Meaningful labels for each of the monitored domains."""),
Parameter('chroot_path', kind=str,
description="""Path to the chroot directory on the host."""),
Parameter('sampling_rate', kind=int, default=10,
description="""Samples per second."""),
Parameter('board_name', kind=str, mandatory=True,
description="""The name of the board under test."""),
Parameter('autostart', kind=bool, default=True,
description="""Automatically start `servod`. Set to `False` if you want to
use an already running `servod` instance or a remote servo"""),
Parameter('host', kind=str, default="localhost",
description="""When `autostart` is set to `False` you can specify the host
on which `servod` is running, allowing you to remotely access
a servo board.
If `autostart` is `True` this parameter is ignored and `localhost`
is used instead."""),
Parameter('port', kind=int, default=9999,
description="""When `autostart` is set to `False` you must provide the port
that `servod` is running on.
If `autostart` is `True` this parameter is ignored and the port
output during the startup of `servod` will be used."""),
Parameter('vid', kind=str,
description="""When more than one servo is plugged in, you must provide
a vid/pid pair to identify the servo you wish to use."""),
Parameter('pid', kind=str,
description="""When more than one servo is plugged in, you must provide
a vid/pid pair to identify the servo you wish to use."""),
]
# When trying to initialize servod, it may take some time until the server is up.
# Therefore we need to poll to identify when the server has successfully started.
# servod_max_tries specifies the maximum number of times we will check to see if the server has started,
# while servod_delay_between_tries is the sleep time between checks.
servod_max_tries = 100
servod_delay_between_tries = 0.1
def validate(self):
# pylint: disable=access-member-before-definition
if self.labels and len(self.power_domains) != len(self.labels):
raise ConfigError('There should be exactly one label per power domain')
if self.autostart:
if self.host != 'localhost': # pylint: disable=access-member-before-definition
self.logger.warning('Ignoring host "%s" since autostart is set to "True"', self.host)
self.host = "localhost"
if (self.vid is None) != (self.pid is None):
raise ConfigError('`vid` and `pid` must both be specified')
def initialize(self, context):
# pylint: disable=access-member-before-definition
self.poller = None
self.data = None
self.stopped = True
if self.device.platform != "chromeos":
raise InstrumentError("servo_power instrument only supports Chrome OS devices.")
if not self.labels:
self.labels = ["PORT_{}".format(channel) for channel, _ in enumerate(self.power_domains)]
self.power_domains = [channel if channel.endswith("_mw") else
"{}_mw".format(channel) for channel in self.power_domains]
self.label_map = {pd: l for pd, l in zip(self.power_domains, self.labels)}
if self.autostart:
self._start_servod()
def setup(self, context):
# pylint: disable=access-member-before-definition
self.outfile = os.path.join(context.output_directory, 'servo.csv')
self.poller = PowerPoller(self.host, self.port, self.power_domains, self.sampling_rate)
def start(self, context):
self.poller.start()
self.stopped = False
def stop(self, context):
self.data = self.poller.stop()
self.poller.join()
self.stopped = True
timestamps = self.data.pop("timestamp")
for channel, data in self.data.iteritems():
label = self.label_map[channel]
data = [float(v) / 1000.0 for v in data]
sample_sum = sum(data)
metric_name = '{}_power'.format(label)
power = sample_sum / len(data)
context.result.add_metric(metric_name, round(power, 3), 'Watts')
metric_name = '{}_energy'.format(label)
energy = sample_sum * (1.0 / self.sampling_rate)
context.result.add_metric(metric_name, round(energy, 3), 'Joules')
with open(self.outfile, 'wb') as f:
c = csv.writer(f)
headings = ['timestamp'] + ['{}_power'.format(label) for label in self.labels]
c.writerow(headings)
for row in zip(timestamps, *self.data.itervalues()):
c.writerow(row)
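The metric computation in `stop` averages the milliwatt samples for power and integrates them over the sample period for energy; a sketch under the same assumptions (samples in mW, a fixed sampling rate in Hz):

```python
def power_and_energy(samples_mw, sampling_rate):
    """Return (average power in W, energy in J) for a list of mW samples."""
    watts = [v / 1000.0 for v in samples_mw]
    power = sum(watts) / len(watts)
    # Each sample accounts for one sampling period of 1/rate seconds
    energy = sum(watts) * (1.0 / sampling_rate)
    return round(power, 3), round(energy, 3)
```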
def teardown(self, context):
if not self.stopped:
self.stop(context)
if self.autostart:
self.server_session.kill_session()
def _start_servod(self):
in_chroot = False if which('dut-control') is None else True
password = ''
if not in_chroot:
msg = 'Instrument %s requires sudo access on this machine to start `servod`'
self.logger.info(msg, self.name)
self.logger.info('You need to be a sudoer to use it.')
password = getpass.getpass()
check = subprocess.call('echo {} | sudo -S ls > /dev/null'.format(password), shell=True)
if check:
raise InstrumentError('Given password was either wrong or you are not a sudoer')
self.server_session = CrosSdkSession(self.chroot_path, password=password)
password = ''
command = 'sudo servod -b {b} -c {b}.xml'
if self.vid and self.pid:
command += " -v " + self.vid
command += " -p " + self.pid
command += '&'
self.server_session.send_command(command.format(b=self.board_name))
for _ in xrange(self.servod_max_tries):
server_lines = self.server_session.get_lines(timeout=1, from_stderr=True,
timeout_only_for_first_line=False)
if server_lines:
if 'Listening on' in server_lines[-1]:
self.port = int(server_lines[-1].split()[-1])
break
time.sleep(self.servod_delay_between_tries)
else:
raise InstrumentError('Failed to start servod in cros_sdk environment')
class PowerPoller(threading.Thread):
def __init__(self, host, port, channels, sampling_rate):
super(PowerPoller, self).__init__()
self.proxy = xmlrpclib.ServerProxy("http://{}:{}/".format(host, port))
self.proxy.get(channels[0])  # Testing connection
self.channels = channels
self.data = {channel: [] for channel in channels}
self.data['timestamp'] = []
self.period = 1.0 / sampling_rate
self.term_signal = threading.Event()
self.term_signal.set()
self.logger = logging.getLogger(self.__class__.__name__)
def run(self):
while self.term_signal.is_set():
self.data['timestamp'].append(str(datetime.now()))
for channel in self.channels:
self.data[channel].append(float(self.proxy.get(channel)))
time.sleep(self.period)
def stop(self):
self.term_signal.clear()
self.join()
return self.data

@@ -155,6 +155,7 @@ class TraceCmdInstrument(Instrument):
def __init__(self, device, **kwargs):
super(TraceCmdInstrument, self).__init__(device, **kwargs)
self.trace_cmd = None
self._pull_timeout = None
self.event_string = _build_trace_events(self.events)
self.output_file = os.path.join(self.device.working_directory, OUTPUT_TRACE_FILE)
self.temp_trace_file = self.device.path.join(self.device.working_directory, OUTPUT_TRACE_FILE)
@@ -233,6 +234,8 @@ class TraceCmdInstrument(Instrument):
# Therefore the timeout for the pull command must also be adjusted
# accordingly.
self._pull_timeout = (self.stop_time - self.start_time) # pylint: disable=attribute-defined-outside-init
if self._pull_timeout < 60:
self._pull_timeout = 60
self.device.pull_file(self.output_file, context.output_directory, timeout=self._pull_timeout)
context.add_iteration_artifact('bintrace', OUTPUT_TRACE_FILE, kind='data',
description='trace-cmd generated ftrace dump.')

@@ -86,11 +86,16 @@ class ReventGetter(ResourceGetter):
self.resolver.register(self, 'revent', GetterPriority.package)
def get(self, resource, **kwargs):
filename = '.'.join([resource.owner.device.name, resource.stage, 'revent']).lower()
location = _d(os.path.join(self.get_base_location(resource), 'revent_files'))
for candidate in os.listdir(location):
if candidate.lower() == filename.lower():
return os.path.join(location, candidate)
device_model = resource.owner.device.get_device_model()
wa_device_name = resource.owner.device.name
for name in [device_model, wa_device_name]:
if not name:
continue
filename = '.'.join([name, resource.stage, 'revent']).lower()
location = _d(os.path.join(self.get_base_location(resource), 'revent_files'))
for candidate in os.listdir(location):
if candidate.lower() == filename.lower():
return os.path.join(location, candidate)
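The change above makes the getter try the device model first and fall back to the WA device name; the candidate filename construction can be sketched on its own as:

```python
def revent_filenames(device_model, wa_device_name, stage):
    """Yield candidate revent file names, most specific first.

    Either name may be None/empty, in which case it is skipped.
    """
    for name in [device_model, wa_device_name]:
        if not name:
            continue
        yield '.'.join([name, stage, 'revent']).lower()
```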
class PackageApkGetter(PackageFileGetter):
@@ -368,6 +373,7 @@ class HttpGetter(ResourceGetter):
return requests.get(url, auth=auth, stream=stream)
def resolve_resource(self, resource):
# pylint: disable=too-many-branches
assets = self.index.get(resource.owner.name, {})
if not assets:
return {}
@@ -380,11 +386,16 @@ class HttpGetter(ResourceGetter):
if a['path'] == found:
return a
elif resource.name == 'revent':
filename = '.'.join([resource.owner.device.name, resource.stage, 'revent']).lower()
for asset in assets:
pathname = os.path.basename(asset['path']).lower()
if pathname == filename:
return asset
device_model = resource.owner.device.get_device_model()
wa_device_name = resource.owner.device.name
for name in [device_model, wa_device_name]:
if not name:
continue
filename = '.'.join([name, resource.stage, 'revent']).lower()
for asset in assets:
pathname = os.path.basename(asset['path']).lower()
if pathname == filename:
return asset
else: # file
for asset in assets:
if asset['path'].lower() == resource.path.lower():
@@ -446,6 +457,7 @@ class RemoteFilerGetter(ResourceGetter):
return local_full_path
def get_from(self, resource, version, location): # pylint: disable=no-self-use
# pylint: disable=too-many-branches
if resource.name in ['apk', 'jar']:
return get_from_location_by_extension(resource, location, resource.name, version)
elif resource.name == 'file':
@@ -453,19 +465,24 @@ class RemoteFilerGetter(ResourceGetter):
if os.path.exists(filepath):
return filepath
elif resource.name == 'revent':
filename = '.'.join([resource.owner.device.name, resource.stage, 'revent']).lower()
alternate_location = os.path.join(location, 'revent_files')
# There tends to be some confusion as to where revent files should
# be placed. This looks both in the extension's directory, and in
# 'revent_files' subdirectory under it, if it exists.
if os.path.isdir(alternate_location):
for candidate in os.listdir(alternate_location):
if candidate.lower() == filename.lower():
return os.path.join(alternate_location, candidate)
if os.path.isdir(location):
for candidate in os.listdir(location):
if candidate.lower() == filename.lower():
return os.path.join(location, candidate)
device_model = resource.owner.device.get_device_model()
wa_device_name = resource.owner.device.name
for name in [device_model, wa_device_name]:
if not name:
continue
filename = '.'.join([name, resource.stage, 'revent']).lower()
alternate_location = os.path.join(location, 'revent_files')
# There tends to be some confusion as to where revent files should
# be placed. This looks both in the extension's directory, and in
# 'revent_files' subdirectory under it, if it exists.
if os.path.isdir(alternate_location):
for candidate in os.listdir(alternate_location):
if candidate.lower() == filename.lower():
return os.path.join(alternate_location, candidate)
if os.path.isdir(location):
for candidate in os.listdir(location):
if candidate.lower() == filename.lower():
return os.path.join(location, candidate)
else:
raise ValueError('Unexpected resource type: {}'.format(resource.name))

50
wlauto/result_processors/cpustate.py Normal file → Executable file

@@ -19,7 +19,7 @@ from collections import OrderedDict
from wlauto import ResultProcessor, Parameter
from wlauto.core import signal
from wlauto.exceptions import ConfigError
from wlauto.exceptions import ConfigError, DeviceError
from wlauto.instrumentation import instrument_is_installed
from wlauto.utils.power import report_power_stats
from wlauto.utils.misc import unique
@@ -102,7 +102,26 @@ class CpuStatesProcessor(ResultProcessor):
Create a CSV with the timeline of core power states over the course of the run
as well as the usual stats reports.
"""),
Parameter('create_utilization_timeline', kind=bool, default=False,
description="""
Create a CSV with the timeline of cpu(s) utilisation over the course of the run
as well as the usual stats reports.
The values generated are floating point numbers, normalised based on the maximum
frequency of the cluster.
"""),
Parameter('start_marker_handling', kind=str, default="try",
allowed_values=['ignore', 'try', 'error'],
description="""
The trace-cmd instrument inserts a marker into the trace to indicate the beginning
of workload execution. In some cases, this marker may be missing in the final
output (e.g. due to trace buffer overrun). This parameter specifies how a missing
start marker will be handled:
:`ignore`: The start marker will be ignored. All events in the trace will be used.
:`error`: An error will be raised if the start marker is not found in the trace.
:`try`: If the start marker is not found, all events in the trace will be used.
""")
]
def validate(self):
@@ -130,6 +149,7 @@ class CpuStatesProcessor(ResultProcessor):
self.idle_state_names = [idle_states[i] for i in sorted(idle_states.keys())]
self.num_idle_states = len(self.idle_state_names)
self.iteration_reports = OrderedDict()
self.max_freq_list = []
# priority -19: just higher than the slow_start of instrumentation
signal.connect(self.set_initial_state, signal.BEFORE_WORKLOAD_EXECUTION, priority=-19)
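The three `start_marker_handling` modes described in the parameter docstring can be mirrored by a small dispatch function (a hypothetical helper for illustration; the real logic lives in `wlauto.utils.power`):

```python
def select_trace_events(events, start_index, handling):
    """Slice events from the start marker according to the handling mode.

    start_index is the index of the start marker, or None if it is missing.
    """
    if handling == 'ignore':
        return events          # markers ignored; always use the whole trace
    if start_index is None:
        if handling == 'error':
            raise ValueError('Start marker not found in trace')
        return events          # 'try': fall back to the whole trace
    return events[start_index:]
```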
@@ -139,12 +159,21 @@ class CpuStatesProcessor(ResultProcessor):
# Write initial frequencies into the trace.
# NOTE: this assumes per-cluster DVFS, that is valid for devices that
# currently exist. This will need to be updated for per-CPU DFS.
# pylint: disable=attribute-defined-outside-init
self.logger.debug('Writing initial frequencies into trace...')
device = context.device
cluster_freqs = {}
cluster_max_freqs = {}
self.max_freq_list = []
for c in unique(device.core_clusters):
cluster_freqs[c] = device.get_cluster_cur_frequency(c)
try:
cluster_freqs[c] = device.get_cluster_cur_frequency(c)
cluster_max_freqs[c] = device.get_cluster_max_frequency(c)
except ValueError:
cluster_freqs[c] = None
cluster_max_freqs[c] = None
for i, c in enumerate(device.core_clusters):
self.max_freq_list.append(cluster_max_freqs[c])
entry = 'CPU {} FREQUENCY: {} kHZ'.format(i, cluster_freqs[c])
device.set_sysfile_value('/sys/kernel/debug/tracing/trace_marker',
entry, verify=False)
@@ -153,7 +182,10 @@ class CpuStatesProcessor(ResultProcessor):
self.logger.debug('Nudging all cores awake...')
for i in xrange(len(device.core_names)):
command = device.busybox + ' taskset 0x{:x} {}'
device.execute(command.format(1 << i, 'ls'))
try:
device.execute(command.format(1 << i, 'ls'))
except DeviceError:
self.logger.warning("Failed to nudge CPU %s, has it been hot plugged out?", i)
def process_iteration_result(self, result, context):
trace = context.get_artifact('txttrace')
@@ -165,7 +197,12 @@ class CpuStatesProcessor(ResultProcessor):
timeline_csv_file = os.path.join(context.output_directory, 'power_states.csv')
else:
timeline_csv_file = None
parallel_report, powerstate_report = report_power_stats( # pylint: disable=unbalanced-tuple-unpacking
if self.create_utilization_timeline:
cpu_utilisation = os.path.join(context.output_directory, 'cpu_utilisation.csv')
else:
cpu_utilisation = None
reports = report_power_stats( # pylint: disable=unbalanced-tuple-unpacking
trace_file=trace.path,
idle_state_names=self.idle_state_names,
core_names=self.core_names,
@@ -175,7 +212,12 @@ class CpuStatesProcessor(ResultProcessor):
first_system_state=self.first_system_state,
use_ratios=self.use_ratios,
timeline_csv_file=timeline_csv_file,
cpu_utilisation=cpu_utilisation,
max_freq_list=self.max_freq_list,
start_marker_handling=self.start_marker_handling,
)
parallel_report = reports.pop(0)
powerstate_report = reports.pop(0)
if parallel_report is None:
self.logger.warning('No power state reports generated; are power '
'events enabled in the trace?')

@@ -25,8 +25,10 @@ import subprocess
import logging
import re
from wlauto.exceptions import DeviceError, ConfigError, HostError
from wlauto.utils.misc import check_output, escape_single_quotes, escape_double_quotes, get_null
from wlauto.exceptions import DeviceError, ConfigError, HostError, WAError
from wlauto.utils.misc import (check_output, escape_single_quotes,
escape_double_quotes, get_null,
CalledProcessErrorWithStderr)
MAX_TRIES = 5
@@ -266,6 +268,7 @@ am_start_error = re.compile(r"Error: Activity class {[\w|.|/]*} does not exist")
def adb_shell(device, command, timeout=None, check_exit_code=False, as_root=False): # NOQA
# pylint: disable=too-many-branches, too-many-locals, too-many-statements
_check_env()
if as_root:
command = 'echo \'{}\' | su'.format(escape_single_quotes(command))
@@ -273,13 +276,28 @@ def adb_shell(device, command, timeout=None, check_exit_code=False, as_root=Fals
full_command = 'adb {} shell "{}"'.format(device_string, escape_double_quotes(command))
logger.debug(full_command)
if check_exit_code:
actual_command = "adb {} shell '({}); echo; echo $?'".format(device_string, escape_single_quotes(command))
raw_output, error = check_output(actual_command, timeout, shell=True)
actual_command = "adb {} shell '({}); echo \"\n$?\"'".format(device_string, escape_single_quotes(command))
try:
raw_output, error = check_output(actual_command, timeout, shell=True)
except CalledProcessErrorWithStderr as e:
raw_output = e.output
error = e.error
exit_code = e.returncode
if exit_code == 1:
logger.debug("Exit code 1 could either be the return code of the command or indicate that ADB failed")
if raw_output:
if raw_output.endswith('\r\n'):
newline = '\r\n'
elif raw_output.endswith('\n'):
newline = '\n'
else:
raise WAError("Unknown newline separator in: {}".format(raw_output))
try:
output, exit_code, _ = raw_output.rsplit('\r\n', 2)
output, exit_code, _ = raw_output.rsplit(newline, 2)
except ValueError:
exit_code, _ = raw_output.rsplit('\r\n', 1)
exit_code, _ = raw_output.rsplit(newline, 1)
output = ''
else: # raw_output is empty
exit_code = '969696' # just because
@@ -303,7 +321,14 @@ def adb_shell(device, command, timeout=None, check_exit_code=False, as_root=Fals
else:
raise DeviceError('adb has returned early; did not get an exit code. Was kill-server invoked?')
else: # do not check exit code
output, _ = check_output(full_command, timeout, shell=True)
try:
output, _ = check_output(full_command, timeout, shell=True)
except CalledProcessErrorWithStderr as e:
output = e.output
error = e.error
exit_code = e.returncode
if e.returncode == 1:
logger.debug("Got exit code 1; it could either be the return code of the command or indicate that ADB failed")
return output
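The exit-code handling above appends the shell's `$?` as the final line of the command's output, then detects whether adb emitted `\r\n` or plain `\n` before splitting it back off. A minimal Python 3 sketch of that split (the function name is illustrative, not WA's API):

```python
def split_exit_code(raw_output):
    """Split combined adb output into (output, exit_code).

    Assumes an extra `echo $?` was appended to the command, so the
    last line of raw_output is the command's exit code.
    """
    if raw_output.endswith('\r\n'):
        newline = '\r\n'
    elif raw_output.endswith('\n'):
        newline = '\n'
    else:
        raise ValueError('Unknown newline separator in: {}'.format(raw_output))
    try:
        # output, exit code, and the empty string after the final newline
        output, exit_code, _ = raw_output.rsplit(newline, 2)
    except ValueError:
        # the command itself produced no output
        exit_code, _ = raw_output.rsplit(newline, 1)
        output = ''
    return output, int(exit_code)
```

For example, `split_exit_code('hello\r\n0\r\n')` yields `('hello', 0)`, while output-less `'1\n'` yields `('', 1)`.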

wlauto/utils/cros_sdk.py Normal file

@@ -0,0 +1,132 @@
# Copyright 2016 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
import time
import os
import logging
from Queue import Queue, Empty
from threading import Thread
from subprocess import Popen, PIPE
from wlauto.utils.misc import which
from wlauto.exceptions import HostError
class OutputPollingThread(Thread):
def __init__(self, out, queue, name):
super(OutputPollingThread, self).__init__()
self.out = out
self.queue = queue
self.stop_signal = False
self.name = name
def run(self):
for line in iter(self.out.readline, ''):
if self.stop_signal:
break
self.queue.put(line)
def set_stop(self):
self.stop_signal = True
class CrosSdkSession(object):
def __init__(self, cros_path, password=''):
self.logger = logging.getLogger(self.__class__.__name__)
self.in_chroot = True if which('dut-control') else False
ON_POSIX = 'posix' in sys.builtin_module_names
if self.in_chroot:
self.cros_sdk_session = Popen(['/bin/sh'], bufsize=1, stdin=PIPE, stdout=PIPE, stderr=PIPE,
cwd=cros_path, close_fds=ON_POSIX, shell=True)
else:
cros_sdk_bin_path = which('cros_sdk')
potential_path = os.path.join(cros_path, "chromium/tools/depot_tools/cros_sdk")
if not cros_sdk_bin_path and os.path.isfile(potential_path):
cros_sdk_bin_path = potential_path
if not cros_sdk_bin_path:
raise HostError("Failed to locate 'cros_sdk'; make sure it is in your PATH")
self.cros_sdk_session = Popen(['sudo -Sk {}'.format(cros_sdk_bin_path)], bufsize=1, stdin=PIPE,
stdout=PIPE, stderr=PIPE, cwd=cros_path, close_fds=ON_POSIX, shell=True)
self.cros_sdk_session.stdin.write(password)
self.cros_sdk_session.stdin.write('\n')
self.stdout_queue = Queue()
self.stdout_thread = OutputPollingThread(self.cros_sdk_session.stdout, self.stdout_queue, 'stdout')
self.stdout_thread.daemon = True
self.stdout_thread.start()
self.stderr_queue = Queue()
self.stderr_thread = OutputPollingThread(self.cros_sdk_session.stderr, self.stderr_queue, 'stderr')
self.stderr_thread.daemon = True
self.stderr_thread.start()
def kill_session(self):
self.stdout_thread.set_stop()
self.stderr_thread.set_stop()
self.send_command('echo TERMINATE >&1') # send something into stdout to unblock it and close it properly
self.send_command('echo TERMINATE 1>&2') # ditto for stderr
self.stdout_thread.join()
self.stderr_thread.join()
self.cros_sdk_session.kill()
def send_command(self, cmd, flush=True):
if not cmd.endswith('\n'):
cmd = cmd + '\n'
self.logger.debug(cmd.strip())
self.cros_sdk_session.stdin.write(cmd)
if flush:
self.cros_sdk_session.stdin.flush()
def read_line(self, timeout=0):
return _read_line_from_queue(self.stdout_queue, timeout=timeout, logger=self.logger)
def read_stderr_line(self, timeout=0):
return _read_line_from_queue(self.stderr_queue, timeout=timeout, logger=self.logger)
def get_lines(self, timeout=0, timeout_only_for_first_line=True, from_stderr=False):
lines = []
line = True
while line is not None:
if from_stderr:
line = self.read_stderr_line(timeout)
else:
line = self.read_line(timeout)
if line:
lines.append(line)
if timeout and timeout_only_for_first_line:
timeout = 0 # after a line has been read, no further delay is required
return lines
def _read_line_from_queue(queue, timeout=0, logger=None):
try:
line = queue.get_nowait()
except Empty:
line = None
if line is None and timeout:
sleep_time = timeout
time.sleep(sleep_time)
try:
line = queue.get_nowait()
except Empty:
line = None
if line is not None:
line = line.strip('\n')
if logger and line:
logger.debug(line)
return line
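`CrosSdkSession` keeps the interactive shell responsive by draining each pipe on a daemon thread into a `Queue`, then polling the queue with `get_nowait()` so the caller never blocks on a silent pipe. A self-contained Python 3 sketch of the same pattern (the child command here is just an example):

```python
import subprocess
import sys
import time
from queue import Queue, Empty
from threading import Thread

def drain(stream, queue):
    # Runs on a daemon thread; keeps the child's pipe from filling up.
    for line in iter(stream.readline, ''):
        queue.put(line)

proc = subprocess.Popen([sys.executable, '-c', 'print("ready")'],
                        stdout=subprocess.PIPE, text=True, bufsize=1)
q = Queue()
Thread(target=drain, args=(proc.stdout, q), daemon=True).start()

# Non-blocking read with a short retry loop, like read_line(timeout=...)
line, deadline = None, time.time() + 5
while line is None and time.time() < deadline:
    try:
        line = q.get_nowait().strip('\n')
    except Empty:
        time.sleep(0.05)
proc.wait()
```

The daemon flag matters: if the session is torn down abruptly, the reader threads must not keep the interpreter alive, which is also why `kill_session` above echoes a sentinel into each stream to unblock the readers before joining them.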

@@ -22,9 +22,10 @@ HWMON_ROOT = '/sys/class/hwmon'
class HwmonSensor(object):
def __init__(self, device, kind, label, filepath):
def __init__(self, device, kind, device_name, label, filepath):
self.device = device
self.kind = kind
self.device_name = device_name
self.label = label
self.filepath = filepath
self.readings = []
@@ -58,10 +59,10 @@ def discover_sensors(device, sensor_kinds):
for hwmon_device in hwmon_devices:
try:
device_path = path.join(HWMON_ROOT, hwmon_device, 'device')
name = device.get_sysfile_value(path.join(device_path, 'name'))
base_name = device.get_sysfile_value(path.join(device_path, 'name'))
except DeviceError: # probably a virtual device
device_path = path.join(HWMON_ROOT, hwmon_device)
name = device.get_sysfile_value(path.join(device_path, 'name'))
base_name = device.get_sysfile_value(path.join(device_path, 'name'))
for sensor_kind in sensor_kinds:
i = 1
@@ -69,9 +70,17 @@ def discover_sensors(device, sensor_kinds):
while device.file_exists(input_path):
label_path = path.join(device_path, '{}{}_label'.format(sensor_kind, i))
if device.file_exists(label_path):
name += ' ' + device.get_sysfile_value(label_path)
sensors.append(HwmonSensor(device, sensor_kind, name, input_path))
sensors.append(HwmonSensor(device,
sensor_kind,
base_name,
device.get_sysfile_value(label_path),
input_path))
else:
sensors.append(HwmonSensor(device,
sensor_kind,
base_name,
"{}{}".format(sensor_kind, i),
input_path))
i += 1
input_path = path.join(device_path, '{}{}_input'.format(sensor_kind, i))
return sensors

@@ -27,12 +27,13 @@ import imp
import string
import threading
import signal
import subprocess
import pkgutil
import traceback
import logging
import random
import hashlib
import subprocess
from subprocess import CalledProcessError
from datetime import datetime, timedelta
from operator import mul, itemgetter
from StringIO import StringIO
@@ -81,6 +82,13 @@ class TimeoutError(Exception):
return '\n'.join([self.message, 'OUTPUT:', self.output or ''])
class CalledProcessErrorWithStderr(CalledProcessError):
def __init__(self, *args, **kwargs):
self.error = kwargs.pop("error")
super(CalledProcessErrorWithStderr, self).__init__(*args, **kwargs)
def check_output(command, timeout=None, ignore=None, **kwargs):
"""This is a version of subprocess.check_output that adds a timeout parameter to kill
the subprocess if it does not return within the specified time."""
@@ -120,7 +128,7 @@ def check_output(command, timeout=None, ignore=None, **kwargs):
if retcode == -9: # killed, assume due to timeout callback
raise TimeoutError(command, output='\n'.join([output, error]))
elif ignore != 'all' and retcode not in ignore:
raise subprocess.CalledProcessError(retcode, command, output='\n'.join([output, error]))
raise CalledProcessErrorWithStderr(retcode, command, output=output, error=error)
return output, error
@@ -815,3 +823,21 @@ def sha256(path, chunk=2048):
def urljoin(*parts):
return '/'.join(p.rstrip('/') for p in parts)
__memo_cache = {}
def memoized(func):
"""A decorator for memoizing functions and methods."""
func_id = repr(func)
def memoize_wrapper(*args, **kwargs):
id_string = func_id + ','.join([str(id(a)) for a in args])
id_string += ','.join('{}={}'.format(k, v)
for k, v in kwargs.iteritems())
if id_string not in __memo_cache:
__memo_cache[id_string] = func(*args, **kwargs)
return __memo_cache[id_string]
return memoize_wrapper
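The cache key above combines the `repr` of the function with the `id()` of each positional argument, so repeated calls with the same live objects hit the cache. A Python 3 rendering with a usage example (note: keying on `id()` is only safe while the argument objects are alive, since a freed id can be reused):

```python
_memo_cache = {}

def memoized(func):
    """A decorator for memoizing functions and methods."""
    func_id = repr(func)

    def memoize_wrapper(*args, **kwargs):
        id_string = func_id + ','.join(str(id(a)) for a in args)
        id_string += ','.join('{}={}'.format(k, v) for k, v in kwargs.items())
        if id_string not in _memo_cache:
            _memo_cache[id_string] = func(*args, **kwargs)
        return _memo_cache[id_string]
    return memoize_wrapper

calls = []

@memoized
def slow_add(a, b):
    calls.append((a, b))  # record real (non-cached) invocations
    return a + b
```

Calling `slow_add(3, 4)` twice returns 7 both times but runs the body only once; the second call is served from `_memo_cache`.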

wlauto/utils/power.py Normal file → Executable file

@@ -24,6 +24,7 @@ from collections import defaultdict
import argparse
from wlauto.utils.trace_cmd import TraceCmdTrace, TRACE_MARKER_START, TRACE_MARKER_STOP
from wlauto.exceptions import DeviceError
logger = logging.getLogger('power')
@@ -157,6 +158,8 @@ class PowerStateProcessor(object):
self.requested_states = defaultdict(lambda: -1) # cpu_id -> requested state
self.wait_for_start_marker = wait_for_start_marker
self._saw_start_marker = False
self._saw_stop_marker = False
self.exceptions = []
idle_state_domains = build_idle_domains(core_clusters,
num_states=num_idle_states,
@@ -173,9 +176,17 @@ class PowerStateProcessor(object):
def process(self, event_stream):
for event in event_stream:
next_state = self.update_power_state(event)
if self._saw_start_marker or not self.wait_for_start_marker:
yield next_state
try:
next_state = self.update_power_state(event)
if self._saw_start_marker or not self.wait_for_start_marker:
yield next_state
if self._saw_stop_marker:
break
except Exception as e: # pylint: disable=broad-except
self.exceptions.append(e)
else:
if self.wait_for_start_marker:
logger.warning("Did not see a STOP marker in the trace")
def update_power_state(self, event):
"""
@@ -191,7 +202,7 @@ class PowerStateProcessor(object):
if event.name == 'START':
self._saw_start_marker = True
elif event.name == 'STOP':
self._saw_start_marker = False
self._saw_stop_marker = True
else:
raise ValueError('Unexpected event type: {}'.format(event.kind))
return self.power_state.copy()
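Because an exception raised out of a generator terminates it for good, `process` traps errors per event and stores them on `self.exceptions` for reporting after the run. The pattern in isolation (Python 3, illustrative names):

```python
def resilient_map(events, step):
    """Apply step to each event, collecting failures instead of aborting."""
    exceptions = []

    def gen():
        for event in events:
            try:
                yield step(event)
            except Exception as e:  # keep the generator alive
                exceptions.append(e)
    return gen(), exceptions

# A failing middle event does not stop the stream:
results, errors = resilient_map([1, 0, 5], lambda e: 10 // e)
```

Here `list(results)` is `[10, 2]` and `errors` holds one `ZeroDivisionError`. Note the error list is only fully populated once the generator has been consumed, which is why the commit checks `ps_processor.exceptions` after the reporting loop rather than before it.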
@@ -515,6 +526,39 @@ class PowerStateStatsReport(object):
for s in stats])
class CpuUtilisationTimeline(object):
def __init__(self, filepath, core_names, max_freq_list):
self.filepath = filepath
self._wfh = open(filepath, 'w')
self.writer = csv.writer(self._wfh)
if core_names:
headers = ['ts'] + ['{} CPU{}'.format(c, i)
for i, c in enumerate(core_names)]
self.writer.writerow(headers)
self._max_freq_list = max_freq_list
def update(self, timestamp, core_states): # NOQA
row = [timestamp]
for core, [idle_state, frequency] in enumerate(core_states):
if idle_state == -1:
if frequency == UNKNOWN_FREQUENCY:
frequency = 0
elif idle_state is None:
frequency = 0
else:
frequency = 0
if core < len(self._max_freq_list):
frequency /= float(self._max_freq_list[core])
row.append(frequency)
else:
logger.warning('Unable to detect max frequency for this core. Cannot log utilisation value')
self.writer.writerow(row)
def report(self):
self._wfh.close()
def build_idle_domains(core_clusters, # NOQA
num_states,
first_cluster_state=None,
@@ -577,14 +621,29 @@ def build_idle_domains(core_clusters, # NOQA
def report_power_stats(trace_file, idle_state_names, core_names, core_clusters,
num_idle_states, first_cluster_state=sys.maxint,
first_system_state=sys.maxint, use_ratios=False,
timeline_csv_file=None, filter_trace=False):
# pylint: disable=too-many-locals
trace = TraceCmdTrace(filter_markers=filter_trace)
timeline_csv_file=None, cpu_utilisation=None,
max_freq_list=None, start_marker_handling='error'):
# pylint: disable=too-many-locals,too-many-branches
trace = TraceCmdTrace(trace_file,
filter_markers=False,
names=['cpu_idle', 'cpu_frequency', 'print'])
wait_for_start_marker = True
if start_marker_handling == "error" and not trace.has_start_marker:
raise DeviceError("Start marker was not found in the trace")
elif start_marker_handling == "try":
wait_for_start_marker = trace.has_start_marker
if not wait_for_start_marker:
logger.warning("Did not see a START marker in the trace, "
"state residency and parallelism statistics may be inaccurate.")
elif start_marker_handling == "ignore":
wait_for_start_marker = False
ps_processor = PowerStateProcessor(core_clusters,
num_idle_states=num_idle_states,
first_cluster_state=first_cluster_state,
first_system_state=first_system_state,
wait_for_start_marker=not filter_trace)
wait_for_start_marker=wait_for_start_marker)
reporters = [
ParallelStats(core_clusters, use_ratios),
PowerStateStats(core_names, idle_state_names, use_ratios)
@@ -592,8 +651,13 @@ def report_power_stats(trace_file, idle_state_names, core_names, core_clusters,
if timeline_csv_file:
reporters.append(PowerStateTimeline(timeline_csv_file,
core_names, idle_state_names))
if cpu_utilisation:
if max_freq_list:
reporters.append(CpuUtilisationTimeline(cpu_utilisation, core_names, max_freq_list))
else:
logger.warning('Maximum frequencies not found. Cannot normalise. Skipping CPU Utilisation Timeline')
event_stream = trace.parse(trace_file, names=['cpu_idle', 'cpu_frequency', 'print'])
event_stream = trace.parse()
transition_stream = stream_cpu_power_transitions(event_stream)
power_state_stream = ps_processor.process(transition_stream)
core_state_stream = gather_core_states(power_state_stream)
@@ -602,16 +666,21 @@ def report_power_stats(trace_file, idle_state_names, core_names, core_clusters,
for reporter in reporters:
reporter.update(timestamp, states)
if ps_processor.exceptions:
logger.warning('There were errors while processing trace:')
for e in ps_processor.exceptions:
logger.warning(str(e))
reports = []
for reporter in reporters:
report = reporter.report()
if report:
reports.append(report)
reports.append(report)
return reports
def main():
# pylint: disable=unbalanced-tuple-unpacking
logging.basicConfig(level=logging.INFO)
args = parse_arguments()
parallel_report, powerstate_report = report_power_stats(
trace_file=args.infile,
@@ -623,7 +692,9 @@ def main():
first_system_state=args.first_system_state,
use_ratios=args.ratios,
timeline_csv_file=args.timeline_file,
filter_trace=(not args.no_trace_filter),
cpu_utilisation=args.cpu_utilisation,
max_freq_list=args.max_freq_list,
start_marker_handling=args.start_marker_handling,
)
parallel_report.write(os.path.join(args.output_directory, 'parallel.csv'))
powerstate_report.write(os.path.join(args.output_directory, 'cpustate.csv'))
@@ -653,11 +724,6 @@ def parse_arguments(): # NOQA
help='''
Output directory where reports will be placed.
''')
parser.add_argument('-F', '--no-trace-filter', action='store_true', default=False,
help='''
Normally, only the trace between begin and end marker is used. This disables
the filtering so the entire trace file is considered.
''')
parser.add_argument('-c', '--core-names', action=SplitListAction,
help='''
Comma-separated list of core names for the device on which the trace
@@ -698,6 +764,30 @@ def parse_arguments(): # NOQA
A timeline of core power states will be written to the specified file in
CSV format.
''')
parser.add_argument('-u', '--cpu-utilisation', metavar='FILE',
help='''
A timeline of cpu(s) utilisation will be written to the specified file in
CSV format.
''')
parser.add_argument('-m', '--max-freq-list', action=SplitListAction, default=[],
help='''
Comma-separated list of core maximum frequencies for the device on which
the trace was collected.
Only required if --cpu-utilisation is set.
This is used to normalise the frequencies to obtain percentage utilisation.
''')
parser.add_argument('-M', '--start-marker-handling', metavar='HANDLING', default="try",
choices=["error", "try", "ignore"],
help='''
The trace-cmd instrument inserts a marker into the trace to indicate the beginning
of workload execution. In some cases, this marker may be missing in the final
output (e.g. due to trace buffer overrun). This parameter specifies how a missing
start marker will be handled:
ignore: The start marker will be ignored. All events in the trace will be used.
error: An error will be raised if the start marker is not found in the trace.
try: If the start marker is not found, all events in the trace will be used.
''')
args = parser.parse_args()

@@ -172,9 +172,8 @@ class SshShell(object):
logger.warning('Could not get exit code for "{}",\ngot: "{}"'.format(command, exit_code_text))
return output
except EOF:
logger.error('Dropped connection detected.')
self.connection_lost = True
raise
raise DeviceError('Connection dropped.')
def logout(self):
logger.debug('Logging out {}@{}'.format(self.username, self.host))

@@ -17,7 +17,7 @@ import re
import logging
from itertools import chain
from wlauto.utils.misc import isiterable
from wlauto.utils.misc import isiterable, memoized
from wlauto.utils.types import numeric
@@ -173,6 +173,24 @@ def regex_body_parser(regex, flags=0):
return regex_parser_func
def sched_switch_parser(event, text):
"""
Sched switch output may be presented in a couple of different formats. One is handled
by a regex. The other format can *almost* be handled by the default parser, if it
weren't for the ``==>`` that appears in the middle.
"""
if text.count('=') == 2: # old format
regex = re.compile(
r'(?P<prev_comm>\S.*):(?P<prev_pid>\d+) \[(?P<prev_prio>\d+)\] (?P<status>\S+)'
r' ==> '
r'(?P<next_comm>\S.*):(?P<next_pid>\d+) \[(?P<next_prio>\d+)\]'
)
parser_func = regex_body_parser(regex)
return parser_func(event, text)
else: # not exactly two "=" signs -- new format
return default_body_parser(event, text.replace('==>', ''))
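For reference, the two `sched_switch` body formats the dispatcher distinguishes look roughly like the samples below; the old format is the one matched by the regex above (sample lines are illustrative):

```python
import re

OLD_FORMAT = re.compile(
    r'(?P<prev_comm>\S.*):(?P<prev_pid>\d+) \[(?P<prev_prio>\d+)\] (?P<status>\S+)'
    r' ==> '
    r'(?P<next_comm>\S.*):(?P<next_pid>\d+) \[(?P<next_prio>\d+)\]'
)

old = 'sh:1234 [120] R ==> swapper/0:0 [120]'
new = ('prev_comm=sh prev_pid=1234 prev_prio=120 prev_state=R ==> '
       'next_comm=swapper/0 next_pid=0 next_prio=120')

# Exactly two "=" characters (both from the " ==> " arrow) marks the old
# format; the key=value format contributes many more.
m = OLD_FORMAT.match(old)
```

The new format is handed to the default `key=value` parser after stripping the `==>` arrow, exactly as in the `else` branch above.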
# Maps event onto the corresponding parser for its body text. A parser may be
# a callable with signature
#
@@ -182,11 +200,7 @@ def regex_body_parser(regex, flags=0):
# regex). In case of a string/regex, its named groups will be used to populate
# the event's attributes.
EVENT_PARSER_MAP = {
'sched_switch': re.compile(
r'(?P<prev_comm>\S.*):(?P<prev_pid>\d+) \[(?P<prev_prio>\d+)\] (?P<status>\S+)'
r' ==> '
r'(?P<next_comm>\S.*):(?P<next_pid>\d+) \[(?P<next_prio>\d+)\]'
),
'sched_switch': sched_switch_parser,
}
TRACE_EVENT_REGEX = re.compile(r'^\s+(?P<thread>\S+.*?\S+)\s+\[(?P<cpu_id>\d+)\]\s+(?P<ts>[\d.]+):\s+'
@@ -201,37 +215,38 @@ EMPTY_CPU_REGEX = re.compile(r'CPU \d+ is empty')
class TraceCmdTrace(object):
def __init__(self, filter_markers=True):
self.filter_markers = filter_markers
@property
@memoized
def has_start_marker(self):
with open(self.file_path) as fh:
for line in fh:
if TRACE_MARKER_START in line:
return True
return False
def parse(self, filepath, names=None, check_for_markers=True): # pylint: disable=too-many-branches,too-many-locals
def __init__(self, file_path, names=None, filter_markers=True):
self.filter_markers = filter_markers
self.file_path = file_path
self.names = names or []
def parse(self): # pylint: disable=too-many-branches,too-many-locals
"""
This is a generator for the trace event stream.
"""
inside_maked_region = False
filters = [re.compile('^{}$'.format(n)) for n in names or []]
if check_for_markers:
with open(filepath) as fh:
for line in fh:
if TRACE_MARKER_START in line:
break
else:
# maker not found force filtering by marker to False
self.filter_markers = False
with open(filepath) as fh:
inside_marked_region = False
filters = [re.compile('^{}$'.format(n)) for n in self.names or []]
with open(self.file_path) as fh:
for line in fh:
# if processing trace markers, skip marker lines as well as all
# lines outside marked region
if self.filter_markers:
if not inside_maked_region:
if not inside_marked_region:
if TRACE_MARKER_START in line:
inside_maked_region = True
inside_marked_region = True
continue
elif TRACE_MARKER_STOP in line:
inside_maked_region = False
continue
break
match = DROPPED_EVENTS_REGEX.search(line)
if match:
@@ -268,4 +283,7 @@ class TraceCmdTrace(object):
if isinstance(body_parser, basestring) or isinstance(body_parser, re._pattern_type): # pylint: disable=protected-access
body_parser = regex_body_parser(body_parser)
yield TraceCmdEvent(parser=body_parser, **match.groupdict())
else:
if self.filter_markers and inside_marked_region:
logger.warning('Did not encounter a stop marker in trace')

@@ -111,7 +111,6 @@ def list_of_numbers(value):
"""
Value must be iterable. All elements will be converted to numbers (either ``ints`` or
``float``\ s depending on the elements).
"""
if not isiterable(value):
raise ValueError(value)
@@ -170,7 +169,7 @@ def list_or_string(value):
return [value]
else:
try:
return list(value)
return map(str, value)
except ValueError:
return [str(value)]
@@ -300,3 +299,32 @@ class arguments(list):
def __str__(self):
return ' '.join(self)
class range_dict(dict):
"""
This dict allows you to specify mappings with a range.
If a key is not in the dict, the lookup falls back to the
nearest lower key and returns its value. E.g:
If:
a[5] = "Hello"
a[10] = "There"
Then:
a[2]  # raises KeyError (no key at or below 2)
a[7] == "Hello"
a[999] == "There"
"""
def __getitem__(self, i):
key = int(i)
while key not in self and key > 0:
key -= 1
if key <= 0:
raise KeyError(i)
return dict.__getitem__(self, key)
def __setitem__(self, i, v):
i = int(i)
super(range_dict, self).__setitem__(i, v)
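The cameracapture change in this commit uses this class to map Android API levels onto camera packages; a self-contained Python 3 rendition with that usage:

```python
class range_dict(dict):
    """Lookups fall back to the nearest lower key, as described above."""
    def __getitem__(self, i):
        key = int(i)
        while key not in self and key > 0:
            key -= 1
        if key <= 0:
            raise KeyError(i)
        return dict.__getitem__(self, key)

    def __setitem__(self, i, v):
        super(range_dict, self).__setitem__(int(i), v)

api_packages = range_dict()
api_packages[1] = 'com.google.android.gallery3d'
api_packages[23] = 'com.google.android.GoogleCamera'
```

So any API level from 1 to 22 resolves to the AOSP gallery package, and 23 upwards resolves to the Google Camera package; a lookup below the lowest key raises `KeyError`.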

@@ -16,7 +16,7 @@
import re
import time
import logging
from distutils.version import LooseVersion
from wlauto.utils.serial_port import TIMEOUT
@@ -32,11 +32,14 @@ class UbootMenu(object):
option_regex = re.compile(r'^\[(\d+)\]\s+([^\r]+)\r\n', re.M)
prompt_regex = re.compile(r'^([^\r\n]+):\s*', re.M)
invalid_regex = re.compile(r'Invalid input \(max (\d+)\)', re.M)
uboot_regex = re.compile(r"U-Boot\s(\d\S*)\s")
load_delay = 1 # seconds
default_timeout = 60 # seconds
fixed_uboot_version = '2016.03' # The version of U-Boot that fixed newlines
def __init__(self, conn, start_prompt='Hit any key to stop autoboot'):
def __init__(self, conn,
start_prompt='Hit any key to stop autoboot'):
"""
:param conn: A serial connection as returned by ``pexect.spawn()``.
:param prompt: U-Boot menu prompt
@@ -44,7 +47,7 @@ class UbootMenu(object):
"""
self.conn = conn
self.conn.crlf = '\n\r' # TODO: this has *got* to be a bug in U-Boot...
self.conn.crlf = None
self.start_prompt = start_prompt
self.options = {}
self.prompt = None
@@ -56,6 +59,7 @@ class UbootMenu(object):
"""
self.conn.expect(self.start_prompt, timeout)
self._set_line_separator()
self.conn.sendline('')
time.sleep(self.load_delay)
self.conn.readline() # garbage
@@ -114,3 +118,10 @@ class UbootMenu(object):
pass
self.conn.buffer = ''
def _set_line_separator(self):
uboot_text = self.conn.before
uboot_ver = self.uboot_regex.findall(uboot_text)
if uboot_ver and LooseVersion(uboot_ver[0]) < LooseVersion(self.fixed_uboot_version):
self.conn.crlf = "\n\r"
else:
self.conn.crlf = "\r\n"
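`LooseVersion('2015.10') < LooseVersion('2016.03')` is the comparison gating the separator choice above. Since `distutils` has since been removed from the standard library, the same check can be sketched with a plain numeric-tuple comparison (names here are illustrative, and purely numeric dotted versions are assumed):

```python
FIXED_UBOOT_VERSION = '2016.03'  # first release with the corrected newlines

def version_tuple(version):
    # '2016.03' -> (2016, 3); assumes purely numeric dotted versions
    return tuple(int(p) for p in version.split('.'))

def uboot_crlf(detected_version):
    """Return the line separator to use for a given U-Boot version."""
    if version_tuple(detected_version) < version_tuple(FIXED_UBOOT_VERSION):
        return '\n\r'
    return '\r\n'
```

For example, a banner reporting `U-Boot 2015.10` gets the inverted `'\n\r'` separator, while `2016.03` and later get the conventional `'\r\n'`.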

@@ -116,6 +116,7 @@ class ApplaunchWorkload(Workload):
package=APP_CONFIG[self.app]['package'],
activity=APP_CONFIG[self.app]['activity'],
options=APP_CONFIG[self.app]['options'],
busybox=self.device.busybox,
))
self.device_script_file = self.device.install(self.host_script_file)
if self.set_launcher_affinity:

@@ -42,7 +42,7 @@ $STOP_COMMAND
echo {{ io_scheduler }} > /sys/block/mmcblk0/queue/scheduler
{% endif %}
for i in $(busybox seq 1 {{ iterations }})
for i in $({{ busybox }} seq 1 {{ iterations }})
do
{% for sensor in sensors %}
{{ sensor.label }}="${{ sensor.label }} `$GET_{{ sensor.label }}`"
@@ -52,9 +52,9 @@ do
# Drop caches to get a cold start.
sync; echo 3 > /proc/sys/vm/drop_caches
# Run IO stress during App launch.
busybox dd if=/dev/zero of=write.img bs=1048576 count=2000 conv=fsync > dd_write.txt 2>&1 &
{{ busybox }} dd if=/dev/zero of=write.img bs=1048576 count=2000 conv=fsync > dd_write.txt 2>&1 &
io_write=$!
busybox dd if=/dev/block/mmcblk0 of=/dev/null bs=1048576 > dd_read.txt 2>&1 &
{{ busybox }} dd if=/dev/block/mmcblk0 of=/dev/null bs=1048576 > dd_read.txt 2>&1 &
io_read=$!
{% endif %}
@@ -64,7 +64,7 @@ do
{{ sensor.label }}="${{ sensor.label }} `$GET_{{ sensor.label }}`"
{% endfor %}
TIME=`busybox awk '{if($1~"TotalTime") print $2}' $TEMP_FILE`
TIME=`{{ busybox }} awk '{if($1~"TotalTime") print $2}' $TEMP_FILE`
TIME_RESULT="$TIME_RESULT $TIME"
{% if cleanup %}
rm $TEMP_FILE

@@ -0,0 +1,15 @@
/*
* Copyright (c) 2005-2010 Frank Denis <j at pureftpd.org>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/

@@ -0,0 +1,75 @@
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=attribute-defined-outside-init
import os
from wlauto import Workload, Parameter, Executable
class Blogbench(Workload):
name = 'blogbench'
description = """
Blogbench is a portable filesystem benchmark that tries to reproduce the
load of a real-world busy file server.
Blogbench stresses the filesystem with multiple threads performing random
reads, writes and rewrites in order to get a realistic idea of the
scalability and the concurrency a system can handle.
Source code is available from:
https://download.pureftpd.org/pub/blogbench/
"""
parameters = [
Parameter('iterations', kind=int, default=30,
description='The number of iterations to run')
]
def setup(self, context):
host_binary = context.resolver.get(Executable(self, self.device.abi,
'blogbench'))
self.binary = self.device.install_if_needed(host_binary)
# Total test duration equals iterations * interval.
# The default interval is 10 seconds, plus 5 here as a buffer.
self.timeout = self.iterations * 15
# An empty and writable directory is needed.
self.directory = self.device.path.join(self.device.working_directory,
'blogbench')
self.device.execute('rm -rf {}'.format(self.directory), timeout=300)
self.device.execute('mkdir -p {}'.format(self.directory))
self.results = self.device.path.join(self.device.working_directory,
'blogbench.output')
self.command = ('{} --iterations {} --directory {} > {}'
.format(self.binary, self.iterations, self.directory,
self.results))
def run(self, context):
self.output = self.device.execute(self.command, timeout=self.timeout)
def update_result(self, context):
host_file = os.path.join(context.output_directory, 'blogbench.output')
self.device.pull_file(self.results, host_file)
with open(host_file, 'r') as blogbench_output:
for line in blogbench_output:
if any('Final score for ' + x in line
for x in ['writes', 'reads']):
line = line.split(':')
metric = line[0].lower().strip().replace(' ', '_')
score = int(line[1].strip())
context.result.add_metric(metric, score, 'blogs')
def finalize(self, context):
self.device.execute('rm -rf {}'.format(self.directory), timeout=300)
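The scoring loop in `update_result` above keys off the two "Final score for ..." lines blogbench prints. A hypothetical stand-alone version of the same parsing, for lines already read from the output file:

```python
def parse_blogbench(lines):
    """Pull the final read/write scores out of blogbench output lines."""
    metrics = {}
    for line in lines:
        if any('Final score for ' + x in line for x in ['writes', 'reads']):
            name, value = line.split(':')
            # e.g. 'Final score for writes' -> 'final_score_for_writes'
            metrics[name.lower().strip().replace(' ', '_')] = int(value.strip())
    return metrics
```

Non-matching lines are ignored, so the function can be fed the whole output file verbatim.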

Binary file not shown.

Binary file not shown.

@@ -16,6 +16,7 @@
# pylint: disable=E1101
from wlauto import UiAutomatorWorkload, Parameter
from wlauto.utils.types import range_dict
class Cameracapture(UiAutomatorWorkload):
@@ -28,6 +29,10 @@ class Cameracapture(UiAutomatorWorkload):
package = 'com.google.android.gallery3d'
activity = 'com.android.camera.CameraActivity'
api_packages = range_dict()
api_packages[1] = 'com.google.android.gallery3d'
api_packages[23] = 'com.google.android.GoogleCamera'
parameters = [
Parameter('no_of_captures', kind=int, default=5,
description='Number of photos to be taken.'),
@@ -35,17 +40,20 @@ class Cameracapture(UiAutomatorWorkload):
description='Time, in seconds, between two consecutive camera clicks.'),
]
def __init__(self, device, **kwargs):
super(Cameracapture, self).__init__(device, **kwargs)
def initialize(self, context):
api = self.device.get_sdk_version()
self.uiauto_params['no_of_captures'] = self.no_of_captures
self.uiauto_params['time_between_captures'] = self.time_between_captures
self.uiauto_params['api_level'] = api
self.package = self.api_packages[api]
version = self.device.get_installed_package_version(self.package)
version = version.replace(' ', '_')
self.uiauto_params['version'] = version
def setup(self, context):
super(Cameracapture, self).setup(context)
self.device.execute('am start -n {}/{}'.format(self.package, self.activity))
def update_result(self, context):
pass
def teardown(self, context):
self.device.execute('am force-stop {}'.format(self.package))
super(Cameracapture, self).teardown(context)

@@ -11,4 +11,4 @@
#proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt
# Project target.
target=android-17
target=android-18

@@ -16,6 +16,8 @@
package com.arm.wlauto.uiauto.cameracapture;
import java.util.concurrent.TimeUnit;
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
@@ -32,37 +34,98 @@ import com.arm.wlauto.uiauto.BaseUiAutomation;
public class UiAutomation extends BaseUiAutomation {
public static String TAG = "cameracapture";
int timeDurationBetweenEachCapture = 0;
int sleepTime = 2;
int iterations = 0;
int api = 0;
Integer[] version = {0,0,0};
public void runUiAutomation() throws Exception {
int timeDurationBetweenEachCapture = 0;
int sleepTime = 2;
Bundle parameters = getParams();
String noOfCaptures = "";
int iterations = 0;
Bundle parameters = getParams();
if (parameters.size() > 0) {
iterations = Integer.parseInt(parameters
.getString("no_of_captures"));
timeDurationBetweenEachCapture = Integer.parseInt(parameters
.getString("time_between_captures"));
api = Integer.parseInt(parameters.getString("api_level"));
String versionString = parameters.getString("version");
version = splitVersion(versionString);
}
// Pre Android M UI
if(api < 23)
takePhotosAosp();
else
{
if(compareVersions(version, new Integer[]{3,2,0}) >= 0)
takePhotosGoogleV3_2();
else
takePhotosGoogle();
}
}
if (parameters.size() > 0) {
iterations = Integer.parseInt(parameters
.getString("no_of_captures"));
timeDurationBetweenEachCapture = Integer.parseInt(parameters
.getString("time_between_captures"));
}
// switch to camera capture mode
UiObject clickModes = new UiObject(new UiSelector().descriptionMatches("Camera, video or panorama selector"));
clickModes.click();
sleep(sleepTime);
private void takePhotosAosp() throws Exception
{
// switch to camera capture mode
UiObject clickModes = new UiObject(new UiSelector().descriptionMatches("Camera, video or panorama selector"));
clickModes.click();
sleep(sleepTime);
UiObject changeModeToCapture = new UiObject(new UiSelector().descriptionMatches("Switch to photo"));
UiObject changeModeToCapture = new UiObject(new UiSelector().descriptionMatches("Switch to photo"));
changeModeToCapture.click();
sleep(sleepTime);
changeModeToCapture.click();
sleep(sleepTime);
// click to capture photos
UiObject clickCaptureButton = new UiObject(new UiSelector().descriptionMatches("Shutter button"));
// click to capture photos
UiObject clickCaptureButton = new UiObject(new UiSelector().descriptionMatches("Shutter button"));
for (int i = 0; i < iterations; i++) {
clickCaptureButton.longClick();
sleep(timeDurationBetweenEachCapture);
}
getUiDevice().pressBack();
for (int i = 0; i < iterations; i++) {
clickCaptureButton.longClick();
sleep(timeDurationBetweenEachCapture);
}
getUiDevice().pressBack();
}
private void takePhotosGoogleV3_2() throws Exception
{
// clear tutorial if needed
UiObject tutorialText = new UiObject(new UiSelector().resourceId("com.android.camera2:id/photoVideoSwipeTutorialText"));
if (tutorialText.waitForExists(TimeUnit.SECONDS.toMillis(5))) {
tutorialText.swipeLeft(5);
sleep(sleepTime);
tutorialText.swipeRight(5);
}
// ensure we are in photo mode
UiObject viewFinder = new UiObject(new UiSelector().resourceId("com.android.camera2:id/viewfinder_frame"));
viewFinder.swipeRight(5);
// click to capture photos
UiObject clickCaptureButton = new UiObject(new UiSelector().resourceId("com.android.camera2:id/photo_video_button"));
for (int i = 0; i < iterations; i++) {
clickCaptureButton.longClick();
sleep(timeDurationBetweenEachCapture);
}
}
private void takePhotosGoogle() throws Exception
{
// open mode select menu
UiObject swipeScreen = new UiObject(new UiSelector().resourceId("com.android.camera2:id/mode_options_overlay"));
swipeScreen.swipeRight(5);
// switch to video mode
UiObject changeModeToCapture = new UiObject(new UiSelector().descriptionMatches("Switch to Camera Mode"));
changeModeToCapture.click();
sleep(sleepTime);
// click to capture photos
UiObject clickCaptureButton = new UiObject(new UiSelector().descriptionMatches("Shutter"));
for (int i = 0; i < iterations; i++) {
clickCaptureButton.longClick();
sleep(timeDurationBetweenEachCapture);
}
}
}

@@ -14,6 +14,7 @@
#
from wlauto import UiAutomatorWorkload, Parameter
from wlauto.utils.types import range_dict
class Camerarecord(UiAutomatorWorkload):
@@ -28,15 +29,25 @@ class Camerarecord(UiAutomatorWorkload):
activity = 'com.android.camera.CameraActivity'
run_timeout = 0
api_packages = range_dict()
api_packages[1] = 'com.google.android.gallery3d'
api_packages[23] = 'com.google.android.GoogleCamera'
parameters = [
Parameter('recording_time', kind=int, default=60,
description='The video recording time in seconds.'),
]
def __init__(self, device, **kwargs):
super(Camerarecord, self).__init__(device)
def initialize(self, context):
self.uiauto_params['recording_time'] = self.recording_time # pylint: disable=E1101
self.uiauto_params['version'] = "button"
self.run_timeout = 3 * self.uiauto_params['recording_time']
api = self.device.get_sdk_version()
self.uiauto_params['api_level'] = api
self.package = self.api_packages[api]
version = self.device.get_installed_package_version(self.package)
version = version.replace(' ', '_')
self.uiauto_params['version'] = version
def setup(self, context):
super(Camerarecord, self).setup(context)

@@ -11,4 +11,4 @@
#proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt
# Project target.
target=android-17
target=android-18

@@ -16,6 +16,8 @@
package com.arm.wlauto.uiauto.camerarecord;
import java.util.concurrent.TimeUnit;
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
@@ -32,18 +34,36 @@ import com.arm.wlauto.uiauto.BaseUiAutomation;
public class UiAutomation extends BaseUiAutomation {
public static String TAG = "camerarecord";
int timeToRecord = 0;
int timeout = 4;
int sleepTime = 2;
int recordingTime = 0;
int api = 0;
Integer[] version = {0,0,0};
public void runUiAutomation() throws Exception {
Bundle parameters = getParams();
int timeToRecord = 0;
int timeout = 4;
int sleepTime = 2;
int recordingTime = 0;
if (parameters.size() > 0) {
recordingTime = Integer.parseInt(parameters
.getString("recording_time"));
api = Integer.parseInt(parameters.getString("api_level"));
String versionString = parameters.getString("version");
version = splitVersion(versionString);
}
//Pre Android M UI
if (api < 23)
recordVideoAosp();
else
{
if(compareVersions(version, new Integer[]{3,2,0}) >= 0)
recordVideoGoogleV3_2();
else
recordVideoGoogle();
}
}
void recordVideoAosp() throws Exception {
// switch to camera capture mode
UiObject clickModes = new UiObject(new UiSelector().descriptionMatches("Camera, video or panorama selector"));
clickModes.click();
@@ -62,4 +82,44 @@ public class UiAutomation extends BaseUiAutomation {
getUiDevice().pressBack();
}
void recordVideoGoogleV3_2() throws Exception {
// clear tutorial if needed
UiObject tutorialText = new UiObject(new UiSelector().resourceId("com.android.camera2:id/photoVideoSwipeTutorialText"));
if (tutorialText.waitForExists(TimeUnit.SECONDS.toMillis(5))) {
tutorialText.swipeLeft(5);
sleep(sleepTime);
tutorialText.swipeRight(5);
}
// ensure we are in video mode
UiObject viewFinder = new UiObject(new UiSelector().resourceId("com.android.camera2:id/viewfinder_frame"));
viewFinder.swipeLeft(5);
// click to capture photos
UiObject clickCaptureButton = new UiObject(new UiSelector().resourceId("com.android.camera2:id/photo_video_button"));
clickCaptureButton.longClick();
sleep(recordingTime);
// stop video recording
clickCaptureButton.longClick();
}
void recordVideoGoogle() throws Exception {
// Open mode select menu
UiObject swipeScreen = new UiObject(new UiSelector().resourceId("com.android.camera2:id/mode_options_overlay"));
swipeScreen.swipeRight(5);
// Switch to video mode
UiObject changeModeToCapture = new UiObject(new UiSelector().descriptionMatches("Switch to Video Camera"));
changeModeToCapture.click();
sleep(sleepTime);
UiObject clickRecordingButton = new UiObject(new UiSelector().descriptionMatches("Shutter"));
clickRecordingButton.longClick();
sleep(recordingTime);
// Stop video recording
clickRecordingButton.longClick();
}
}

@@ -18,7 +18,7 @@
import os
import re
from wlauto import Workload, Parameter
from wlauto import Workload, Parameter, Executable
from wlauto.exceptions import ConfigError
@@ -67,7 +67,7 @@ class Dhrystone(Workload):
]
def setup(self, context):
host_exe = os.path.join(this_dir, 'dhrystone')
host_exe = context.resolver.get(Executable(self, self.device.abi, 'dhrystone'))
self.device_exe = self.device.install(host_exe)
execution_mode = '-l {}'.format(self.mloops) if self.mloops else '-r {}'.format(self.duration)
if self.taskset_mask:
@@ -127,4 +127,3 @@ class Dhrystone(Workload):
raise ConfigError('mloops and duration cannot be both specified at the same time for dhrystone.')
if not self.mloops and not self.duration: # pylint: disable=E0203
self.mloops = self.default_mloops

Binary file not shown.

@@ -37,9 +37,9 @@ class Facebook(AndroidUiAutoBenchmark):
Find friends.
Update the facebook status
[NOTE: This workload starts disableUpdate workload as a part of setup to
disable online updates, which helps to tackle problem of uncertain
behavier during facebook workload run.]
.. note:: This workload starts disableUpdate workload as a part of setup to
disable online updates, which helps to tackle the problem of uncertain
behaviour during the facebook workload run.
"""
package = 'com.facebook.katana'
@@ -79,4 +79,3 @@ class Facebook(AndroidUiAutoBenchmark):
def teardown(self, context):
pass

@@ -29,6 +29,10 @@ from wlauto.exceptions import WorkloadError
DELAY = 2
OLD_RESULT_START_REGEX = re.compile(r'I/TfwActivity\s*\(\s*\d+\):\s+\S+\s+result: {')
NEW_RESULT_START_REGEX = re.compile(r'[\d\s:.-]+I\sTfwActivity(\s*\(\s*\d+\))?:\s+\S+\s+result: {')
OLD_PREAMBLE_REGEX = re.compile(r'I/TfwActivity\s*\(\s*\d+\):\s+')
NEW_PREAMBLE_REGEX = re.compile(r'[\d\s:.-]+I\sTfwActivity(\s*\(\s*\d+\))?:')
class GlbCorp(ApkWorkload):
@@ -44,8 +48,8 @@ class GlbCorp(ApkWorkload):
package = 'net.kishonti.gfxbench'
activity = 'net.kishonti.benchui.TestActivity'
result_start_regex = re.compile(r'I/TfwActivity\s*\(\s*\d+\):\s+\S+\s+result: {')
preamble_regex = re.compile(r'I/TfwActivity\s*\(\s*\d+\):\s+')
result_start_regex = None
preamble_regex = None
valid_test_ids = [
'gl_alu',
@@ -104,7 +108,7 @@ class GlbCorp(ApkWorkload):
self.monitor = GlbRunMonitor(self.device)
self.monitor.start()
def start_activity(self):
def launch_package(self):
# Unlike with most other APK workloads, we're invoking the use case
# directly by starting the activity with appropriate parameters on the
# command line during execution, so we don't need to start the activity
@@ -131,7 +135,14 @@ class GlbCorp(ApkWorkload):
line = fh.next()
result_lines = []
while True:
if self.result_start_regex.search(line):
if OLD_RESULT_START_REGEX.search(line):
self.preamble_regex = OLD_PREAMBLE_REGEX
self.result_start_regex = OLD_RESULT_START_REGEX
elif NEW_RESULT_START_REGEX.search(line):
self.preamble_regex = NEW_PREAMBLE_REGEX
self.result_start_regex = NEW_RESULT_START_REGEX
if self.result_start_regex and self.result_start_regex.search(line):
result_lines.append('{')
line = fh.next()
while self.preamble_regex.search(line):
@@ -177,7 +188,8 @@ class GlbCorp(ApkWorkload):
class GlbRunMonitor(threading.Thread):
regex = re.compile(r'I/Runner\s+\(\s*\d+\): finished:')
old_regex = re.compile(r'I/Runner\s+\(\s*\d+\): finished:')
new_regex = re.compile(r'I Runner\s*:\s*finished:')
def __init__(self, device):
super(GlbRunMonitor, self).__init__()
@@ -202,7 +214,7 @@ class GlbRunMonitor(threading.Thread):
ready, _, _ = select.select([proc.stdout, proc.stderr], [], [], 2)
if ready:
line = ready[0].readline()
if self.regex.search(line):
if self.new_regex.search(line) or self.old_regex.search(line):
self.run_ended.set()
def stop(self):
@@ -212,4 +224,3 @@ class GlbRunMonitor(threading.Thread):
def wait_for_run_end(self, timeout):
self.run_ended.wait(timeout)
self.run_ended.clear()

@@ -1,4 +1,4 @@
# Copyright 2015 ARM Limited
# Copyright 2015-2016 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -107,19 +107,15 @@ class lmbench(Workload):
def run(self, context):
self.output = []
for time in xrange(self.times):
for _ in xrange(self.times):
for command in self.commands:
self.output.append("Output for time #{}, {}: ".format(time + 1, command))
self.output.append(self.device.execute(command, timeout=self.run_timeout, check_exit_code=False))
self.device.execute(command, timeout=self.run_timeout)
def update_result(self, context):
for output in self.output:
self.logger.debug(output)
outfile = os.path.join(context.output_directory, 'lmbench.output')
with open(outfile, 'w') as wfh:
for output in self.output:
wfh.write(output)
context.add_artifact('lmbench', 'lmbench.output', 'data')
host_file = os.path.join(context.output_directory, 'lmbench.output')
device_file = self.device.path.join(self.device.working_directory, 'lmbench.output')
self.device.pull_file(device_file, host_file)
context.add_artifact('lmbench', host_file, 'data')
def teardown(self, context):
self.device.uninstall_executable(self.test)
@@ -128,22 +124,28 @@ class lmbench(Workload):
# Test setup routines
#
def _setup_lat_mem_rd(self):
device_file = self.device.path.join(self.device.working_directory, 'lmbench.output')
self.device.execute('rm -f {}'.format(device_file))
command_stub = self._setup_common()
if self.thrash:
command_stub = command_stub + '-t '
command_stub = '{} -t'.format(command_stub)
for size in self.size:
command = command_stub + size + ' '
command = '{} {}'.format(command_stub, size)
for stride in self.stride:
self.commands.append(command + str(stride))
self.commands.append('{} {} >> {} 2>&1'.format(command, stride, device_file))
def _setup_bw_mem(self):
device_file = self.device.path.join(self.device.working_directory, 'lmbench.output')
self.device.execute('rm -f {}'.format(device_file))
command_stub = self._setup_common()
for size in self.size:
command = command_stub + size + ' '
for what in self.mem_category:
self.commands.append(command + what)
command = '{} {}'.format(command_stub, size)
for category in self.mem_category:
self.commands.append('{} {} >> {} 2>&1'.format(command, category, device_file))
def _setup_common(self):
parts = []

@@ -32,7 +32,7 @@ class Recentfling(Workload):
For this workload to work, ``recentfling.sh`` and ``defs.sh`` must be placed
in ``~/.workload_automation/dependencies/recentfling/``. These can be found
in the [AOSP Git repository](https://android.googlesource.com/platform/system/extras/+/master/tests/).
in the `AOSP Git repository <https://android.googlesource.com/platform/system/extras/+/master/tests/>`_.
To change the apps that are opened at the start of the workload you will need
to modify the ``defs.sh`` file. You will need to add your app to ``dfltAppList``

@@ -0,0 +1,339 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.

@@ -0,0 +1,107 @@
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=attribute-defined-outside-init
import os
import yaml
from wlauto import Workload, Parameter, Executable
from wlauto.exceptions import WorkloadError, ConfigError
class StressNg(Workload):
name = 'stress_ng'
description = """
stress-ng will stress test a computer system in various selectable ways. It
was designed to exercise various physical subsystems of a computer as well
as the various operating system kernel interfaces.
stress-ng can also measure test throughput rates; this can be useful to
observe performance changes across different operating system releases or
types of hardware. However, it has never been intended to be used as a
precise benchmark test suite, so do NOT use it in this manner.
The official website for stress-ng is at:
http://kernel.ubuntu.com/~cking/stress-ng/
Source code is available from:
http://kernel.ubuntu.com/git/cking/stress-ng.git/
"""
parameters = [
Parameter('stressor', kind=str, default='cpu',
allowed_values=['cpu', 'io', 'fork', 'switch', 'vm', 'pipe',
'yield', 'hdd', 'cache', 'sock', 'fallocate',
'flock', 'affinity', 'timer', 'dentry',
'urandom', 'sem', 'open', 'sigq', 'poll'],
description='Stress test case name. The cases listed in '
'allowed values come from the stable release '
'version 0.01.32. The binary included here was '
'compiled from dev version 0.06.01. Refer to the '
'man page for the definition of each stressor.'),
Parameter('threads', kind=int, default=0,
description='The number of workers to run. Specifying a '
'negative or zero value will select the number '
'of online processors.'),
Parameter('duration', kind=int, default=60,
description='Timeout for test execution in seconds')
]
def initialize(self, context):
if not self.device.is_rooted:
raise WorkloadError('stress-ng requires root permissions to run')
def validate(self):
if self.stressor == 'vm' and self.duration < 60:
raise ConfigError('vm test duration needs to be >= 60s.')
def setup(self, context):
host_binary = context.resolver.get(Executable(self, self.device.abi,
'stress-ng'))
self.binary = self.device.install_if_needed(host_binary)
self.log = self.device.path.join(self.device.working_directory,
'stress_ng_output.txt')
self.results = self.device.path.join(self.device.working_directory,
'stress_ng_results.yaml')
self.command = ('{} --{} {} --timeout {}s --log-file {} --yaml {} '
'--metrics-brief --verbose'
.format(self.binary, self.stressor, self.threads,
self.duration, self.log, self.results))
self.timeout = self.duration + 10
def run(self, context):
self.output = self.device.execute(self.command, timeout=self.timeout,
as_root=True)
def update_result(self, context):
host_file_log = os.path.join(context.output_directory,
'stress_ng_output.txt')
host_file_results = os.path.join(context.output_directory,
'stress_ng_results.yaml')
self.device.pull_file(self.log, host_file_log)
self.device.pull_file(self.results, host_file_results)
with open(host_file_results, 'r') as stress_ng_results:
results = yaml.safe_load(stress_ng_results)
try:
metric = results['metrics'][0]['stressor']
throughput = results['metrics'][0]['bogo-ops']
context.result.add_metric(metric, throughput, 'ops')
# For some stressors like vm, if the test duration is too short, stress_ng
# may not be able to produce a test throughput rate.
except TypeError:
self.logger.warning('{} test throughput rate not found. '
'Please increase test duration and retry.'
.format(self.stressor))
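The `update_result` method above reads the YAML file stress-ng wrote on the device and tolerates a missing metrics block via `TypeError`. A minimal standalone sketch of that extraction step — the sample dict stands in for the parsed YAML, since the real workload pulls `stress_ng_results.yaml` off the device first:

```python
def extract_metric(results):
    """Return (stressor_name, bogo_ops) from parsed stress-ng YAML,
    or None if stress-ng produced no throughput rate (e.g. a short
    'vm' run emits a null metrics block)."""
    try:
        entry = results['metrics'][0]
        return entry['stressor'], entry['bogo-ops']
    except (TypeError, KeyError, IndexError):
        return None

# Stand-ins for yaml.safe_load() output:
print(extract_metric({'metrics': [{'stressor': 'cpu', 'bogo-ops': 12345}]}))
# ('cpu', 12345)
print(extract_metric({'metrics': None}))
# None
```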

Binary file not shown.

Binary file not shown.

@@ -147,7 +147,7 @@ class Sysbench(Workload):
param_strings.append('--file-test-mode={}'.format(self.file_test_mode))
sysbench_command = '{} {} {} run'.format(self.on_device_binary, ' '.join(param_strings), self.cmd_params)
if self.taskset_mask:
taskset_string = 'busybox taskset 0x{:x} '.format(self.taskset_mask)
taskset_string = '{} taskset 0x{:x} '.format(self.device.busybox, self.taskset_mask)
else:
taskset_string = ''
return 'cd {} && {} {} > sysbench_result.txt'.format(self.device.working_directory, taskset_string, sysbench_command)
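The sysbench fix above replaces the hard-coded `busybox` with the device-specific `self.device.busybox` path when building the `taskset` prefix. A small sketch of the mask formatting — the busybox path below is illustrative, not a fixed location:

```python
def taskset_prefix(busybox_path, mask):
    """Build the 'taskset 0x<mask> ' prefix used to pin sysbench to
    specific CPUs; mask is a CPU-affinity bitmask (bit n = CPU n)."""
    return '{} taskset 0x{:x} '.format(busybox_path, mask)

# Pin to CPUs 0-3 (mask 0b1111 == 0xf), assuming this busybox path:
print(taskset_prefix('/data/local/tmp/busybox', 0b1111))
# /data/local/tmp/busybox taskset 0xf
```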

@@ -15,8 +15,12 @@
import os
import logging
import json
import re
from HTMLParser import HTMLParser
from collections import defaultdict, OrderedDict
from distutils.version import StrictVersion
from wlauto import AndroidUiAutoBenchmark, Parameter
from wlauto.utils.types import list_of_strs, numeric
@@ -46,6 +50,7 @@ class Vellamo(AndroidUiAutoBenchmark):
benchmark_types = {
'2.0.3': ['html5', 'metal'],
'3.0': ['Browser', 'Metal', 'Multi'],
'3.2.4': ['Browser', 'Metal', 'Multi'],
}
valid_versions = benchmark_types.keys()
summary_metrics = None
@@ -66,11 +71,10 @@ class Vellamo(AndroidUiAutoBenchmark):
def __init__(self, device, **kwargs):
super(Vellamo, self).__init__(device, **kwargs)
if self.version == '2.0.3':
self.activity = 'com.quicinc.vellamo.VellamoActivity'
if self.version == '3.0':
if StrictVersion(self.version) >= StrictVersion("3.0.0"):
self.activity = 'com.quicinc.vellamo.main.MainActivity'
self.summary_metrics = self.benchmark_types[self.version]
if StrictVersion(self.version) == StrictVersion('2.0.3'):
self.activity = 'com.quicinc.vellamo.VellamoActivity'
def setup(self, context):
self.uiauto_params['version'] = self.version
@@ -97,7 +101,12 @@ class Vellamo(AndroidUiAutoBenchmark):
if not self.device.is_rooted:
return
elif self.version == '3.0':
self.update_result_v3(context)
elif self.version == '3.2.4':
self.update_result_v3_2(context)
def update_result_v3(self, context):
for test in self.benchmarks: # Get all scores from HTML files
filename = None
if test == "Browser":
@@ -122,24 +131,36 @@ class Vellamo(AndroidUiAutoBenchmark):
context.result.add_metric('{}_{}'.format(benchmark.name, name), score)
context.add_iteration_artifact('vellamo_output', kind='raw', path=filename)
def update_result_v3_2(self, context):
device_file = self.device.path.join(self.device.package_data_directory,
self.package,
'files',
'chapterscores.json')
host_file = os.path.join(context.output_directory, 'vellamo.json')
self.device.pull_file(device_file, host_file, as_root=True)
context.add_iteration_artifact('vellamo_output', kind='raw', path=host_file)
with open(host_file) as results_file:
data = json.load(results_file)
for chapter in data:
for result in chapter['benchmark_results']:
name = result['id']
score = result['score']
context.result.add_metric(name, score)
def non_root_update_result(self, context):
failed = []
with open(self.logcat_log) as logcat:
metrics = OrderedDict()
for line in logcat:
if 'VELLAMO RESULT:' in line:
info = line.split(':')
parts = info[2].split(" ")
metric = parts[1].strip()
value = int(parts[2].strip())
metrics[metric] = value
with open(self.logcat_log) as fh:
iteration_result_regex = re.compile(r"VELLAMO RESULT: (Browser|Metal|Multicore) (\d+)")
for line in fh:
if 'VELLAMO ERROR:' in line:
self.logger.warning("Browser crashed during benchmark, results may not be accurate")
for key, value in metrics.iteritems():
key = key.replace(' ', '_')
context.result.add_metric(key, value)
if value == 0:
failed.append(key)
result = iteration_result_regex.findall(line)
if result:
for (metric, score) in result:
score = int(score)
if not score:
failed.append(metric)
else:
context.result.add_metric(metric, score)
if failed:
raise WorkloadError("The following benchmark groups failed: {}".format(", ".join(failed)))
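The reworked `non_root_update_result` above extracts scores directly from logcat with a regex and treats a zero score as a failed benchmark group. A standalone sketch of that parsing logic — the sample lines are illustrative, not real logcat output:

```python
import re

iteration_result_regex = re.compile(r"VELLAMO RESULT: (Browser|Metal|Multicore) (\d+)")

def parse_logcat(lines):
    """Return ({metric: score}, [failed_groups]) from VELLAMO RESULT
    lines; a score of 0 marks that benchmark group as failed."""
    metrics, failed = {}, []
    for line in lines:
        for metric, score in iteration_result_regex.findall(line):
            score = int(score)
            if score == 0:
                failed.append(metric)
            else:
                metrics[metric] = score
    return metrics, failed

sample = ["I/vellamo: VELLAMO RESULT: Browser 2456",
          "I/vellamo: VELLAMO RESULT: Metal 0"]
print(parse_logcat(sample))
# ({'Browser': 2456}, ['Metal'])
```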

@@ -59,20 +59,22 @@ public class UiAutomation extends BaseUiAutomation {
getScore("html5", "com.quicinc.vellamo:id/act_ba_results_img_0");
getScore("metal", "com.quicinc.vellamo:id/act_ba_results_img_1");
}
else {
dismissLetsRoll();
if (version.equals("3.2.4")) {
dismissArrow();
}
if (browser) {
startBrowserTest(browserToUse);
startBrowserTest(browserToUse, version);
proccessTest("Browser");
}
if (multicore) {
startTestV3(1);
startTestV3(1, version);
proccessTest("Multicore");
}
if (metal) {
startTestV3(2);
startTestV3(2, version);
proccessTest("Metal");
}
}
@@ -96,7 +98,7 @@ public class UiAutomation extends BaseUiAutomation {
runButton.click();
}
public void startBrowserTest(int browserToUse) throws Exception {
public void startBrowserTest(int browserToUse, String version) throws Exception {
//Ensure chrome is selected as "browser" fails to run the benchmark
UiSelector selector = new UiSelector();
UiObject browserToUseButton = new UiObject(selector.className("android.widget.ImageButton")
@@ -136,13 +138,13 @@ public class UiAutomation extends BaseUiAutomation {
// Run watcher
UiDevice.getInstance().runWatchers();
startTestV3(0);
startTestV3(0, version);
}
public void startTestV3(int run) throws Exception {
public void startTestV3(int run, String version) throws Exception {
UiSelector selector = new UiSelector();
UiObject thirdRunButton = new UiObject(selector.resourceId("com.quicinc.vellamo:id/card_launcher_run_button").instance(run));
UiObject thirdRunButton = new UiObject(selector.resourceId("com.quicinc.vellamo:id/card_launcher_run_button").instance(2));
if (!thirdRunButton.waitForExists(TimeUnit.SECONDS.toMillis(5))) {
if (!thirdRunButton.exists()) {
throw new UiObjectNotFoundException("Could not find three \"Run\" buttons.");
@@ -158,17 +160,29 @@ public class UiAutomation extends BaseUiAutomation {
}
runButton.click();
//Skip tutorial screens
UiObject swipeScreen = new UiObject(selector.textContains("Swipe left to continue"));
if (!swipeScreen.waitForExists(TimeUnit.SECONDS.toMillis(5))) {
if (!swipeScreen.exists()) {
throw new UiObjectNotFoundException("Could not find \"Swipe screen\".");
//Skip tutorial screen
if (version.equals("3.2.4")) {
UiObject gotItButton = new UiObject(selector.textContains("Got it"));
if (!gotItButton.waitForExists(TimeUnit.SECONDS.toMillis(5))) {
if (!gotItButton.exists()) {
throw new UiObjectNotFoundException("Could not find correct \"GOT IT\" button.");
}
}
gotItButton.click();
}
else {
UiObject swipeScreen = new UiObject(selector.textContains("Swipe left to continue"));
if (!swipeScreen.waitForExists(TimeUnit.SECONDS.toMillis(5))) {
if (!swipeScreen.exists()) {
throw new UiObjectNotFoundException("Could not find \"Swipe screen\".");
}
}
sleep(1);
swipeScreen.swipeLeft(2);
sleep(1);
swipeScreen.swipeLeft(2);
}
sleep(1);
swipeScreen.swipeLeft(2);
sleep(1);
swipeScreen.swipeLeft(2);
}
@@ -236,6 +250,17 @@ public class UiAutomation extends BaseUiAutomation {
letsRollButton.click();
}
public void dismissArrow() throws Exception {
UiSelector selector = new UiSelector();
UiObject cardContainer = new UiObject(selector.resourceId("com.quicinc.vellamo:id/cards_container"));
if (!cardContainer.waitForExists(TimeUnit.SECONDS.toMillis(5))) {
if (!cardContainer.exists()) {
throw new UiObjectNotFoundException("Could not find vellamo main screen");
}
}
cardContainer.click();
}
public void dismissNetworkConnectionDialogIfNecessary() throws Exception {
UiSelector selector = new UiSelector();
UiObject dialog = new UiObject(selector.className("android.widget.TextView")