mirror of https://github.com/ARM-software/workload-automation.git synced 2025-09-04 12:22:41 +01:00

554 Commits

Author SHA1 Message Date
Sergei Trofimov
0b9d8f1c5e managers: adding missing __init__.py 2016-09-26 17:38:30 +01:00
setrofim
a4a428c9ae Merge pull request #109 from ep1cman/locallinux
LocalLinuxManager: Added a local linux manager
2016-02-23 17:04:41 +00:00
Sebastian Goscik
d89a52584b bootstrap: Removed and fixed module mapping for extensions 2016-02-23 17:01:53 +00:00
Sebastian Goscik
41a3877640 LocalLinuxManager: Added a local linux manager
This allows WA to automate the machine it is running on.
2016-02-23 16:57:46 +00:00
Sebastian Goscik
0b1b9d304c Fixed WA extensions for AndroidManager
Changed method calls to devlib naming
2016-02-19 15:29:10 +00:00
Sebastian Goscik
a3962b6323 AndroidManager: Added AndroidManager
Replaces AndroidDevice
2016-02-19 15:27:18 +00:00
Sebastian Goscik
001239dfe4 Fixed WA extensions for LinuxManager
Changed method calls to devlib naming
2016-02-19 15:27:18 +00:00
Sebastian Goscik
6f0de17201 LinuxManager: Added LinuxManager
Replaces BaseLinuxDevice & LinuxDevice
2016-02-19 15:26:54 +00:00
Sebastian Goscik
1599c1e0ed Devices: Removed Devices
They are now superseded by DeviceManagers
2016-02-19 15:26:54 +00:00
Sebastian Goscik
4fc93a8a3c DeviceManager: Introduced DeviceManager extension
DeviceManagers will replace devices and will wrap devlib targets for use in WA
2016-02-19 15:23:07 +00:00
Sebastian Goscik
cd0186d14e json: Replaced json results processor with a more comprehensive one 2016-02-19 15:23:07 +00:00
Sebastian Goscik
de133cddb4 Merge pull request #105 from bjackman/check-config-exists
Add entry_point check for config file existence
2016-02-18 09:02:46 +00:00
Brendan Jackman
a5c9b94257 Add entry_point check for config file existence
This is just to provide a friendlier error message.
Before this commit you get an IOError from imp.load_source.
2016-02-17 17:24:14 +00:00
setrofim
c203ec8921 Merge pull request #103 from ep1cman/fixes
ApkWorkload: Fixed runtime permission granting
2016-02-15 11:50:36 +00:00
Sebastian Goscik
de021da300 ApkWorkload: Fixed runtime permission granting
"Normal" android permissions are automatically granted and cannot
be changed. Trying to "pm grant" these caused an error, this should
no longer occur.
2016-02-15 11:38:28 +00:00
setrofim
693afa3528 Merge pull request #102 from ep1cman/fixes
LinuxDevice: Added as_root to kick_off
2016-02-12 09:55:36 +00:00
Sebastian Goscik
5203188d9e LinuxDevice: Added as_root to kick_off 2016-02-12 09:54:14 +00:00
Steve Bannister
08663209d6 Fix up lmbench commandline 2016-02-11 17:40:31 +00:00
Sebastian Goscik
232e4b3e65 Merge pull request #101 from setrofim/master
Adding taskset capability to lmbench + minor fixes.
2016-02-11 09:35:58 +00:00
Sergei Trofimov
13ebc8ad55 pep8: removed trailing spaces 2016-02-11 08:22:53 +00:00
Sergei Trofimov
759f8db1bc lmbench: adding taskset support
lmbench can now be run pinned to specific CPUs.
2016-02-11 08:22:39 +00:00
setrofim
9a7cccacab Merge pull request #100 from setrofim/master
dhrystone: fix busybox reference.
2016-02-10 17:45:42 +00:00
Sergei Trofimov
288aa764b3 dhrystone: fix busybox reference. 2016-02-10 17:28:33 +00:00
Sebastian Goscik
a32cc0f213 Merge pull request #99 from setrofim/master
Minor fixes.
2016-02-10 16:50:17 +00:00
Sergei Trofimov
fdbc2ae372 pylint 2016-02-10 16:39:06 +00:00
Sergei Trofimov
9129a9d2d8 dhrystone: remove reference to sysbench from dhrystone doc. 2016-02-10 16:38:56 +00:00
Sebastian Goscik
cb46c57754 Merge pull request #98 from setrofim/master
ipython: switched to using LooseVersion for version checks.
2016-02-10 10:54:00 +00:00
Sebastian Goscik
536c0ffe4e Merge pull request #94 from ranjeetkumar/master
Added GoogleMap : Navigation app by Google Inc.
2016-02-10 09:37:00 +00:00
Sergei Trofimov
4f30e37f22 ipython: switched to using LooseVersion for version checks.
This is a fix for

https://github.com/ARM-software/workload-automation/issues/97

IPython can use rc tags in its version strings, which StrictVersion
can't handle.
2016-02-10 09:01:40 +00:00
ranjeet
0deb8fd7c6 Added GoogleMap : Navigation app by Google Inc. 2016-02-07 10:01:36 +05:30
Sebastian Goscik
85edc3084b Merge pull request #96 from setrofim/master
Fixes to cpufreq module and elimination of unknown state in cpustate result processor.
2016-02-04 16:25:00 +00:00
Sergei Trofimov
3a99a284c4 cpustate: ensure known initial state
cpustate result processor generates a view of the cpu subsystem power
state during execution of a workload from cpu_idle and cpu_frequency
ftraces. There exists a period before the first events in those
categories are seen where the state of the cpu subsystem is (partially)
unknown and it is reported as such by the result processor.

Unknown state usually exists for a relatively short period of time and
is generally not a big deal. For certain kinds of workloads, however, it
may constitute a significant portion of the trace.

Changes in this commit attempt to deal with this by a) reading starting
cpu frequencies and writing them into the trace, and b) nudging each
core to bring it out of idle; this happens before the start marker, so
that the system state between the markers should be completely known.
2016-02-04 16:08:22 +00:00
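A rough sketch of steps a) and b) described above; the device method names used here are assumptions for illustration, not the actual WA implementation:

    TRACE_MARKER = '/sys/kernel/debug/tracing/trace_marker'
    CUR_FREQ = '/sys/devices/system/cpu/cpu{}/cpufreq/scaling_cur_freq'

    def ensure_known_initial_state(device, cpus):
        for cpu in cpus:
            # a) record the starting frequency into the trace
            freq = device.get_sysfile_value(CUR_FREQ.format(cpu))
            device.execute('echo "cpu {} initial freq: {}" > {}'.format(
                cpu, freq, TRACE_MARKER), as_root=True)
            # b) nudge the core out of idle so a cpu_idle event is emitted
            # before the start marker
            device.execute('taskset -c {} true'.format(cpu))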
Sergei Trofimov
5e3cc8fcb5 cpufreq: minor fixes
- added a missing conversion from int to cpu name.
- fixed the invocation of the current cpu frequency function inside core
  and cluster versions.
2016-02-04 15:42:35 +00:00
setrofim
f92bd1bcdd Merge pull request #95 from ep1cman/fixes
Parameter: Fixed overriding of new parameters
2016-02-04 15:42:01 +00:00
Sebastian Goscik
519efaf22c Parameter: Fixed overriding of new parameters
Previously you could have `override` set to True on parameters that
only existed in the current scope.

Now if you try to override a parameter that doesn't exist higher up
in the hierarchy you will get a ValueError.
2016-02-04 15:36:47 +00:00
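An illustrative model of the new rule (hypothetical names; the real logic lives in WA's parameter handling):

    from collections import namedtuple

    Parameter = namedtuple('Parameter', 'name override')

    def add_parameter(scopes, param):
        # scopes: list of {name: Parameter} dicts, outermost scope first
        if param.override and not any(param.name in s for s in scopes[:-1]):
            raise ValueError('Cannot override "{}": it does not exist higher '
                             'up in the hierarchy'.format(param.name))
        scopes[-1][param.name] = param

    scopes = [{'timeout': Parameter('timeout', False)}, {}]
    add_parameter(scopes, Parameter('timeout', True))    # fine: parent defines it
    try:
        add_parameter(scopes, Parameter('bogus', True))  # nothing to override
    except ValueError as error:
        print(error)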
setrofim
28ef01505d Merge pull request #92 from ep1cman/fixes
AndroidDevice: Removed duplicate parameter
2016-02-03 15:07:32 +00:00
Sebastian Goscik
dec574e59e AndroidDevice: Removed duplicate parameter 2016-02-03 15:06:00 +00:00
Sebastian Goscik
7ad8b8522b AttributeCollection: No longer allows duplicate overriding attributes
Previously if parameters with the same names and override set to True
were added to an extension at the same level one would silently
override the other.

This is no longer the case and an error will be shown instead.

Also added tests to check that this is handled correctly.
2016-02-03 15:05:14 +00:00
Sebastian Goscik
14a1bc8a5d Merge pull request #91 from setrofim/master
Some minor fixes for ABI resolution.
2016-02-02 09:41:08 +00:00
Sergei Trofimov
45a9c0a86d Removing hard-coded abi from generic_linux devices
ABI should be read from the target device as with other Device
interfaces. This must be a holdover from before this was being done.
2016-02-02 09:29:22 +00:00
Sergei Trofimov
7edb2c8919 Adding aarch64 to architecture list for arm64 ABI
This was observed being reported on a device.
2016-02-02 09:29:22 +00:00
setrofim
5fad83a50d Merge pull request #90 from ep1cman/fixes
manual: Fixed trying to logcat on non linux devices
2016-02-01 15:26:46 +00:00
Sebastian Goscik
68fefe8532 manual: Fixed trying to logcat on non linux devices 2016-02-01 15:24:32 +00:00
setrofim
c96590b713 Merge pull request #89 from ep1cman/fixes
sysbench: Fixed binary installation
2016-02-01 11:17:34 +00:00
Sebastian Goscik
dc22856431 sysbench: Fixed binary installation 2016-02-01 11:13:57 +00:00
Sebastian Goscik
2d1f0e99b9 Merge pull request #87 from setrofim/master
Added functions for manipulating kernel modules + pep8 fix.
2016-01-27 17:18:55 +00:00
Sergei Trofimov
da720c8613 pep8 2016-01-27 17:15:44 +00:00
Sergei Trofimov
eaabe01fa5 BaseLinuxDevice: added insmod() method.
Allows inserting a kernel module on the target from a .ko located on the
host.
2016-01-27 17:15:41 +00:00
Sergei Trofimov
dc07c8d87e BaseLinuxDevice: added lsmod() method
Executes lsmod on the target device and parses the output into named
tuples.
2016-01-27 16:50:29 +00:00
setrofim
a402bfd7f9 Merge pull request #85 from ep1cman/fixes
Fixes
2016-01-26 15:14:09 +00:00
Sebastian Goscik
fe2d279eac RunInfo: Added default run name
The run name will now default to ``{output_folder}_{date}_{time}``
2016-01-26 15:00:39 +00:00
Sebastian Goscik
0ffbac1629 Merge pull request #84 from bjackman/pedantry
A couple of tweaks
2016-01-26 14:16:10 +00:00
Brendan Jackman
65cc22a305 core/agenda.py: Add check for empty values in agenda
This gives an error message when an agenda contains a key with no
value, so creating agendas is a little more user-friendly.
2016-01-25 13:43:38 +00:00
Brendan Jackman
2ae8c6073f doc: Apply it's/its grammar pedantry 2016-01-25 13:34:24 +00:00
Sebastian Goscik
dc5cf6d7b8 Merge pull request #83 from setrofim/master
Various fixes.
2016-01-22 14:29:13 +00:00
setrofim
e6ae9ecc51 Merge pull request #81 from ep1cman/bbench_fix
revent: Added record and replay commands
2016-01-22 12:55:16 +00:00
Sergei Trofimov
85fb5e3684 Pylint fixes
- apklaunch: ignore package (re-)assignment outside init.
- applaunch: factored out part of result processing into a separate
  method.
2016-01-22 12:19:39 +00:00
Sergei Trofimov
98b19328de Fixing assets discovery.
- Two different parameters may now have the same global alias as long as
  their types match
- `extension_asset` resource getter now picks up the path to the mounted
  filer from the ``remote_assets_path`` global setting.
2016-01-22 12:19:31 +00:00
Sergei Trofimov
73ddc205fc geekbench: fixing root check
- negating the check (error if *not* rooted)
- do not check for version 2 (results are extracted differently and that
  does not require root).
2016-01-22 10:43:01 +00:00
Sebastian Goscik
1e6eaff702 revent: Added record and replay commands
Added two commands to WA to record and replay input events using revent.

As part of this also added the ability to get a device model from
android and linux device. This may need to be improved in the future.
2016-01-22 10:40:03 +00:00
setrofim
78d49ca8ae Merge pull request #82 from ep1cman/fixes
Fixes
2016-01-22 10:33:41 +00:00
Sebastian Goscik
f4c89644ff geekbench: Added check whether device is rooted 2016-01-22 09:39:49 +00:00
Sebastian Goscik
798a7befb8 pylint fixes 2016-01-22 09:39:29 +00:00
setrofim
6a388ffc71 Merge pull request #80 from ep1cman/bbench_fix
bbench fix
2016-01-20 16:57:10 +00:00
Sebastian Goscik
82df73278e recentfling: Fixed inequality 2016-01-20 16:31:27 +00:00
Sebastian Goscik
68a39d7fa1 bbench: Fix for web browser crash on latest Linaro release
Also fixes browser permissions issues on Android 6+
2016-01-20 16:29:38 +00:00
setrofim
120f0ff94f Merge pull request #78 from ep1cman/binary_install
BaseLinuxDevice: Tidied up the way binaries are handled
2016-01-19 10:52:54 +00:00
Sebastian Goscik
f47ba6fea6 ebizzy: changed os.path to device path 2016-01-19 10:45:09 +00:00
Sebastian Goscik
5f8da66322 antutu: Fixed runtime permissions
Antutu 6 lists coarse_location as a requirement but also asks for
fine_location at runtime, so that is now granted manually.
2016-01-19 10:45:09 +00:00
Sebastian Goscik
67213d471b BaseLinuxDevice: documentation update
Added docs explaining how extension developers should deploy binaries.
2016-01-19 10:45:09 +00:00
Sebastian Goscik
7c35c604f4 BaseLinuxDevice: Tidied up the way binaries are handled
Added:
get_binary_path: Checks binaries_directory for the wanted binary; if
                 it's not there, it will use which to find a system
                 one. Returns the full path.

install_if_needed: will install a binary only if it is not present.

Changes:
 - Busybox is now deployed to non-rooted devices
 - is_installed has now been removed as the new functions supersede it
 - binaries will now always be installed to `binaries_directory` and
   not system folders.
 - updated workloads to use these new functions
   - rt-app and sysbench might still need work
2016-01-19 10:45:09 +00:00
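A hedged usage sketch of the two new helpers; the exact signatures are inferred from the commit message rather than checked against the source:

    def run_sysbench(device, host_binary_path):
        # prefer a binary already in binaries_directory, fall back to `which`
        target_binary = device.get_binary_path('sysbench')
        if not target_binary:
            # deploy from the host only if the target does not have it yet
            target_binary = device.install_if_needed(host_binary_path)
        return device.execute('{} --test=cpu run'.format(target_binary))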
Sergei Trofimov
c11cc7d0d2 trace-cmd: do not error on missing host-side trace-cmd when report_on_target is set
When report_on_target option is set, binary trace will be "reported"
into a text version on the target device. This removes the need for
trace-cmd to be installed on the host, in which case that should not be
reported as an error.
2016-01-18 11:53:06 +00:00
setrofim
89f1e7b6e5 Merge pull request #79 from chase-qi/add-io-scheduler-test
applaunch: Added support for IO scheduler test
2016-01-15 10:29:07 +00:00
Chase Qi
bd826783cc applaunch: Added support for IO scheduler test
When IO is heavily loaded, the impact of IO schedulers on app launch
time varies. To measure the impact, added two parameters, io_stress and
io_scheduler, and related jinja2 blocks.

Signed-off-by: Chase Qi <chase.qi@linaro.org>
2016-01-15 02:16:35 -08:00
setrofim
0fb867e7c6 Merge pull request #77 from bjackman/apklaunch
workloads: Add apklaunch workload
2016-01-14 16:00:53 +00:00
Brendan Jackman
6b3187c2c9 workloads: Add apklaunch workload
This is a workload to install and run an arbitrary .apk
2016-01-14 15:58:26 +00:00
setrofim
75ce620e6b Merge pull request #76 from ep1cman/get_pid_fix
AndroidDevice: fixed get_pids_of
2016-01-13 17:15:12 +00:00
Sebastian Goscik
d9c4063307 AndroidDevice: fixed get_pids_of
As of Android M ps can no longer filter by process name. This is
now handled using grep from busybox
2016-01-13 17:07:30 +00:00
setrofim
5f2b25532b Merge pull request #75 from chase-qi/fix-applaunch-cleanup
applaunch: pass cleanup argument to the template
2016-01-13 08:20:53 +00:00
Chase Qi
0998c18efd applaunch: pass cleanup argument to the template
Since cleanup test block is defined in the device_script.template, the
value of cleanup is needed to render the template properly.

Signed-off-by: Chase Qi <chase.qi@linaro.org>
2016-01-12 18:20:22 -08:00
setrofim
9eeeaf02ad Merge pull request #74 from setrofim/master
juno: fixing a stupid error in u-boot boot path
2016-01-12 15:14:14 +00:00
Sergei Trofimov
df937dc847 juno: fixing a stupid error in u-boot boot path
Juno's bootargs parameter specifies the kernel boot arguments as a
single string. However, when it is passed into _boot_via_uboot, it was
expanded as a mapping, causing an error. This fixes that boneheaded
mistake...
2016-01-12 15:00:25 +00:00
setrofim
1ef7bb4e93 Merge pull request #73 from ep1cman/ipython4
ipython: Updated to work with the latest ipython version
2016-01-12 14:37:37 +00:00
Sebastian Goscik
41890589e1 ipython: Updated to work with the latest ipython version 2016-01-12 10:59:52 +00:00
Sebastian Goscik
a0cd66ed45 Merge pull request #71 from setrofim/master
trace_cmd: updated to handle empty CPUs.
2016-01-12 10:16:27 +00:00
Sergei Trofimov
b84f97a902 trace_cmd: updated to handle empty CPUs.
Updated trace-cmd parser to handle messages indicating that the trace
for a CPU is empty.
2016-01-12 10:12:00 +00:00
setrofim
ffc3fcef67 Merge pull request #70 from ep1cman/antutu6
Antutu6
2016-01-11 16:12:40 +00:00
Sebastian Goscik
09563bc01e antutu: Updated to support Antutu v6 2016-01-11 14:37:32 +00:00
Sebastian Goscik
f1bb44b3e7 ApkWorkload: Added automatic granting of runtime permissions
As of Android 6.0, apps can request permissions at runtime. If the
target device is running Android 6.0+ these permissions are now automatically
granted.
2016-01-11 13:58:38 +00:00
Sebastian Goscik
1085c715c2 Merge pull request #69 from setrofim/master
juno_energy: add metrics to results and "strict" parameter
2016-01-07 11:11:58 +00:00
Sergei Trofimov
c105e8357c juno_energy: add metrics to results and "strict" parameter
- Summary metrics are now calculated from the contents of energy.csv and
  added to the overall results.
- Added a new "strict" parameter. If this is set to False, the device
  check during validation is omitted.
2016-01-07 11:09:00 +00:00
Sebastian Goscik
dc1b0e629e ipynb_exporter: default template no longer shows a blank plot for workloads without summary_metrics 2015-12-15 17:18:25 +00:00
Sergei Trofimov
62a0fd70de daq: fixed typo 2015-12-15 09:54:21 +00:00
Sergei Trofimov
438e18328d AndroidDevice: remove unnecessary escapes from update locksettings command
The single quotes will be escaped further down the command processing
chain.
2015-12-15 09:52:46 +00:00
setrofim
57b31149f1 Merge pull request #68 from ep1cman/daq_fix
daq: Fixed bug where an exception would be raised if merge_channels=False
2015-12-15 09:43:23 +00:00
Sebastian Goscik
09390e7ffb daq: Fixed bug where an exception would be raised if merge_channels=False 2015-12-15 09:39:28 +00:00
Sergei Trofimov
e83d021a5c utils/android: fixed use of variables in as_root=True commands.
In order to execute as root, the command string gets echoed into su;
previously, double quotes were used in echo, which caused any variables
in the command string to be expanded _before_ it was echoed.
2015-12-15 08:34:18 +00:00
Sergei Trofimov
bca012fccb csv: handle zero-value classifiers correctly
If the value of a classifier was zero (or any other value that
interprets as boolean False), it used to be converted to an empty entry.
This makes sure that the value gets correctly propagated.
2015-12-15 08:30:53 +00:00
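The bug pattern in miniature (illustrative Python, not the actual csv processor code):

    def cell(value):
        return str(value) if value else ''          # buggy: 0 becomes ''

    def cell_fixed(value):
        return '' if value is None else str(value)  # 0 survives as '0'

    assert cell(0) == ''          # zero-value classifier silently dropped
    assert cell_fixed(0) == '0'   # value correctly propagated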
Sergei Trofimov
bb37c31fed perf: added support for per-cpu statistics
per-cpu statistics now get added as metrics to the results (with
a classifier used to identify the cpu).
2015-12-11 14:01:04 +00:00
Sergei Trofimov
0005f927e8 pep8 2015-12-11 14:01:04 +00:00
setrofim
9222257d79 Merge pull request #67 from ep1cman/recentfling
Recentfling
2015-12-11 11:06:45 +00:00
Sebastian Goscik
585d8b2d7d recentfling: Added workload 2015-12-11 11:02:25 +00:00
Sebastian Goscik
d3470dca73 AndroidDevice: Fixed swipe_to_unlock
Previously swipe_to_unlock was not used and conflicted with a method
of the same name.

 - swipe_to_unlock() renamed to perform_unlock_swipe()
 - swipe_to_unlock parameter now takes a direction, this allows swipe unlocking on Android M devices
 - ensure_screen_is_on() will now also unlock the screen if swipe_to_unlock is set
2015-12-11 10:58:32 +00:00
Sergei Trofimov
0f60e9600f trace_cmd: parser for sched_switch events and fixes
- Compiled regular expressions in EVENT_PARSER_MAP now get handled
  correctly.
- regex_body_parser now attempts to convert field values to ints,
  bringing it in line with the default parser behavior.
- There is now a regex for sched_switch events.
2015-12-10 13:41:24 +00:00
Sergei Trofimov
6a85dff94f pylint: additional fix
further to bef8fb40ef
2015-12-10 13:39:28 +00:00
setrofim
aae88b8be4 Merge pull request #65 from Sticklyman1936/gem5_fixes
gem5 fixes and one AndroidDevice fix
2015-12-10 13:31:57 +00:00
Sascha Bischoff
72a617c16d Gem5Device: Remove the rename in pull_file to align with gem5 2015-12-10 11:09:42 +00:00
Sascha Bischoff
d6355966bf Gem5Device: Removed unused methods 2015-12-10 11:09:42 +00:00
Sascha Bischoff
845d577482 Gem5LinuxDevice: Added login_prompt and login_password_prompt parameters
Added two parameters which allow the user to change the strings used
to match the login prompt and the following password prompt to match
their device configurations.
2015-12-10 11:09:42 +00:00
Sascha Bischoff
9ccf256ee8 AndroidDevice: Use content instead of settings to get ANDROID_ID
We move from using settings to using content to get the ANDROID_ID as
this works across a wider range of Android versions.
2015-12-10 11:09:42 +00:00
Sascha Bischoff
cc9b00673e Gem5AndroidDevice: No longer wait for disabled boot animation
Adjust the wait_for_boot method of Gem5AndroidDevice to no longer wait
for the boot animation to finish if the animation has been
disabled. The service.bootanim.exit property is only set (to 0) when
the animation starts, and is set to 1 when the animation finishes. If
the animation never starts, then the property is not set at
all. Hence, we assume the boot animation has finished, unless the
property has been set.
2015-12-10 11:09:42 +00:00
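The resulting check, as a hedged sketch (how the property is read here is an assumption):

    def boot_animation_finished(device):
        # service.bootanim.exit is set to 0 when the animation starts and to
        # 1 when it finishes; if it was never set, the animation never ran
        value = device.execute('getprop service.bootanim.exit').strip()
        return value in ('', '1')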
Sascha Bischoff
e7c75b2d3b Gem5: Add support for deploying the m5 binary 2015-12-10 11:09:41 +00:00
Sascha Bischoff
480155fd8c Gem5Device: Try to connect to the shell up to 10 times 2015-12-10 11:09:41 +00:00
Sascha Bischoff
d98bdac0be Gem5Device: Move resize shell commands to own method
Moved the commands to resize the shell to their own method. They are
now executed twice: once as soon as the shell is connected, and a
second time as part of initialize. This latter call takes place after
the installation of busybox.
2015-12-10 11:09:41 +00:00
Sascha Bischoff
32cf5c0939 Gem5Device: Removed busybox dependency. 2015-12-10 11:09:41 +00:00
Sebastian Goscik
a330a64340 Merge pull request #66 from ep1cman/pylint_update
Major pylint fix
2015-12-09 16:54:13 +00:00
Sebastian Goscik
bef8fb40ef Updated pylint for v1.5.1
Fixed WA for the latest version of pylint (1.5.1)
2015-12-09 16:52:39 +00:00
Sergei Trofimov
344bc519c4 AndroidDevice: fixing get_properties to refer to self.device
This is a fix to the previous fix
(2510329cdf) that updated get_properties
to store "dumpsys window" output relative to the working_directory. That
commit constructed the path using self.device, which is wrong, as in
this case self itself is the device.
2015-12-08 11:26:56 +00:00
setrofim
3da58d9541 Merge pull request #64 from ep1cman/hwuitest
hwuitest: Added workload
2015-12-03 16:46:06 +00:00
Sebastian Goscik
065ebaac61 hwuitest: Added workload
This workload allows WA to use hwuitest from AOSP to test rendering
latency on Android devices.
2015-12-03 15:07:42 +00:00
Sergei Trofimov
f85ef61ce9 lmbench: add the output file as an artifact 2015-12-03 10:54:32 +00:00
setrofim
e5c6ef5368 Merge pull request #62 from ep1cman/trace-cmd-target-extract
trace-cmd: Added ability to generate reports on target device
2015-12-02 09:58:44 +00:00
Sebastian Goscik
a697c47c49 trace-cmd: Added ability to generate reports on target device 2015-12-02 09:53:43 +00:00
Sascha Bischoff
c6e712d44c Gem5Device: simplify if statement based on pylint recommendation 2015-12-01 18:23:26 +00:00
Sascha Bischoff
00c9bdc2a6 Gem5Device: Fix runtime error caused by lack of kwargs 2015-12-01 18:23:26 +00:00
setrofim
1b31d8ef6f Merge pull request #53 from ep1cman/systrace
Added systrace instrument
2015-11-27 09:54:57 +00:00
Sebastian Goscik
b3a9512f44 Added systrace instrument
Added an instrument that uses systrace.py from the Android SDK.
 Generates a fancy HTML report (only works in Google Chrome).
2015-11-27 09:53:59 +00:00
Sergei Trofimov
a06016a442 adb_shell: fixing handling of line breaks at the end of the output
- adb protocol uses "\r\n" for line breaks. This is not handled by
  Python's line break translation, as it is not a file. So splitting on
  '\n' when extracting the exit code resulted in stray '\r' in the output.
- adb_shell expects exit code to be echoed on the same line. This may
  not have been the case if latest output from executed command was not
  a complete line. An extra echo will now ensure that the exit code will
  be on its own line even in that case.
2015-11-24 15:50:38 +00:00
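Both fixes sketched together, with the adb call stubbed out (illustrative only; the real code lives in adb_shell):

    def run_adb_shell(full_command):
        # stand-in for "adb shell <full_command>"; adb output uses CRLF endings
        return 'some output\r\n\r\n0\r\n'

    def execute(command):
        # the extra echo guarantees the exit code starts on its own line,
        # even if the command's last output was not newline-terminated
        raw = run_adb_shell('({}); echo; echo $?'.format(command))
        lines = raw.split('\r\n')   # split on CRLF, not '\n', to avoid stray '\r'
        return int(lines[-2]), '\r\n'.join(lines[:-2])

    print(execute('true'))          # -> (0, 'some output\r\n')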
Sergei Trofimov
c02a1118d7 BaseLinuxDevice: list_file_systems() now handles blank lines
mount() may return an empty line at the end of the output; this update
handles that.
2015-11-24 15:50:38 +00:00
Sebastian Goscik
d9f1190e1f Version bump 2015-11-23 16:19:45 +00:00
setrofim
36c58ee76f Merge pull request #61 from setrofim/documentation-update
doc: updating installation instructions
2015-11-23 16:02:32 +00:00
Sergei Trofimov
b04d141680 doc: updating installation instructions
Re-writing some of the instructions for clarity plus updating with the
correct pip invocations.
2015-11-23 15:47:27 +00:00
setrofim
3453fe5fc1 Merge pull request #59 from ep1cman/changelog
Updated change log
2015-11-23 10:13:38 +00:00
Sebastian Goscik
016876e814 Updated change log
Initial update of change log; this will probably need a lot of tidying up.
2015-11-23 10:11:23 +00:00
setrofim
e8ba515075 Merge pull request #58 from Sticklyman1936/gem5_init_fix
Gem5Device: Fix issue with init arguments
2015-11-20 14:21:11 +00:00
Sascha Bischoff
c533da3f38 Gem5Device: Fix issue with init arguments
- This patch addresses an issue with the arguments passed to the
  BaseGem5Device __init__. With this patch these are no longer passed
  in as they are not required in the base device implementation.
2015-11-20 13:10:35 +00:00
Sergei Trofimov
8261c1d5b5 freqsweep: fix: was erroneously merging runtime_params into workload_params 2015-11-19 14:11:23 +00:00
setrofim
1d12e6a8b0 Merge pull request #57 from ep1cman/freqsweep
Freqsweep: Added the ability to specify workload/runtime parameters
2015-11-18 11:29:39 +00:00
Sebastian Goscik
2957d63e2f Freqsweep: Added the ability to specify workload/runtime parameters
E.g:
  sweeps:
    - cluster: a15
      runtime_params:
        a15_cores: 1
2015-11-18 11:27:35 +00:00
setrofim
7b8d62d1ec Merge pull request #56 from ep1cman/streamline_improvements
Streamline improvements
2015-11-16 17:28:02 +00:00
Sebastian Goscik
2063f48cf0 streamline: cleaned up and added Linux support
- The streamline instrument can now run on linux clients
  because it no longer relies on adb port forwarding
- Updated description.
- Cleaned up code
- Now check for streamline not caiman
- Gatord is now only run once instead of restarted every
  job
2015-11-16 17:27:17 +00:00
Sergei Trofimov
2510329cdf AndroidDevice fix: create dumpsys file relative to working directory on target.
Commit 95f17702d7 redirected output of
"dumpsys window" to a file (needed for Gem5 support). However, the file
was created in $PWD, which breaks on production devices, as that
location is not writable. This moves the file under the working
directory on the device.
2015-11-16 13:09:28 +00:00
Sergei Trofimov
9c8c81cc25 csv result processor: write results.csv after each iteration.
(partial) results.csv will now be written after each iteration rather
than at the end of the run.
2015-11-16 12:43:52 +00:00
Sergei Trofimov
2e5b3671e9 LinuxDevice fix: do not invoke super's hard_reset()
as it does not have one...
2015-11-16 12:43:52 +00:00
Sergei Trofimov
705ce9ae40 rt_app: added an option to force install.
Useful when the target device already has rt-app on it, but you want to
replace it with the version from your host.
2015-11-16 12:43:52 +00:00
setrofim
f43daacd41 Merge pull request #52 from ep1cman/freqsweep
Added freqsweep instrument
2015-11-13 15:19:52 +00:00
Sebastian Goscik
3b41f69762 Added freqsweep instrument
Added an instrument to sweep workloads across all available frequencies
2015-11-13 15:16:54 +00:00
setrofim
a722e594af Merge pull request #55 from Sticklyman1936/split_gem5device
Gem5Device: Refactor into BaseGem5Device, Gem5LinuxDevice and Gem5And…
2015-11-13 14:26:07 +00:00
Sascha Bischoff
a447689a86 Gem5Device: Refactor into BaseGem5Device, Gem5LinuxDevice and Gem5AndroidDevice
- Refactored the Gem5Device to avoid duplicated code. There is now a
  BaseGem5Device which includes all of the shared functionality. The
  Gem5LinuxDevice and the Gem5AndroidDevice both inherit from
  BaseGem5Device, and LinuxDevice or AndroidDevice, respectively.
2015-11-13 14:19:24 +00:00
setrofim
9fe4887626 Merge pull request #54 from ep1cman/daq_merge
Added ability to merge DAQ channels
2015-11-13 11:00:36 +00:00
Sebastian Goscik
6492b95edd Added ability to merge DAQ channels
If the parameter merge_channels is set to true (false by default),
 DAQ channels that have consecutive letters at the end of their names
 will be summed together, e.g. A7, A15a, A15b becomes A7, A15.

 You can also manually specify a channel mapping by setting
 merge_channels to a dict.
2015-11-13 10:56:50 +00:00
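The naming rule in miniature (an illustrative sketch of the grouping, not the instrument's code):

    import re
    from collections import OrderedDict

    def group_channels(labels):
        groups = OrderedDict()
        for label in labels:
            base = re.sub(r'[a-z]$', '', label)   # A15a, A15b -> A15
            groups.setdefault(base, []).append(label)
        return groups

    print(group_channels(['A7', 'A15a', 'A15b']))
    # OrderedDict([('A7', ['A7']), ('A15', ['A15a', 'A15b'])])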
setrofim
c5b4b70aae Merge pull request #50 from Sticklyman1936/master
Add Gem5Device to WA
2015-11-11 16:55:57 +00:00
Sascha Bischoff
55f6ef4a5e Gem5Device: Remove VirtIO device rebinding to align with gem5
- Remove the unbind and rebind for the VirtIO 9P mount method as gem5
  now checkpoints the basic state of the device. This allows us to
  just mount it, assuming that the checkpoint has been created correctly.
2015-11-11 16:45:01 +00:00
Sascha Bischoff
672c74c76c Gem5Device: Avoid duplicate get_properties code
- Remove the duplicated get_properties code by calling the internal
  _get_android_properties method directly.
2015-11-11 16:45:01 +00:00
Sascha Bischoff
95f17702d7 AndroidDevice: Move the processing of Android properties to an internal method
- Move the processing of Android properties to an internal
  method. This allows the Android properties to be extracted without
  extracting those of the Linux device.

- Redirect the output from 'dumpsys window' to a file and pull the
  file as opposed to extracting the output from the terminal. This is
  more reliable in the event that another process writes to the shell.
2015-11-11 16:45:01 +00:00
Sascha Bischoff
ee4764adc4 Gem5Device: Addressed review comments
- Replaced hard-coded pexpect expect string with UNIQUE_PROMPT.

- Changed the capture_screen debug to a warning to make sure that the
  user knows when it happens.

- Fixed the logic for checking when a file exists. Previously, if the
  output could not correctly be processed (ValueError) then we just
  assumed that the file existed if there was any output at all. This
  is clearly not a good default. Changed to default to False if it was
  not able to process the output as this seems to be the safest
  option.

- Changed ad hoc filename extraction to use os.path.basename.

- Removed the processing of some kwargs and defaults that are handled
  by the parent class.

- Stopped overriding some parameters which were purely defined in the
  Gem5Device.
2015-11-11 16:45:00 +00:00
Sascha Bischoff
3a8eed1062 Gem5Device: Allowed the gem5 binary to be specified in the agenda
- Added the gem5_binary option to the agenda which allows a different
  gem5 binary to be specified. This allows WA to be used with
  different levels of gem5 debugging, as well as allowing non-standard
  gem5 binary names and locations.
2015-11-04 10:38:50 +00:00
Sascha Bischoff
29abd290f4 Gem5Device: Fix style issues
- Fix up style issues
2015-11-02 16:30:13 +00:00
Sascha Bischoff
3bf114cf48 Gem5Device: Improve shell command matching
- Replace ugly while True loop with a simple regex substitution
  achieving the same thing. This is required to match the command in
  the shell output when the command wraps around due to the length of
  the command.
2015-11-02 11:42:33 +00:00
Sascha Bischoff
96a6179355 Gem5Device: Add a gem5 device for Android
- Implementation of a gem5 device which allows simulated systems to be
  used in the place of a real device. Currently, only Android is
  supported.

- The gem5 simulation is started automatically based on a command line
  passed in via the agenda. The correct telnet port to connect on is
  extracted from the standard error from the gem5 process.

- Resuming from gem5 checkpoints is supported, and can be specified as
  part of the gem5 system description. Additionally, the agenda option
  checkpoint_post_boot can be used to create a checkpoint
  automatically once the system has booted. This can then be used for
  subsequent runs to avoid booting the system a second time.

- The Gem5Device waits for Android to finish booting, before sending
  commands to the simulated device. Additionally, the device supports
  a sleep option, which will sleep in the simulated system for a
  number of seconds, prior to running the workload. This ensures that
  the system can quieten down, prior to running the workload.

- The Gem5Device relies on VirtIO to pull files into the simulated
  environment, and therefore diod support is required on the host
  system. Additionally, VirtIO 9P support is required in the guest
  system kernel.

- The m5 writefile binary and gem5 pseudo instruction are used to
  extract files from the simulated environment.
2015-11-02 10:15:34 +00:00
Sascha Bischoff
a6382b730b TelnetConnection:
- Allowed telnet connections without a password.

This is required as part of the upcoming Gem5Device, which uses a
password-less telnet connection to communicate with the device.
2015-11-02 10:09:59 +00:00
setrofim
661371f6f0 Merge pull request #49 from ep1cman/master
Fixed telnet support
2015-10-21 11:58:06 +01:00
Sebastian Goscik
b9b4a7c65c Fixed telnet support
- Added port kwarg to telnet login function (default port=23)
2015-10-21 11:53:25 +01:00
Sergei Trofimov
20a5660ea1 android device: always deploy busybox on rooted devices. 2015-10-19 13:40:50 +01:00
Sergei Trofimov
691c380779 serial_port: updating to handle newer versions of pexpect
from version 4.0.0 onwards, fdpexpect is now namespaced under the main
pexpect library, rather than being stand-alone.
2015-10-16 17:38:19 +01:00
Sergei Trofimov
7546232c10 trace_cmd: fix parser to opportunistically convert parsed values to ints
This was done previously but was broken by commit

1c146e3ce7
2015-10-16 17:08:09 +01:00
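The "opportunistic" conversion amounts to a try/except around int() (illustrative):

    def maybe_int(value):
        try:
            return int(value)
        except ValueError:
            return value          # leave non-numeric fields as strings

    assert maybe_int('42') == 42
    assert maybe_int('swapper/0') == 'swapper/0'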
Sergei Trofimov
4fa3d9de6e Import pxssh via pexpect
Importing it directly causes issues in some environments.
2015-10-16 11:05:53 +01:00
Sergei Trofimov
90bfbf6022 Adding autotest workload wrapper. 2015-10-09 08:43:53 +01:00
setrofim
079d5b4ec5 Merge pull request #48 from lisatn/rtapp_use_cases
rt-app: Update use cases
2015-10-09 08:21:07 +01:00
Lisa Nguyen
2c5d51cb2a rt-app: Update use cases
Add video-long.json and video-short.json files and
update spreading-tasks.json. The originals can be
found in git.linaro.org/power/rt-app.git repo.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-10-08 17:46:20 -07:00
Sergei Trofimov
1c146e3ce7 trace_cmd: more robust event fields parsing
The default parser can now handle spaces in field values (but not field
names).
2015-10-08 17:13:48 +01:00
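One way to tolerate spaces in values but not in names is a lazy match terminated by the next name= token; a sketch of the idea (not the actual WA regex):

    import re

    FIELD = re.compile(r'(\w+)=(.*?)(?=\s+\w+=|$)')

    def parse_fields(body):
        return dict(FIELD.findall(body))

    print(parse_fields('comm=adbd shell pid=123 prio=120'))
    # {'comm': 'adbd shell', 'pid': '123', 'prio': '120'}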
Sergei Trofimov
6159711a05 iozone: ensure test 0 always runs
Test 0 (write) creates a file that is used by subsequent tests.
Therefore if this test is not specified when selecting which tests to
run, iozone will fail with an error.

To avoid this, check the tests list specified by the user and add test
0 if necessary.
2015-10-08 09:16:32 +01:00
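The described check, as a small hypothetical helper:

    def ensure_write_test(tests):
        # test 0 (write) creates the file that later tests operate on
        tests = sorted(set(tests))
        if tests and tests[0] != 0:
            tests.insert(0, 0)
        return tests

    assert ensure_write_test([2, 4, 8]) == [0, 2, 4, 8]
    assert ensure_write_test([0, 1]) == [0, 1]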
Sergei Trofimov
208fdf4210 iozone: pylint fixes 2015-10-08 09:10:08 +01:00
setrofim
0fc602ecdb Merge pull request #43 from lisatn/iozone
Add iozone workload
2015-10-08 08:59:36 +01:00
Sergei Trofimov
552ea2a1bb fix: moving is_file and is_directory into BaseLinuxDevice, as relied on by get_properties. 2015-10-05 16:30:33 +01:00
Lisa Nguyen
5f6247cf8b iozone: Modify comments
Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-10-01 14:57:35 -07:00
Lisa Nguyen
c96a50e081 iozone: Add description to enable classifiers
In order to show more detailed results for the iozone
workload, inform users to enable classifiers in their
agenda or config file.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-10-01 14:48:09 -07:00
Lisa Nguyen
812bbffab1 iozone: Rewrite parse_metrics() function
When users specify tests, the parse_metrics()
function doesn't capture the last report name and its
results during the parsing process. Fix the
parse_metrics() function to make sure the data for
all reports are captured.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-10-01 12:38:04 -07:00
Lisa Nguyen
361f1a0f0c iozone: Whitespace cleanup
Cleanup whitespace and reorganize code.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-10-01 11:52:51 -07:00
Lisa Nguyen
8e84e4a230 iozone: Add descriptions and refactor code
Add test descriptions, remove defaults, and
fix functions to parse non-thread related data.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-10-01 11:37:05 -07:00
Sergei Trofimov
fe4d49e334 android device: added swipe-to-unlock option 2015-10-01 12:06:02 +01:00
Sergei Trofimov
dc01dd79ee telemetry: report errors. 2015-10-01 12:06:02 +01:00
Sergei Trofimov
0003993173 daq: updated workload labeling in daq_power.csv
Workload labels, rather than names, are now used in the "workload"
column.
2015-10-01 12:05:53 +01:00
Sergei Trofimov
b6442acf80 device: more reliable get_properties
Pulling entries from procfs does not work on some platforms. This commit
updates get_properties() to cat the contents of a property file and
write it to an output file on the host, rather than pulling it (directories
are still pulled).
2015-09-30 12:39:49 +01:00
Sergei Trofimov
a6feb65b34 daq: adding gpio_sync option.
When enabled, this will cause the instrument to insert a marker into
ftrace, while at the same time setting a GPIO pin high.

For this to work, the GPIO sysfs interface must be enabled in the kernel
and the specified pin must be exported.
2015-09-25 15:03:16 +01:00
Sergei Trofimov
f1d3ebc466 adding missing supported_platforms attributes.
Bbench only works on Android. It should advertise that fact by setting
supported_platforms to ['android'].
Telemetry is a Chrome browser workload that is only supported on
ChromeOS and Android.
2015-09-23 08:42:09 +01:00
Sergei Trofimov
0608356465 list command: adding --packaged-only option
With this option, only extensions packaged with WA itself will be
listed. Extensions discovered from other packages or from local paths
will not appear in the list.
2015-09-22 08:41:53 +01:00
Sergei Trofimov
d7ef6ff8ba netstats: added "period" parameter.
This parameter allows specifying polling period for the on-device
service.
2015-09-21 08:45:26 +01:00
Sergei Trofimov
2904246cb5 Adding netstats instrument.
This instrument allows monitoring data sent/received by applications on
an Android device.
2015-09-18 09:33:17 +01:00
Sergei Trofimov
5abb42eab9 Do not attempt to get_sysfile_value() as root on unrooted devices. 2015-09-18 09:33:17 +01:00
setrofim
ce91e34f9f Merge pull request #47 from lisatn/rtapp-binary-update
rt-app: Update binaries
2015-09-16 09:12:01 +01:00
Lisa Nguyen
18f4c3611c rt-app: Update binaries
A newer version of rt-app has been released. Built
binaries from git://git.linaro.org/power/rt-app.git.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-09-15 14:43:47 -07:00
Sergei Trofimov
6bdd6cf037 energy_model: preserve indices during bs power table calculation. 2015-09-14 16:06:00 +01:00
Sergei Trofimov
e36c619fc2 vexpress flashing: added an option to keep UEFI entry 2015-09-14 15:31:21 +01:00
Sergei Trofimov
76253e1e26 Fixing turning off UI in ChromeOS + adding it to energy_model. 2015-09-14 12:53:08 +01:00
Sergei Trofimov
bf8dc6642f energy_model: cleaner error reporting. 2015-09-14 11:05:26 +01:00
Sergei Trofimov
3d8c384bb7 cpufreq: handle non-integer output in get_available_frequencies 2015-09-14 11:03:54 +01:00
Sergei Trofimov
3247b63cb9 ssh: handle backspaces in serial output 2015-09-14 10:58:14 +01:00
Lisa Nguyen
100c6c0ac9 iozone: Add functions and rewrite update_result()
Added functions to parse thread-mode results and
non-thread mode results accordingly, in addition
to rewriting the update_result() function.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-09-13 20:50:17 -07:00
Sergei Trofimov
f063726cc3 Further fixes for Juno flashing. 2015-09-10 08:40:09 +01:00
Sergei Trofimov
84f7adbfb2 uefi menu: updated prompt_regex to handle debug UEFI builds. 2015-09-09 17:40:02 +01:00
Sergei Trofimov
b6c497d32b Updated boolean to interpret 'off' as False 2015-09-09 14:32:27 +01:00
Sergei Trofimov
f430187b11 sysbench: fixed case where default timeout < max_time 2015-09-08 18:00:17 +01:00
Sergei Trofimov
37c49e22b3 chromeos_test_image: added a runtime parameter to disable ui 2015-09-08 17:43:10 +01:00
Sergei Trofimov
9e12930a43 dhrystone: kill any running instances during setup (also handle CTRL-C). 2015-09-08 17:42:18 +01:00
Sergei Trofimov
d5f4457701 cpufreq: refined availability check not to rely on the top-level cpu/cpufreq directory 2015-09-08 17:41:25 +01:00
Sergei Trofimov
c9f86b05dd flashing: fixing vexpress flashing 2015-09-07 17:56:05 +01:00
Sergei Trofimov
6e447aa8b2 docs: typo 2015-09-07 17:55:42 +01:00
Sergei Trofimov
5eb7ca07fe energy_model: fix cluster power estimation based on voltage. 2015-09-07 11:52:31 +01:00
Sergei Trofimov
6d6cddff58 Added 'negative_values' to daq instrument which can be used to specify how negative values in the samples should be handled. 2015-09-04 17:22:17 +01:00
Sergei Trofimov
94cc17271e sysfs_extractor: fixed pulled files verification.
When no files are listed for one of the specified directories, the
instrument checks whether there are any files in that directory on the
device. If that directory itself does not exist, that would result in an
error. This is now handled correctly.
2015-09-04 08:56:19 +01:00
Sergei Trofimov
9aadb9087a http_assets: fixing that which was broken 2015-09-04 08:56:15 +01:00
Sergei Trofimov
2e35d4003f nenamark: made duration configurable. 2015-09-03 17:29:42 +01:00
Sergei Trofimov
0d076fe8ba andebench: added parameter to only run native portion of the benchmark. 2015-09-03 17:21:45 +01:00
Sergei Trofimov
bfed59a7cf Adding an HTTP-based resource getter. 2015-09-03 13:55:44 +01:00
Sergei Trofimov
047308a904 ipython: pylint fixes 2015-09-03 12:07:41 +01:00
Sergei Trofimov
0179d45b5b energy_model: minor fix for previous adjustment. 2015-09-03 11:06:26 +01:00
Sergei Trofimov
c04b98c75a ipython: updated version check.
Version 4.0.0 changes the API and breaks WA's usage of it. Updated version
check to only initialize ipython utils if version is < 4.0.0.
2015-09-02 17:39:11 +01:00
Sergei Trofimov
3501eccb8e energy_model: yet another adjustment to leakage compensation. 2015-09-02 15:31:13 +01:00
Sergei Trofimov
a80780b9ed energy_model: idle state fix 2015-09-01 16:19:53 +01:00
Sergei Trofimov
1b6b0907f9 energy_model: further fixes to idle measurement.
- Fixed core frequency to min.
- Only disable idle states that are deeper than the measured state.
  Keep shallower states enabled.
2015-08-28 17:12:24 +01:00
setrofim
4389a7d350 Merge pull request #46 from JaviMerino/fix_trace_warning
trace: fix data file trace name in warning
2015-08-28 16:05:16 +01:00
Javi Merino
227c39d95d trace: fix data file trace name in warning
The warning refers to trace.bin, which is not the extension that we use
for it.  Instead, use the variable with the default trace
name (trace.dat) for the warning.
2015-08-28 15:52:43 +01:00
Sergei Trofimov
25b9350fea energy_model: fixing idle state handling to encompass all active cores on a cluster. 2015-08-26 12:48:22 +01:00
Sergei Trofimov
9b7b57a4d4 androbench: removed trailing spaces from metric names. 2015-08-21 13:58:00 +01:00
Sergei Trofimov
5aae705172 androbench: added early check for root on device; updated description with link to website. 2015-08-21 09:37:53 +01:00
setrofim
b653beacd3 Merge pull request #45 from bathepawan/master
Androbench Storage Benchmark
2015-08-21 09:16:28 +01:00
Sergei Trofimov
88f57e5251 Agenda: default config to dict 2015-08-21 08:34:49 +01:00
Pawan Bathe
f44fd9df7a Change device pull to handle root, and renamed the local file to history.db from results.db 2015-08-20 21:25:55 +05:30
Pawan Bathe
8513304aeb python sqlite3 to remove host/DUT dependencies + other changes 2015-08-19 21:53:50 +05:30
Pawan Bathe
e38c87f258 Fix pylint errors 2015-08-18 01:28:09 +05:30
Pawan Bathe
03a9470007 Fixed issue reported by pep8 checkers 2015-08-18 01:09:45 +05:30
Pawan Bathe
2d8b8ba799 moved UiAutomator code to Java, and removed python code no longer required for device interfacing 2015-08-18 00:46:48 +05:30
Pawan Bathe
e4ee496bc9 Use sqlite3 instead of sqlite to remove host dependencies and report result units in the proper format 2015-08-17 23:20:30 +05:30
Sergei Trofimov
205934d55b juno: use bootargs on hard_reset with u-boot 2015-08-17 14:36:28 +01:00
Sergei Trofimov
95c3f049fb trace_cmd: handle trace headers and thread names with spaces 2015-08-17 12:37:58 +01:00
Sergei Trofimov
25c0fd7b8b Allow setting classifiers via agenda. 2015-08-17 10:37:40 +01:00
Pawan Bathe
1cf60c2615 Results format 2015-08-15 02:39:45 +05:30
Pawan Bathe
c3d8128ff3 Androbench Storage Benchmark Workload Addition 2015-08-15 02:10:08 +05:30
Sergei Trofimov
aa2ae03ce6 daq: daq_power.csv now matches the order of labels (if specified). 2015-08-13 16:10:58 +01:00
Sergei Trofimov
6137d5650f delay: added fixed_before_start configuration; updated all to be very_slow
- fixed_before_start is a fixed time period equivalent to
  temperature_before_start
- Changed existing *_between_specs and *_between_iterations callbacks to
  be very_slow, as they should always run after everything else.
2015-08-12 16:12:38 +01:00
Sergei Trofimov
0703db05cf rt-app: relaxing timeouts. 2015-08-12 11:53:29 +01:00
Sergei Trofimov
0e1990a2bb ssh: fixed error reporting on failed exit code extraction. 2015-08-12 11:28:38 +01:00
Lisa Nguyen
25e53c2abc iozone: Add license information
Add license information for the iozone binaries.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-08-11 14:07:26 -07:00
Sergei Trofimov
14f5858e3d pep8 2015-08-11 17:02:38 +01:00
Sergei Trofimov
a6d374bcff wa create workload: better error reporting
Give more informative errors if the Android SDK is installed but no
platform has been downloaded.
2015-08-11 16:59:25 +01:00
Sergei Trofimov
85eba9c37a Better error reporting for subprocess.CalledProcessError 2015-08-11 16:51:34 +01:00
Lisa Nguyen
0acbcc9f95 Add iozone workload
Initial commit of the iozone workload.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-08-10 13:13:17 -07:00
Sergei Trofimov
1d67dd3b99 energy_model: adjusting to compensate for leakage when evaluating cluster power 2015-08-10 10:48:26 +01:00
Sergei Trofimov
ab45c4499f utils.misc.normalize: only normalize string keys. 2015-08-10 10:42:20 +01:00
Sergei Trofimov
75e4b7d2ae fps: filter out bogus actual-present times and ignore janks above PAUSE_LATENCY
The PAUSE_LATENCY thing is in line with what Google do inside
SurfaceFlingerHelper.
2015-08-06 11:20:58 +01:00
Sergei Trofimov
5e92728d77 pylint fixes 2015-07-27 10:11:59 +01:00
Sergei Trofimov
b5d879a90b APK workloads: added an option to skip host-side APK check entirely. 2015-07-27 09:55:23 +01:00
Sergei Trofimov
507efaec48 perf: reverting type change for optionstring 2015-07-24 16:20:14 +01:00
Sergei Trofimov
0db14e2466 Updating energy_model to mitigate cluster power issue
A quadratic is now fitted to single and two-core power measured across
frequencies. This quadratic is then used in projection of cluster power.
This mitigates issues with cluster powers going negative or "crossing
over".
2015-07-24 08:11:23 +01:00
Sergei Trofimov
6ced04daf0 Fix for the previous fix... 2015-07-23 10:25:13 +01:00
Sergei Trofimov
b6064fa348 perf: fix for Android: killall sleep as root. 2015-07-23 10:17:01 +01:00
setrofim
6207f79f01 Merge pull request #41 from rockyzhang/patch-2
passing 'video' command line to Juno kernel
2015-07-21 18:09:45 +01:00
Sergei Trofimov
295b04fb5a Updating jank calculation to only count "large" janks. 2015-07-21 10:03:21 +01:00
Sergei Trofimov
19072530e4 daq: fix for the updated twisted support. 2015-07-21 08:35:36 +01:00
Sergei Trofimov
39ed42ccb9 daqpower: updating to work with new versions of twisted. 2015-07-20 17:40:02 +01:00
Sergei Trofimov
89da25c25e cpustates: added "Running (unknown Hz)" state to the timeline. 2015-07-20 17:04:09 +01:00
Sergei Trofimov
4debbb2a66 Updating ANDROID_VERSION_MAP with recent versions. 2015-07-17 17:24:14 +01:00
Sergei Trofimov
e38bd942a8 bbench: fixed to work when binaries_directory is not in path. 2015-07-17 15:38:47 +01:00
Sergei Trofimov
47b2081ccb doc: fixing daq wiring screenshot diagram 2015-07-17 12:43:46 +01:00
Sergei Trofimov
298602768e doc: changing theme to "classic" to be compatible with new version of sphinx 2015-07-17 12:43:25 +01:00
Sergei Trofimov
d2dbc9d6dd Added the ability to override the timeout of deploying the assets tarball in GameWorkload 2015-07-16 16:53:35 +01:00
Rocky Zhang
d512029f10 Update __init__.py 2015-07-16 16:27:08 +08:00
Rocky Zhang
dc7dea1c3e passing 'video' command line to Juno kernel
There's a known issue where HDMI loses sync with the monitor; adding the video kernel parameter makes HDMI more stable on Juno.

> HDMI can lose sync with the monitor intermittently, particularly at higher resolutions.
> If you are affected by this then try adding a kernel command line argument that forces
> a video mode with reduced blanking, such as the following:
> video=DVI-D-1:1920x1080R@60
2015-07-16 15:45:26 +08:00
Sergei Trofimov
2d3be33bb0 Added target_config option to telemetry 2015-07-14 11:26:17 +01:00
Sergei Trofimov
65ab86221b Reduce starting priority of trace-cmd instrument. 2015-07-13 17:04:45 +01:00
Sergei Trofimov
b8e25efdd4 ssh: attempt to deal with dropped connections 2015-07-10 11:44:01 +01:00
setrofim
2e4bda71a8 Merge pull request #39 from JaviMerino/ipynb_convert_to_html
ipynb_exporter convert to html
2015-07-09 13:58:51 +01:00
Javi Merino
30d7ee52f4 ipynb_exporter: learn to convert the notebook to HTML 2015-07-09 12:50:02 +01:00
Javi Merino
539c3de7b8 ipython: rename IPYTHON_NBCONVERT to IPYTHON_NBCONVERT_PDF
This command is tailored for converting notebooks to pdf, but we also
want it to be able to generate html.
2015-07-09 12:49:31 +01:00
Sergei Trofimov
687e1ba6b8 daq server: report DAQError messages properly 2015-07-09 11:00:46 +01:00
Sergei Trofimov
a72ae92ece stream: pylint/pep8 fixes. 2015-07-09 08:23:20 +01:00
setrofim
1562f17d8c Merge pull request #38 from lisatn/stream
Add stream workload
2015-07-09 08:19:50 +01:00
Lisa Nguyen
255e6c1545 stream: Add initialize and finalize functions
Add initialize and finalize functions in the stream workload
in addition to simplifying code.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-07-08 11:16:13 -07:00
Lisa Nguyen
c733ecad98 Add stream workload
Initial commit of the stream workload to measure
memory bandwidth.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-07-07 11:12:28 -07:00
Sergei Trofimov
088d0f6981 fix: added remote_assets_mount_point to ignore list for config parsing 2015-07-07 16:20:24 +01:00
Sergei Trofimov
13e5e4d943 adb_connect: do not assume port 5555 2015-07-07 11:19:58 +01:00
Naresh Kamboju
6e72ad0cc3 hwmon: print both before/after and mean temperatures
Print both before/after and mean temperatures of DCC and MCC

Example:
mp_a7bc_w01,bbench_with_audio,1,"arm,vexpress-temp DCC",34.062,Celsius
mp_a7bc_w01,bbench_with_audio,1,"arm,vexpress-temp DCC before",33.849,Celsius
mp_a7bc_w01,bbench_with_audio,1,"arm,vexpress-temp DCC after",34.275,Celsius
mp_a7bc_w01,bbench_with_audio,1,"arm,vexpress-temp MCC",45.432,Celsius
mp_a7bc_w01,bbench_with_audio,1,"arm,vexpress-temp MCC before",45.432,Celsius
mp_a7bc_w01,bbench_with_audio,1,"arm,vexpress-temp MCC after",45.432,Celsius

Signed-off-by: Naresh Kamboju <naresh.kamboju@linaro.org>
2015-07-06 14:09:50 +01:00
Sergei Trofimov
cb89bd5708 Fixing typo in config_example.py 2015-07-02 12:17:19 +01:00
Sergei Trofimov
30bb453747 gaming workloads: added an option to prevent clearing of package data before execution 2015-07-01 16:17:39 +01:00
Sergei Trofimov
a37e734cf1 fix: adding dependencies_directory to NO_ONE resource owner 2015-07-01 16:08:10 +01:00
Sergei Trofimov
a27768fe21 rt-app: do not uninstall at the end by default
rt-app workload will no longer uninstall the executable at the
end of the run by default. A parameter can be used to re-enable the
uninstall.
2015-06-30 15:54:58 +01:00
Sergei Trofimov
6affc484f4 resource getter: Change the order in which executable paths are checked. 2015-06-30 15:54:40 +01:00
Sergei Trofimov
90ea2dd569 resource getter: look for executable resource in correct location.
Look in the bin/ directory under resource owner's dependencies directory
as well as general dependencies bin.
2015-06-30 10:50:56 +01:00
Sergei Trofimov
df6d1f1c2b resource resolver: debug-print the path of the found resource 2015-06-30 10:50:49 +01:00
Sergei Trofimov
34a604f4fc juno: do not auto-disconnect at the end of the run
Juno connection now persists at the end of the run. Boolean parameter
actually_disconnect has been added to allow restoring the old behavior.
2015-06-30 10:36:20 +01:00
Sergei Trofimov
f7941bbc25 More informative syntax error reporting. 2015-06-30 10:36:20 +01:00
setrofim
f6ecc25a4b Merge pull request #34 from bobbyb-arm/lmbench-submit2
Initial commit of lmbench workload
2015-06-30 09:55:30 +01:00
Bobby Batacharia
5c53f394cb Initial commit of lmbench workload
Remove unused includes
2015-06-29 22:55:56 +01:00
Sergei Trofimov
967f9570e2 manhattan: fixing syntax error introduced by previous commit 2015-06-29 17:57:41 +01:00
Sergei Trofimov
19d6ce5486 manhattan: changing run_timeout into a parameter and upping the default to 10 mins 2015-06-29 17:55:43 +01:00
Sergei Trofimov
3b4dc137d2 cpustate: check if trace marker is present and disable marker filtering if it is not. 2015-06-29 17:33:28 +01:00
Sergei Trofimov
78314c1ef2 cpustates: added an option to ignore trace markers. 2015-06-29 17:28:00 +01:00
setrofim
b3c9d43ccd Merge pull request #33 from vflouris/master
Energy model instruments: allows power adjustment for thermal effect
2015-06-29 14:19:20 +01:00
Vasilis Flouris
7f5952aa9c Energy model instruments: allows power adjustment for thermal effect 2015-06-29 13:04:41 +01:00
Sergei Trofimov
b018adac11 pylint fixes 2015-06-29 11:34:49 +01:00
Sergei Trofimov
4904c6cf71 listdir on Linux: return empty list for an empty directory
Previously, it was returning a list with a single empty-string element.
2015-06-29 11:28:47 +01:00
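The root cause in one line of Python, plus the obvious fix (illustrative):

    assert ''.split('\n') == ['']   # why an empty directory produced one entry

    def parse_listdir(output):
        return [entry for entry in output.split('\n') if entry]

    assert parse_listdir('') == []
    assert parse_listdir('a\nb\n') == ['a', 'b']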
Sergei Trofimov
26dee81164 Adding arm64-v8a to ABI map 2015-06-29 09:21:06 +01:00
setrofim
c09972e7a8 Merge pull request #30 from bobbyb-arm/fixes
Fixes
2015-06-28 11:42:54 +01:00
Bobby Batacharia
6069ccacdc ExtensionLoader should follow symlinks 2015-06-28 11:00:01 +01:00
Bobby Batacharia
22d72de969 Fix terminal size discovery in DescriptionListFormatter 2015-06-28 10:35:15 +01:00
Sergei Trofimov
b712dddfc0 android device: update android_prompt so that it works even if it is not / 2015-06-26 16:25:44 +01:00
Sergei Trofimov
d6cebc46ce perf: updating binaries and adding option to force install 2015-06-26 14:19:15 +01:00
Sergei Trofimov
85c78e6566 sysfile_getter: fixed Exception when both device and host paths are empty. 2015-06-26 12:14:02 +01:00
Sergei Trofimov
fcb6504f1e Adding ID to overall cpustate reports. 2015-06-26 10:24:07 +01:00
Sergei Trofimov
b25f7ec4a3 Updated installation docs with warning about potential permission issues. 2015-06-26 10:23:39 +01:00
Sergei Trofimov
5401a59da0 Adding support for U-Boot booting in Juno. 2015-06-25 11:32:01 +01:00
Sergei Trofimov
00561e0973 Adding support for U-Boot booting in Juno. 2015-06-25 10:59:19 +01:00
Sergei Trofimov
642da319d4 linpack-cli: setting run timeout based on array size 2015-06-19 09:47:49 +01:00
Sergei Trofimov
a6e9525264 Adding command line version of linpack benchmark. 2015-06-19 09:42:16 +01:00
Sergei Trofimov
4d5413ac26 add agenda command: added options for iterations and runtime parameters 2015-06-18 17:38:51 +01:00
Sergei Trofimov
ccea63555c Added retries
Failed jobs will now be automatically retried. This is controlled by two
new settings:

retry_on_status - a list of statuses which will be considered failures and
                  result in a retry
max_retries - number of retries before giving up
2015-06-18 16:46:26 +01:00
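Since WA config files are Python at this point in the project's history, enabling retries might look like this (the values here are illustrative, not the shipped defaults):

    # in ~/.workload_automation/config.py
    retry_on_status = ['FAILED', 'PARTIAL']  # statuses counted as failures
    max_retries = 3                          # retries per job before giving up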
Sergei Trofimov
51c5ef1520 create command: make sure "create agenda" can pick up local extensions 2015-06-18 16:18:27 +01:00
Sergei Trofimov
c6ede56942 rt-app: added taskset_mask parameter 2015-06-18 15:21:12 +01:00
Sergei Trofimov
e0ecc9aaf4 Added an invoke() method to devices.
This allows invoking an executable on the device under controlled
conditions (e.g. within a particular directory, or taskset to specific
CPUs)
2015-06-18 15:07:44 +01:00
Sergei Trofimov
44c2f18f76 Adding "create agenda" sub-command
It is now possible to generate agendas for a set of extensions with

  wa create agenda ext1 ext2 ext3 -o output.yaml

Configuration will include all parameters for those extensions with
default values.
2015-06-18 12:50:12 +01:00
Sergei Trofimov
53c669f906 streamline: do not instantiate resource getter directly 2015-06-18 12:33:56 +01:00
Sergei Trofimov
c076a87098 Added support for YAML configs
Config files (the default one in ~/.workload_automation plus ones
specified with -c) can now be written using YAML syntax as well as
Python.
2015-06-18 11:35:50 +01:00
Vasilis Flouris
90c0ed281d Documentation: punctuation error fix 2015-06-18 10:46:29 +01:00
Vasilis Flouris
aac69a9c14 Documentation update 2015-06-18 10:39:20 +01:00
Sergei Trofimov
08bfef961e run command: adding a command line option to disable instruments
Also, updating help messages for existing arguments to use multiline
strings.
2015-06-18 10:07:48 +01:00
Sergei Trofimov
d9f45db71e Implementing dynamic device modules
Dynamic modules may be loaded automatically on device initialization if
the device supports them. Dynamic modules implement a probe() method to
determine whether they are supported by a particular device.

devcpufreq and cpuidle have been converted into dynamic modules
2015-06-18 09:42:40 +01:00
Sergei Trofimov
73d85c2b4e cleaning up initialize()
- standardised on a single context argument
- removed Device.init(); no longer necessary, as initialize now
  automatically gets propagated up the hierarchy. Renamed the existing
  use of it to "initialize".
- related pylint cleanup.
2015-06-18 09:30:38 +01:00
Sergei Trofimov
55b38556fe cpufreq: splitting out cpufreq stuff into a device module 2015-06-18 09:30:38 +01:00
Sergei Trofimov
a71756acda pylint fixes 2015-06-18 09:30:14 +01:00
Sergei Trofimov
9470efe410 chromeos_test_image: only set default password if keyfile was not specified 2015-06-18 08:39:02 +01:00
Sergei Trofimov
0b29bda206 rt-app: removed ftrace check
trace-cmd seems to work fine with "ftrace: true" in the config, so
removing the check that prevented both from being enabled.
2015-06-17 10:14:08 +01:00
Sergei Trofimov
042da24e7d antutu: updating result parsing to handle Android M logcat output
Looks like M formats logcat output with some extra ":"'s, which were
breaking the old parsing logic.
2015-06-17 09:46:11 +01:00
Sergei Trofimov
a1e99e5591 Android device: correctly set busybox path 2015-06-16 17:23:22 +01:00
Sergei Trofimov
b98b31a427 sysfile_getter: use self.device.busybox rather than just "busybox"
This was breaking when the location into which busybox was installed was
not in PATH.
2015-06-16 17:02:23 +01:00
Sergei Trofimov
4eb0d9d750 Reverting to the old way of getting the abi on Android
uname is not available on all Android devices, and we cannot rely on
busybox for establishing the ABI, as we need to get the ABI before we
can deploy the right version of busybox.
2015-06-16 15:15:18 +01:00
Sergei Trofimov
4af93d94dd show command: adding supported platforms 2015-06-16 12:56:48 +01:00
Sergei Trofimov
e7fae25821 list command: can now filter results by supported platform
Added -p option to the list command. This allows filtering results by
supported platforms, e.g.

	wa list workloads -p linux

Also adding missing supported_platforms attribute to various extensions.
If an extension does not have this attribute, the assumption is that it
is supported by all available platforms.
2015-06-16 12:49:07 +01:00
Sergei Trofimov
53de517488 device: set core_clusters from core_names if not explicitly specified
if core_names are specified in the device config but core_clusters are
not, assume that all cores with the same name are on the same cluster.
2015-06-16 12:21:44 +01:00
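The rule "all cores with the same name are on the same cluster" amounts to
something like the following (a sketch, not WA's actual implementation):

    def derive_core_clusters(core_names):
        # Assign a cluster ID per core, grouping identical names:
        # e.g. ['a7', 'a7', 'a15', 'a15'] -> [0, 0, 1, 1]
        clusters, seen = [], []
        for name in core_names:
            if name not in seen:
                seen.append(name)
            clusters.append(seen.index(name))
        return clusters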
Sergei Trofimov
15e854b8f1 telemetry: fix the docs so they no longer say it must be installed
WA will now fetch it automatically
2015-06-16 11:10:50 +01:00
Sergei Trofimov
a85e45c6b0 cpustate: now generates a timeline csv as well as stats 2015-06-16 11:04:25 +01:00
Sergei Trofimov
4d3feeba64 list_file_systems: fix for Android M and Linux devices
In previous versions of Android, "mount" returned output in the format
similar to fstab entries, which is what list_file_systems expected. This
fixes it to be able to handle the more traditional "mount" output in the
format

	<device> on <mount point> type <fs type> <options>

as well as continue to parse the Android output correctly.
2015-06-16 08:49:58 +01:00
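Parsing the traditional format could look something like this (a sketch, not
the actual implementation):

    import re

    MOUNT_LINE = re.compile(r'(?P<device>\S+) on (?P<mount_point>\S+) '
                            r'type (?P<fs_type>\S+)\s*(?P<options>.*)')

    def parse_mount_line(line):
        # '/dev/sda1 on / type ext4 (rw,relatime)' ->
        # {'device': '/dev/sda1', 'mount_point': '/', 'fs_type': 'ext4',
        #  'options': '(rw,relatime)'}
        match = MOUNT_LINE.match(line)
        return match.groupdict() if match else None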
setrofim
92ddcbb1e3 Merge pull request #27 from lisatn/master
perf: remove CCIPerfEvent class
2015-06-15 18:49:29 +01:00
Lisa Nguyen
7c7a5de988 perf: remove CCIPerfEvent class
Remove the CCIPerfEvent class since it's no longer used
in WA.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-06-15 10:34:59 -07:00
Sergei Trofimov
cbf7eadc6c sqlite: adding global aliases to params
Adding global aliases to parameters to fix old configs that still used
the deprecated "<ext name>_<param name>" format for specifying parameter
values.
2015-06-15 14:34:57 +01:00
Sergei Trofimov
d775be25f7 Removing total_tasks "metric" from rt-app, as it is reported as a classifier. 2015-06-15 13:07:49 +01:00
Sergei Trofimov
8dc4321deb Adding rt-app workload 2015-06-15 12:04:00 +01:00
Vasilis Flouris
0d3e6b8386 Fixes missing directories problem in DynamicFrequencyInstrument 2015-06-12 17:53:05 +01:00
Vasilis Flouris
7ee44fb0e4 Fix: for the chromeos test image device 2015-06-12 14:39:46 +01:00
Sergei Trofimov
ab76aa73f2 cpustate: fixing division by zero
Total running time (in parallel stats) is zero when all cores on a
cluster are hotplugged out; this caused a division by zero when
calculating the percentage.
2015-06-12 13:02:05 +01:00
Sergei Trofimov
179baf030e Fixed typo. 2015-06-12 12:43:35 +01:00
Vasilis Flouris
2f214da8a2 fix: uname utility is unavailable on Android. It has to be invoked through busybox. 2015-06-11 18:58:34 +01:00
Sergei Trofimov
e357a46b62 Documenting the Workload initialize and finalize methods 2015-06-11 18:17:53 +01:00
Sergei Trofimov
6c8228a26c Invoking workload finalizers at the end of the run. 2015-06-11 18:04:55 +01:00
Sergei Trofimov
b3a0933221 Adding initialize and finalize methods to workloads that will only be invoked once per run
- added initialize and finalize methods to workloads, which were the only
  major extension types that did not have them
- Semantics for initialize/finalize for *all* Extensions are changed so
  that now they will always run at most once per run. They will not be
  executed twice even if invoked via instances of different subclasses (if
  those subclasses defined their own versions, then their versions will
  be invoked once each, but the base version will only get invoked
  once).
2015-06-11 17:45:09 +01:00
Sergei Trofimov
557b792c77 Made abi property common between Android and Linux devices
In both cases, the ABI is now obtained by executing "uname -m" on the
device and performing a mapping from the returned machine architecture
to a known ABI. If no known ABI is found, the architecture string itself
is returned.
2015-06-11 17:45:09 +01:00
Sergei Trofimov
2ee9b40527 classifiers: usability updates
- add IterationResult-level classifiers that get merged into every
  added metric (saves having to pass the same classifiers to each
  metric added).
- Added a global alias to the csv result processor's use_all_classifiers
  attribute.
2015-06-11 17:45:09 +01:00
Sergei Trofimov
32f3dc21e4 Added job_status property to ExecutionContext 2015-06-11 17:45:09 +01:00
Vasilis Flouris
b31a9bd61a fixes a minor bug in energy model instrument 2015-06-11 13:09:04 +01:00
Sergei Trofimov
67896dfd86 energy_model: adding dhrystone support
- updated energy_model to accept dhrystone as well as sysbench as
  the workload
- added "threads" parameter to sysbench (basically, an alias for
  "num_threads") to be consistent with dhrystone
- added "taskset_mask" parameter to dhrystone to allow pinning
  it to specific cores.
2015-06-11 10:10:36 +01:00
Vasilis Flouris
88ba8e3ba7 Fixes result processing bug in sysbench 2015-06-09 18:25:30 +01:00
Sergei Trofimov
771567365d daq instrument: updating default server_port to match daq server. 2015-06-09 15:22:57 +01:00
Sergei Trofimov
5fdb94d804 removing old tarball. 2015-06-09 13:17:28 +01:00
Sergei Trofimov
026e663155 daqpower: typo fix 2015-06-09 13:14:07 +01:00
Sergei Trofimov
4b7af1d2a6 Another update to daqpower
- server will now periodically clean up uncollected files
- fixed not being able to resolve IP address for hostname
  (report "localhost" in that case).
2015-06-09 13:08:50 +01:00
Sergei Trofimov
d9cd1d3282 Removing daqpower hack that got accidentally committed 2015-06-09 12:58:13 +01:00
Sergei Trofimov
c239322c4d Updated daqpower package
- Now works with earlier versions of the DAQmx driver. This is needed to
  be able to run the server on Linux systems, which support older
  versions of the driver only.
- DAQ error messages are now properly propagated to the client (PyDAQmx
  uses a "mess" rather than "message" attribute to store the message in
  the Exception objects).
- pylint and pep8 fixes
2015-06-09 11:03:26 +01:00
Sergei Trofimov
0c19d75bf4 Adding daqpower package to static checkers. 2015-06-09 11:03:26 +01:00
Sergei Trofimov
084de2e58c Adding an illustration for DAQ wiring. 2015-06-09 11:03:26 +01:00
Vasilis Flouris
88c304292f comment correction 2015-06-05 13:00:59 +01:00
Sergei Trofimov
c40a7fd644 more robust exit_code handling for ssh interface
Background processes may produce output on STDOUT. This could get
captured when obtaining the result of "echo $?" to get the previous
command's exit code, so it's not safe to assume that output will always
be an int. Attempt to strip out superfluous output before doing the int
conversion and, on failure, log a warning but don't error out.
2015-06-05 12:47:01 +01:00
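The stripping heuristic could be as simple as taking the last numeric token
(a sketch of the approach described above, not the actual code):

    def parse_exit_code(output, logger):
        # "echo $?" output may be polluted by background-process noise;
        # take the last token that looks like an integer.
        for token in reversed(output.split()):
            if token.isdigit():
                return int(token)
        logger.warning('Could not parse exit code from: %r' % output)
        return None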
Sergei Trofimov
b976164ee9 Fixing typo in a cpustate parameter name. 2015-06-05 09:05:13 +01:00
setrofim
d102994214 Merge pull request #25 from zhizhouzh/master
get options in config_example.py reachable
2015-06-04 10:03:04 +01:00
zhizhou.zhang
7422a72a7b get options in config_example.py reachable
Some options in config_example.py are not reachable, which confuses
users. So make the options in the file available as global_aliases.

Signed-off-by: zhizhou.zhang <zhizhou.zhang@spreadtrum.com>
2015-06-04 15:56:52 +08:00
setrofim
3044d192f9 Merge pull request #24 from vflouris/pull-request
Pull request: second batch of changes for energy_model
2015-06-03 18:12:19 +01:00
Vasilis Flouris
afa2b11975 Adds copyright header to energy model instrument 2015-06-03 18:07:40 +01:00
Vasilis Flouris
1a604ac2e3 Fixes a bug in energy model instrument 2015-06-03 18:04:51 +01:00
Vasilis Flouris
d60034f7d7 Allows running the energy instrument without hotplugging 2015-06-03 18:04:45 +01:00
Sergei Trofimov
8980304e56 Adding a note about cpuidle module to cpustates. 2015-06-03 16:33:45 +01:00
Sergei Trofimov
02af02f0cb Adding cpustates result processor (and script) 2015-06-03 16:20:48 +01:00
Sergei Trofimov
9971041e45 Updating copyrights in scripts. 2015-06-03 16:20:48 +01:00
Sergei Trofimov
e9b21e2ef3 Adding a generic trace-cmd parser. 2015-06-03 16:20:48 +01:00
Sergei Trofimov
5cfecf8068 show command: minor fix to parameter rendering
Make sure default of 'False' is reported for boolean values.
2015-06-03 16:20:48 +01:00
Sergei Trofimov
bf189dbe6a minor doc tweak. 2015-06-03 16:20:48 +01:00
Sergei Trofimov
ecb1a9f1f9 Adding /proc/cmdline to pulled metadata 2015-06-02 15:05:52 +01:00
setrofim
22b3fe1ac8 Merge pull request #23 from vflouris/pull-request
Pull request: minor fixes for energy model generation and a device interface for Chrome OS test image devices.
2015-06-02 13:07:57 +01:00
Vasilis Flouris
953783dc2b Adds the generic_chromeos device. 2015-06-02 13:05:08 +01:00
Sergei Trofimov
8f972322a5 Updating documentation for generic device interfaces 2015-06-02 12:58:04 +01:00
Vasilis Flouris
1fa93c04d2 fixes a few minor bugs. 2015-06-02 10:53:20 +01:00
Sergei Trofimov
ebe6202e22 push/pull commands always raise DeviceError + handle error on pulling a property file.
Previously, they raised CalledProcessError/DeviceError depending on the
implementation. Making it consistent to facilitate handling in calling
code.
2015-06-01 17:25:59 +01:00
Sergei Trofimov
b4971d76d6 pull more stuff during run initialization
added more paths to pull by default when device.get_properties is
invoked during run initialization. Also moved the LinuxDevice
implementation into BaseLinuxDevice, so that AndroidDevice tries to pull
the same files (on top of the Android-specific stuff).
2015-06-01 16:41:33 +01:00
Sergei Trofimov
ead0be2763 Fixing merge_lists to work for list_or_* types 2015-06-01 16:18:13 +01:00
Sergei Trofimov
29aa81a694 Adding a couple of unit tests for some of the more interesting types. 2015-06-01 15:46:14 +01:00
Sergei Trofimov
f59da723fb list_or: changing how list_or_* functions work and adding a generic list_or
list_or_* functions (e.g. list_or_string) will now always return a list,
however will accept lists or individual values. Also added a list_or()
generator function, similar to what already exists for list_of().
2015-06-01 15:31:25 +01:00
Sergei Trofimov
a9ab67990a sysbench: updating to work on unrooted Android devices. 2015-06-01 12:21:51 +01:00
Sergei Trofimov
adefbb7b2c android devices: updating executable install/uninstall to work on unrooted devices 2015-06-01 12:19:54 +01:00
Sergei Trofimov
c550657912 Updating contribution guidelines.
Clarifying style guidelines and referring to the "Writing Extensions" section.
2015-06-01 10:36:21 +01:00
Sergei Trofimov
578dfb3d99 telemetry: fixing "test" parameter description. 2015-06-01 10:08:26 +01:00
Sergei Trofimov
f49287cf09 Fixes for Emacs
- Do not try to use a pager if it is explicitly disabled with PAGER='' in
  the environment.
- If terminal size is identified as (0, 0), fall back to default (80,
  25).
2015-06-01 10:05:23 +01:00
Sergei Trofimov
777003ed51 Adding instrument_is_enabled function
As instrumentation can be enabled/disabled for a specific workload
execution, it is sometimes not enough to verify that an instrument has
been installed for the run; one might need to check whether it is
currently enabled.
2015-05-28 10:13:50 +01:00
Sergei Trofimov
a254a44f0e adding abi property to LinuxDevice 2015-05-28 09:08:46 +01:00
Sergei Trofimov
506ed57ca6 fix: telemetry: ignore all return codes
Telemetry seems to return random values as return code, so completely
ignore them and don't treat any values as errors.
2015-05-27 17:25:17 +01:00
Sergei Trofimov
c31d4ec8a3 ssh: fixing timeout behavior
Since a command would still be running on timeout, it would prevent
issuing subsequent commands in the same SSH shell, making it look like
the device has become unresponsive.

If a timeout condition is hit, send ^C to kill the current foreground
process and make the shell available for subsequent commands.
2015-05-27 09:13:03 +01:00
Sergei Trofimov
9e822c4b18 dvfs: fix active_cpus --> online_cpus rename 2015-05-26 12:30:15 +01:00
Sergei Trofimov
260616711f removing unused variable. 2015-05-26 09:29:59 +01:00
Sergei Trofimov
b9a8f6155c telemetry: remove obsolete metrics. 2015-05-15 14:09:53 +01:00
Sergei Trofimov
a450957b9a Fixing locally defined instruments erroneously propagating into global instrumentation 2015-05-15 10:01:26 +01:00
Sergei Trofimov
512bacc1be Adding classifiers to metrics and updating csv and telemetry to take advantage of them
- Adding "classifiers" field to Metric objects. This is a dict mapping
  classifier names (arbitrary strings) to corresponding values for that
  specific metrics. This is to allow extensions to add
  extension-specific annotations to metric that could be handled in a
  generic way (e.g. by result processors).
- Updating telemetry workload to add classifiers for the url and internal
  iteration (or "time") for a particular result.
- Updating csv result processor with the option to use classifiers to
  add columns to results.csv (either using all classifiers found, or
  only for the specific ones listed).
2015-05-14 15:15:32 +01:00
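For illustration, a workload's result update might attach classifiers like
this (a sketch; the exact add_metric signature is an assumption):

    def report_fps(result, url, iteration, value):
        # 'classifiers' maps arbitrary names to values for this specific
        # metric, e.g. the page and internal iteration it came from.
        result.add_metric('FPS', value, units='fps',
                          classifiers={'url': url, 'time': iteration})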
Sergei Trofimov
782d4501cd revent: fix resource resolution when dependency location does not exist. 2015-05-14 14:34:41 +01:00
Sergei Trofimov
26dfe97ffd ssh: fix: do not attempt to check mode of keyfile if it has not been specified. 2015-05-14 11:27:50 +01:00
Sergei Trofimov
48748797b7 setup.py: fix for OSX
On Unix, pip will change the current working directory to wherever it has
extracted the downloaded package. On Mac OSX, it does not appear to do
that. To get around this difference, specify paths in setup.py relative
to the location of setup.py, rather than the current working directory.
2015-05-13 16:59:11 +01:00
Sergei Trofimov
ee63cbde62 telemetry: automatically download if necessary
If run_benchmark_path isn't specified, Telemetry zip archive will be
downloaded and extracted into dependencies.
2015-05-13 14:33:13 +01:00
Sergei Trofimov
e7b58c72ac telemetry: added fps extraction
Also fixed text that mistakenly referred to "run_benchmarks" (plural) as
the name of the script.
2015-05-13 14:05:15 +01:00
Sergei Trofimov
9b16e3f282 linux device: removing (keyfile or password) validation check
It may be possible to connect to a device without either, so it
should not be invalid to not specify either in the device config.
2015-05-13 08:46:10 +01:00
Sergei Trofimov
4e9601d905 config_example: disabled summary_csv and added status result processor
Made status result processor enabled by default on new WA installations
and disabled summary_csv by default.
2015-05-12 16:18:08 +01:00
Sergei Trofimov
48cced2ff9 antutu: robustness fixes for v5 multi-times executions
- added slight delays to ensure more precise navigation
- handle dialogs that occasionally pop up when re-running tests
2015-05-12 14:29:24 +01:00
Sergei Trofimov
57d3b80e85 power_loadtest: kill any already-running autotest instances during setup 2015-05-12 14:27:04 +01:00
Sergei Trofimov
cbd2c6727f Adding ChromeOS power_LoadTest 2015-05-12 11:34:30 +01:00
Sergei Trofimov
5daa9014a8 Adding "agruments" type
This type represents arguments that are passed on a command line to an
application. It has both string and list representations.
2015-05-12 11:34:30 +01:00
Sergei Trofimov
b2981a57bc ssh: ensure keyfile has the right permissions
The key file must only be readable by the owner. If the specified key
file has different access permissions, create a temporary copy with the
right permissions and use that.
2015-05-12 11:34:30 +01:00
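The permission check and temporary copy can be sketched as follows (an
illustration of the behaviour described above, not the actual code):

    import os
    import shutil
    import stat
    import tempfile

    def owner_only_keyfile(keyfile):
        # If the key is readable by group/others, use a 0600 copy instead.
        mode = stat.S_IMODE(os.stat(keyfile).st_mode)
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            copy = os.path.join(tempfile.mkdtemp(), os.path.basename(keyfile))
            shutil.copy(keyfile, copy)
            os.chmod(copy, stat.S_IRUSR | stat.S_IWUSR)  # 0600
            return copy
        return keyfile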
Sergei Trofimov
98b259be33 energy_model: fix np.vectorize on ImportError
np.vectorize was being unconditionally invoked at top level. On an
ImportError, np was set to None, so this was resulting in an
AttributeError when loading the module if one of the dependent libraries
was not present on the host system. This moves the invocation into the
try block with the imports to avoid an error when the energy_model
module is loaded but the extension is not used.
2015-05-12 11:34:30 +01:00
Sergei Trofimov
b30d702f22 minor fixes to cpufreq APIs
- adding missing cpu id conversion to set_cpu_min_freq
- adding "exact" parameter to set_cpu_frequency; only produce an error
  when the specified frequency is not supported by the cpu if "axact"
  is set; (otherwise let cpufreq decide what the actual frequency will
  be).
2015-05-12 11:34:30 +01:00
Sergei Trofimov
40bd32280a minor fixes to cpufreq APIs
- adding missing cpu id conversion to set_cpu_min_freq
- adding "exact" parameter to set_cpu_frequency; only produce an error
  when the specified frequency is not supported by the cpu if "axact"
  is set; (otherwise let cpufreq decide what the actual frequency will
  be).
2015-05-11 12:12:41 +01:00
Sergei Trofimov
fbde403f6f pep8 2015-05-11 12:12:41 +01:00
Sergei Trofimov
4b210e7aab energy_model: adjusting sysbench setup based on feedback from the power team. 2015-05-11 12:12:41 +01:00
Vasilis Flouris
2929106049 Energy model: fixing sysbench taskset failure
Make sure that when migrating sshd processes to the root cgroup, their
children are also migrated, including the bash for the WA session, so
that subsequent processes kicked off from that shell can be taskset to
any cluster.
2015-05-11 12:12:41 +01:00
Sergei Trofimov
485ba419b3 energy_model: set matplotlib backend to AGG
Matplotlib defaults to the GTK backend. This can cause problems when
running in a headless session (e.g. over SSH). Since the energy_model
instrument generates PNG plots, rather than rendering directly to UI, it
doesn't actually need GTK; set backend to AGG so that energy_model works
in headless environments.
2015-05-11 12:12:41 +01:00
Sergei Trofimov
0e751bdd73 Handling duplicate prompt in pxssh in a slightly different way. 2015-05-11 12:12:41 +01:00
Sergei Trofimov
556bc84023 energy_model: fix for when running as a sudo user (rather than root) 2015-05-11 12:12:41 +01:00
Sergei Trofimov
715438e486 Adding missing documentation for module. 2015-05-11 12:12:41 +01:00
Sergei Trofimov
68ae8b9277 energy_model: added energy_metric
It is now possible to specify energy_metric instead of power_metric for
cases where continuous power is not available but energy can be
measured.
2015-05-11 12:12:41 +01:00
Sergei Trofimov
c5b884cf81 caseless_string: added format() method 2015-05-11 12:12:41 +01:00
Sergei Trofimov
a4bff161aa idle: updated to work on Linux devices. 2015-05-11 12:12:41 +01:00
Sergei Trofimov
950f0851bf Fix: implementing get_pid_of() and ps() for linux devices.
Existing implementation relied on android version of ps.
2015-05-11 12:12:41 +01:00
Sergei Trofimov
49d7072440 Fix: cpuidle check directory name when enumerating idle states
Do not assume that all directories under cpuidle/ represent states;
check that the directory name starts with "state" before trying to parse
it.
2015-05-11 12:12:41 +01:00
Sergei Trofimov
9d5e0cdc00 Fixing pylint false positives in energy_model
The new version of pylint is throwing up a couple of new
false positives that the earlier versions did not seem to flag.
2015-05-11 12:12:41 +01:00
Sergei Trofimov
841bd784d9 Fix: correctly set measured/measuring cores. 2015-05-11 12:12:41 +01:00
Sergei Trofimov
47ce9db383 Updating energy model instrument to be able to accumulate multiple power rails for each cluster. 2015-05-11 12:12:40 +01:00
Sergei Trofimov
c52d562411 Adding energy_model instrument.
This instrument can be used to generate an energy model for a device
based on collected power and performance measurements. The instrument
produces a C file with an energy model and an accompanying HTML report.

This instrument is very different from other instrumentation, as it
actually generates the run queue on the fly based on the operating
frequencies and idle states it discovers on the device. The agenda needs
only to contain the single "base" spec that defines the workload to be
used for performance measurement.
2015-05-11 12:12:40 +01:00
Sergei Trofimov
6b041e6822 Added cgroups device module.
The cgroups module allows query and manipulation of cgroups controllers
on a Linux device. Currently, only the cpusets controller is implemented.
2015-05-11 12:12:40 +01:00
Sergei Trofimov
c82dd87830 Adding cpuidle modules and refactoring Device cpufreq APIs.
cpuidle module implements cpuidle state discovery, query and
manipulation for a Linux device. This replaces the more primitive
get_cpuidle_states method of LinuxDevice.

Renamed APIs (and added a couple of new ones) to be more consistent:

"core" APIs take a core name as the parameter (e.g. "a15") or whatever
is listed in core_names for that device.
"cluster" APIs take a numeric cluster ID (eg. 0) as the parameter. These
get mapped using core_clusters for that device.
"cpu" APIs take a cpufreq cpu ID as a parameter. These could be
integers, e.g. 0, or full string id, e.g. "cpu0".
2015-05-11 12:12:40 +01:00
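To illustrate the three conventions (set_cpu_frequency appears elsewhere in
this log; the other two method names and the frequency value are plausible
examples, not verified API):

    def pin_frequencies(device):
        # "core" APIs take a core name listed in core_names:
        device.set_core_frequency('a15', 1100000)   # hypothetical name
        # "cluster" APIs take a numeric cluster ID (via core_clusters):
        device.set_cluster_frequency(0, 1100000)    # hypothetical name
        # "cpu" APIs take a cpufreq cpu ID, an int or a full string id:
        device.set_cpu_frequency('cpu0', 1100000)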
Sergei Trofimov
b002505ac2 Updated Parameter to automatically convert int and bool kinds to integer and boolean respectively.
integer and boolean are defined in wlauto.utils.types; they perform more
intuitive conversions from other types, particularly strings, so are
more suitable than int and bool for parameters. If, for whatever reason,
native types are in fact desired for a Parameter, this behavior can be
suppressed by specifying convert_types=False when defining the parameter.
2015-05-11 12:12:40 +01:00
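A quick illustration of why these kinds are friendlier than the builtins
(values are examples; behaviour as described above):

    from wlauto.utils.types import boolean, integer

    integer('42')     # -> 42
    boolean('false')  # -> False, whereas the builtin bool('false') is True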
Sergei Trofimov
09aa9e6792 Adding some conversion functions to misc utils.
- list_to_range and range_to_list convert between lists of integers
  and corresponding range strings, e.g. between [0,1,2,4] and '0-2,4'
- list_to_mask and mask_to_list convert between lists of integers and
  corresponding integer masks, e.g. between [0,1,2,4] and 0x17

Conflicts:
	wlauto/utils/misc.py
2015-05-11 12:12:40 +01:00
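The mask conversions can be sketched as follows; e.g. [0, 1, 2, 4] sets bits
0, 1, 2 and 4, giving 0b10111 == 0x17 (an illustration, not the actual
implementation):

    def list_to_mask(values):
        mask = 0
        for i in values:
            mask |= 1 << i      # set bit i for each listed integer
        return mask             # [0, 1, 2, 4] -> 0x17

    def mask_to_list(mask):
        # Collect the indices of the set bits: 0x17 -> [0, 1, 2, 4]
        return [i for i in range(mask.bit_length()) if mask & (1 << i)]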
Sergei Trofimov
108928c6a5 Added copy() method to WorkloadRunSpec. 2015-05-11 12:12:40 +01:00
Sergei Trofimov
3112eb0a9b Adding new types and updating device parameters to use them.
- added caseless_string type. This behaves exactly like a string, except
  this ignores case in comparisons. It does, however, preserve case. E.g.

	>>> s = caseless_string('Test')
	>>> s == 'test'
	True
	>>> print s
	Test

- added list_of type generating function. This allows dynamic generation
  of type-safe list types based on an existing type. E.g.

	>>> list_of_bool = list_of(bool)
	>>> list_of_bool(['foo', 0, 1, '', True])
	[True, False, True, False, True]

- Update core_names Device Parameter to be of type caseless_string
2015-05-11 12:12:40 +01:00
Sergei Trofimov
070b2230c9 updated ExecutionContext to keep a reference to the runner.
This will enable Extensions to do things like modify the job queue.
2015-05-11 12:12:40 +01:00
Sergei Trofimov
958a8a09da daq instrument updated to report energy. 2015-05-11 12:12:40 +01:00
Sergei Trofimov
62593987f4 sysbench workload updates.
- added the ability to run based on time, rather than number of
  requests.
- added a parameter to taskset the workload to specific core(s).

Conflicts:
	wlauto/workloads/sysbench/__init__.py
2015-05-11 12:12:40 +01:00
Sergei Trofimov
e422ccc509 show command: remove duplicate output. 2015-05-07 12:18:59 +01:00
Sergei Trofimov
6ccca6d4c0 show command: handle bad PAGER
If show command finds a PAGER defined in the user's environment but is
unable to use it, fall back to dumping output directly to STDOUT.
2015-05-07 12:02:03 +01:00
Sergei Trofimov
57972a56af geekbench: upping run timeout.
The run timeout was being hit on very slow systems.
2015-05-07 11:55:37 +01:00
Sergei Trofimov
67ad4a63e4 antutu: multi-times playback fix [part 2]
A "please rate me" dialog occasionally pops but when returning to the
initial screen (when re-running the test). The check to dismiss it
wasn't being done at the right time, so it was still preventing
mutli-times execution. This commit resolves that issue.
2015-05-07 10:48:26 +01:00
Sergei Trofimov
7a86a1b17f hwmon: move sensor discovery into initialize()
It only needs to be done once per run.
2015-05-07 09:39:33 +01:00
Vasilis Flouris
f504fc8791 A fix for spec2000 to align with device API changes 2015-05-06 17:02:48 +01:00
Sergei Trofimov
09d0736d3b antutu: fixing multi-times playback for v5
"times" parameter didn't work properly for version 5 because an extra
back button press was required due to UI changes from previous versions.
This commit adds the button press.
2015-05-06 12:56:58 +01:00
Sergei Trofimov
2f1c7300d4 doc: documenting remote_assets_path
Adding documentation explaining how to use remote_assets_path setting.
2015-05-05 14:52:39 +01:00
Sergei Trofimov
6824f045fd telemetry: adding support for Android devices
Also, adding missing copyright header.
2015-05-05 12:24:53 +01:00
Sergei Trofimov
e57c5fccb3 sysfile_extractor: ignore exit code when removing directory at the end of the run.
On some systems the temporary directory may still be "busy" after WA is
done with it. Since it's just an empty dir and it will be reused on
subsequent runs, don't check whether rm -rf succeeded.
2015-05-05 09:32:40 +01:00
Sergei Trofimov
a6ef53291b ssh: making execute() thread safe. 2015-05-05 09:09:57 +01:00
Sergei Trofimov
1993007d49 telemetry: relaxing the initial validation check
It seems a valid install may return values other than 0xff00. Relaxing the
check to consider anything above 0xff to be valid.
2015-05-05 08:17:43 +01:00
Sergei Trofimov
8b606dd5f9 config: fixing an issue introduced by previous config fix...
When the merging logic was updated to preserve duplicates within the
same list, it inadvertently broke the logic that removed items marked
for removal with a '~'. This commit rectifies that.

Note to self: merging functions are doing *way* too much; they should be
refactored into several individual functions, and config should be
resolved in distinct stages.
2015-04-30 13:34:21 +01:00
Sergei Trofimov
799558d201 Fix: lists with duplicate entries in parameter values
If list parameter values contained duplicates, those got removed when
merging parameter values from different sources. This commit fixes that
behavior, so that duplicates that appear within the *same* list are
preserved.
2015-04-30 08:46:24 +01:00
Sergei Trofimov
bb5d5cba8e daq: make EnumEntry picklable to support multiprocessing invocation 2015-04-29 12:32:15 +01:00
Vasilis Flouris
82fed172d5 fix for leaked file descriptors in daq 2015-04-29 11:41:19 +01:00
Vasilis Flouris
cd6babeab1 adding device_entry parameter to energy probe instrument 2015-04-29 10:42:44 +01:00
Sergei Trofimov
e87e4c582c Fix: properly handle carriage return stripping in ssh. 2015-04-28 12:46:21 +01:00
Sergei Trofimov
97efd11b86 Fix: properly handle core_names and core_clusters in the agenda
Keep duplicates in lists when merging the device_config dict from the
agenda with the rest of the config; this will ensure that core_names and
core_clusters aren't reduced to just unique elements.
2015-04-28 12:44:08 +01:00
Sergei Trofimov
bb6421b339 Fixing file_exists on linux devices. 2015-04-28 10:29:47 +01:00
Sergei Trofimov
50c8c4da34 pep8: minor fixes in hackbench and ebizzy 2015-04-28 08:30:43 +01:00
setrofim
dcea921907 Merge pull request #17 from lisatn/ebizzy-workload
Add ebizzy workload
2015-04-28 08:27:08 +01:00
setrofim
0beb3fc3a2 Merge pull request #18 from lisatn/hackbench_workload
Hackbench workload
2015-04-28 08:26:58 +01:00
Lisa Nguyen
1633d91331 hackbench: Clean regex code and add run_timeout parameter
Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-04-27 14:42:25 -07:00
Lisa Nguyen
d126d56e98 Add ebizzy workload
Add ebizzy to allow users to run a workload resembling common
web server application workloads.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-04-27 12:53:17 -07:00
Sergei Trofimov
06e5698d89 Added link to pre-built documentation. 2015-04-27 18:26:55 +01:00
Sergei Trofimov
5ceb093f3c Improving agenda validation
- raise an error if an agenda contains duplicate keys (by default PyYAML
  will silently ignore this)
- raise an error if the config section in an agenda is not dict-like
  (before, this was allowed to propagate and resulted in a traceback
  further down the line).
2015-04-27 16:36:30 +01:00
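Duplicate-key detection has to be hooked into the loader, since PyYAML keeps
only the last value by default. A sketch using a custom SafeLoader subclass
(an illustration, not WA's actual code):

    import yaml

    class StrictLoader(yaml.SafeLoader):
        pass

    def construct_mapping(loader, node, deep=False):
        keys = [loader.construct_object(k, deep=deep) for k, _ in node.value]
        duplicates = set(k for k in keys if keys.count(k) > 1)
        if duplicates:
            raise ValueError('duplicate key(s) in agenda: {}'.format(duplicates))
        return yaml.SafeLoader.construct_mapping(loader, node, deep)

    StrictLoader.add_constructor(
        yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, construct_mapping)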
Sergei Trofimov
698240b6e0 telemetry: copy generated trace files into WA output
If trace files have been generated during a telemetry run (e.g.
--profiler=trace was enabled), copy them into wa_output and extract
them.
2015-04-27 15:50:59 +01:00
Sergei Trofimov
101d6a37ce Fix: correctly handle non-identifier alias names
Update ExtensionLoader to index aliases by their identifier'd name.
2015-04-27 11:57:34 +01:00
Lisa Nguyen
4cff2a52c0 Add hackbench workload
Add hackbench to run tests on the Linux scheduler.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-04-26 13:12:08 -07:00
Sergei Trofimov
f906cbd851 Send INITIAL_BOOT signal in hard reset path during initial boot 2015-04-24 17:51:55 +01:00
Sergei Trofimov
0c0be69e42 pylint: temporarily disable checkers broken in latest version
A couple of checkers appear to be broken in the latest version (they
report false positives). Disabling them until fixed.
2015-04-24 17:45:27 +01:00
Sergei Trofimov
97a397d5c8 ipython utils: handle ancient versions of IPython
Very old versions of IPython do not have the IPython.version_info
attribute that the utils module relied on. This commit changes it to use
the more standard __version__ attribute that is present in all versions.
2015-04-24 10:56:23 +01:00
Sergei Trofimov
54e89a2118 Fix to initial ~/.workload_automation deployment when sudoing
Only attempt to chown ~/.workload_automation to SUDO_USER if the
effective user is root.
2015-04-24 10:05:13 +01:00
Sergei Trofimov
704324ca77 Adding missing descriptions for modules. 2015-04-22 17:02:27 +01:00
Sergei Trofimov
aef7f52f96 telemetry: ignore errors in individual subtests.
check_output will ignore error code 1 returned by telemetry execution,
as this happens when individual sub-tests fail; partial results may, and
should, still be extracted.
2015-04-21 15:12:48 +01:00
Sergei Trofimov
5035fe6f44 Adding ignore parameter to check_output
Adding a parameter to wlauto.utils.misc.check_output to specify that it
should ignore certain error codes when they are returned by the
subprocess and not raise them as errors.
2015-04-21 15:01:15 +01:00
Sergei Trofimov
399c9f82c3 telemetry: handle scalar values correctly.
The result regex in telemetry workload has been updated to capture lines
reporting single-value results.
2015-04-21 13:19:21 +01:00
Sergei Trofimov
ff0d08cc8e Allow using identifier version of non-identifier-named instruments in configuration.
E.g. referring to "trace-cmd" as "trace_cmd" in the instrumentation list.
2015-04-20 09:18:49 +01:00
setrofim
7655007f8a Merge pull request #13 from JaviMerino/ipynb_exporter_with_ipython_3
Make the ipynb_exporter result processor work with ipython version 3
2015-04-20 09:11:07 +01:00
setrofim
1fc811e437 Merge pull request #12 from JaviMerino/fix_2276ae0
Fix trace-cmd after 2276ae0c5b
2015-04-20 09:07:45 +01:00
Javi Merino
8d3f9362fb Fix trace-cmd after 2276ae0c5b
Commit 2276ae0c5b ("Fixing config processing for extensions with
non-identifier names.") broke customizing the trace-cmd instrumentation
from the agenda.  With an agenda like:

config:
  instrumentation: [trace-cmd, delay]
  trace_events: ['thermal*']
  trace_buffer_size: 28000

trace_events and trace_buffer_size get added to the RunConfiguration's
_raw_config under the trace-cmd name, but then when it's looked up in
_finalize_config_list(), the dictionary is actually looked up using
identifier(extname), i.e. 'trace_cmd'.  Fix this by adding the user's
configuration using identifier(name) as well.
2015-04-17 20:10:25 +01:00
Javi Merino
e30386ce4a Add ipython version 3 support for the generic ipython support 2015-04-17 19:00:55 +01:00
Javi Merino
d12f5c65e1 Factor out the parsing of a valid cell run in the generic ipython implementation
run_cell() becomes more complicated when we add ipython version 3
support which upsets pylint because there are "too many
branches (15/12)".  Factor out part of the function to make pylint
happy.
2015-04-17 18:56:57 +01:00
Javi Merino
2b04cb38d9 Don't break prematurely when running a cell on an ipython kernel
The kernel may go idle before it processes the next input, which breaks
the while=True loop in run_cell() early.  Wait for an acknowledgement
of the input we've sent to the kernel before considering an idle
message to mean that the cell has been parsed.
2015-04-17 18:56:57 +01:00
setrofim
0faa5ae455 Merge pull request #11 from JaviMerino/ipynb_exp_improvs
Generic Ipynb_exporter improvements to ease ipython 3 support
2015-04-17 18:23:04 +01:00
Javi Merino
3bf04735c1 Factor out ipython nbconvert in ipynb_exporter results processor
ipython nbconvert CLI changes between ipython 2 and 3, so keep the
string separate so that we can update when we add ipython version 3 support.
2015-04-17 17:52:07 +01:00
Javi Merino
f54bb0981f Don't use shell_channel in ipynb_exporter result processor
It's not needed for IPython 2 and it breaks in IPython 3
2015-04-17 17:52:07 +01:00
Javi Merino
f41be396fa Use notebook format 3 for ipynb_exporter
The file assumes format 3 throughout the code, so explicitly import that
format instead of the generic "current" format.
2015-04-17 17:52:07 +01:00
Javi Merino
d31a5c3c48 Factor out the ipython implementation in ipynb_exporter
The internal ABI for ipython has changed between ipython version 2 and
3.  In its current state, the result processor only works with IPython
version 2, so fail if the user wants to use the result processor with
the wrong version.

Abstract the ipython interface to a file so that we can make it support
versions 2 and 3 at the same time.
2015-04-17 17:52:07 +01:00
Sergei Trofimov
314fecfcd4 Fixing get_meansd for numbers < 1 2015-04-17 17:27:47 +01:00
Sergei Trofimov
497b5febc3 Fixing buffer_size_file in trace-cmd for non-Android devices
The default value for buffer_size_file contained a path under "/d/",
which is an Android-specific alias for "/sys/kernel/debug". This commit
updates the default value to use the system-agnostic path.
2015-04-17 16:06:12 +01:00
Sergei Trofimov
2276ae0c5b Fixing config processing for extensions with non-identifier names.
Internally, WA expects extension names to be valid Python identifiers.
When this is not the case, conversion takes place when loading
configuration for the extension (e.g. "trace-cmd" gets converted to
"trace_cmd").

The conversion is intended to be transparent to the user, so
configuration stores values as they are provided by the user, however it
needs to perform the conversion internally, e.g. when querying
ExtensionLoader. This conversion was missing when performing a lookup of
one of the internal structures, which was causing earlier-collected
settings to not be propagated into the final config.
2015-04-17 15:56:28 +01:00
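The conversion itself is straightforward; something like (a sketch, not WA's
actual implementation):

    import re

    def identifier(text):
        # Replace any non-identifier characters with underscores:
        # identifier('trace-cmd') -> 'trace_cmd'
        return re.sub(r'\W+', '_', text)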
Sergei Trofimov
53f2eafd16 Fixing key-based authentication for SSH. 2015-04-17 11:59:46 +01:00
setrofim
a26a50941b Merge pull request #10 from JaviMerino/notify2
Use notify2 as it's available in pypi
2015-04-16 13:13:53 +01:00
Sergei Trofimov
075712d940 Fixing format string in ipynb_exporter 2015-04-16 13:06:00 +01:00
Sergei Trofimov
c0bb30e12d Fixing overall score generation in telemetry
Updating overall score generation function to handle zero values.
2015-04-16 13:00:59 +01:00
Javi Merino
99c129cb03 Add notify2 to the extra dependencies 2015-04-16 11:43:49 +01:00
Javi Merino
dd199a6893 Move the notify results processor to notify2
pynotify is not in pypi.
2015-04-16 11:43:49 +01:00
setrofim
d627c778e5 Merge pull request #9 from JaviMerino/add_notify
Add a notify result processor
2015-04-16 08:43:47 +01:00
Sergei Trofimov
ee626fd36e Adding telemetry workload.
This workload executes tests from Google's Telemetry browser test
framework. Currently, only ChromeOS devices are supported.
2015-04-15 18:40:30 +01:00
Sergei Trofimov
315fecdf71 Updating XE503C12 device adapter.
- Setting platfrom to "chromeos"
- Changing the default binaries_directory (/usr/bin is not writable).
2015-04-15 18:40:30 +01:00
Sergei Trofimov
b198e131ee SysfsExtractor: added use_tmpfs parameter
Updating SysfsExtractor with a parameter to explicitly enable/disable
tmpfs caching. Previously, this was determined entirely by whether the
device is rooted.
2015-04-15 18:40:30 +01:00
Sergei Trofimov
3e6b927cde Updating screen on instrument to support periodic polling. 2015-04-15 17:54:07 +01:00
Sergei Trofimov
fcc970db3f Adding screenon instrument 2015-04-15 16:34:24 +01:00
Javi Merino
a96088de9c Add a notify result processor
This result processor displays a desktop notification when the run
finishes.  It's useful when you are running a long agenda in WA and want
to be notified when the results are available.
2015-04-15 15:14:23 +01:00
setrofim
4195ca591c Merge pull request #8 from JaviMerino/fix_pep8
Fix pep8 errors in ipynb_exporter
2015-04-15 14:50:26 +01:00
Javi Merino
86f543cb72 Fix pep8 errors in ipynb_exporter
I ran first pep8 and then pylint, so I missed the pep8 errors introduced
by me shutting up pylint 😇
2015-04-15 14:34:23 +01:00
setrofim
9aae0e7886 Merge pull request #7 from JaviMerino/add_ipynb_export
Add an ipython notebook exporter results_processor
2015-04-15 14:23:52 +01:00
Javi Merino
8e09795c95 Add an ipython notebook exporter results_processor 2015-04-15 14:21:54 +01:00
Sergei Trofimov
5bf9f05c4b Updating result objects to track their output directories.
Also, context.result will now resolve to context.run_result when
not executing a job.
2015-04-14 14:38:59 +01:00
Sergei Trofimov
8e340e456d Added open_file to misc utils.
A function to open a file using an associated launcher in an OS-agnostic
way.
2015-04-14 14:18:10 +01:00
Sergei Trofimov
61b834e52c Removing unnecessary comment from sqlite result processor documentation. 2015-04-14 12:02:24 +01:00
setrofim
11b8001308 Merge pull request #6 from JaviMerino/fix_param_documentation
Fix Param documentation
2015-04-14 06:03:31 +01:00
Javi Merino
1807f3e3d7 Fix Param documentation
The param documentation states that for a boolean, "kind" should be
"as_bool" from wlauto.utils.misc, but there is no "as_bool".  Currently,
workload automation automatically converts native python types like bool
and int to workload automation specific ones.  Remove this bit from the
documentation as it's not true any more.

Change-Id: I0100708530bcf67556eda386c39bc444a3e0f2b2
2015-04-13 22:42:25 +01:00
Sergei Trofimov
e03686f961 Improve error reporting when loading extensions
The error message will now contain the offending extension (either
package name or full path to extension file) if a Python error occurs
when ExtensionLoader attempts to load it.
2015-04-13 08:25:56 +01:00
Sergei Trofimov
c5d3c2bd62 Fixing reboot on Linux devices [part 2]
- connect() to device before issuing the initial reboot, as soft reset
  requires a device connection.
- boot() has been implemented to wait properly for the device to reboot
  after reset.
- port now defaults to 22 rather than being left unset, as we need
  something to connect to when polling for the device after reboot.
- Only use -P option for scp when port is *not* 22; as that option
  appears to cause intermittent issues with default scp on Ubuntu 12.04
2015-04-10 17:04:22 +01:00
Sergei Trofimov
d838caadd5 Adding XE503C12 device.
Adding a device interface for Samsung XE503C12 Chromebooks.
2015-04-10 09:37:40 +01:00
Sergei Trofimov
086f03730b ssh: pep8 fix 2015-04-10 09:36:55 +01:00
Sergei Trofimov
fea32de3d3 Parameterizing password prompt for sudo in Ssh interface.
On some devices, sudo presents a different prompt when asking for a
password. Allow the prompt to be specified in device configuration to
handle such cases.
2015-04-10 09:03:48 +01:00
Lisa Nguyen
b41d316fd6 doc/source: Add details about running WA on Linux
Mention in the documentation that Android SDK is optional for
users who plan to run WA on Linux devices only, and how they
would only be able to start running a limited number of workloads.

Also included a few trivial fixes such as spelling errors and
moving sentences around to improve flow.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-04-10 08:22:01 +01:00
Lisa Nguyen
e37bf90da7 sysbench: Fix spelling, descriptions, and error message
Make the WorkloadError() message clearer if the sysbench
binary cannot be found on the device and host.

Also make trivial fixes to improve descriptions and fix
spelling errors.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-04-10 08:22:01 +01:00
Sergei Trofimov
544f5f0b92 Fixing terminalsize import error.
terminalsize was loaded from a location added to sys.path during
bootstrap. This appeared to be causing import issues. There is no longer
a good reason for terminalsize to be loaded that way, so just moved it
under wlauto.utils so that it can be loaded normally.
2015-04-09 08:32:02 +01:00
setrofim
73f889b45b Merge pull request #3 from freedomtan/master
typo: s/SKD/SDK/
2015-04-07 09:47:51 +01:00
Sergei Trofimov
7b5f5e2ed0 Fixing Linux device reset.
_is_ready was being set to False before invoking execute("reboot")
causing a "device not ready" error. 2015-04-07 09:09:13 +01:00
2015-04-07 09:09:13 +01:00
Koan-Sin Tan
cd4790dd4b s/SKD/SDK/ 2015-04-07 09:23:24 +08:00
Lisa Nguyen
0421b58c55 doc/source: Add uninstall and upgrade sections
Add the uninstall and upgrade commands for users to remove or
upgrade Workload Automation for future reference.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-04-01 17:19:02 +01:00
Lisa Nguyen
a7895a17bf common/linux/device.py: Create binaries_directory path if it doesn't exist
Some Linux devices may run on minimal root file systems
(e.g. buildroot) where /usr/local/bin path doesn't exist. Create
the binaries_directory if it doesn't exist instead of letting WA
quit and return errors such as:

INFO     Runner: Skipping the rest of the iterations for this spec.
ERROR    Runner: Error while running memcpy
ERROR    Runner: CalledProcessError("Command 'sshpass
                  -p '<password>' /usr/bin/scp -r
                  /usr/local/lib/python2.7/dist-packages/
                  wlauto-2.3.0-py2.7.egg/wlauto/workloads/memcpy
                  /memcpy root@192.168.x.x:/usr/local/bin/memcpy'
                  returned non-zero exit status 1")
ERROR    Runner: Got:
ERROR    Runner:
ERROR    Runner: scp: /usr/local/bin/memcpy:
                  No such file or directory

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-04-01 08:05:11 +01:00
Sergei Trofimov
925c354551 Fix hotplugging all cores on a cluster.
When attempting to set number of cores on a cluster to 0, make
sure at least one core is enabled on the other cluster beforehand.
2015-03-27 12:54:26 +00:00
Lisa Nguyen
58ab762131 doc/source: Update quickstart
Update the quickstart guide to include steps for setting
up WA to run on Linux devices.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2015-03-26 17:51:51 +00:00
Sergei Trofimov
adb5ea9a30 Fixing UEFI entry creation for Juno
- UEFI config can be specified as a device_config parameter
- The same config is used to create a missing UEFI entry, and
  to re-create the entry when flashing. UEFI config now wholly
  resides within the device and is not specified for the vexpress
  flasher.
2015-03-26 10:42:55 +00:00
Vasilis Flouris
5c48b75375 Adding dumpsys window output to __meta 2015-03-25 17:56:58 +00:00
Sergei Trofimov
678efa72e1 Fixing sysbench workload
- Updated sysbench binary to a statically linked version
- Added missing LICENSE file for the sysbench binary
- Removed Android browser launch and shutdown from workload (now runs on both
  Linux and Android)
- Updated update_result to work with the new binary
- Added missing descriptions for parameters
- Added file_test_mode parameter -- this is a mandatory argument if test is
  fileio
- Added cmd_params parameter to pass options directly to the sysbench invocation
2015-03-25 10:31:02 +00:00
Vasilis Flouris
01d4126cc8 Fix for 64 bit revent [part 2]
The struct used to read events is being padded when built for 64
bit platforms. The padding has been made explicit in the struct, and
matching padding was added when writing the events during recording.
2015-03-23 18:59:54 +00:00
Kevin Hilman
bf12d18457 common/linux/device.py: don't require sudo if already root user
On generic_linux devices, one might ssh as the root user, in which
case there is no need to use sudo.

In addition, some root filesystems may not have sudo (e.g. minimal
buildroot/busybox).

This patch attempts to detect the root user using 'id -u' and, if it
detects the root user, avoids the use of sudo for running commands as
well.

Cc: Lisa Nguyen <lisa.nguyen@linaro.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
(setrofim: modified to only test once)
2015-03-23 10:18:32 +00:00
Kevin Hilman
d4ee737e50 doc: device_setup: fix generic_linux device_config example
A copy/paste of the documentation example results in a python
backtrace because the dict keys cannot be quoted:

    wlauto.exceptions.ConfigError: Sytax error in config: keyword can't be an expression (config.py, line XYZ)

Fix by removing the quotes from the keys in the example.

Signed-off-by: Kevin Hilman <khilman@linaro.org>
2015-03-23 09:24:40 +00:00
Sergei Trofimov
ee7f22e902 Fixing 64bit revent
The number of input event files was being written as a size_t but read
as an int by revent. These types have different sizes with 64bit GCC,
causing revent to be unable to replay recorded files. This commit
updates revent to use size_t for both reading and writing.
2015-03-20 09:25:57 +00:00
Sergei Trofimov
6a77ac656f Updated fps instrument to generate detailed FPS traces as well as report average FPS. 2015-03-19 17:15:36 +00:00
Sergei Trofimov
2f7acfd87a Added "post install" section to installation docs.
This section lists workloads that require additional external
dependencies.
2015-03-18 10:51:39 +00:00
Sergei Trofimov
cce287a1e7 Fixed dev_scripts/get_apk_versions.
It was relying on a distmanagement module that no longer exists. The
required functionality from that module is now part of the script.
2015-03-18 10:50:02 +00:00
Sergei Trofimov
aa74e1e8f5 Minor fix to rst table generation.
Updated format_simple_table to take headers into account when
calculating column widths. Also updated the docs where it is currently used.
2015-03-18 10:47:07 +00:00
Sergei Trofimov
3e0c8aa83b glb_corporate: clear logcat in Monitor's __init__ to prevent getting results from previous run. 2015-03-16 09:36:32 +00:00
setrofim
6d9b49d4bb Merge pull request #1 from rockyzhang/patch-1
Update __init__.py
2015-03-16 09:24:48 +00:00
Rocky Zhang
aca08cf74d Update __init__.py
The boot monitor seems to have a buffer overrun issue while loading the latest Linaro Android images.
The behavior is that after loading the initrd, the kernel loading will fail, very likely due to a command line buffer overrun.
The new initrd in the Linaro image is 3 times larger than the previous version, which could cause the issue.
Putting a dummy enter between loading the initrd and the kernel resolves the issue.
2015-03-16 11:26:58 +08:00
Sergei Trofimov
529a1d3b95 Fixing initialization of ~/.workload_automation. 2015-03-13 10:25:31 +00:00
Sergei Trofimov
db7a525bc7 Fixing broken label in documentation. 2015-03-11 16:05:17 +00:00
231 changed files with 16843 additions and 5219 deletions

View File

@@ -46,6 +46,8 @@ documentation.
 Documentation
 =============
 
+You can view pre-built HTML documentation `here <http://pythonhosted.org/wlauto/>`_.
+
 Documentation in reStructuredText format may be found under ``doc/source``. To
 compile it into cross-linked HTML, make sure you have `Sphinx
 <http://sphinx-doc.org/install.html>`_ installed, and then ::

View File

@@ -1,13 +1,82 @@
 #!/usr/bin/env python
 import os
 import sys
+import re
 import logging
+import subprocess
 import argparse
 
 sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
 
-from wlauto.exceptions import WAError
-from wlauto.utils.misc import write_table
-from distmanagement.apk import get_aapt_path, get_apk_versions
+from wlauto.exceptions import WAError, ToolError
+from wlauto.utils.doc import format_simple_table
+
+
+def get_aapt_path():
+    """Return the full path to aapt tool."""
+    sdk_path = os.getenv('ANDROID_HOME')
+    if not sdk_path:
+        raise ToolError('Please make sure you have Android SDK installed and have ANDROID_HOME set.')
+    build_tools_directory = os.path.join(sdk_path, 'build-tools')
+    versions = os.listdir(build_tools_directory)
+    for version in reversed(sorted(versions)):
+        aapt_path = os.path.join(build_tools_directory, version, 'aapt')
+        if os.path.isfile(aapt_path):
+            logging.debug('Found aapt for version {}'.format(version))
+            return aapt_path
+    else:
+        raise ToolError('aapt not found. Please make sure at least one Android platform is installed.')
+
+
+def get_apks(path):
+    """Return a list of paths to all APK files found under the specified directory."""
+    apks = []
+    for root, dirs, files in os.walk(path):
+        for file in files:
+            _, ext = os.path.splitext(file)
+            if ext.lower() == '.apk':
+                apks.append(os.path.join(root, file))
+    return apks
+
+
+class ApkVersionInfo(object):
+
+    def __init__(self, workload=None, package=None, label=None, version_name=None, version_code=None):
+        self.workload = workload
+        self.package = package
+        self.label = label
+        self.version_name = version_name
+        self.version_code = version_code
+
+    def to_tuple(self):
+        return (self.workload, self.package, self.label, self.version_name, self.version_code)
+
+
+version_regex = re.compile(r"name='(?P<name>[^']+)' versionCode='(?P<vcode>[^']+)' versionName='(?P<vname>[^']+)'")
+
+
+def extract_version_info(apk_path, aapt):
+    command = [aapt, 'dump', 'badging', apk_path]
+    output = subprocess.check_output(command)
+    version_info = ApkVersionInfo(workload=apk_path.split(os.sep)[-2])
+    for line in output.split('\n'):
+        if line.startswith('application-label:'):
+            version_info.label = line.split(':')[1].strip().replace('\'', '')
+        elif line.startswith('package:'):
+            match = version_regex.search(line)
+            if match:
+                version_info.package = match.group('name')
+                version_info.version_code = match.group('vcode')
+                version_info.version_name = match.group('vname')
+        else:
+            pass  # not interested
+    return version_info
+
+
+def get_apk_versions(path, aapt):
+    apks = get_apks(path)
+    versions = [extract_version_info(apk, aapt) for apk in apks]
+    return versions
 
 
 if __name__ == '__main__':
@@ -18,8 +87,10 @@ if __name__ == '__main__':
         args = parser.parse_args()
 
         versions = get_apk_versions(args.path, aapt)
-        write_table([v.to_tuple() for v in versions], sys.stdout,
-                    align='<<<>>', headers=['path', 'package', 'name', 'version code', 'version name'])
+        table = format_simple_table([v.to_tuple() for v in versions],
+                                    align='<<<>>',
+                                    headers=['workload', 'package', 'name', 'version code', 'version name'])
+        print table
     except WAError, e:
         logging.error(e)
         sys.exit(1)

View File

@@ -1,8 +1,13 @@
 #!/bin/bash
 
+DEFAULT_DIRS=(
+    wlauto
+    wlauto/external/daq_server/src/daqpower
+)
+
 EXCLUDE=wlauto/external/,wlauto/tests
 EXCLUDE_COMMA=wlauto/core/bootstrap.py,wlauto/workloads/geekbench/__init__.py
-IGNORE=E501,E265,E266,W391
+IGNORE=E501,E265,E266,W391,E401,E402,E731
 
 if ! hash pep8 2>/dev/null; then
     echo "pep8 not found in PATH"
@@ -13,7 +18,9 @@ fi
 if [[ "$1" == "" ]]; then
     THIS_DIR="`dirname \"$0\"`"
     pushd $THIS_DIR/.. > /dev/null
-    pep8 --exclude=$EXCLUDE,$EXCLUDE_COMMA --ignore=$IGNORE wlauto
+    for dir in "${DEFAULT_DIRS[@]}"; do
+        pep8 --exclude=$EXCLUDE,$EXCLUDE_COMMA --ignore=$IGNORE $dir
+    done
+    pep8 --exclude=$EXCLUDE --ignore=$IGNORE,E241 $(echo "$EXCLUDE_COMMA" | sed 's/,/ /g')
     popd > /dev/null
 else

View File

@@ -1,5 +1,10 @@
 #!/bin/bash
 
+DEFAULT_DIRS=(
+    wlauto
+    wlauto/external/daq_server/src/daqpower
+)
+
 target=$1
 
 compare_versions() {
@@ -30,17 +35,19 @@ compare_versions() {
 }
 
 pylint_version=$(python -c 'from pylint.__pkginfo__ import version; print version')
-compare_versions $pylint_version "1.3.0"
+compare_versions $pylint_version "1.5.1"
 result=$?
 if [ "$result" == "2" ]; then
-    echo "ERROR: pylint version must be at least 1.3.0; found $pylint_version"
+    echo "ERROR: pylint version must be at least 1.5.1; found $pylint_version"
     exit 1
 fi
 
 THIS_DIR="`dirname \"$0\"`"
 if [[ "$target" == "" ]]; then
     pushd $THIS_DIR/.. > /dev/null
-    pylint --rcfile extras/pylintrc wlauto
+    for dir in "${DEFAULT_DIRS[@]}"; do
+        pylint --rcfile extras/pylintrc $dir
+    done
     popd > /dev/null
 else
     pylint --rcfile $THIS_DIR/../extras/pylintrc $target

View File

@@ -6,10 +6,10 @@ Modules
 
 Modules are essentially plug-ins for Extensions. They provide a way of defining
 common and reusable functionality. An Extension can load zero or more modules
-during it's creation. Loaded modules will then add their capabilities (see
+during its creation. Loaded modules will then add their capabilities (see
 Capabilities_) to those of the Extension. When calling code tries to access an
 attribute of an Extension the Extension doesn't have, it will try to find the
-attribute among it's loaded modules and will return that instead.
+attribute among its loaded modules and will return that instead.
 
 .. note:: Modules are themselves extensions, and can therefore load their own
           modules. *Do not* abuse this.

View File

@@ -1,6 +1,398 @@
=================================
What's New in Workload Automation
=================================
-------------
Version 2.4.0
-------------
Additions:
##########
Devices
~~~~~~~~
- ``gem5_linux`` and ``gem5_android``: Interfaces for the Gem5 simulation
  environment running Linux and Android respectively.
- ``XE503C1211``: Interface for Samsung XE503C12 Chromebooks.
- ``chromeos_test_image``: Chrome OS test image device. An off-the-shelf
  device will not work with this device interface.
Instruments
~~~~~~~~~~~~
- ``freq_sweep``: Allows "sweeping" workloads across multiple CPU frequencies.
- ``screenon``: Ensures the screen is on before each iteration, or
  periodically, on Android devices.
- ``energy_model``: This instrument can be used to generate an energy model
  for a device based on collected power and performance measurements.
- ``netstats``: Allows monitoring data sent/received by applications on an
  Android device.
Modules
~~~~~~~
- ``cgroups``: Allows query and manipulation of cgroups controllers on a Linux
  device. Currently, only the cpusets controller is implemented.
- ``cpuidle``: Implements cpuidle state discovery, query and manipulation for
  a Linux device. This replaces the more primitive get_cpuidle_states method
  of LinuxDevice.
- ``cpufreq`` has now been split out into a device module.
Resource Getters
~~~~~~~~~~~~~~~~
- ``http_assets``: Downloads resources from a web server.
Results Processors
~~~~~~~~~~~~~~~~~~~
- ``ipynb_exporter``: Generates an IPython notebook from a template with the
  results and runs it.
- ``notify``: Displays a desktop notification when a run finishes
  (Linux only).
- ``cpustates``: Processes power ftrace to produce CPU state and parallelism
  stats. There is also a script to invoke this outside of WA.
Workloads
~~~~~~~~~
- ``telemetry``: Executes Google's Telemetery benchmarking framework
- ``hackbench``: Hackbench runs tests on the Linux scheduler
- ``ebizzy``: This workload resembles common web server application workloads.
- ``power_loadtest``: Continuously cycles through a set of browser-based
activities and monitors battery drain on a device (part of ChromeOS autotest
suite).
- ``rt-app``: Simulates configurable real-time periodic load.
- ``linpack-cli``: Command line version of linpack benchmark.
- ``lmbench``: A suite of portable ANSI/C microbenchmarks for UNIX/POSIX.
- ``stream``: Measures memory bandwidth.
- ``iozone``: Runs a series of disk I/O performance tests.
- ``androbench``: Measures the storage performance of a device.
- ``autotest``: Executes tests from ChromeOS autotest suite.
Framework
~~~~~~~~~
- ``wlauto.utils``:
- Added ``trace_cmd``, a generic trace-cmd parser.
- Added ``UbootMenu``, allows navigating Das U-boot menu over serial.
- ``wlauto.utils.types``:
- ``caseless_string``: Behaves exactly like a string, except this ignores
case in comparisons. It does, however, preserve case.
- ``list_of``: allows dynamic generation of type-safe list types based on
an existing type.
- ``arguments``: represents arguments that are passed on a command line to
an application.
- ``list_or``: allows dynamic generation of types that accept either a base
type or a list of base type. Using this ``list_or_integer``,
``list_or_number`` and ``list_or_bool`` were also added.
- ``wlauto.core.configuration.WorkloadRunSpec``:
- ``copy``: Allows making duplicates of ``WorkloadRunSpec``'s
- ``wlauto.utils.misc``:
- ``list_to_ranges`` and ``ranges_to_list``: convert between lists of
integers and corresponding range strings, e.g. between [0,1,2,4] and
'0-2,4'
- ``list_to_mask`` and ``mask_to_list``: convert between lists of integers
and corresponding integer masks, e.g. between [0,1,2,4] and 0x17
- ``wlauto.instrumentation``:
- ``instrument_is_enabled``: Returns whether or not an instrument is
enabled for the current job.
- ``wlauto.core.result``:
- Added "classifiers" field to Metric objects. This is a dict mapping
classifier names (arbitrary strings) to corresponding values for that
specific metric. This is to allow extensions to add extension-specific
annotations to metrics that could be handled in a generic way (e.g. by
result processors). They can also be set in agendas.
- Failed jobs will now be automatically retried.
- Implemented dynamic device modules that may be loaded automatically on
device initialization if the device supports them.
- Added support for YAML configs.
- Added ``initialize`` and ``finalize`` methods to workloads.
- ``wlauto.core.ExecutionContext``:
- Added ``job_status`` property that returns the status of the currently
running job.
Fixes/Improvements
##################
Devices
~~~~~~~~
- ``tc2``: Workaround for buffer overrun when loading large initrd blob.
- ``juno``:
- UEFI config can now be specified as a parameter.
- Added support for U-Boot booting.
- No longer auto-disconnects ADB at the end of a run.
- Added ``actually_disconnect`` to restore old disconnect behaviour
- Now passes ``video`` command line to Juno kernel to work around a known
issue where HDMI loses sync with monitors.
- Fixed flashing.
Instruments
~~~~~~~~~~~
- ``trace_cmd``:
- Fixed ``buffer_size_file`` for non-Android devices
- Reduce starting priority.
- Now handles trace headers and thread names with spaces
- ``energy_probe``: Added ``device_entry`` parameter.
- ``hwmon``:
- Sensor discovery is now done only at the start of a run.
- Now prints both before/after and mean temperatures.
- ``daq``:
- Now reports energy
- Fixed file descriptor leak
- ``daq_power.csv`` now matches the order of labels (if specified).
- Added ``gpio_sync``. When enabled, this will cause the instrument to
insert a marker into ftrace, while at the same time setting a GPIO pin
high.
- Added ``negative_values`` parameter, which can be used to specify how
negative values in the samples should be handled.
- Added ``merge_channels`` parameter. When set, DAQ channels will be summed
together.
- Workload labels, rather than names, are now used in the "workload"
column.
- ``cpufreq``:
- Fixes missing directories problem.
- Refined the availability check not to rely on the top-level cpu/cpufreq
directory
- Now handles non-integer output in ``get_available_frequencies``.
- ``sysfs_extractor``:
- No longer raises an error when both device and host paths are empty.
- Fixed pulled files verification.
- ``perf``:
- Updated binaries.
- Added option to force install.
- ``killall`` is now run as root on rooted Android devices.
- ``fps``:
- Now generates detailed FPS traces as well as reporting average FPS.
- Updated jank calculation to only count "large" janks.
- Now filters out bogus ``actual-present`` times and ignores janks above
``PAUSE_LATENCY``
- ``delay``:
- Added ``fixed_before_start`` parameter.
- Changed existing ``*_between_specs`` and ``*_between_iterations``
callbacks to be ``very_slow``
- ``streamline``:
- Added Linux support
- ``gatord`` is now only started once at the start of the run.
Modules
~~~~~~~
- ``flashing``:
- Fixed vexpress flashing
- Added an option to keep UEFI entry
Result Processors
~~~~~~~~~~~~~~~~~
- ``cpustate``:
- Now generates a timeline csv as well as stats.
- Adding ID to overall cpustate reports.
- ``csv``: (partial) ``results.csv`` will now be written after each iteration
rather than at the end of the run.
Workloads
~~~~~~~~~
- ``glb_corporate``: clears logcat to prevent getting results from previous
run.
- ``sysbench``:
- Updated sysbench binary to a statically linked version
- Added ``file_test_mode`` parameter - this is a mandatory argument if
``test`` is ``"fileio"``.
- Added ``cmd_params`` parameter to pass options directly to sysbench
invocation.
- Removed Android browser launch and shutdown from workload (now runs on
both Linux and Android).
- Now works with unrooted devices.
- Added the ability to run based on time.
- Added a parameter to taskset the benchmark to specific core(s).
- Added ``threads`` parameter to be consistent with dhrystone.
- Fixed case where default ``timeout`` < ``max_time``.
- ``Dhrystone``:
- added ``taskset_mask`` parameter to allow pinning to specific cores.
- Now kills any running instances during setup (also handles CTRL-C).
- ``sysfs_extractor``: Added parameter to explicitly enable/disable tmpfs
caching.
- ``antutu``:
- Fixed multi-``times`` playback for v5.
- Updated result parsing to handle Android M logcat output.
- ``geekbench``: Increased timeout to cater for slower devices.
- ``idle``: Now works on Linux devices.
- ``manhattan``: Added ``run_timeout`` parameter.
- ``bbench``: Now works when binaries_directory is not in path.
- ``nenamark``: Made duration configurable.
Framework
~~~~~~~~~~
- ``BaseLinuxDevice``:
- Now checks that at least one core is enabled on another cluster before
attempting to set number of cores on a cluster to ``0``.
- No longer uses ``sudo`` if already logged in as ``root``.
- Now saves ``dumpsys window`` output to the ``__meta`` directory.
- Now takes ``password_prompt`` as a parameter for devices with a
non-standard ``sudo`` password prompt.
- No longer raises an error if ``keyfile`` or ``password`` are not
provided when they are not necessary.
- Added new cpufreq APIs:
- ``core`` APIs take a core name as the parameter (e.g. "a15")
- ``cluster`` APIs take a numeric cluster ID (e.g. 0)
- ``cpu`` APIs take a cpufreq cpu ID as a parameter.
- ``set_cpu_frequency`` now has an ``exact`` parameter. When true (the
default) it will produce an error when the specified frequency is not
supported by the cpu, otherwise cpufreq will decide what to do.
- Added ``{core}_frequency`` runtime parameter to set cluster frequency.
- Added ``abi`` property.
- ``get_properties`` moved from ``LinuxDevice``, meaning ``AndroidDevice``
will try to pull the same files. Added more paths to pull by default
too.
- fixed ``list_file_systems`` for Android M and Linux devices.
- Now sets ``core_clusters`` from ``core_names`` if not explicitly
specified.
- Added ``invoke`` method that allows invoking an executable on the device
under controlled conditions (e.g. within a particular directory, or
taskset to specific CPUs).
- No longer attempts to ``get_sysfile_value()`` as root on unrooted
devices.
- ``LinuxDevice``:
- Now creates ``binaries_directory`` path if it doesn't exist.
- Fixed device reset
- Fixed ``file_exists``
- Implemented ``get_pid_of()`` and ``ps()``; the existing implementation
relied on the Android version of ``ps``.
- ``listdir`` will now return an empty list for an empty directory
instead of a list containing a single empty string.
- ``AndroidDevice``:
- Executable (un)installation now works on unrooted devices.
- Now takes into account ``binaries_directory`` when setting up busybox path.
- Updated ``android_prompt`` so that it works even if it is not ``"/"``
- ``adb_connect``: no longer assumes port 5555.
- Now always deploys busybox on rooted devices.
- Added ``swipe_to_unlock`` method.
- Fixed initialization of ``~/.workload_automation.``.
- Fixed replaying events using revent on 64 bit platforms.
- Improved error reporting when loading extensions.
- ``result`` objects now track their output directories.
- ``context.result`` will now result in ``context.run_result`` when not
executing a job.
- ``wlauto.utils.ssh``:
- Fixed key-based authentication.
- Fixed carriage return stripping in ssh.
- Now takes ``password_prompt`` as a parameter for non-standard ``sudo``
password prompts.
- Now with 100% more thread safety!
- If a timeout condition is hit, ^C is now sent to kill the current
foreground process and make the shell available for subsequent commands.
- More robust ``exit_code`` handling for ssh interface
- Now attempts to deal with dropped connections
- Fixed error reporting on failed exit code extraction.
- Now handles backspaces in serial output
- Added ``port`` argument for telnet connections.
- Now allows telnet connections without a password.
- Fixed config processing for extensions with non-identifier names.
- Fixed ``get_meansd`` for numbers < 1
- ``wlauto.utils.ipython``:
- Now supports old versions of IPython
- Updated version check to only initialize ipython utils if version is
< 4.0.0. Version 4.0.0 changes API and breaks WA's usage of it.
- Added ``ignore`` parameter to ``check_output``
- Agendas:
- Now raise an error if an agenda contains duplicate keys
- Now raise an error if config section in an agenda is not dict-like
- Now properly handles ``core_names`` and ``core_clusters``
- When merging list parameters from different sources, duplicates are no
longer removed.
- The ``INITIAL_BOOT`` signal is now sent when performing a hard reset during
initial boot
- Updated ``ExecutionContext`` to keep a reference to the ``runner``. This
will enable Extensions to do things like modify the job queue.
- Parameters now automatically convert int and bool kinds to integer and
boolean respectively; this behavior can be suppressed by specifying
``convert_types=False`` when defining the parameter.
- Fixed resource resolution when dependency location does not exist.
- All device ``push`` and ``pull`` commands now raise ``DeviceError`` if they
didn't succeed.
- Fixed showing Parameter default of ``False`` for boolean values.
- Updated csv result processor with the option to use classifiers to
add columns to ``results.csv``.
- ``wlauto.utils.formatter``: Fix terminal size discovery.
- The extension loader will now follow symlinks.
- Added arm64-v8a to ABI map
- WA now reports syntax errors in a more informative way.
- Resource resolver: now prints the path of the found resource to the log.
- Resource getter: look for executable in the bin/ directory under resource
owner's dependencies directory as well as general dependencies bin.
- ``GamingWorkload``:
- Added an option to prevent clearing of package data before execution.
- Added the ability to override the timeout of deploying the assets
tarball.
- ``ApkWorkload``: Added an option to skip host-side APK check entirely.
- ``utils.misc.normalize``: only normalize string keys.
- Better error reporting for subprocess.CalledProcessError
- ``boolean`` now interprets ``'off'`` as ``False``
- ``wlauto.utils.uefi``: Added support for debug builds.
- ``wlauto.utils.serial_port``: Now supports fdpexpect versions > 4.0.0
- Semantics for ``initialize``/``finalize`` for *all* Extensions are changed
so that now they will always run at most once per run. They will not be
executed twice even if invoked via instances of different subclasses (if
those subclasses defined their own versions, then their versions will be
invoked once each, but the base version will only get invoked once).
- Pulling entries from procfs does not work on some platforms. WA now tries
to cat the contents of a property file and write it to an output file on the
host.
Documentation
~~~~~~~~~~~~~
- ``installation``:
- Added ``post install`` section which lists workloads that require
additional external dependencies.
- Added the ``uninstall`` and ``upgrade`` commands for users to remove or
upgrade Workload Automation.
- Added documentation explaining how to use ``remote_assets_path``
setting.
- Added warning about potential permission issues with pip.
- ``quickstart``: Added steps for setting up WA to run on Linux devices.
- ``device_setup``: fixed ``generic_linux`` ``device_config`` example.
- ``contributing``: Clarified style guidelines
- ``daq_device_setup``: Added an illustration for DAQ wiring.
- ``writing_extensions``: Documented the Workload initialize and finalize
methods.
- Added descriptions to extensions that didn't have one.
Other
~~~~~
- ``daq_server``:
- Fixed showing available devices.
- Now works with earlier versions of the DAQmx driver; thus you can now run
the server on Linux systems.
- DAQ error messages are now properly propagated to the client.
- Server will now periodically clean up uncollected files.
- Fixed not being able to resolve IP address for hostname
(reports "localhost" in that case).
- Works with latest version of twisted.
- ``setup.py``: Fixed paths to work with Mac OS X.
- ``summary_csv`` is no longer enabled by default.
- ``status`` result processor is now enabled by default.
- Commands:
- ``show``:
- Now shows what platform extensions support.
- Will no longer try to use a pager if ``PAGER=''`` in the environment.
- ``list``:
- Added ``"-p"`` option to filter results by supported platforms.
- Added ``"--packaged-only"`` option to only list extensions packaged
with WA.
- ``run``: Added ``"--disable"`` option to diable instruments.
- ``create``:
- Added ``agenda`` sub-command to generate agendas for a set of
extensions.
- ``create workload`` now gives more informative errors if Android SDK
installed but no platform has been downloaded.
Incompatible changes
####################
Framework
~~~~~~~~~
- ``BaseLinuxDevice``:
- Renamed ``active_cpus`` to ``online_cpus``
- Renamed ``get_cluster_cpu`` to ``get_cluster_active_cpu``
- Renamed ``get_core_cpu`` to ``get_core_online_cpu``
- All extension's ``initialize`` function now takes one (and only one)
parameter, ``context``.
- ``wlauto.core.device``: Removed ``init`` function. Replaced with
``initialize``
-------------
Version 2.3.0
-------------

View File

@@ -113,7 +113,7 @@ pygments_style = 'sphinx'
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'classic'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the

View File

@@ -97,6 +97,32 @@ Available Settings
Added in version 2.1.5.
.. confval:: retry_on_status
This is a list of statuses on which a job will be considered to have failed and
will be automatically retried up to ``max_retries`` times. This defaults to
``["FAILED", "PARTIAL"]`` if not set. Possible values are:
``"OK"``
This iteration has completed and no errors have been detected
``"PARTIAL"``
One or more instruments have failed (the iteration may still be running).
``"FAILED"``
The workload itself has failed.
``"ABORTED"``
The user interrupted the workload
.. confval:: max_retries
The maximum number of times failed jobs will be retried before giving up. If
not set, this will default to ``3``.
.. note:: This number does not include the original attempt.
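
For example, a ``config.py`` might set these as follows (a minimal sketch; the
values shown are the defaults described above):

.. code-block:: python

    # ~/.workload_automation/config.py
    retry_on_status = ['FAILED', 'PARTIAL']  # statuses that trigger a retry
    max_retries = 3                          # retries beyond the original attempt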
.. confval:: instrumentation
This should be a list of instruments to be enabled during run execution.
@@ -136,6 +162,12 @@ Available Settings
All three values should be Python `old-style format strings`_ specifying which
`log record attributes`_ should be displayed.
.. confval:: remote_assets_path
Path to the local mount of a network assets repository. See
:ref:`assets_repository`.
There are also a couple of settings that are used to provide additional metadata
for a run. These may get picked up by instruments or result processors to
attach context to results.

View File

@@ -2,12 +2,23 @@
Contributing Code
=================
We welcome code contributions via GitHub pull requests. To help with
maintainability of the code line we ask that the code uses a coding style
consistent with the rest of WA code. Briefly, it is
- `PEP8 <https://www.python.org/dev/peps/pep-0008/>`_ with line length and block
comment rules relaxed (the wrapper for PEP8 checker inside ``dev_scripts``
will run it with appropriate configuration).
- Four-space indentation (*no tabs!*).
- Title-case for class names, underscore-delimited lower case for functions,
methods, and variables.
- Use descriptive variable names. Delimit words with ``'_'`` for readability.
Avoid shortening words, skipping vowels, etc (common abbreviations such as
"stats" for "statistics", "config" for "configuration", etc are OK). Do
*not* use Hungarian notation (so prefer ``birth_date`` over ``dtBirth``).
New extensions should also follow implementation guidelines specified in
:ref:`writing_extensions` section of the documentation.
We ask that the following checks are performed on the modified code prior to
submitting a pull request:

BIN
doc/source/daq-wiring.png Normal file


View File

@@ -68,6 +68,12 @@ varies between models.
possible to use any other configuration (e.g. ports 1, 2 and 5).
As an example, the following illustration shows the wiring of PORT0 (using AI/0
and AI/1 channels) on a DAQ USB-6210
.. image:: daq-wiring.png
:scale: 70 %
Setting up NI-DAQmx driver on a Windows Machine
-----------------------------------------------

View File

@@ -64,7 +64,7 @@ you might want to change are outlined below.
advanced WA functionality (like setting of core-related runtime parameters
such as governors, frequencies, etc). ``core_names`` should be a list of
core names matching the order in which they are exposed in sysfs. For
example, ARM TC2 SoC is a 2x3 big.LITTLE system; its core_names would be
``['a7', 'a7', 'a7', 'a15', 'a15']``, indicating that cpu0-cpu2 in cpufreq
sysfs structure are A7's and cpu3 and cpu4 are A15's.
@@ -363,11 +363,11 @@ A typical ``device_config`` inside ``config.py`` may look something like
.. code-block:: python
device_config = dict(
host='192.168.0.7',
username='guest',
password='guest',
core_names=['a7', 'a7', 'a7', 'a15', 'a15'],
core_clusters=[0, 0, 0, 1, 1],
# ...
)

View File

@@ -15,16 +15,23 @@ Operating System
WA runs on a native Linux install. It was tested with Ubuntu 12.04,
but any recent Linux distribution should work. It should run on either
32-bit or 64-bit OS, provided the correct version of Android (see below)
was installed. Officially, **other environments are not supported**. WA
has been known to run on Linux Virtual machines and in Cygwin environments,
though additional configuration may be required in both cases (known issues
include making sure USB/serial connections are passed to the VM, and wrong
python/pip binaries being picked up in Cygwin). WA *should* work on other
Unix-based systems such as BSD or Mac OS X, but it has not been tested
in those environments. WA *does not* run on Windows (though it should be
possible to get limited functionality with minimal porting effort).
.. Note:: If you plan to run Workload Automation on Linux devices only,
SSH is required, and the Android SDK is optional, needed only if you wish
to run WA on Android devices at a later time. Then follow the
steps to install the necessary Python packages to set up WA.
Note, however, that you would be starting off with a limited number of
workloads that will run on Linux devices.
Android SDK
-----------
@@ -32,12 +39,11 @@ Android SDK
You need to have the Android SDK with at least one platform installed.
To install it, download the ADT Bundle from here_. Extract it
and add ``<path_to_android_sdk>/sdk/platform-tools`` and ``<path_to_android_sdk>/sdk/tools``
to your ``PATH``. To test that you've installed it properly, run ``adb
version``. The output should be similar to this::
adb version
Android Debug Bridge version 1.0.31
.. _here: https://developer.android.com/sdk/index.html
@@ -57,7 +63,7 @@ the install location of the SDK (i.e. ``<path_to_android_sdk>/sdk``).
Python
------
Workload Automation 2 requires Python 2.7 (Python 3 is not supported at the moment).
pip
@@ -69,6 +75,23 @@ similar distributions, this may be done with APT::
sudo apt-get install python-pip
.. note:: Some versions of pip (in particular v1.5.4 which comes with Ubuntu
14.04) are known to set the wrong permissions when installing
packages, resulting in WA failing to import them. To avoid this it
is recommended that you update pip and setuptools before proceeding
with installation::
sudo -H pip install --upgrade pip
sudo -H pip install --upgrade setuptools
If you do run into this issue after already installing some packages,
you can resolve it by running ::
sudo chmod -R a+r /usr/local/lib/python2.7/dist-packages
sudo find /usr/local/lib/python2.7/dist-packages -type d -exec chmod a+x {} \;
(The paths above will work for Ubuntu; they may need to be adjusted
for other distros).
Python Packages
---------------
@@ -86,11 +109,11 @@ Workload Automation 2 depends on the following additional libraries:
You can install these with pip::
sudo -H pip install pexpect
sudo -H pip install pyserial
sudo -H pip install pyyaml
sudo -H pip install docutils
sudo -H pip install python-dateutil
Some of these may also be available in your distro's repositories, e.g. ::
@@ -129,12 +152,26 @@ may not always have Internet access).
headers to install. You can get those by installing ``python-dev``
package in apt on Ubuntu (or the equivalent for your distribution).
Installing
==========
Installing the latest released version from PyPI (Python Package Index)::
sudo -H pip install wlauto
This will install WA along with its mandatory dependencies. If you would like to
install all optional dependencies at the same time, do the following instead::
sudo -H pip install wlauto[all]
Alternatively, you can also install the latest development version from GitHub
(you will need git installed for this to work)::
git clone git@github.com:ARM-software/workload-automation.git workload-automation
sudo -H pip install ./workload-automation
If the above succeeds, try ::
@@ -142,3 +179,143 @@ If the above succeeds, try ::
Hopefully, this should output something along the lines of "Workload Automation
version $version".
(Optional) Post Installation
============================
Some WA extensions have additional dependencies that need to be
satisfied before they can be used. Not all of these can be provided with WA and
so will need to be supplied by the user. They should be placed into
``~/.workload_automation/dependencies/<extension name>`` so that WA can find
them (you may need to create the directory if it doesn't already exist). You
only need to provide the dependencies for workloads you want to use.
APK Files
---------
APKs are application packages used by Android. These are necessary to install an
application onto devices that do not have Google Play (e.g. devboards running
AOSP). The following is a list of workloads that will need one, including the
version(s) for which UI automation has been tested. Automation may also work
with other versions (especially if it's only a minor or revision difference --
major version differences are more likely to contain incompatible UI changes) but
this has not been tested.
================ ============================================ ========================= ============ ============
workload         package                                      name                      version name version code
================ ============================================ ========================= ============ ============
andebench        com.eembc.coremark                           AndEBench                 v1383a       1383
angrybirds       com.rovio.angrybirds                         Angry Birds               2.1.1        2110
angrybirds_rio   com.rovio.angrybirdsrio                      Angry Birds               1.3.2        1320
anomaly2         com.elevenbitstudios.anomaly2Benchmark       A2 Benchmark              1.1          50
antutu           com.antutu.ABenchMark                        AnTuTu Benchmark          5.3          5030000
antutu           com.antutu.ABenchMark                        AnTuTu Benchmark          3.3.2        3322
antutu           com.antutu.ABenchMark                        AnTuTu Benchmark          4.0.3        4000300
benchmarkpi      gr.androiddev.BenchmarkPi                    BenchmarkPi               1.11         5
caffeinemark     com.flexycore.caffeinemark                   CaffeineMark              1.2.4        9
castlebuilder    com.ettinentertainment.castlebuilder         Castle Builder            1.0          1
castlemaster     com.alphacloud.castlemaster                  Castle Master             1.09         109
cfbench          eu.chainfire.cfbench                         CF-Bench                  1.2          7
citadel          com.epicgames.EpicCitadel                    Epic Citadel              1.07         901107
dungeondefenders com.trendy.ddapp                             Dungeon Defenders         5.34         34
facebook         com.facebook.katana                          Facebook                  3.4          258880
geekbench        ca.primatelabs.geekbench2                    Geekbench 2               2.2.7        202007
geekbench        com.primatelabs.geekbench3                   Geekbench 3               3.0.0        135
glb_corporate    net.kishonti.gfxbench                        GFXBench                  3.0.0        1
glbenchmark      com.glbenchmark.glbenchmark25                GLBenchmark 2.5           2.5          4
glbenchmark      com.glbenchmark.glbenchmark27                GLBenchmark 2.7           2.7          1
gunbros2         com.glu.gunbros2                             GunBros2                  1.2.2        122
ironman          com.gameloft.android.ANMP.GloftIMHM          Iron Man 3                1.3.1        1310
krazykart        com.polarbit.sg2.krazyracers                 Krazy Kart Racing         1.2.7        127
linpack          com.greenecomputing.linpackpro               Linpack Pro for Android   1.2.9        31
nenamark         se.nena.nenamark2                            NenaMark2                 2.4          5
peacekeeper      com.android.chrome                           Chrome                    18.0.1025469 1025469
peacekeeper      org.mozilla.firefox                          Firefox                   23.0         2013073011
quadrant         com.aurorasoftworks.quadrant.ui.professional Quadrant Professional     2.0          2000000
realracing3      com.ea.games.r3_row                          Real Racing 3             1.3.5        1305
smartbench       com.smartbench.twelve                        Smartbench 2012           1.0.0        5
sqlite           com.redlicense.benchmark.sqlite              RL Benchmark              1.3          5
templerun        com.imangi.templerun                         Temple Run                1.0.8        11
thechase         com.unity3d.TheChase                         The Chase                 1.0          1
truckerparking3d com.tapinator.truck.parking.bus3d            Truck Parking 3D          2.5          7
vellamo          com.quicinc.vellamo                          Vellamo                   3.0          3001
vellamo          com.quicinc.vellamo                          Vellamo                   2.0.3        2003
videostreaming   tw.com.freedi.youtube.player                 FREEdi YT Player          2.1.13       79
================ ============================================ ========================= ============ ============
Gaming Workloads
----------------
Some workloads (games, demos, etc) cannot be automated using Android's
UIAutomator framework because they render the entire UI inside a single OpenGL
surface. For these, an interaction session needs to be recorded so that it can
be played back by WA. These recordings are device-specific, so they would need
to be done for each device you're planning to use. The tool for doing this is
``revent`` and it is packaged with WA. You can find instructions on how to use
it :ref:`here <revent_files_creation>`.
This is the list of workloads that rely on such recordings:
+------------------+
| angrybirds       |
+------------------+
| angrybirds_rio   |
+------------------+
| anomaly2         |
+------------------+
| castlebuilder    |
+------------------+
| castlemaster     |
+------------------+
| citadel          |
+------------------+
| dungeondefenders |
+------------------+
| gunbros2         |
+------------------+
| ironman          |
+------------------+
| krazykart        |
+------------------+
| realracing3      |
+------------------+
| templerun        |
+------------------+
| truckerparking3d |
+------------------+
.. _assets_repository:
Maintaining Centralized Assets Repository
-----------------------------------------
If there are multiple users within an organization that may need to deploy
assets for WA extensions, that organization may wish to maintain a centralized
repository of assets that individual WA installs will be able to automatically
retrieve asset files from as they are needed. This repository can be any
directory on a network filer that mirrors the structure of
``~/.workload_automation/dependencies``, i.e. has subdirectories named after
the extensions whose assets they contain. Individual WA installs can then set
``remote_assets_path`` setting in their config to point to the local mount of
that location.
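
For example, an individual install might point at such a mount like so (the
path below is purely illustrative):

.. code-block:: python

    # ~/.workload_automation/config.py
    remote_assets_path = '/mnt/filer/wa-assets'  # local mount of the shared repository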
(Optional) Uninstalling
=======================
If you have installed Workload Automation via ``pip`` and wish to remove it, run this command to
uninstall it::
sudo -H pip uninstall wlauto
.. Note:: This will *not* remove any user configuration (e.g. the ~/.workload_automation directory)
(Optional) Upgrading
====================
To upgrade Workload Automation to the latest version via ``pip``, run::
sudo -H pip install --upgrade --no-deps wlauto

View File

@@ -61,13 +61,13 @@ Instrument method relative to other callbacks registered for the signal (within t
level, callbacks are invoked in the order they were registered). The table below shows the mapping
of the prefix to the corresponding priority:
=========== ========
prefix      priority
=========== ========
very_fast\_ 20
fast\_      10
normal\_    0
slow\_      -10
very_slow\_ -20
=========== ========
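
As an illustration, a hypothetical instrument might use these prefixes as
follows (a sketch only; the class, its name, and the marker strings are made
up, not part of WA):

.. code:: python

    from wlauto import Instrument

    class TraceMarker(Instrument):

        name = 'trace_marker'

        def very_fast_start(self, context):
            # Priority 20: runs before other callbacks registered for "start".
            self.device.execute('echo WA_START > /sys/kernel/debug/tracing/trace_marker')

        def very_slow_stop(self, context):
            # Priority -20: runs after other callbacks registered for "stop".
            self.device.execute('echo WA_STOP > /sys/kernel/debug/tracing/trace_marker')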

View File

@@ -2,7 +2,7 @@
Quickstart
==========
This guide will show you how to quickly start running workloads using
Workload Automation 2.
@@ -13,22 +13,26 @@ Install
the :doc:`installation` section.
Make sure you have Python 2.7 and a recent Android SDK with API level 18 or above
installed on your system. A complete install of the Android SDK is required, as
WA uses a number of its utilities, not just adb. For the SDK, make sure that either
``ANDROID_HOME`` environment variable is set, or that ``adb`` is in your ``PATH``.
.. Note:: If you plan to run Workload Automation on Linux devices only, SSH is required,
and Android SDK is optional if you wish to run WA on Android devices at a
later time.
However, you would be starting off with a limited number of workloads that
will run on Linux devices.
In addition to the base Python 2.7 install, you will also need to have ``pip``
(Python's package manager) installed as well. This is usually a separate package.
Once you have those, you can install WA with::
sudo -H pip install wlauto
This will install Workload Automation on your system, along with its mandatory
dependencies.
(Optional) Verify installation
-------------------------------
@@ -52,15 +56,23 @@ For more details, please see the :doc:`installation` section.
Configure Your Device
=====================
Locate the device configuration file, config.py, under the
~/.workload_automation directory. Then adjust the device
configuration settings according to the device you are using.
Android
-------
By default, the device is set to 'generic_android'. WA is configured to work
with a generic Android device through ``adb``. If you only have one device listed
when you execute ``adb devices``, and your device has a standard Android
configuration, then no extra configuration is required.
However, if your device is connected via network, you will have to manually execute
``adb connect <device ip>`` so that it appears in the device listing.
If you have multiple devices connected, you will need to tell WA which one you
want it to use. You can do that by setting ``adb_name`` in the device_config section.
.. code-block:: python
@@ -73,10 +85,73 @@ want it to use. You can do that by setting ``adb_name`` in device configuration
# ...
Linux
-----
First, set the device to 'generic_linux'
.. code-block:: python
# ...
device = 'generic_linux'
# ...
Find the device_config section and add these parameters
.. code-block:: python
# ...
device_config = dict(
host = '192.168.0.100',
username = 'root',
password = 'password'
# ...
)
# ...
Parameters:
- Host is the IP of your target Linux device
- Username is the user for the device
- Password is the password for the device
Enabling and Disabling Instrumentation
---------------------------------------
Some instrumentation tools are enabled after your initial install of WA.
.. note:: Some Linux devices may not be able to run certain instruments
provided by WA (e.g. cpufreq is disabled or unsupported by the
device).
As a start, keep the 'execution_time' instrument enabled while commenting out
the rest to disable them.
.. code-block:: python
# ...
instrumentation = [
# Records the time it took to run the workload
'execution_time',
# Collects /proc/interrupts before and after execution and does a diff.
# 'interrupts',
# Collects the contents of /sys/devices/system/cpu before and after execution and does a diff.
# 'cpufreq',
# ...
]
This should give you basic functionality. If you are working with a development
board or you need some advanced functionality (e.g. big.LITTLE tuning parameters),
additional configuration may be required. Please see the :doc:`device_setup`
section for more details.
Running Your First Workload
@@ -155,8 +230,55 @@ This agenda
the config.py.
- Disables execution_time instrument, if it is enabled in the config.py
An agenda can be created in a text editor and saved as a YAML file. Please make note of
where you have saved the agenda.
Please see :doc:`agenda` section for more options.
.. _YAML: http://en.wikipedia.org/wiki/YAML
Examples
========
These examples show some useful options with the ``wa run`` command.
To run your own agenda::
wa run <path/to/agenda> (e.g. wa run ~/myagenda.yaml)
To redirect the output to a directory other than wa_output::
wa run dhrystone -d my_output_directory
To use a different config.py file::
wa run -c myconfig.py dhrystone
To use the same output directory but overwrite existing contents to
store new dhrystone results::
wa run -f dhrystone
To display verbose output while running memcpy::
wa run --verbose memcpy
Uninstall
=========
If you have installed Workload Automation via ``pip``, then run this command to
uninstall it::
sudo pip uninstall wlauto
.. Note:: It will *not* remove any user configuration (e.g. the ~/.workload_automation
directory).
Upgrade
=======
To upgrade Workload Automation to the latest version via ``pip``, run::
sudo pip install --upgrade --no-deps wlauto

View File

@@ -1,3 +1,5 @@
.. _resources:
Dynamic Resource Resolution
===========================
@@ -7,10 +9,10 @@ The idea is to decouple resource identification from resource discovery.
Workloads/instruments/devices/etc state *what* resources they need, and not
*where* to look for them -- this instead is left to the resource resolver that
is now part of the execution context. The actual discovery of resources is
performed by resource getters that are registered with the resolver.
A resource type is defined by a subclass of
:class:`wlauto.core.resource.Resource`. An instance of this class describes a
resource that is to be obtained. At minimum, a ``Resource`` instance has an
owner (which is typically the object that is looking for the resource), but
specific resource types may define other parameters that describe an instance of

View File

@@ -17,6 +17,11 @@ to Android UI Automator for providing automation for workloads. ::
info:shows info about each event char device
any additional parameters make it verbose
.. note:: There are now also WA commands that perform the below steps.
Please see ``wa show record/replay`` and ``wa record/replay --help``
for details.
Recording
---------

View File

@@ -1,3 +1,5 @@
.. _writing_extensions:
==================
Writing Extensions
==================
@@ -9,7 +11,7 @@ interesting of these are
can be benchmarks, high-level use cases, or pretty much anything else.
:devices: These are interfaces to the physical devices (development boards or end-user
devices, such as smartphones) that use cases run on. Typically each model of a
physical device would require its own interface class (though some functionality
may be reused by subclassing from an existing base).
:instruments: Instruments allow collecting additional data from workload execution (e.g.
system traces). Instruments are not specific to a particular Workload. Instruments
@@ -29,7 +31,7 @@ Extension Basics
================
This sub-section covers things common to implementing extensions of all types.
It is recommended you familiarize yourself with the information here before
proceeding onto guidance for specific extension types.
To create an extension, you basically subclass an appropriate base class and then
@@ -39,22 +41,22 @@ The Context
-----------
The majority of methods in extensions accept a context argument. This is an
instance of :class:`wlauto.core.execution.ExecutionContext`. It contains
information about the current state of execution of WA and keeps track of things
like which workload is currently running and the current iteration.
Notable attributes of the context are
context.spec
the current workload specification being executed. This is an
instance of :class:`wlauto.core.configuration.WorkloadRunSpec`
and defines the workload and the parameters under which it is
being executed.
context.workload
``Workload`` object that is currently being executed.
context.current_iteration
The current iteration of the spec that is being executed. Note that this
is the iteration for that spec, i.e. the number of times that spec has
been run, *not* the total number of all iterations have been executed so
@@ -77,9 +79,9 @@ In addition to these, context also defines a few useful paths (see below).
Paths
-----
You should avoid using hard-coded absolute paths in your extensions whenever
possible, as they make your code too dependent on a particular environment and
may mean having to make adjustments when moving to new (host and/or device)
platforms. To help avoid hard-coded absolute paths, WA automation defines
a number of standard locations. You should strive to define your paths relative
to one of those.
@@ -93,7 +95,7 @@ extension methods.
context.run_output_directory
This is the top-level output directory for all WA results (by default,
this will be "wa_output" in the directory in which WA was invoked.
context.output_directory
This is the output directory for the current iteration. This will be an
iteration-specific subdirectory under the main results location. If
@@ -102,7 +104,7 @@ context.output_directory
context.host_working_directory
This is an additional location that may be used by extensions to store
non-iteration specific intermediate files (e.g. configuration).
Additionally, the global ``wlauto.settings`` object exposes one other location:
@@ -130,12 +132,63 @@ device, the ``os.path`` modules should *not* be used for on-device path
manipulation. Instead, device has an equivalent module exposed through
``device.path`` attribute. This has all the same attributes and behaves the
same way as ``os.path``, but is guaranteed to produce valid paths for the device,
irrespective of the host's path notation. For example:
.. code:: python
result_file = self.device.path.join(self.device.working_directory, "result.txt")
self.command = "{} -a -b -c {}".format(target_binary, result_file)
.. note:: result processors, unlike workloads and instruments, do not have their
own device attribute; however they can access the device through the
context.
Deploying executables to a device
---------------------------------
Some devices may have certain restrictions on where executable binaries may be
placed and how they should be invoked. To ensure your extension works with as
wide a range of devices as possible, you should use WA APIs for deploying and
invoking executables on a device, as outlined below.
As with other resources (see :ref:`resources`), host-side paths to the executable
binary to be deployed should be obtained via the resource resolver. A special
resource type, ``Executable``, is used to identify a binary to be deployed.
This is similar to the regular ``File`` resource, however it takes an additional
parameter that specifies the ABI for which the executable was compiled.
In order for the binary to be obtained in this way, it must be stored in one of
the locations scanned by the resource resolver in a directory structure
``<root>/bin/<abi>/<binary>`` (where ``root`` is the base resource location to
be searched, e.g. ``~/.workload_automation/dependencies/<extension name>``, and
``<abi>`` is the ABI for which the executable has been compiled, as returned by
``self.device.abi``).
Once the path to the host-side binary has been obtained, it may be deployed using
one of two methods of a ``Device`` instance -- ``install`` or ``install_if_needed``.
The latter will check whether a version of that binary has been previously deployed
by WA and will not try to re-install.
.. code:: python
from wlauto import Executable
host_binary = context.resolver.get(Executable(self, self.device.abi, 'some_binary'))
target_binary = self.device.install_if_needed(host_binary)
.. note:: Please also note that the check is done based solely on the binary name.
For more information please see: :func:`wlauto.common.linux.BaseLinuxDevice.install_if_needed`
Both of the above methods will return the path to the installed binary on the
device. The executable should be invoked *only* via that path; do **not** assume
that it will be in ``PATH`` on the target (or that the executable with the same
name in ``PATH`` is the version deployed by WA).
.. code:: python
self.command = "{} -a -b -c".format(target_binary)
self.device.execute(self.command)
Parameters
----------
@@ -186,11 +239,11 @@ mandatory
and there really is no sensible default that could be given
(e.g. something like login credentials), should you consider
making it mandatory.
constraint
This is an additional constraint to be enforced on the parameter beyond
its type or fixed allowed values set. This should be a predicate (a function
that takes a single argument -- the user-supplied value -- and returns
a ``bool`` indicating whether the constraint has been satisfied).
override
@@ -199,7 +252,7 @@ override
with the same name as already exists, you will get an error. If you do
want to override a parameter from further up in the inheritance
hierarchy, you can indicate that by setting ``override`` attribute to
``True``.
When overriding, you do not need to specify every other attribute of the
parameter, just the ones you want to override. Values for the rest will
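
Putting these attributes together, a parameter declaration might look like the
sketch below (the workload and parameter names are made up for illustration):

.. code:: python

    from wlauto import Workload, Parameter

    class MyWorkload(Workload):

        name = 'my_workload'

        parameters = [
            Parameter('loops', kind=int, default=10,
                      constraint=lambda x: x > 0,  # predicate checked on top of the kind
                      description='Number of times to repeat the payload.'),
        ]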
@@ -220,7 +273,7 @@ surrounding environment (e.g. that the device has been initialized).
The contract for ``validate`` method is that it should raise an exception
(either ``wlauto.exceptions.ConfigError`` or extension-specific exception type -- see
further on this page) if some validation condition has not, and cannot, been met.
If the method returns without raising an exception, then the extension is in a
valid internal state.
@@ -240,7 +293,7 @@ everything it is doing, so you shouldn't need to add much additional logging in
your extensions. But you might want to log additional information, e.g.
what settings your extension is using, what it is doing on the host, etc.
Operations on the host will not normally be logged, so your extension should
definitely log what it is doing on the host. One situation in particular where
you should add logging is before doing something that might take a significant amount
of time, such as downloading a file.
@@ -257,7 +310,7 @@ Subsequent paragraphs (separated by blank lines) can then provide a more
detailed description, including any limitations and setup instructions.
For parameters, the description is passed as an argument on creation. Please
note that if ``default``, ``allowed_values``, or ``constraint``, are set in the
parameter, they do not need to be explicitly mentioned in the description (wa
documentation utilities will automatically pull those). If the ``default`` is set
in ``validate`` or additional cross-parameter constraints exist, this *should*
@@ -302,7 +355,7 @@ Utils
Workload Automation defines a number of utilities collected under
:mod:`wlauto.utils` subpackage. These utilities were created to help with the
implementation of the framework itself, but may also be useful when
implementing extensions.
Adding a Workload
@@ -324,22 +377,31 @@ The Workload class defines the following interface::
    def init_resources(self, context):
        pass

    def validate(self):
        pass

    def initialize(self, context):
        pass

    def setup(self, context):
        pass

    def run(self, context):
        pass

    def update_result(self, context):
        pass

    def teardown(self, context):
        pass

    def finalize(self, context):
        pass
.. note:: Please see :doc:`conventions` section for notes on how to interpret
this.
@@ -348,8 +410,23 @@ The interface should be implemented as follows
:name: This identifies the workload (e.g. it is used to specify the workload in the
agenda_).
:init_resources: This method may optionally be overridden to implement dynamic
resource discovery for the workload. This method executes
early on, before the device has been initialized, so it
should only be used to initialize resources that do not
depend on the device to resolve. This method is executed
once per run for each workload instance.
:validate: This method can be used to validate any assumptions your workload
makes about the environment (e.g. that required files are
present, environment variables are set, etc) and should raise
a :class:`wlauto.exceptions.WorkloadError` if that is not the
case. The base class implementation only makes sure that
the name attribute has been set.
:initialize: This method will be executed exactly once per run (no matter
how many instances of the workload there are). It will run
after the device has been initialized, so it may be used to
perform device-dependent initialization that does not need to
be repeated on each iteration (e.g. installing executables
required by the workload on the device).
:setup: Everything that needs to be in place for workload execution should
be done in this method. This includes copying files to the device,
starting up an application, configuring communications channels,
@@ -371,13 +448,11 @@ The interface should be implemented as follows
to the result (see below).
:teardown: This could be used to perform any cleanup you may wish to do,
e.g. Uninstalling applications, deleting file on the device, etc.
:finalize: This is the complement to ``initialize``. This will be executed
exactly once at the end of the run. This should be used to
perform any final clean up (e.g. uninstalling binaries installed
in the ``initialize``).
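
Putting the interface together, a minimal workload might look like the sketch
below (the names and values are illustrative, not an actual WA workload):

.. code:: python

    from wlauto import Workload, Parameter

    class Idler(Workload):

        name = 'idler'
        description = 'Idles the device for a configurable duration.'

        parameters = [
            Parameter('duration', kind=int, default=5,
                      description='Time, in seconds, to idle for.'),
        ]

        def setup(self, context):
            # Build the command up front so run() does as little as possible.
            self.command = 'sleep {}'.format(self.duration)

        def run(self, context):
            self.device.execute(self.command, timeout=self.duration + 10)

        def update_result(self, context):
            context.result.add_metric('idle_duration', self.duration, 'seconds')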
.. _agenda: agenda.html
@@ -512,17 +587,17 @@ device name (case sensitive), followed by a dot '.', then the stage name
then '.revent'. All your custom revent files should reside at
'~/.workload_automation/dependencies/WORKLOAD NAME/'. These are the current
supported stages:
:setup: This stage is where the game is loaded. It is a good place to
record revent interactions that modify the game settings and get
it ready to start.
:run: This stage is where the game actually starts. This will allow for
more accurate results if the revent file for this stage only
records the game being played.
For instance, to add custom revent files for a device named mydevice and
a workload name mygame, you create a new directory called mygame in
'~/.workload_automation/dependencies/'. Then you add the revent files for
the stages you want in ~/.workload_automation/dependencies/mygame/::
mydevice.setup.revent
@@ -531,7 +606,7 @@ the stages you want in ~/.workload_automation/dependencies/mygame/::
Any revent file in the dependencies will always overwrite the revent file in the
workload directory. So it is possible for example to just provide one revent for
setup in the dependencies and use the run.revent that is in the workload directory.
Adding an Instrument
====================
@@ -576,7 +651,7 @@ which is perhaps ``initialize`` that gets invoked after the device has been
initialised for the first time, and can be used to perform one-time setup (e.g.
copying files to the device -- there is no point in doing that for each
iteration). The full list of available methods can be found in
:ref:`Signals Documentation <instrumentation_method_map>`.
Prioritization
@@ -727,19 +802,19 @@ table::
with open(outfile, 'w') as wfh:
write_table(rows, wfh)
Adding a Resource Getter
========================
A resource getter is a new extension type added in version 2.1.3. A resource
getter implements a method of acquiring resources of a particular type (such as
APK files or additional workload assets). Resource getters are invoked in
priority order until one returns the desired resource.
If you want WA to look for resources somewhere it doesn't by default (e.g. you
have a repository of APK files), you can implement a getter for the resource and
register it with a higher priority than the standard WA getters, so that it gets
invoked first.
Instances of a resource getter should implement the following interface::
@@ -751,7 +826,7 @@ Instances of a resource getter should implement the following interface::
def get(self, resource, **kwargs):
raise NotImplementedError()
The getter should define a name (as with all extensions), a resource
type, which should be a string, e.g. ``'jar'``, and a priority (see `Getter
Prioritization`_ below). In addition, ``get`` method should be implemented. The
@@ -823,7 +898,7 @@ looks for the file under
elif not found_files:
return None
else:
raise ResourceError('More than one .apk found in {} for {}.'.format(resource_dir,
resource.owner.name))
.. _adding_a_device:
@@ -923,7 +998,7 @@ top-level package directory is created by default, and it is OK to have
everything in there.
.. note:: When discovering extensions through this mechanism, WA traverses the
Python module/submodule tree, not the directory structure; therefore,
if you are going to create subdirectories under the top-level directory
created for you, it is important that you make sure they are valid
Python packages; i.e. each subdirectory must contain a __init__.py
@@ -934,7 +1009,7 @@ At this stage, you may want to edit ``params`` structure near the bottom of
the ``setup.py`` to add correct author, license and contact information (see
"Writing the Setup Script" section in standard Python documentation for
details). You may also want to add a README and/or a COPYING file at the same
level as the setup.py. Once you have the contents of your package sorted,
you can generate the package by running ::
cd my_wa_exts

View File

@@ -16,7 +16,7 @@
#
[MASTER]
#profile=no
ignore=external
@@ -41,7 +41,9 @@ ignore=external
# https://bitbucket.org/logilab/pylint/issue/272/anomalous-backslash-in-string-for-raw
# C0330: bad continuation, due to:
# https://bitbucket.org/logilab/pylint/issue/232/wrong-hanging-indentation-false-positive
# TODO: disabling no-value-for-parameter and logging-format-interpolation, as they appear to be broken
# in version 1.4.1 and return a lot of false positives; should be re-enabled once fixed.
disable=C0301,C0103,C0111,W0142,R0903,R0904,R0922,W0511,W0141,I0011,R0921,W1401,C0330,no-value-for-parameter,logging-format-interpolation
[FORMAT]
max-module-lines=4000

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,5 +13,5 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from wlauto.utils.power import main
main()

View File

@@ -1,17 +1,17 @@
#!/bin/bash
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
wa create workload $@

View File

@@ -1,16 +1,16 @@
#!/bin/bash
# $Copyright:
# ----------------------------------------------------------------
# This confidential and proprietary software may be used only as
# authorised by a licensing agreement from ARM Limited
# (C) COPYRIGHT 2013 ARM Limited
# ALL RIGHTS RESERVED
# The entire notice above must be reproduced on all authorised
# copies and copies may only be made to the extent permitted
# by a licensing agreement from ARM Limited.
# ----------------------------------------------------------------
# File: list_extensions
# ----------------------------------------------------------------
# $
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
wa list $@

View File

@@ -1,17 +1,17 @@
#!/bin/bash
# $Copyright:
# ----------------------------------------------------------------
# This confidential and proprietary software may be used only as
# authorised by a licensing agreement from ARM Limited
# (C) COPYRIGHT 2013 ARM Limited
# ALL RIGHTS RESERVED
# The entire notice above must be reproduced on all authorised
# copies and copies may only be made to the extent permitted
# by a licensing agreement from ARM Limited.
# ----------------------------------------------------------------
# File: run_workloads
# ----------------------------------------------------------------
# $
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
wa run $@

View File

@@ -1,17 +1,17 @@
#!/usr/bin/env python
# $Copyright:
# ----------------------------------------------------------------
# This confidential and proprietary software may be used only as
# authorised by a licensing agreement from ARM Limited
# (C) COPYRIGHT 2013 ARM Limited
# ALL RIGHTS RESERVED
# The entire notice above must be reproduced on all authorised
# copies and copies may only be made to the extent permitted
# by a licensing agreement from ARM Limited.
# ----------------------------------------------------------------
# File: run_workloads
# ----------------------------------------------------------------
# $
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from wlauto.core.entry_point import main
main()

View File

@@ -23,7 +23,10 @@ try:
except ImportError:
from distutils.core import setup
sys.path.insert(0, './wlauto/core/')
wlauto_dir = os.path.join(os.path.dirname(__file__), 'wlauto')
sys.path.insert(0, os.path.join(wlauto_dir, 'core'))
from version import get_wa_version
# happens when falling back to distutils
@@ -38,7 +41,7 @@ except OSError:
packages = []
data_files = {}
source_dir = os.path.dirname(__file__)
for root, dirs, files in os.walk('wlauto'):
for root, dirs, files in os.walk(wlauto_dir):
rel_dir = os.path.relpath(root, source_dir)
data = []
if '__init__.py' in files:
@@ -73,11 +76,14 @@ params = dict(
'pyserial', # Serial port interface
'colorama', # Printing with colors
'pyYAML', # YAML-formatted agenda parsing
'requests', # Fetch assets over HTTP
'devlib', # Interacting with devices
],
extras_require={
'other': ['jinja2', 'pandas>=0.13.1'],
'test': ['nose'],
'mongodb': ['pymongo'],
'notify': ['notify2'],
'doc': ['sphinx'],
},
# https://pypi.python.org/pypi?%3Aaction=list_classifiers

View File

@@ -14,7 +14,7 @@
#
from wlauto.core.bootstrap import settings # NOQA
from wlauto.core.device import Device, RuntimeParameter, CoreParameter # NOQA
from wlauto.core.device_manager import DeviceManager, RuntimeParameter, CoreParameter # NOQA
from wlauto.core.command import Command # NOQA
from wlauto.core.workload import Workload # NOQA
from wlauto.core.extension import Module, Parameter, Artifact, Alias # NOQA
@@ -25,8 +25,6 @@ from wlauto.core.resource import ResourceGetter, Resource, GetterPriority, NO_ON
from wlauto.core.exttype import get_extension_type # NOQA Note: MUST be imported after other core imports.
from wlauto.common.resources import File, ExtensionAsset, Executable
from wlauto.common.linux.device import LinuxDevice # NOQA
from wlauto.common.android.device import AndroidDevice, BigLittleDevice # NOQA
from wlauto.common.android.resources import ApkFile, JarFile
from wlauto.common.android.workload import (UiAutomatorWorkload, ApkWorkload, AndroidBenchmark, # NOQA
AndroidUiAutoBenchmark, GameWorkload) # NOQA

View File

@@ -15,15 +15,20 @@
import os
import sys
import stat
import string
import textwrap
import argparse
import shutil
import getpass
import subprocess
from collections import OrderedDict
import yaml
from wlauto import ExtensionLoader, Command, settings
from wlauto.exceptions import CommandError
from wlauto.exceptions import CommandError, ConfigError
from wlauto.utils.cli import init_argument_parser
from wlauto.utils.misc import (capitalize, check_output,
ensure_file_directory_exists as _f, ensure_directory_exists as _d)
@@ -169,15 +174,105 @@ class CreatePackageSubcommand(CreateSubcommand):
touch(os.path.join(actual_package_path, '__init__.py'))
class CreateAgendaSubcommand(CreateSubcommand):
name = 'agenda'
description = """
Create an agenda with the specified extensions enabled and parameters set to their
default values.
"""
def initialize(self):
self.parser.add_argument('extensions', nargs='+',
help='Extensions to be added')
self.parser.add_argument('-i', '--iterations', type=int, default=1,
help='Sets the number of iterations for all workloads')
self.parser.add_argument('-r', '--include-runtime-params', action='store_true',
help="""
Adds runtime parameters to the global section of the generated
agenda. Note: these do not have default values, so only the name
will be added. Also, runtime parameters are device-specific, so
a device must be specified (either in the list of extensions,
or in the existing config).
""")
self.parser.add_argument('-o', '--output', metavar='FILE',
help='Output file. If not specified, STDOUT will be used instead.')
def execute(self, args): # pylint: disable=no-self-use,too-many-branches,too-many-statements
loader = ExtensionLoader(packages=settings.extension_packages,
paths=settings.extension_paths)
agenda = OrderedDict()
agenda['config'] = OrderedDict(instrumentation=[], result_processors=[])
agenda['global'] = OrderedDict(iterations=args.iterations)
agenda['workloads'] = []
device = None
device_config = None
for name in args.extensions:
extcls = loader.get_extension_class(name)
config = loader.get_default_config(name)
del config['modules']
if extcls.kind == 'workload':
entry = OrderedDict()
entry['name'] = extcls.name
if name != extcls.name:
entry['label'] = name
entry['params'] = config
agenda['workloads'].append(entry)
elif extcls.kind == 'device':
if device is not None:
raise ConfigError('Specifying multiple devices: {} and {}'.format(device.name, name))
device = extcls
device_config = config
agenda['config']['device'] = name
agenda['config']['device_config'] = config
else:
if extcls.kind == 'instrument':
agenda['config']['instrumentation'].append(name)
if extcls.kind == 'result_processor':
agenda['config']['result_processors'].append(name)
agenda['config'][name] = config
if args.include_runtime_params:
if not device:
if settings.device:
device = loader.get_extension_class(settings.device)
device_config = loader.get_default_config(settings.device)
else:
raise ConfigError('-r option requires a device to be in the list of extensions')
rps = OrderedDict()
for rp in device.runtime_parameters:
if hasattr(rp, 'get_runtime_parameters'):
# a core parameter needs to be expanded for each of the
# device's cores, if they're available
for crp in rp.get_runtime_parameters(device_config.get('core_names', [])):
rps[crp.name] = None
else:
rps[rp.name] = None
agenda['global']['runtime_params'] = rps
if args.output:
wfh = open(args.output, 'w')
else:
wfh = sys.stdout
yaml.dump(agenda, wfh, indent=4, default_flow_style=False)
if args.output:
wfh.close()
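Note that ``yaml.dump`` will only render the ``OrderedDict``-based agenda as a
plain YAML mapping if a representer for ``OrderedDict`` has been registered; by
default it would be emitted as a tagged Python object. Assuming WA does not
already register one elsewhere, a minimal sketch of such a registration is::

    import yaml
    from collections import OrderedDict

    def _represent_ordered_dict(dumper, data):
        # Emit an OrderedDict as a regular YAML mapping, preserving key order.
        return dumper.represent_mapping('tag:yaml.org,2002:map', data.items())

    yaml.add_representer(OrderedDict, _represent_ordered_dict)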
class CreateCommand(Command):
name = 'create'
description = '''Used to create various WA-related objects (see positional arguments list for what
objects may be created).\n\nUse "wa create <object> -h" for object-specific arguments.'''
formatter_class = argparse.RawDescriptionHelpFormatter
subcmd_classes = [CreateWorkloadSubcommand, CreatePackageSubcommand]
subcmd_classes = [
CreateWorkloadSubcommand,
CreatePackageSubcommand,
CreateAgendaSubcommand,
]
def initialize(self):
def initialize(self, context):
subparsers = self.parser.add_subparsers(dest='what')
self.subcommands = [] # pylint: disable=W0201
for subcmd_cls in self.subcmd_classes:
@@ -257,7 +352,12 @@ def create_uiauto_project(path, name, target='1'):
package_name,
target,
path)
check_output(command, shell=True)
try:
check_output(command, shell=True)
except subprocess.CalledProcessError as e:
if 'is is not valid' in e.output:
message = 'No Android SDK target found; have you run "{} update sdk" and downloaded a platform?'
raise CommandError(message.format(android_path))
else:
raise  # some other failure; re-raise rather than silently swallowing it
build_script = os.path.join(path, 'build.sh')
with open(build_script, 'w') as wfh:
@@ -296,5 +396,5 @@ def render_template(name, params):
def touch(path):
with open(path, 'w') as wfh: # pylint: disable=unused-variable
with open(path, 'w') as _:
pass

View File

@@ -24,22 +24,33 @@ class ListCommand(Command):
name = 'list'
description = 'List available WA extensions with a short description of each.'
def initialize(self):
def initialize(self, context):
extension_types = ['{}s'.format(ext.name) for ext in settings.extensions]
self.parser.add_argument('kind', metavar='KIND',
help=('Specify the kind of extension to list. Must be '
'one of: {}'.format(', '.join(extension_types))),
choices=extension_types)
self.parser.add_argument('-n', '--name', help='Filter results by the name specified')
self.parser.add_argument('-o', '--packaged-only', action='store_true',
help='''
Only list extensions packaged with WA itself. Do not list extensions
installed locally or from other packages.
''')
self.parser.add_argument('-p', '--platform', help='Only list results that are supported by '
'the specified platform')
def execute(self, args):
filters = {}
if args.name:
filters['name'] = args.name
ext_loader = ExtensionLoader(packages=settings.extension_packages, paths=settings.extension_paths)
if args.packaged_only:
ext_loader = ExtensionLoader()
else:
ext_loader = ExtensionLoader(packages=settings.extension_packages,
paths=settings.extension_paths)
results = ext_loader.list_extensions(args.kind[:-1])
if filters:
if filters or args.platform:
filtered_results = []
for result in results:
passed = True
@@ -47,6 +58,8 @@ class ListCommand(Command):
if getattr(result, k) != v:
passed = False
break
if passed and args.platform:
passed = check_platform(result, args.platform)
if passed:
filtered_results.append(result)
else: # no filters specified
@@ -57,3 +70,10 @@ class ListCommand(Command):
for result in sorted(filtered_results, key=lambda x: x.name):
output.add_item(get_summary(result), result.name)
print output.format_data()
def check_platform(extension, platform):
supported_platforms = getattr(extension, 'supported_platforms', [])
if supported_platforms:
return platform in supported_platforms
return True

166 wlauto/commands/record.py Normal file
View File

@@ -0,0 +1,166 @@
# Copyright 2014-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import sys
from wlauto import ExtensionLoader, Command, settings
from wlauto.common.resources import Executable
from wlauto.core.resource import NO_ONE
from wlauto.core.resolver import ResourceResolver
from wlauto.core.configuration import RunConfiguration
from wlauto.core.agenda import Agenda
class RecordCommand(Command):
name = 'record'
description = '''Performs a revent recording
This command helps with making revent recordings. It will automatically
deploy revent and even has the option of automatically opening apps.
Revent allows you to record raw inputs such as screen swipes or button presses.
This can be useful for recording inputs for workloads such as games that don't
have XML UI layouts that can be used with UIAutomator. The drawback is that
revent recordings are specific to the device type they were recorded on.
Revent recording names consist of two parts, in the format
{device_name}.{suffix}.revent (for example, ``nexus10.setup.revent``).
- device_name can either be specified manually with the ``-d`` argument or
it can be automatically determined. On Android devices it is obtained
from ``build.prop``; on Linux devices, from ``/proc/device-tree/model``.
- suffix is used by WA to determine which part of the app execution the
recording is for; currently this is either ``setup`` or ``run``, and it
should be specified with the ``-s`` argument.
'''
def initialize(self, context):
self.context = context
self.parser.add_argument('-d', '--device', help='The name of the device')
self.parser.add_argument('-s', '--suffix', help='The suffix of the revent file, e.g. ``setup``')
self.parser.add_argument('-o', '--output', help='Directory to save the recording in')
self.parser.add_argument('-p', '--package', help='Package to launch before recording')
self.parser.add_argument('-C', '--clear', help='Clear app cache before launching it',
action="store_true")
# Validate command options
def validate_args(self, args):
if args.clear and not args.package:
print "Package must be specified if you want to clear cache\n"
self.parser.print_help()
sys.exit()
# pylint: disable=W0201
def execute(self, args):
self.validate_args(args)
self.logger.info("Connecting to device...")
ext_loader = ExtensionLoader(packages=settings.extension_packages,
paths=settings.extension_paths)
# Setup config
self.config = RunConfiguration(ext_loader)
for filepath in settings.get_config_paths():
self.config.load_config(filepath)
self.config.set_agenda(Agenda())
self.config.finalize()
context = LightContext(self.config)
# Setup device
self.device = ext_loader.get_device(settings.device, **settings.device_config)
self.device.validate()
self.device.dynamic_modules = []
self.device.connect()
self.device.initialize(context)
host_binary = context.resolver.get(Executable(NO_ONE, self.device.abi, 'revent'))
self.target_binary = self.device.install_if_needed(host_binary)
self.run(args)
def run(self, args):
if args.device:
self.device_name = args.device
else:
self.device_name = self.device.get_device_model()
if args.suffix:
args.suffix += "."
revent_file = self.device.path.join(self.device.working_directory,
'{}.{}revent'.format(self.device_name, args.suffix or ""))
if args.clear:
self.device.execute("pm clear {}".format(args.package))
if args.package:
self.logger.info("Starting {}".format(args.package))
self.device.execute('monkey -p {} -c android.intent.category.LAUNCHER 1'.format(args.package))
self.logger.info("Press Enter when you are ready to record...")
raw_input("")
command = "{} record -t 100000 -s {}".format(self.target_binary, revent_file)
self.device.kick_off(command)
self.logger.info("Press Enter when you have finished recording...")
raw_input("")
self.device.killall("revent")
self.logger.info("Pulling files from device")
self.device.pull(revent_file, args.output or os.getcwdu())
class ReplayCommand(RecordCommand):
name = 'replay'
description = '''Replay a revent recording
Revent allows you to record raw inputs such as screen swipes or button presses.
See ``wa show record`` to see how to make a revent recording.
'''
def initialize(self, context):
self.context = context
self.parser.add_argument('revent', help='The name of the file to replay')
self.parser.add_argument('-p', '--package', help='Package to launch before recording')
self.parser.add_argument('-C', '--clear', help='Clear app cache before launching it',
action="store_true")
# pylint: disable=W0201
def run(self, args):
self.logger.info("Pushing file to device")
self.device.push(args.revent, self.device.working_directory)
revent_file = self.device.path.join(self.device.working_directory, os.path.split(args.revent)[1])
if args.clear:
self.device.execute("pm clear {}".format(args.package))
if args.package:
self.logger.info("Starting {}".format(args.package))
self.device.execute('monkey -p {} -c android.intent.category.LAUNCHER 1'.format(args.package))
command = "{} replay {}".format(self.target_binary, revent_file)
self.device.execute(command)
self.logger.info("Finished replay")
# Used to satisfy the API
class LightContext(object):
def __init__(self, config):
self.resolver = ResourceResolver(config)
self.resolver.load()

View File

@@ -30,24 +30,43 @@ class RunCommand(Command):
name = 'run'
description = 'Execute automated workloads on a remote device and process the resulting output.'
def initialize(self):
def initialize(self, context):
self.parser.add_argument('agenda', metavar='AGENDA',
help='Agenda for this workload automation run. This defines which workloads will ' +
'be executed, how many times, with which tunables, etc. ' +
'See example agendas in {} '.format(os.path.dirname(wlauto.__file__)) +
'for an example of how this file should be structured.')
help="""
Agenda for this workload automation run. This defines which
workloads will be executed, how many times, with which
tunables, etc. See example agendas in {} for an example of
how this file should be structured.
""".format(os.path.dirname(wlauto.__file__)))
self.parser.add_argument('-d', '--output-directory', metavar='DIR', default=None,
help='Specify a directory where the output will be generated. If the directory' +
'already exists, the script will abort unless -f option (see below) is used,' +
'in which case the contents of the directory will be overwritten. If this option' +
'is not specified, then {} will be used instead.'.format(settings.output_directory))
help="""
Specify a directory where the output will be generated. If
the directory already exists, the script will abort unless -f
option (see below) is used, in which case the contents of the
directory will be overwritten. If this option is not specified,
then {} will be used instead.
""".format(settings.output_directory))
self.parser.add_argument('-f', '--force', action='store_true',
help='Overwrite output directory if it exists. By default, the script will abort in this' +
'situation to prevent accidental data loss.')
help="""
Overwrite output directory if it exists. By default, the script
will abort in this situation to prevent accidental data loss.
""")
self.parser.add_argument('-i', '--id', action='append', dest='only_run_ids', metavar='ID',
help='Specify a workload spec ID from an agenda to run. If this is specified, only that particular ' +
'spec will be run, and other workloads in the agenda will be ignored. This option may be used to ' +
'specify multiple IDs.')
help="""
Specify a workload spec ID from an agenda to run. If this is
specified, only that particular spec will be run, and other
workloads in the agenda will be ignored. This option may be
used to specify multiple IDs.
""")
self.parser.add_argument('--disable', action='append', dest='instruments_to_disable',
metavar='INSTRUMENT', help="""
Specify an instrument to disable from the command line. This is
equivalent to adding "~{metavar}" to the instrumentation list in
the agenda. This can be used to temporarily disable a troublesome
instrument for a particular run without introducing permanent
change to the config (which one might then forget to revert).
This option may be specified multiple times.
""")
def execute(self, args): # NOQA
self.set_up_output_directory(args)
@@ -62,9 +81,18 @@ class RunCommand(Command):
agenda = Agenda()
agenda.add_workload_entry(args.agenda)
file_name = 'config_{}.py'
if args.instruments_to_disable:
if 'instrumentation' not in agenda.config:
agenda.config['instrumentation'] = []
for itd in args.instruments_to_disable:
self.logger.debug('Updating agenda to disable {}'.format(itd))
agenda.config['instrumentation'].append('~{}'.format(itd))
basename = 'config_'
for file_number, path in enumerate(settings.get_config_paths(), 1):
shutil.copy(path, os.path.join(settings.meta_directory, file_name.format(file_number)))
file_ext = os.path.splitext(path)[1]
shutil.copy(path, os.path.join(settings.meta_directory,
basename + str(file_number) + file_ext))
executor = Executor()
executor.execute(agenda, selectors={'ids': args.only_run_ids})

View File

@@ -18,11 +18,11 @@ import sys
import subprocess
from cStringIO import StringIO
from terminalsize import get_terminal_size # pylint: disable=import-error
from wlauto import Command, ExtensionLoader, settings
from wlauto.utils.doc import (get_summary, get_description, get_type_name, format_column, format_body,
format_paragraph, indent, strip_inlined_text)
from wlauto.utils.misc import get_pager
from wlauto.utils.terminalsize import get_terminal_size
class ShowCommand(Command):
@@ -33,12 +33,13 @@ class ShowCommand(Command):
Display documentation for the specified extension (workload, instrument, etc.).
"""
def initialize(self):
def initialize(self, context):
self.parser.add_argument('name', metavar='EXTENSION',
help='''The name of the extension for which information will
be shown.''')
def execute(self, args):
# pylint: disable=unpacking-non-sequence
ext_loader = ExtensionLoader(packages=settings.extension_packages, paths=settings.extension_paths)
extension = ext_loader.get_extension_class(args.name)
out = StringIO()
@@ -47,8 +48,12 @@ class ShowCommand(Command):
text = out.getvalue()
pager = get_pager()
if len(text.split('\n')) > term_height and pager:
sp = subprocess.Popen(pager, stdin=subprocess.PIPE)
sp.communicate(text)
try:
sp = subprocess.Popen(pager, stdin=subprocess.PIPE)
sp.communicate(text)
except OSError:
self.logger.warning('Could not use PAGER "{}"'.format(pager))
sys.stdout.write(text)
else:
sys.stdout.write(text)
@@ -58,6 +63,9 @@ def format_extension(extension, out, width):
out.write('\n')
format_extension_summary(extension, out, width)
out.write('\n')
if hasattr(extension, 'supported_platforms'):
format_supported_platforms(extension, out, width)
out.write('\n')
if extension.parameters:
format_extension_parameters(extension, out, width)
out.write('\n')
@@ -72,6 +80,11 @@ def format_extension_summary(extension, out, width):
out.write('{}\n'.format(format_body(strip_inlined_text(get_summary(extension)), width)))
def format_supported_platforms(extension, out, width):
text = 'supported on: {}'.format(', '.join(extension.supported_platforms))
out.write('{}\n'.format(format_body(text, width)))
def format_extension_description(extension, out, width):
# skip the initial paragraph of multi-paragraph description, as already
# listed above.
@@ -93,7 +106,7 @@ def format_extension_parameters(extension, out, width, shift=4):
param_text += indent('allowed values: {}\n'.format(', '.join(map(str, param.allowed_values))))
elif param.constraint:
param_text += indent('constraint: {}\n'.format(get_type_name(param.constraint)))
if param.default:
if param.default is not None:
param_text += indent('default: {}\n'.format(param.default))
param_texts.append(indent(param_text, shift))

View File

@@ -1,678 +0,0 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=E1101
import os
import sys
import re
import time
import tempfile
import shutil
import threading
from subprocess import CalledProcessError
from wlauto.core.extension import Parameter
from wlauto.common.linux.device import BaseLinuxDevice
from wlauto.exceptions import DeviceError, WorkerThreadError, TimeoutError, DeviceNotRespondingError
from wlauto.utils.misc import convert_new_lines
from wlauto.utils.types import boolean, regex
from wlauto.utils.android import (adb_shell, adb_background_shell, adb_list_devices,
adb_command, AndroidProperties, ANDROID_VERSION_MAP)
SCREEN_STATE_REGEX = re.compile('(?:mPowerState|mScreenOn)=([0-9]+|true|false)', re.I)
class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
"""
Device running Android OS.
"""
platform = 'android'
parameters = [
Parameter('adb_name',
description='The unique ID of the device as output by "adb devices".'),
Parameter('android_prompt', kind=regex, default=re.compile('^.*(shell|root)@.*:/ [#$] ', re.MULTILINE),
description='The format used to match the shell prompt in Android.'),
Parameter('working_directory', default='/sdcard/wa-working',
description='Directory that will be used by WA on the device for output files etc.'),
Parameter('binaries_directory', default='/system/bin',
description='Location of binaries on the device.'),
Parameter('package_data_directory', default='/data/data',
description='Location of data for an installed package (APK).'),
Parameter('external_storage_directory', default='/sdcard',
description='Mount point for external storage.'),
Parameter('connection', default='usb', allowed_values=['usb', 'ethernet'],
description='Specifies the nature of the adb connection.'),
Parameter('logcat_poll_period', kind=int,
description="""
If specified and is not ``0``, logcat will be polled every
``logcat_poll_period`` seconds, and buffered on the host. This
can be used if a lot of output is expected in logcat and the fixed
logcat buffer on the device is not big enough. The trade off is that
this introduces some minor runtime overhead. Not set by default.
"""),
Parameter('enable_screen_check', kind=boolean, default=False,
description="""
Specifies whether the device should make sure that the screen is on
during initialization.
"""),
]
default_timeout = 30
delay = 2
long_delay = 3 * delay
ready_timeout = 60
# Overwritten from Device. For documentation, see corresponding method in
# Device.
@property
def is_rooted(self):
if self._is_rooted is None:
try:
result = adb_shell(self.adb_name, 'su', timeout=1)
if 'not found' in result:
self._is_rooted = False
else:
self._is_rooted = True
except TimeoutError:
self._is_rooted = True
except DeviceError:
self._is_rooted = False
return self._is_rooted
@property
def abi(self):
return self.getprop()['ro.product.cpu.abi'].split('-')[0]
@property
def supported_eabi(self):
props = self.getprop()
result = [props['ro.product.cpu.abi']]
if 'ro.product.cpu.abi2' in props:
result.append(props['ro.product.cpu.abi2'])
if 'ro.product.cpu.abilist' in props:
for eabi in props['ro.product.cpu.abilist'].split(','):
if eabi not in result:
result.append(eabi)
return result
def __init__(self, **kwargs):
super(AndroidDevice, self).__init__(**kwargs)
self._logcat_poller = None
def reset(self):
self._is_ready = False
self._just_rebooted = True
adb_command(self.adb_name, 'reboot', timeout=self.default_timeout)
def hard_reset(self):
super(AndroidDevice, self).hard_reset()
self._is_ready = False
self._just_rebooted = True
def boot(self, **kwargs):
self.reset()
def connect(self): # NOQA pylint: disable=R0912
iteration_number = 0
max_iterations = self.ready_timeout / self.delay
available = False
self.logger.debug('Polling for device {}...'.format(self.adb_name))
while iteration_number < max_iterations:
devices = adb_list_devices()
if self.adb_name:
for device in devices:
if device.name == self.adb_name and device.status != 'offline':
available = True
else: # adb_name not set
if len(devices) == 1:
available = True
elif len(devices) > 1:
raise DeviceError('More than one device is connected and adb_name is not set.')
if available:
break
else:
time.sleep(self.delay)
iteration_number += 1
else:
raise DeviceError('Could not boot {} ({}).'.format(self.name, self.adb_name))
while iteration_number < max_iterations:
available = (1 == int('0' + adb_shell(self.adb_name, 'getprop sys.boot_completed', timeout=self.default_timeout)))
if available:
break
else:
time.sleep(self.delay)
iteration_number += 1
else:
raise DeviceError('Could not boot {} ({}).'.format(self.name, self.adb_name))
if self._just_rebooted:
self.logger.debug('Waiting for boot to complete...')
# On some devices, adb connection gets reset some time after booting.
# This causes errors during execution. To prevent this, open a shell
# session and wait for it to be killed. Once it's killed, give adb
# enough time to restart, and then the device should be ready.
# TODO: This is more of a work-around rather than an actual solution.
# Need to figure out what is going on and the "proper" way of handling it.
try:
adb_shell(self.adb_name, '', timeout=20)
time.sleep(5) # give adb time to re-initialize
except TimeoutError:
pass # timed out waiting for the session to be killed -- assume not going to be.
self.logger.debug('Boot completed.')
self._just_rebooted = False
self._is_ready = True
def initialize(self, context, *args, **kwargs):
self.execute('mkdir -p {}'.format(self.working_directory))
if self.is_rooted:
if not self.executable_is_installed('busybox'):
self.busybox = self.deploy_busybox(context)
else:
self.busybox = 'busybox'
self.disable_screen_lock()
self.disable_selinux()
if self.enable_screen_check:
self.ensure_screen_is_on()
self.init(context, *args, **kwargs)
def disconnect(self):
if self._logcat_poller:
self._logcat_poller.close()
def ping(self):
try:
# May be triggered inside initialize()
adb_shell(self.adb_name, 'ls /', timeout=10)
except (TimeoutError, CalledProcessError):
raise DeviceNotRespondingError(self.adb_name or self.name)
def start(self):
if self.logcat_poll_period:
if self._logcat_poller:
self._logcat_poller.close()
self._logcat_poller = _LogcatPoller(self, self.logcat_poll_period, timeout=self.default_timeout)
self._logcat_poller.start()
def stop(self):
if self._logcat_poller:
self._logcat_poller.stop()
def get_android_version(self):
return ANDROID_VERSION_MAP.get(self.get_sdk_version(), None)
def get_android_id(self):
"""
Get the device's ANDROID_ID. Which is
"A 64-bit number (as a hex string) that is randomly generated when the user
first sets up the device and should remain constant for the lifetime of the
user's device."
.. note:: This will get reset on userdata erasure.
"""
return self.execute('settings get secure android_id').strip()
def get_sdk_version(self):
try:
return int(self.getprop('ro.build.version.sdk'))
except (ValueError, TypeError):
return None
def get_installed_package_version(self, package):
"""
Returns the version (versionName) of the specified package if it is installed
on the device, or ``None`` otherwise.
Added in version 2.1.4
"""
output = self.execute('dumpsys package {}'.format(package))
for line in convert_new_lines(output).split('\n'):
if 'versionName' in line:
return line.split('=', 1)[1]
return None
def list_packages(self):
"""
List packages installed on the device.
Added in version 2.1.4
"""
output = self.execute('pm list packages')
output = output.replace('package:', '')
return output.split()
def package_is_installed(self, package_name):
"""
Returns ``True`` if a package with the specified name is installed on
the device, and ``False`` otherwise.
Added in version 2.1.4
"""
return package_name in self.list_packages()
def executable_is_installed(self, executable_name):
return executable_name in self.listdir(self.binaries_directory)
def is_installed(self, name):
return self.executable_is_installed(name) or self.package_is_installed(name)
def listdir(self, path, as_root=False, **kwargs):
contents = self.execute('ls {}'.format(path), as_root=as_root)
return [x.strip() for x in contents.split()]
def push_file(self, source, dest, as_root=False, timeout=default_timeout): # pylint: disable=W0221
"""
Modified in version 2.1.4: added ``as_root`` parameter.
"""
self._check_ready()
if not as_root:
adb_command(self.adb_name, "push '{}' '{}'".format(source, dest), timeout=timeout)
else:
device_tempfile = self.path.join(self.file_transfer_cache, source.lstrip(self.path.sep))
self.execute('mkdir -p {}'.format(self.path.dirname(device_tempfile)))
adb_command(self.adb_name, "push '{}' '{}'".format(source, device_tempfile), timeout=timeout)
self.execute('cp {} {}'.format(device_tempfile, dest), as_root=True)
def pull_file(self, source, dest, as_root=False, timeout=default_timeout): # pylint: disable=W0221
"""
Modified in version 2.1.4: added ``as_root`` parameter.
"""
self._check_ready()
if not as_root:
adb_command(self.adb_name, "pull '{}' '{}'".format(source, dest), timeout=timeout)
else:
device_tempfile = self.path.join(self.file_transfer_cache, source.lstrip(self.path.sep))
self.execute('mkdir -p {}'.format(self.path.dirname(device_tempfile)))
self.execute('cp {} {}'.format(source, device_tempfile), as_root=True)
adb_command(self.adb_name, "pull '{}' '{}'".format(device_tempfile, dest), timeout=timeout)
def delete_file(self, filepath, as_root=False): # pylint: disable=W0221
self._check_ready()
adb_shell(self.adb_name, "rm '{}'".format(filepath), as_root=as_root, timeout=self.default_timeout)
def file_exists(self, filepath):
self._check_ready()
output = adb_shell(self.adb_name, 'if [ -e \'{}\' ]; then echo 1; else echo 0; fi'.format(filepath),
timeout=self.default_timeout)
if int(output):
return True
else:
return False
def install(self, filepath, timeout=default_timeout, with_name=None): # pylint: disable=W0221
ext = os.path.splitext(filepath)[1].lower()
if ext == '.apk':
return self.install_apk(filepath, timeout)
else:
return self.install_executable(filepath, with_name)
def install_apk(self, filepath, timeout=default_timeout): # pylint: disable=W0221
self._check_ready()
ext = os.path.splitext(filepath)[1].lower()
if ext == '.apk':
return adb_command(self.adb_name, "install {}".format(filepath), timeout=timeout)
else:
raise DeviceError('Can\'t install {}: unsupported format.'.format(filepath))
def install_executable(self, filepath, with_name=None):
"""
Installs a binary executable on device. Requires root access. Returns
the path to the installed binary, or ``None`` if the installation has failed.
Optionally, ``with_name`` parameter may be used to specify a different name under
which the executable will be installed.
Added in version 2.1.3.
Updated in version 2.1.5 with ``with_name`` parameter.
"""
executable_name = with_name or os.path.basename(filepath)
on_device_file = self.path.join(self.working_directory, executable_name)
on_device_executable = self.path.join(self.binaries_directory, executable_name)
self.push_file(filepath, on_device_file)
matched = []
for entry in self.list_file_systems():
if self.binaries_directory.rstrip('/').startswith(entry.mount_point):
matched.append(entry)
if matched:
entry = sorted(matched, key=lambda x: len(x.mount_point))[-1]
if 'rw' not in entry.options:
self.execute('mount -o rw,remount {} {}'.format(entry.device, entry.mount_point), as_root=True)
self.execute('cp {} {}'.format(on_device_file, on_device_executable), as_root=True)
self.execute('chmod 0777 {}'.format(on_device_executable), as_root=True)
return on_device_executable
else:
raise DeviceError('Could not find mount point for binaries directory {}'.format(self.binaries_directory))
def uninstall(self, package):
self._check_ready()
adb_command(self.adb_name, "uninstall {}".format(package), timeout=self.default_timeout)
def uninstall_executable(self, executable_name):
"""
Requires root access.
Added in version 2.1.3.
"""
on_device_executable = self.path.join(self.binaries_directory, executable_name)
for entry in self.list_file_systems():
if entry.mount_point == '/system':
if 'rw' not in entry.options:
self.execute('mount -o rw,remount {} /system'.format(entry.device), as_root=True)
self.delete_file(on_device_executable)
def execute(self, command, timeout=default_timeout, check_exit_code=True, background=False,
as_root=False, busybox=False, **kwargs):
"""
Execute the specified command on the device using adb.
Parameters:
:param command: The command to be executed. It should appear exactly
as if you were typing it into a shell.
:param timeout: Time, in seconds, to wait for adb to return before aborting
and raising an error. Defaults to ``AndroidDevice.default_timeout``.
:param check_exit_code: If ``True``, the return code of the command on the Device will
be checked and an exception will be raised if it is not 0.
Defaults to ``True``.
:param background: If ``True``, will execute adb in a subprocess, and will return
immediately, not waiting for adb to return. Defaults to ``False``
:param busybox: If ``True``, will use busybox to execute the command. Defaults to ``False``.
Added in version 2.1.3
.. note:: The device must be rooted to be able to use busybox.
:param as_root: If ``True``, will attempt to execute command in privileged mode. The device
must be rooted, otherwise an error will be raised. Defaults to ``False``.
Added in version 2.1.3
:returns: If ``background`` parameter is set to ``True``, the subprocess object will
be returned; otherwise, the contents of STDOUT from the device will be returned.
:raises: DeviceError if adb timed out or if the command returned non-zero exit
code on the device, or if attempting to execute a command in privileged mode on an
unrooted device.
"""
self._check_ready()
if as_root and not self.is_rooted:
raise DeviceError('Attempting to execute "{}" as root on unrooted device.'.format(command))
if busybox:
if not self.is_rooted:
raise DeviceError('Attempting to execute "{}" with busybox. '.format(command) +
'Busybox can only be deployed to rooted devices.')
command = ' '.join([self.busybox, command])
if background:
return adb_background_shell(self.adb_name, command, as_root=as_root)
else:
return adb_shell(self.adb_name, command, timeout, check_exit_code, as_root)
def kick_off(self, command):
"""
Like execute but closes the adb session and returns immediately, leaving the command running on the
device (this is different from execute(background=True), which keeps the adb connection open and returns
a subprocess object).
.. note:: This relies on busybox's nohup applet and so won't work on unrooted devices.
Added in version 2.1.4
"""
if not self.is_rooted:
raise DeviceError('kick_off uses busybox\'s nohup applet and so can only be run on a rooted device.')
try:
command = 'cd {} && busybox nohup {}'.format(self.working_directory, command)
output = self.execute(command, timeout=1, as_root=True)
except TimeoutError:
pass
else:
raise ValueError('Background command exited before timeout; got "{}"'.format(output))
def get_properties(self, context):
"""Captures and saves the information from /system/build.prop and /proc/version"""
props = {}
props['android_id'] = self.get_android_id()
buildprop_file = os.path.join(context.host_working_directory, 'build.prop')
if not os.path.isfile(buildprop_file):
self.pull_file('/system/build.prop', context.host_working_directory)
self._update_build_properties(buildprop_file, props)
context.add_run_artifact('build_properties', buildprop_file, 'export')
version_file = os.path.join(context.host_working_directory, 'version')
if not os.path.isfile(version_file):
self.pull_file('/proc/version', context.host_working_directory)
self._update_versions(version_file, props)
context.add_run_artifact('device_version', version_file, 'export')
return props
def getprop(self, prop=None):
"""Returns parsed output of Android getprop command. If a property is
specified, only the value for that property will be returned (with
``None`` returned if the property doesn't exist). Otherwise,
``wlauto.utils.android.AndroidProperties`` will be returned, which is
a dict-like object."""
props = AndroidProperties(self.execute('getprop'))
if prop:
return props[prop]
return props
# Android-specific methods. These either rely on specifics of adb or other
# Android-only concepts in their interface and/or implementation.
def forward_port(self, from_port, to_port):
"""
Forward a port on the device to a port on localhost.
:param from_port: Port on the device which to forward.
:param to_port: Port on the localhost to which the device port will be forwarded.
Ports should be specified using adb spec. See the "adb forward" section in "adb help".
"""
adb_command(self.adb_name, 'forward {} {}'.format(from_port, to_port), timeout=self.default_timeout)
def dump_logcat(self, outfile, filter_spec=None):
"""
Dump the contents of logcat, for the specified filter spec to the
specified output file.
See http://developer.android.com/tools/help/logcat.html
:param outfile: Output file on the host into which the contents of the
log will be written.
:param filter_spec: Logcat filter specification.
see http://developer.android.com/tools/debugging/debugging-log.html#filteringOutput
"""
if self._logcat_poller:
return self._logcat_poller.write_log(outfile)
else:
if filter_spec:
command = 'logcat -d -s {} > {}'.format(filter_spec, outfile)
else:
command = 'logcat -d > {}'.format(outfile)
return adb_command(self.adb_name, command, timeout=self.default_timeout)
def clear_logcat(self):
"""Clear (flush) logcat log."""
if self._logcat_poller:
return self._logcat_poller.clear_buffer()
else:
return adb_shell(self.adb_name, 'logcat -c', timeout=self.default_timeout)
def capture_screen(self, filepath):
"""Caputers the current device screen into the specified file in a PNG format."""
on_device_file = self.path.join(self.working_directory, 'screen_capture.png')
self.execute('screencap -p {}'.format(on_device_file))
self.pull_file(on_device_file, filepath)
self.delete_file(on_device_file)
def is_screen_on(self):
"""Returns ``True`` if the device screen is currently on, ``False`` otherwise."""
output = self.execute('dumpsys power')
match = SCREEN_STATE_REGEX.search(output)
if match:
return boolean(match.group(1))
else:
raise DeviceError('Could not establish screen state.')
def ensure_screen_is_on(self):
if not self.is_screen_on():
self.execute('input keyevent 26')
def disable_screen_lock(self):
"""
Attempts to disable the screen lock on the device.
.. note:: This does not always work...
Added in version 2.1.4
"""
lockdb = '/data/system/locksettings.db'
sqlcommand = "update locksettings set value=\\'0\\' where name=\\'screenlock.disabled\\';"
self.execute('sqlite3 {} "{}"'.format(lockdb, sqlcommand), as_root=True)
def disable_selinux(self):
# This may be invoked from initialize() so we can't use execute() or the
# standard API for doing this.
api_level = int(adb_shell(self.adb_name, 'getprop ro.build.version.sdk',
timeout=self.default_timeout).strip())
# SELinux was added in Android 4.3 (API level 18). Trying to
# 'getenforce' in earlier versions will produce an error.
if api_level >= 18:
se_status = self.execute('getenforce', as_root=True).strip()
if se_status == 'Enforcing':
self.execute('setenforce 0', as_root=True)
# Internal methods: do not use outside of the class.
def _update_build_properties(self, filepath, props):
try:
with open(filepath) as fh:
for line in fh:
line = re.sub(r'#.*', '', line).strip()
if not line:
continue
key, value = line.split('=', 1)
props[key] = value
except ValueError:
self.logger.warning('Could not parse build.prop.')
def _update_versions(self, filepath, props):
with open(filepath) as fh:
text = fh.read()
props['version'] = text
text = re.sub(r'#.*', '', text).strip()
match = re.search(r'^(Linux version .*?)\s*\((gcc version .*)\)$', text)
if match:
props['linux_version'] = match.group(1).strip()
props['gcc_version'] = match.group(2).strip()
else:
self.logger.warning('Could not parse version string.')
class _LogcatPoller(threading.Thread):
join_timeout = 5
def __init__(self, device, period, timeout=None):
super(_LogcatPoller, self).__init__()
self.adb_device = device.adb_name
self.logger = device.logger
self.period = period
self.timeout = timeout
self.stop_signal = threading.Event()
self.lock = threading.RLock()
self.buffer_file = tempfile.mktemp()
self.last_poll = 0
self.daemon = True
self.exc = None
def run(self):
self.logger.debug('Starting logcat polling.')
try:
while True:
if self.stop_signal.is_set():
break
with self.lock:
current_time = time.time()
if (current_time - self.last_poll) >= self.period:
self._poll()
time.sleep(0.5)
except Exception: # pylint: disable=W0703
self.exc = WorkerThreadError(self.name, sys.exc_info())
self.logger.debug('Logcat polling stopped.')
def stop(self):
self.logger.debug('Stopping logcat polling.')
self.stop_signal.set()
self.join(self.join_timeout)
if self.is_alive():
self.logger.error('Could not join logcat poller thread.')
if self.exc:
raise self.exc # pylint: disable=E0702
def clear_buffer(self):
self.logger.debug('Clearing logcat buffer.')
with self.lock:
adb_shell(self.adb_device, 'logcat -c', timeout=self.timeout)
with open(self.buffer_file, 'w') as _: # NOQA
pass
def write_log(self, outfile):
self.logger.debug('Writing logbuffer to {}.'.format(outfile))
with self.lock:
self._poll()
if os.path.isfile(self.buffer_file):
shutil.copy(self.buffer_file, outfile)
else: # there was no logcat trace at this time
with open(outfile, 'w') as _: # NOQA
pass
def close(self):
self.logger.debug('Closing logcat poller.')
if os.path.isfile(self.buffer_file):
os.remove(self.buffer_file)
def _poll(self):
with self.lock:
self.last_poll = time.time()
adb_command(self.adb_device, 'logcat -d >> {}'.format(self.buffer_file), timeout=self.timeout)
adb_command(self.adb_device, 'logcat -c', timeout=self.timeout)
class BigLittleDevice(AndroidDevice): # pylint: disable=W0223
parameters = [
Parameter('scheduler', default='hmp', override=True),
]

View File

@@ -21,8 +21,8 @@ from wlauto.core.extension import Parameter
from wlauto.core.workload import Workload
from wlauto.core.resource import NO_ONE
from wlauto.common.resources import ExtensionAsset, Executable
from wlauto.exceptions import WorkloadError, ResourceError
from wlauto.utils.android import ApkInfo
from wlauto.exceptions import WorkloadError, ResourceError, ConfigError
from wlauto.utils.android import ApkInfo, ANDROID_NORMAL_PERMISSIONS
from wlauto.utils.types import boolean
import wlauto.common.android.resources
@@ -89,7 +89,7 @@ class UiAutomatorWorkload(Workload):
for k, v in self.uiauto_params.iteritems():
params += ' -e {} {}'.format(k, v)
self.command = 'uiautomator runtest {}{} -c {}'.format(self.device_uiauto_file, params, method_string)
self.device.push_file(self.uiauto_file, self.device_uiauto_file)
self.device.push(self.uiauto_file, self.device_uiauto_file)
self.device.killall('uiautomator')
def run(self, context):
@@ -104,7 +104,7 @@ class UiAutomatorWorkload(Workload):
pass
def teardown(self, context):
self.device.delete_file(self.device_uiauto_file)
self.device.remove(self.device_uiauto_file)
def validate(self):
if not self.uiauto_file:
@@ -142,12 +142,23 @@ class ApkWorkload(Workload):
package = None
activity = None
view = None
install_timeout = None
default_install_timeout = 300
supported_platforms = ['android']
parameters = [
Parameter('install_timeout', kind=int, default=300,
description='Timeout for the installation of the apk.'),
Parameter('check_apk', kind=boolean, default=True,
description='''
Discover the APK for this workload on the host, and check that
the version matches the one on device (if already installed).
'''),
Parameter('force_install', kind=boolean, default=False,
description='''
Always re-install the APK, even if a matching version is
already installed on the device.
'''),
Parameter('uninstall_apk', kind=boolean, default=False,
description="If ``True``, will uninstall workload's APK as part of teardown."),
description='If ``True``, will uninstall workload\'s APK as part of teardown.'),
]
def __init__(self, device, _call_super=True, **kwargs):
@@ -156,12 +167,19 @@ class ApkWorkload(Workload):
self.apk_file = None
self.apk_version = None
self.logcat_log = None
self.force_reinstall = kwargs.get('force_reinstall', False)
if not self.install_timeout:
self.install_timeout = self.default_install_timeout
def init_resources(self, context):
self.apk_file = context.resolver.get(wlauto.common.android.resources.ApkFile(self), version=getattr(self, 'version', None))
self.apk_file = context.resolver.get(wlauto.common.android.resources.ApkFile(self),
version=getattr(self, 'version', None),
strict=self.check_apk)
def validate(self):
if self.check_apk:
if not self.apk_file:
raise WorkloadError('No APK file found for workload {}.'.format(self.name))
else:
if self.force_install:
raise ConfigError('force_install cannot be "True" when check_apk is set to "False".')
def setup(self, context):
self.initialize_package(context)
@@ -170,20 +188,36 @@ class ApkWorkload(Workload):
self.device.clear_logcat()
def initialize_package(self, context):
installed_version = self.device.get_installed_package_version(self.package)
installed_version = self.device.get_package_version(self.package)
if self.check_apk:
self.initialize_with_host_apk(context, installed_version)
else:
if not installed_version:
message = '''{} not found on the device and check_apk is set to "False"
so host version was not checked.'''
raise WorkloadError(message.format(self.package))
message = 'Version {} installed on device; skipping host APK check.'
self.logger.debug(message.format(installed_version))
self.reset(context)
self.apk_version = installed_version
def initialize_with_host_apk(self, context, installed_version):
host_version = ApkInfo(self.apk_file).version_name
if installed_version != host_version:
if installed_version:
message = '{} host version: {}, device version: {}; re-installing...'
self.logger.debug(message.format(os.path.basename(self.apk_file), host_version, installed_version))
self.logger.debug(message.format(os.path.basename(self.apk_file),
host_version, installed_version))
else:
message = '{} host version: {}, not found on device; installing...'
self.logger.debug(message.format(os.path.basename(self.apk_file), host_version))
self.force_reinstall = True
self.logger.debug(message.format(os.path.basename(self.apk_file),
host_version))
self.force_install = True # pylint: disable=attribute-defined-outside-init
else:
message = '{} version {} found on both device and host.'
self.logger.debug(message.format(os.path.basename(self.apk_file), host_version))
if self.force_reinstall:
self.logger.debug(message.format(os.path.basename(self.apk_file),
host_version))
if self.force_install:
if installed_version:
self.device.uninstall(self.package)
self.install_apk(context)
@@ -202,6 +236,11 @@ class ApkWorkload(Workload):
self.device.execute('am force-stop {}'.format(self.package))
self.device.execute('pm clear {}'.format(self.package))
# As of android API level 23, apps can request permissions at runtime,
# this will grant all of them so requests do not pop up when running the app
if self.device.os_version['sdk'] >= 23:
self._grant_requested_permissions()
def install_apk(self, context):
output = self.device.install(self.apk_file, self.install_timeout)
if 'Failure' in output:
@@ -213,6 +252,26 @@ class ApkWorkload(Workload):
self.logger.debug(output)
self.do_post_install(context)
def _grant_requested_permissions(self):
dumpsys_output = self.device.execute(command="dumpsys package {}".format(self.package))
permissions = []
lines = iter(dumpsys_output.splitlines())
for line in lines:
if "requested permissions:" in line:
break
for line in lines:
if "android.permission." in line:
permissions.append(line.split(":")[0].strip())
else:
break
for permission in permissions:
# "Normal" Permisions are automatically granted and cannot be changed
permission_name = permission.rsplit('.', 1)[1]
if permission_name not in ANDROID_NORMAL_PERMISSIONS:
self.device.execute("pm grant {} {}".format(self.package, permission))
def do_post_install(self, context):
""" May be overwritten by dervied classes."""
pass
@@ -222,7 +281,7 @@ class ApkWorkload(Workload):
def update_result(self, context):
self.logcat_log = os.path.join(context.output_directory, 'logcat.log')
self.device.dump_logcat(self.logcat_log)
context.device_manager.dump_logcat(self.logcat_log)
context.add_iteration_artifact(name='logcat',
path='logcat.log',
kind='log',
@@ -233,10 +292,6 @@ class ApkWorkload(Workload):
if self.uninstall_apk:
self.device.uninstall(self.package)
def validate(self):
if not self.apk_file:
raise WorkloadError('No APK file found for workload {}.'.format(self.name))
AndroidBenchmark = ApkWorkload # backward compatibility
@@ -250,7 +305,7 @@ class ReventWorkload(Workload):
if _call_super:
super(ReventWorkload, self).__init__(device, **kwargs)
devpath = self.device.path
self.on_device_revent_binary = devpath.join(self.device.working_directory, 'revent')
self.on_device_revent_binary = devpath.join(self.device.binaries_directory, 'revent')
self.on_device_setup_revent = devpath.join(self.device.working_directory, '{}.setup.revent'.format(self.device.name))
self.on_device_run_revent = devpath.join(self.device.working_directory, '{}.run.revent'.format(self.device.name))
self.setup_timeout = kwargs.get('setup_timeout', self.default_setup_timeout)
@@ -278,8 +333,8 @@ class ReventWorkload(Workload):
pass
def teardown(self, context):
self.device.delete_file(self.on_device_setup_revent)
self.device.delete_file(self.on_device_run_revent)
self.device.remove(self.on_device_setup_revent)
self.device.remove(self.on_device_run_revent)
def _check_revent_files(self, context):
# check the revent binary
@@ -298,12 +353,14 @@ class ReventWorkload(Workload):
raise WorkloadError(message)
self.on_device_revent_binary = self.device.install_executable(revent_binary)
self.device.push_file(self.revent_run_file, self.on_device_run_revent)
self.device.push_file(self.revent_setup_file, self.on_device_setup_revent)
self.device.push(self.revent_run_file, self.on_device_run_revent)
self.device.push(self.revent_setup_file, self.on_device_setup_revent)
class AndroidUiAutoBenchmark(UiAutomatorWorkload, AndroidBenchmark):
supported_platforms = ['android']
def __init__(self, device, **kwargs):
UiAutomatorWorkload.__init__(self, device, **kwargs)
AndroidBenchmark.__init__(self, device, _call_super=False, **kwargs)
@@ -355,8 +412,19 @@ class GameWorkload(ApkWorkload, ReventWorkload):
asset_file = None
saved_state_file = None
view = 'SurfaceView'
install_timeout = 500
loading_time = 10
supported_platforms = ['android']
parameters = [
Parameter('install_timeout', default=500, override=True),
Parameter('assets_push_timeout', kind=int, default=500,
description='Timeout used during deployment of the assets package (if there is one).'),
Parameter('clear_data_on_reset', kind=bool, default=True,
description="""
If set to ``False``, this will prevent WA from clearing package
data for this workload prior to running it.
"""),
]
def __init__(self, device, **kwargs): # pylint: disable=W0613
ApkWorkload.__init__(self, device, **kwargs)
@@ -377,15 +445,17 @@ class GameWorkload(ApkWorkload, ReventWorkload):
def do_post_install(self, context):
ApkWorkload.do_post_install(self, context)
self._deploy_assets(context)
self._deploy_assets(context, self.assets_push_timeout)
def reset(self, context):
# If saved state exists, restore it; if not, do full
# uninstall/install cycle.
self.device.execute('am force-stop {}'.format(self.package))
if self.saved_state_file:
self._deploy_resource_tarball(context, self.saved_state_file)
else:
ApkWorkload.reset(self, context)
if self.clear_data_on_reset:
self.device.execute('pm clear {}'.format(self.package))
self._deploy_assets(context)
def run(self, context):
@@ -416,9 +486,9 @@ class GameWorkload(ApkWorkload, ReventWorkload):
raise WorkloadError(message.format(resource_file, self.name))
# adb push will create intermediate directories if they don't
# exist.
self.device.push_file(asset_tarball, ondevice_cache)
self.device.push(asset_tarball, ondevice_cache, timeout=timeout)
device_asset_directory = self.device.path.join(self.device.external_storage_directory, 'Android', kind)
device_asset_directory = self.device.path.join(self.context.device_manager.external_storage_directory, 'Android', kind)
deploy_command = 'cd {} && {} tar -xzf {}'.format(device_asset_directory,
self.device.busybox,
ondevice_cache)
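# For illustration, with hypothetical paths the assembled command would look
# something like:
#
#   cd /sdcard/Android/data && /data/local/tmp/busybox tar -xzf /sdcard/wa/.cache/assets.tar.gz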

BIN
wlauto/common/bin/arm64/m5 Executable file

Binary file not shown.

Binary file not shown.

BIN
wlauto/common/bin/armeabi/m5 Executable file

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,6 @@
The gem5 simulator can be obtained from http://repo.gem5.org/gem5/ and the
corresponding documentation can be found at http://www.gem5.org.
The source for the m5 binaries bundled with Workload Automation (found at
wlauto/common/bin/arm64/m5 and wlauto/common/bin/armeabi/m5) can be found at
util/m5 in the gem5 source at http://repo.gem5.org/gem5/.

View File

@@ -1,4 +1,4 @@
# Copyright 2014-2015 ARM Limited
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

View File

@@ -1,966 +0,0 @@
# Copyright 2014-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=E1101
import os
import re
from collections import namedtuple
from subprocess import CalledProcessError
from wlauto.core.extension import Parameter
from wlauto.core.device import Device, RuntimeParameter, CoreParameter
from wlauto.core.resource import NO_ONE
from wlauto.exceptions import ConfigError, DeviceError, TimeoutError, DeviceNotRespondingError
from wlauto.common.resources import Executable
from wlauto.utils.cpuinfo import Cpuinfo
from wlauto.utils.misc import convert_new_lines, escape_double_quotes
from wlauto.utils.ssh import SshShell
from wlauto.utils.types import boolean, list_of_strings
# a dict of governor name and a list of it tunables that can't be read
WRITE_ONLY_TUNABLES = {
'interactive': ['boostpulse']
}
FstabEntry = namedtuple('FstabEntry', ['device', 'mount_point', 'fs_type', 'options', 'dump_freq', 'pass_num'])
PsEntry = namedtuple('PsEntry', 'user pid ppid vsize rss wchan pc state name')
class BaseLinuxDevice(Device): # pylint: disable=abstract-method
path_module = 'posixpath'
has_gpu = True
parameters = [
Parameter('scheduler', kind=str, default='unknown',
allowed_values=['unknown', 'smp', 'hmp', 'iks', 'ea', 'other'],
description="""
Specifies the type of multi-core scheduling model utilized in the device. The value
must be one of the following:
:unknown: A generic Device interface is used to interact with the underlying device
and the underlying scheduling model is unknown.
:smp: A standard single-core or Symmetric Multi-Processing system.
:hmp: ARM Heterogeneous Multi-Processing system.
:iks: Linaro In-Kernel Switcher.
:ea: ARM Energy-Aware scheduler.
:other: Any other system not covered by the above.
.. note:: most currently-available systems would fall under ``smp`` rather than
this value. ``other`` is there to future-proof against new schemes
not yet covered by WA.
"""),
Parameter('iks_switch_frequency', kind=int, default=None,
description="""
This is the switching frequency, in kilohertz, of IKS devices. This parameter *MUST NOT*
be set for non-IKS devices (i.e. ``scheduler != 'iks'``). If left unset for IKS devices,
it will default to ``800000``, i.e. 800MHz.
"""),
]
runtime_parameters = [
RuntimeParameter('sysfile_values', 'get_sysfile_values', 'set_sysfile_values', value_name='params'),
CoreParameter('${core}_cores', 'get_number_of_active_cores', 'set_number_of_active_cores',
value_name='number'),
CoreParameter('${core}_min_frequency', 'get_core_min_frequency', 'set_core_min_frequency',
value_name='freq'),
CoreParameter('${core}_max_frequency', 'get_core_max_frequency', 'set_core_max_frequency',
value_name='freq'),
CoreParameter('${core}_governor', 'get_core_governor', 'set_core_governor',
value_name='governor'),
CoreParameter('${core}_governor_tunables', 'get_core_governor_tunables', 'set_core_governor_tunables',
value_name='tunables'),
]
@property
def active_cpus(self):
val = self.get_sysfile_value('/sys/devices/system/cpu/online')
cpus = re.findall(r"([\d]\-[\d]|[\d])", val)
active_cpus = []
for cpu in cpus:
if '-' in cpu:
lo, hi = cpu.split('-')
active_cpus.extend(range(int(lo), int(hi) + 1))
else:
active_cpus.append(int(cpu))
return active_cpus
@property
def number_of_cores(self):
"""
Added in version 2.1.4.
"""
if self._number_of_cores is None:
corere = re.compile('^\s*cpu\d+\s*$')
output = self.execute('ls /sys/devices/system/cpu')
self._number_of_cores = 0
for entry in output.split():
if corere.match(entry):
self._number_of_cores += 1
return self._number_of_cores
@property
def resource_cache(self):
return self.path.join(self.working_directory, '.cache')
@property
def file_transfer_cache(self):
return self.path.join(self.working_directory, '.transfer')
@property
def cpuinfo(self):
if not self._cpuinfo:
self._cpuinfo = Cpuinfo(self.execute('cat /proc/cpuinfo'))
return self._cpuinfo
def __init__(self, **kwargs):
super(BaseLinuxDevice, self).__init__(**kwargs)
self.busybox = None
self._is_initialized = False
self._is_ready = False
self._just_rebooted = False
self._is_rooted = None
self._available_frequencies = {}
self._available_governors = {}
self._available_governor_tunables = {}
self._number_of_cores = None
self._written_sysfiles = []
self._cpuinfo = None
def validate(self):
if len(self.core_names) != len(self.core_clusters):
raise ConfigError('core_names and core_clusters are of different lengths.')
if self.iks_switch_frequency is not None and self.scheduler != 'iks': # pylint: disable=E0203
raise ConfigError('iks_switch_frequency must NOT be set for non-IKS devices.')
if self.iks_switch_frequency is None and self.scheduler == 'iks': # pylint: disable=E0203
self.iks_switch_frequency = 800000 # pylint: disable=W0201
def initialize(self, context, *args, **kwargs):
self.execute('mkdir -p {}'.format(self.working_directory))
if self.is_rooted:
if not self.is_installed('busybox'):
self.busybox = self.deploy_busybox(context)
else:
self.busybox = 'busybox'
self.init(context, *args, **kwargs)
def get_sysfile_value(self, sysfile, kind=None):
"""
Get the contents of the specified sysfile.
:param sysfile: The file whose contents will be returned.
:param kind: The type of value to be expected in the sysfile. This can
be any Python callable that takes a single str argument.
If not specified or None, the contents will be returned
as a string.
"""
output = self.execute('cat \'{}\''.format(sysfile), as_root=True).strip() # pylint: disable=E1103
if kind:
return kind(output)
else:
return output
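# Illustrative usage (assuming a connected device instance):
#
#   online = device.get_sysfile_value('/sys/devices/system/cpu/cpu1/online', kind=int)
#
# Passing a callable via ``kind`` converts the file's contents before they
# are returned; here the string '1' would become the integer 1.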
def set_sysfile_value(self, sysfile, value, verify=True):
"""
Set the value of the specified sysfile. By default, the write will be verified afterwards;
this can be disabled by setting the ``verify`` parameter to ``False``.
"""
value = str(value)
self.execute('echo {} > \'{}\''.format(value, sysfile), check_exit_code=False, as_root=True)
if verify:
output = self.get_sysfile_value(sysfile)
if not output.strip() == value: # pylint: disable=E1103
message = 'Could not set the value of {} to {}'.format(sysfile, value)
raise DeviceError(message)
self._written_sysfiles.append(sysfile)
def get_sysfile_values(self):
"""
Returns a dict mapping paths of sysfiles that were previously set to their
current values.
"""
values = {}
for sysfile in self._written_sysfiles:
values[sysfile] = self.get_sysfile_value(sysfile)
return values
def set_sysfile_values(self, params):
"""
The plural version of ``set_sysfile_value``. Takes a single parameter which is a mapping of
file paths to values to be set. By default, every value written will be verified. This can
be disabled for individual paths by appending ``'!'`` to them.
"""
for sysfile, value in params.iteritems():
verify = not sysfile.endswith('!')
sysfile = sysfile.rstrip('!')
self.set_sysfile_value(sysfile, value, verify=verify)
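# Illustrative usage (assuming a connected device instance):
#
#   device.set_sysfile_values({
#       '/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor': 'performance',
#       '/sys/kernel/debug/tracing/tracing_on!': 1,  # trailing '!': write without verifying
#   })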
def deploy_busybox(self, context, force=False):
"""
Deploys the busybox binary to the
specified device, and returns the path to the binary on the device.
:param device: device to deploy the binary to.
:param context: an instance of ExecutionContext
:param force: by default, if the binary is already present on the
device, it will not be deployed again. Setting force
to ``True`` overrides that behavior and ensures that the
binary is always copied. Defaults to ``False``.
:returns: The on-device path to the busybox binary.
"""
on_device_executable = self.path.join(self.binaries_directory, 'busybox')
if not force and self.file_exists(on_device_executable):
return on_device_executable
host_file = context.resolver.get(Executable(NO_ONE, self.abi, 'busybox'))
return self.install(host_file)
def list_file_systems(self):
output = self.execute('mount')
fstab = []
for line in output.split('\n'):
fstab.append(FstabEntry(*line.split()))
return fstab
# Process query and control
def get_pids_of(self, process_name):
"""Returns a list of PIDs of all processes with the specified name."""
result = self.execute('ps {}'.format(process_name[-15:]), check_exit_code=False).strip()
if result and 'not found' not in result:
return [int(x.split()[1]) for x in result.split('\n')[1:]]
else:
return []
def ps(self, **kwargs):
"""
Returns the list of running processes on the device. Keyword arguments may
be used to specify simple filters for columns.
Added in version 2.1.4
"""
lines = iter(convert_new_lines(self.execute('ps')).split('\n'))
lines.next() # header
result = []
for line in lines:
parts = line.split()
if parts:
result.append(PsEntry(*(parts[0:1] + map(int, parts[1:5]) + parts[5:])))
if not kwargs:
return result
else:
filtered_result = []
for entry in result:
if all(getattr(entry, k) == v for k, v in kwargs.iteritems()):
filtered_result.append(entry)
return filtered_result
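# Illustrative usage: keyword arguments filter on PsEntry fields, e.g.
#
#   root_processes = device.ps(user='root')
#   surfaceflinger = device.ps(name='/system/bin/surfaceflinger')  # hypothetical process name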
def kill(self, pid, signal=None, as_root=False): # pylint: disable=W0221
"""
Kill the specified process.
:param pid: PID of the process to kill.
:param signal: Specify which signal to send to the process. This must
be a valid value for the -s option of kill. Defaults to ``None``.
Modified in version 2.1.4: added ``signal`` parameter.
"""
signal_string = '-s {}'.format(signal) if signal else ''
self.execute('kill {} {}'.format(signal_string, pid), as_root=as_root)
def killall(self, process_name, signal=None, as_root=False): # pylint: disable=W0221
"""
Kill all processes with the specified name.
:param process_name: The name of the process(es) to kill.
:param signal: Specify which signal to send to the process. This must
be a valid value for the -s option of kill. Defaults to ``None``.
Modified in version 2.1.5: added ``as_root`` parameter.
"""
for pid in self.get_pids_of(process_name):
self.kill(pid, signal=signal, as_root=as_root)
# cpufreq
def list_available_cpu_governors(self, cpu):
"""Returns a list of governors supported by the cpu."""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
if cpu not in self._available_governors:
cmd = 'cat /sys/devices/system/cpu/{}/cpufreq/scaling_available_governors'.format(cpu)
output = self.execute(cmd, check_exit_code=True)
self._available_governors[cpu] = output.strip().split() # pylint: disable=E1103
return self._available_governors[cpu]
def get_cpu_governor(self, cpu):
"""Returns the governor currently set for the specified CPU."""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_governor'.format(cpu)
return self.get_sysfile_value(sysfile)
def set_cpu_governor(self, cpu, governor, **kwargs):
"""
Set the governor for the specified CPU.
See https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt
:param cpu: The CPU for which the governor is to be set. This must be
the full name as it appears in sysfs, e.g. "cpu0".
:param governor: The name of the governor to be used. This must be
supported by the specific device.
Additional keyword arguments can be used to specify governor tunables for
governors that support them.
:note: On big.LITTLE all cores in a cluster must be using the same governor.
Setting the governor on any core in a cluster will also set it on all
other cores in that cluster.
:raises: ConfigError if governor is not supported by the CPU.
:raises: DeviceError if, for some reason, the governor could not be set.
"""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
supported = self.list_available_cpu_governors(cpu)
if governor not in supported:
raise ConfigError('Governor {} not supported for cpu {}'.format(governor, cpu))
sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_governor'.format(cpu)
self.set_sysfile_value(sysfile, governor)
self.set_cpu_governor_tunables(cpu, governor, **kwargs)
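# Illustrative usage, with a hypothetical tunable value:
#
#   device.set_cpu_governor('cpu0', 'interactive', above_hispeed_delay=20000)
#
# Keyword arguments are forwarded to set_cpu_governor_tunables().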
def list_available_cpu_governor_tunables(self, cpu):
"""Returns a list of tunables available for the governor on the specified CPU."""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
governor = self.get_cpu_governor(cpu)
if governor not in self._available_governor_tunables:
try:
tunables_path = '/sys/devices/system/cpu/{}/cpufreq/{}'.format(cpu, governor)
self._available_governor_tunables[governor] = self.listdir(tunables_path)
except DeviceError: # probably an older kernel
try:
tunables_path = '/sys/devices/system/cpu/cpufreq/{}'.format(governor)
self._available_governor_tunables[governor] = self.listdir(tunables_path)
except DeviceError: # governor does not support tunables
self._available_governor_tunables[governor] = []
return self._available_governor_tunables[governor]
def get_cpu_governor_tunables(self, cpu):
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
governor = self.get_cpu_governor(cpu)
tunables = {}
for tunable in self.list_available_cpu_governor_tunables(cpu):
if tunable not in WRITE_ONLY_TUNABLES.get(governor, []):
try:
path = '/sys/devices/system/cpu/{}/cpufreq/{}/{}'.format(cpu, governor, tunable)
tunables[tunable] = self.get_sysfile_value(path)
except DeviceError: # May be an older kernel
path = '/sys/devices/system/cpu/cpufreq/{}/{}'.format(governor, tunable)
tunables[tunable] = self.get_sysfile_value(path)
return tunables
def set_cpu_governor_tunables(self, cpu, governor, **kwargs):
"""
Set tunables for the specified governor. Tunables should be specified as
keyword arguments. Which tunables and values are valid depends on the
governor.
:param cpu: The cpu for which the governor will be set. This must be the
full cpu name as it appears in sysfs, e.g. ``cpu0``.
:param governor: The name of the governor. Must be all lower case.
The rest should be keyword parameters mapping tunable name onto the value to
be set for it.
:raises: ConfigError if governor specified is not a valid governor name, or if
a tunable specified is not valid for the governor.
:raises: DeviceError if a tunable could not be set.
"""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
valid_tunables = self.list_available_cpu_governor_tunables(cpu)
for tunable, value in kwargs.iteritems():
if tunable in valid_tunables:
try:
path = '/sys/devices/system/cpu/{}/cpufreq/{}/{}'.format(cpu, governor, tunable)
self.set_sysfile_value(path, value)
except DeviceError: # May be an older kernel
path = '/sys/devices/system/cpu/cpufreq/{}/{}'.format(governor, tunable)
self.set_sysfile_value(path, value)
else:
message = 'Unexpected tunable {} for governor {} on {}.\n'.format(tunable, governor, cpu)
message += 'Available tunables are: {}'.format(valid_tunables)
raise ConfigError(message)
def enable_cpu(self, cpu):
"""
Enable the specified core.
:param cpu: CPU core to enable. This must be the full name as it
appears in sysfs, e.g. "cpu0".
"""
self.hotplug_cpu(cpu, online=True)
def disable_cpu(self, cpu):
"""
Disable the specified core.
:param cpu: CPU core to disable. This must be the full name as it
appears in sysfs, e.g. "cpu0".
"""
self.hotplug_cpu(cpu, online=False)
def hotplug_cpu(self, cpu, online):
"""
Hotplug the specified CPU either on or off.
See https://www.kernel.org/doc/Documentation/cpu-hotplug.txt
:param cpu: The CPU to be hotplugged. This must be
the full name as it appears in sysfs, e.g. "cpu0".
:param online: The CPU will be enabled if this value evaluates to True, and
will be disabled otherwise.
"""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
status = 1 if online else 0
sysfile = '/sys/devices/system/cpu/{}/online'.format(cpu)
self.set_sysfile_value(sysfile, status)
def list_available_cpu_frequencies(self, cpu):
"""Returns a list of frequencies supported by the cpu or an empty list
if not could be found."""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
if cpu not in self._available_frequencies:
try:
cmd = 'cat /sys/devices/system/cpu/{}/cpufreq/scaling_available_frequencies'.format(cpu)
output = self.execute(cmd)
self._available_frequencies[cpu] = map(int, output.strip().split()) # pylint: disable=E1103
except DeviceError:
# we return an empty list because on some devices scaling_available_frequencies
# is not generated, so an empty list is returned as an indication.
# http://adrynalyne-teachtofish.blogspot.co.uk/2011/11/how-to-enable-scalingavailablefrequenci.html
self._available_frequencies[cpu] = []
return self._available_frequencies[cpu]
def get_cpu_min_frequency(self, cpu):
"""
Returns the min frequency currently set for the specified CPU.
Warning: this method does not check whether the CPU is online; attempting
to read the frequency of an offline CPU will fail.
:raises: DeviceError if, for some reason, the frequency could not be read.
"""
sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_min_freq'.format(cpu)
return self.get_sysfile_value(sysfile)
def set_cpu_min_frequency(self, cpu, frequency):
"""
Sets the minimum value for CPU frequency. The actual frequency will
depend on the governor used and may vary during execution. The value should be
either an int or a string representing an integer. The value must also be
supported by the device. The available frequencies can be obtained by calling
get_available_frequencies() or examining
/sys/devices/system/cpu/cpuX/cpufreq/scaling_available_frequencies
on the device.
:raises: ConfigError if the frequency is not supported by the CPU.
:raises: DeviceError if, for some reason, frequency could not be set.
"""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
available_frequencies = self.list_available_cpu_frequencies(cpu)
try:
value = int(frequency)
if available_frequencies and value not in available_frequencies:
raise ConfigError('Can\'t set {} frequency to {}\nmust be in {}'.format(cpu,
value,
available_frequencies))
sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_min_freq'.format(cpu)
self.set_sysfile_value(sysfile, value)
except ValueError:
raise ValueError('value must be an integer; got: "{}"'.format(value))
def get_cpu_max_frequency(self, cpu):
"""
Returns the max frequency currently set for the specified CPU.
Warning: this method does not check whether the CPU is online; attempting
to read the frequency of an offline CPU will fail.
:raises: DeviceError if, for some reason, the frequency could not be read.
"""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_max_freq'.format(cpu)
return self.get_sysfile_value(sysfile)
def set_cpu_max_frequency(self, cpu, frequency):
"""
Sets the maximum value for CPU frequency. The actual frequency will
depend on the governor used and may vary during execution. The value should be
either an int or a string representing an integer. The value must also be
supported by the device. The available frequencies can be obtained by calling
get_available_frequencies() or examining
/sys/devices/system/cpu/cpuX/cpufreq/scaling_available_frequencies
on the device.
:raises: ConfigError if the frequency is not supported by the CPU.
:raises: DeviceError if, for some reason, frequency could not be set.
"""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
available_frequencies = self.list_available_cpu_frequencies(cpu)
try:
value = int(frequency)
if available_frequencies and value not in available_frequencies:
raise DeviceError('Can\'t set {} frequency to {}\nmust be in {}'.format(cpu,
value,
available_frequencies))
sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_max_freq'.format(cpu)
self.set_sysfile_value(sysfile, value)
except ValueError:
raise ValueError('value must be an integer; got: "{}"'.format(value))
def get_cpuidle_states(self, cpu=0):
"""
Return map of cpuidle states with their descriptive names.
"""
if isinstance(cpu, int):
cpu = 'cpu{}'.format(cpu)
cpuidle_states = {}
statere = re.compile('^\s*state\d+\s*$')
output = self.execute("ls /sys/devices/system/cpu/{}/cpuidle".format(cpu))
for entry in output.split():
if statere.match(entry):
cpuidle_states[entry] = self.get_sysfile_value("/sys/devices/system/cpu/{}/cpuidle/{}/desc".format(cpu, entry))
return cpuidle_states
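# Illustrative return value (state names and descriptions vary by kernel):
#
#   {'state0': 'WFI', 'state1': 'cpu-sleep'}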
# Core- and cluster-level mappings for the cpu-level APIs above. The
# APIs make the following assumptions, which were True for all devices that
# existed at the time of writing:
# 1. A cluster can only contain cores of one type.
# 2. All cores in a cluster are tied to the same DVFS domain, therefore
# changes to cpufreq for a core will affect all other cores on the
# same cluster.
def get_core_clusters(self, core, strict=True):
"""Returns the list of clusters that contain the specified core. if ``strict``
is ``True``, raises ValueError if no clusters has been found (returns empty list
if ``strict`` is ``False``)."""
core_indexes = [i for i, c in enumerate(self.core_names) if c == core]
clusters = sorted(list(set(self.core_clusters[i] for i in core_indexes)))
if strict and not clusters:
raise ValueError('No active clusters for core {}'.format(core))
return clusters
def get_cluster_cpu(self, cluster):
"""Returns the first *active* cpu for the cluster. If the entire cluster
has been hotplugged off, this will raise a ``ValueError``."""
cpu_indexes = set([i for i, c in enumerate(self.core_clusters) if c == cluster])
active_cpus = sorted(list(cpu_indexes.intersection(self.active_cpus)))
if not active_cpus:
raise ValueError('All cpus for cluster {} are offline'.format(cluster))
return active_cpus[0]
def list_available_cluster_governors(self, cluster):
return self.list_available_cpu_governors(self.get_cluster_cpu(cluster))
def get_cluster_governor(self, cluster):
return self.get_cpu_governor(self.get_cluster_cpu(cluster))
def set_cluster_governor(self, cluster, governor, **tunables):
return self.set_cpu_governor(self.get_cluster_cpu(cluster), governor, **tunables)
def list_available_cluster_governor_tunables(self, cluster):
return self.list_available_cpu_governor_tunables(self.get_cluster_cpu(cluster))
def get_cluster_governor_tunables(self, cluster):
return self.get_cpu_governor_tunables(self.get_cluster_cpu(cluster))
def set_cluster_governor_tunables(self, cluster, governor, **tunables):
return self.set_cpu_governor_tunables(self.get_cluster_cpu(cluster), governor, **tunables)
def get_cluster_min_frequency(self, cluster):
return self.get_cpu_min_frequency(self.get_cluster_cpu(cluster))
def set_cluster_min_frequency(self, cluster, freq):
return self.set_cpu_min_frequency(self.get_cluster_cpu(cluster), freq)
def get_cluster_max_frequency(self, cluster):
return self.get_cpu_max_frequency(self.get_cluster_cpu(cluster))
def set_cluster_max_frequency(self, cluster, freq):
return self.set_cpu_max_frequency(self.get_cluster_cpu(cluster), freq)
def get_core_cpu(self, core):
for cluster in self.get_core_clusters(core):
try:
return self.get_cluster_cpu(cluster)
except ValueError:
pass
raise ValueError('No active CPUs found for core {}'.format(core))
def list_available_core_governors(self, core):
return self.list_available_cpu_governors(self.get_core_cpu(core))
def get_core_governor(self, core):
return self.get_cpu_governor(self.get_core_cpu(core))
def set_core_governor(self, core, governor, **tunables):
for cluster in self.get_core_clusters(core):
self.set_cluster_governor(cluster, governor, **tunables)
def list_available_core_governor_tunables(self, core):
return self.list_available_cpu_governor_tunables(self.get_core_cpu(core))
def get_core_governor_tunables(self, core):
return self.get_cpu_governor_tunables(self.get_core_cpu(core))
def set_core_governor_tunables(self, core, tunables):
for cluster in self.get_core_clusters(core):
governor = self.get_cluster_governor(cluster)
self.set_cluster_governor_tunables(cluster, governor, **tunables)
def get_core_min_frequency(self, core):
return self.get_cpu_min_frequency(self.get_core_cpu(core))
def set_core_min_frequency(self, core, freq):
for cluster in self.get_core_clusters(core):
self.set_cluster_min_frequency(cluster, freq)
def get_core_max_frequency(self, core):
return self.get_cpu_max_frequency(self.get_core_cpu(core))
def set_core_max_frequency(self, core, freq):
for cluster in self.get_core_clusters(core):
self.set_cluster_max_frequency(cluster, freq)
def get_number_of_active_cores(self, core):
if core not in self.core_names:
raise ValueError('Unexpected core: {}; must be in {}'.format(core, list(set(self.core_names))))
active_cpus = self.active_cpus
num_active_cores = 0
for i, c in enumerate(self.core_names):
if c == core and i in active_cpus:
num_active_cores += 1
return num_active_cores
def set_number_of_active_cores(self, core, number):
if core not in self.core_names:
raise ValueError('Unexpected core: {}; must be in {}'.format(core, list(set(self.core_names))))
core_ids = [i for i, c in enumerate(self.core_names) if c == core]
max_cores = len(core_ids)
if number > max_cores:
message = 'Attempting to set the number of active {} to {}; maximum is {}'
raise ValueError(message.format(core, number, max_cores))
for i in xrange(0, number):
self.enable_cpu(core_ids[i])
for i in xrange(number, max_cores):
self.disable_cpu(core_ids[i])
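# Illustrative usage, assuming a device whose core_names include 'a53':
#
#   device.set_number_of_active_cores('a53', 2)
#
# This enables the first two a53 cores and hotplugs the rest off.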
# internal methods
def _check_ready(self):
if not self._is_ready:
raise AttributeError('Device not ready.')
def _get_core_cluster(self, core):
"""Returns the first cluster that has cores of the specified type. Raises
ValueError if no cluster for the specified type has been found."""
core_indexes = [i for i, c in enumerate(self.core_names) if c == core]
core_clusters = set(self.core_clusters[i] for i in core_indexes)
if not core_clusters:
raise ValueError('No cluster found for core {}'.format(core))
return sorted(list(core_clusters))[0]
class LinuxDevice(BaseLinuxDevice):
platform = 'linux'
default_timeout = 30
delay = 2
long_delay = 3 * delay
ready_timeout = 60
parameters = [
Parameter('host', mandatory=True, description='Host name or IP address for the device.'),
Parameter('username', mandatory=True, description='User name for the account on the device.'),
Parameter('password', description='Password for the account on the device (for password-based auth).'),
Parameter('keyfile', description='Keyfile to be used for key-based authentication.'),
Parameter('port', kind=int, description='SSH port number on the device.'),
Parameter('use_telnet', kind=boolean, default=False,
description='Optionally, telnet may be used instead of ssh, though this is discouraged.'),
Parameter('working_directory', default=None,
description='''
Working directory to be used by WA. This must be in a location where the specified user
has write permissions. This will default to /home/<username>/wa (or to /root/wa, if
username is 'root').
'''),
Parameter('binaries_directory', default='/usr/local/bin',
description='Location of executable binaries on this device (must be in PATH).'),
Parameter('property_files', kind=list_of_strings,
default=['/proc/version', '/etc/debian_version', '/etc/lsb-release', '/etc/arch-release'],
description='''
A list of paths to files containing static OS properties. These will be pulled into the
__meta directory in output for each run in order to provide information about the platform.
These paths do not have to exist and will be ignored if the path is not present on a
particular device.
'''),
]
@property
def is_rooted(self):
if self._is_rooted is None:
try:
self.execute('ls /', as_root=True)
self._is_rooted = True
except DeviceError:
self._is_rooted = False
return self._is_rooted
def __init__(self, *args, **kwargs):
super(LinuxDevice, self).__init__(*args, **kwargs)
self.shell = None
self.local_binaries_directory = None
self._is_rooted = None
def validate(self):
if not self.password and not self.keyfile:
raise ConfigError('Either a password or a keyfile must be provided.')
if self.working_directory is None: # pylint: disable=access-member-before-definition
if self.username == 'root':
self.working_directory = '/root/wa' # pylint: disable=attribute-defined-outside-init
else:
self.working_directory = '/home/{}/wa'.format(self.username) # pylint: disable=attribute-defined-outside-init
self.local_binaries_directory = self.path.join(self.working_directory, 'bin')
def initialize(self, context, *args, **kwargs):
self.execute('mkdir -p {}'.format(self.local_binaries_directory))
self.execute('export PATH={}:$PATH'.format(self.local_binaries_directory))
super(LinuxDevice, self).initialize(context, *args, **kwargs)
# Power control
def reset(self):
self._is_ready = False
self.execute('reboot', as_root=True)
def hard_reset(self):
super(LinuxDevice, self).hard_reset()
self._is_ready = False
def boot(self, **kwargs):
self.reset()
def connect(self): # NOQA pylint: disable=R0912
self.shell = SshShell(timeout=self.default_timeout)
self.shell.login(self.host, self.username, self.password, self.keyfile, self.port, telnet=self.use_telnet)
self._is_ready = True
def disconnect(self): # NOQA pylint: disable=R0912
self.shell.logout()
self._is_ready = False
# Execution
def has_root(self):
try:
self.execute('ls /', as_root=True)
return True
except DeviceError as e:
if 'not in the sudoers file' not in e.message:
raise e
return False
def execute(self, command, timeout=default_timeout, check_exit_code=True, background=False,
as_root=False, strip_colors=True, **kwargs):
"""
Execute the specified command on the device over the SSH shell.
Parameters:
:param command: The command to be executed. It should appear exactly
as if you were typing it into a shell.
:param timeout: Time, in seconds, to wait for the command to complete before aborting
and raising an error. Defaults to ``LinuxDevice.default_timeout``.
:param check_exit_code: If ``True``, the return code of the command on the Device will
be checked and an exception will be raised if it is not 0.
Defaults to ``True``.
:param background: If ``True``, will create a new ssh shell rather than using
the default session and will return it immediately. If this is ``True``,
``timeout``, ``strip_colors`` and (obviously) ``check_exit_code`` will
be ignored; also, with this, ``as_root=True`` is only valid if ``username``
for the device was set to ``root``.
:param as_root: If ``True``, will attempt to execute command in privileged mode. The device
must be rooted, otherwise an error will be raised. Defaults to ``False``.
Added in version 2.1.3
:returns: If ``background`` parameter is set to ``True``, the subprocess object will
be returned; otherwise, the contents of STDOUT from the device will be returned.
"""
self._check_ready()
if background:
if as_root and self.username != 'root':
raise DeviceError('Cannot execute in background with as_root=True unless user is root.')
return self.shell.background(command)
else:
return self.shell.execute(command, timeout, check_exit_code, as_root, strip_colors)
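# Illustrative usage (assuming a connected device instance):
#
#   output = device.execute('uname -a')  # blocks until complete, returns stdout
#   proc = device.execute('cat /dev/urandom > /dev/null', background=True)
#   # ... do other work, then terminate the background session:
#   proc.kill()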
def kick_off(self, command):
"""
Like execute but closes the ssh session and returns immediately, leaving the command running on the
device (this is different from execute(background=True), which keeps the ssh connection open and returns
a subprocess object).
"""
self._check_ready()
command = 'sh -c "{}" 1>/dev/null 2>/dev/null &'.format(escape_double_quotes(command))
return self.shell.execute(command)
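# Illustrative usage: the command outlives the session, so it must be
# stopped explicitly, e.g.
#
#   device.kick_off('sleep 600')
#   # ... later:
#   device.killall('sleep')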
# File management
def push_file(self, source, dest, as_root=False, timeout=default_timeout): # pylint: disable=W0221
self._check_ready()
if not as_root or self.username == 'root':
self.shell.push_file(source, dest, timeout=timeout)
else:
tempfile = self.path.join(self.working_directory, self.path.basename(dest))
self.shell.push_file(source, tempfile, timeout=timeout)
self.shell.execute('cp -r {} {}'.format(tempfile, dest), timeout=timeout, as_root=True)
def pull_file(self, source, dest, as_root=False, timeout=default_timeout): # pylint: disable=W0221
self._check_ready()
if not as_root or self.username == 'root':
self.shell.pull_file(source, dest, timeout=timeout)
else:
tempfile = self.path.join(self.working_directory, self.path.basename(source))
self.shell.execute('cp -r {} {}'.format(source, tempfile), timeout=timeout, as_root=True)
self.shell.execute('chown -R {} {}'.format(self.username, tempfile), timeout=timeout, as_root=True)
self.shell.pull_file(tempfile, dest, timeout=timeout)
def delete_file(self, filepath, as_root=False): # pylint: disable=W0221
self.execute('rm -rf {}'.format(filepath), as_root=as_root)
def file_exists(self, filepath):
output = self.execute('if [ -e \'{}\' ]; then echo 1; else echo 0; fi'.format(filepath))
return boolean(output.strip()) # pylint: disable=maybe-no-member
def listdir(self, path, as_root=False, **kwargs):
contents = self.execute('ls -1 {}'.format(path), as_root=as_root)
return [x.strip() for x in contents.split('\n')] # pylint: disable=maybe-no-member
def install(self, filepath, timeout=default_timeout, with_name=None): # pylint: disable=W0221
if self.is_rooted:
destpath = self.path.join(self.binaries_directory,
with_name and with_name or self.path.basename(filepath))
self.push_file(filepath, destpath, as_root=True)
self.execute('chmod a+x {}'.format(destpath), timeout=timeout, as_root=True)
else:
destpath = self.path.join(self.local_binaries_directory,
with_name and with_name or self.path.basename(filepath))
self.push_file(filepath, destpath)
self.execute('chmod a+x {}'.format(destpath), timeout=timeout)
return destpath
install_executable = install # compatibility
def uninstall(self, name):
path = self.path.join(self.local_binaries_directory, name)
self.delete_file(path)
uninstall_executable = uninstall # compatibility
def is_installed(self, name):
try:
self.execute('which {}'.format(name))
return True
except DeviceError:
return False
# misc
def ping(self):
try:
# May be triggered inside initialize()
self.shell.execute('ls /', timeout=5)
except (TimeoutError, CalledProcessError):
raise DeviceNotRespondingError(self.host)
def capture_screen(self, filepath):
if not self.is_installed('scrot'):
self.logger.debug('Could not take screenshot as scrot is not installed.')
return
try:
tempfile = self.path.join(self.working_directory, os.path.basename(filepath))
self.execute('DISPLAY=:0.0 scrot {}'.format(tempfile))
self.pull_file(tempfile, filepath)
self.delete_file(tempfile)
except DeviceError as e:
if "Can't open X dispay." not in e.message:
raise e
message = e.message.split('OUTPUT:', 1)[1].strip()
self.logger.debug('Could not take screenshot: {}'.format(message))
def is_screen_on(self):
pass # TODO
def ensure_screen_is_on(self):
pass # TODO
def get_properties(self, context):
for propfile in self.property_files:
if not self.file_exists(propfile):
continue
normname = propfile.lstrip(self.path.sep).replace(self.path.sep, '.')
outfile = os.path.join(context.host_working_directory, normname)
self.pull_file(propfile, outfile)
return {}

View File

@@ -40,6 +40,20 @@ reboot_policy = 'as_needed'
# random: Randomises the order in which specs run. #
execution_order = 'by_iteration'
# This indicates when a job will be re-run.
# Possible values:
# OK: This iteration has completed and no errors have been detected
# PARTIAL: One or more instruments have failed (the iteration may still be running).
# FAILED: The workload itself has failed.
# ABORTED: The user interrupted the workload
#
# If set to an empty list, a job will never be re-run.
retry_on_status = ['FAILED', 'PARTIAL']
# How many times a job will be re-run before giving up
max_retries = 3
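# For example, to disable retrying altogether:
#
#   retry_on_status = []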
####################################################################################################
######################################### Device Settings ##########################################
####################################################################################################
@@ -120,6 +134,9 @@ instrumentation = [
# Specifies how results will be processed and presented. #
# #
result_processors = [
# Creates a status.txt that provides a summary status for the run
'status',
# Creates a results.txt file for each iteration that lists all collected metrics
# in "name = value (units)" format
'standard',
@@ -132,7 +149,7 @@ result_processors = [
# all in the .csv format. Summary metrics are defined on a per-workload basis
# and are typically things like overall scores. The contents of summary.csv are
# always a subset of the contents of results.csv (if it is generated).
'summary_csv',
#'summary_csv',
# Creates a results.json that contains metrics for all iterations of all workloads
# in the JSON format
@@ -195,18 +212,6 @@ logging = {
# The kinds of sensors hwmon instrument will look for
#hwmon_sensors = ['energy', 'temp']
####################################################################################################
##################################### streamline configuration #####################################
# The port number on which gatord will listen
#port = 8080
# Enabling/disabling the run of 'streamline -analyze' on the captured data.
#streamline_analyze = True
# Enabling/disabling the generation of a CSV report
#streamline_report_csv = True
####################################################################################################
###################################### trace-cmd configuration #####################################

View File

@@ -16,13 +16,12 @@
import os
from copy import copy
from collections import OrderedDict, defaultdict
import yaml
from wlauto.exceptions import ConfigError
from wlauto.utils.misc import load_struct_from_yaml, LoadSyntaxError
from wlauto.utils.types import counter, reset_counter
import yaml
def get_aliased_param(d, aliases, default=None, pop=True):
alias_map = [i for i, a in enumerate(aliases) if a in d]
@@ -70,6 +69,7 @@ class AgendaWorkloadEntry(AgendaEntry):
default=OrderedDict())
self.instrumentation = kwargs.pop('instrumentation', [])
self.flash = kwargs.pop('flash', OrderedDict())
self.classifiers = kwargs.pop('classifiers', OrderedDict())
if kwargs:
raise ConfigError('Invalid entry(ies) in workload {}: {}'.format(self.id, ', '.join(kwargs.keys())))
@@ -96,6 +96,7 @@ class AgendaSectionEntry(AgendaEntry):
default=OrderedDict())
self.instrumentation = kwargs.pop('instrumentation', [])
self.flash = kwargs.pop('flash', OrderedDict())
self.classifiers = kwargs.pop('classifiers', OrderedDict())
self.workloads = []
for w in kwargs.pop('workloads', []):
self.workloads.append(agenda.get_workload_entry(w))
@@ -128,6 +129,7 @@ class AgendaGlobalEntry(AgendaEntry):
default=OrderedDict())
self.instrumentation = kwargs.pop('instrumentation', [])
self.flash = kwargs.pop('flash', OrderedDict())
self.classifiers = kwargs.pop('classifiers', OrderedDict())
if kwargs:
raise ConfigError('Invalid entries in global section: {}'.format(kwargs))
@@ -136,7 +138,7 @@ class Agenda(object):
def __init__(self, source=None):
self.filepath = None
self.config = None
self.config = {}
self.global_ = None
self.sections = []
self.workloads = []
@@ -161,13 +163,22 @@ class Agenda(object):
self._assign_id_if_needed(w, 'workload')
return AgendaWorkloadEntry(**w)
def _load(self, source):
raw = self._load_raw_from_source(source)
def _load(self, source): # pylint: disable=too-many-branches
try:
raw = self._load_raw_from_source(source)
except ValueError as e:
name = getattr(source, 'name', '')
raise ConfigError('Error parsing agenda {}: {}'.format(name, e))
if not isinstance(raw, dict):
message = '{} does not contain a valid agenda structure; top level must be a dict.'
raise ConfigError(message.format(self.filepath))
for k, v in raw.iteritems():
if v is None:
raise ConfigError('Empty "{}" entry in {}'.format(k, self.filepath))
if k == 'config':
if not isinstance(v, dict):
raise ConfigError('Invalid agenda: "config" entry must be a dict')
self.config = v
elif k == 'global':
self.global_ = AgendaGlobalEntry(**v)
@@ -237,7 +248,13 @@ def dict_representer(dumper, data):
def dict_constructor(loader, node):
return OrderedDict(loader.construct_pairs(node))
pairs = loader.construct_pairs(node)
seen_keys = set()
for k, _ in pairs:
if k in seen_keys:
raise ValueError('Duplicate entry: {}'.format(k))
seen_keys.add(k)
return OrderedDict(pairs)
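# Illustrative effect of the duplicate check above: an agenda such as
#
#   config:
#       device: generic_android
#       device: juno
#
# now raises ValueError('Duplicate entry: device') while loading, which
# Agenda._load reports as a ConfigError instead of silently keeping only
# the last value.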
yaml.add_representer(OrderedDict, dict_representer)

View File

@@ -16,13 +16,13 @@
import os
import shutil
import imp
import sys
import re
from collections import namedtuple, OrderedDict
from wlauto.exceptions import ConfigError
from wlauto.utils.misc import merge_dicts, normalize, unique
from wlauto.utils.misc import load_struct_from_yaml, load_struct_from_python, LoadSyntaxError
from wlauto.utils.types import identifier
@@ -53,14 +53,13 @@ sys.path.insert(0, os.path.join(_this_dir, '..', 'external'))
#pylint: disable=C0326
_EXTENSION_TYPE_TABLE = [
# name, class, default package, default path
('command', 'wlauto.core.command.Command', 'wlauto.commands', 'commands'),
('device', 'wlauto.core.device.Device', 'wlauto.devices', 'devices'),
('instrument', 'wlauto.core.instrumentation.Instrument', 'wlauto.instrumentation', 'instruments'),
('module', 'wlauto.core.extension.Module', 'wlauto.modules', 'modules'),
('resource_getter', 'wlauto.core.resource.ResourceGetter', 'wlauto.resource_getters', 'resource_getters'),
('result_processor', 'wlauto.core.result.ResultProcessor', 'wlauto.result_processors', 'result_processors'),
('workload', 'wlauto.core.workload.Workload', 'wlauto.workloads', 'workloads'),
# name, class, default package, default path
('command', 'wlauto.core.command.Command', 'wlauto.commands', 'commands'),
('device_manager', 'wlauto.core.device_manager.DeviceManager', 'wlauto.managers', 'managers'),
('instrument', 'wlauto.core.instrumentation.Instrument', 'wlauto.instrumentation', 'instruments'),
('resource_getter', 'wlauto.core.resource.ResourceGetter', 'wlauto.resource_getters', 'resource_getters'),
('result_processor', 'wlauto.core.result.ResultProcessor', 'wlauto.result_processors', 'result_processors'),
('workload', 'wlauto.core.workload.Workload', 'wlauto.workloads', 'workloads'),
]
_Extension = namedtuple('_Extension', 'name, cls, default_package, default_path')
_extensions = [_Extension._make(ext) for ext in _EXTENSION_TYPE_TABLE] # pylint: disable=W0212
@@ -76,7 +75,7 @@ class ConfigLoader(object):
self._loaded = False
self._config = {}
self.config_count = 0
self._loaded_files = []
self.loaded_files = []
self.environment_root = None
self.output_directory = 'wa_output'
self.reboot_after_each_iteration = True
@@ -106,14 +105,22 @@ class ConfigLoader(object):
self.update_from_file(source)
def update_from_file(self, source):
ext = os.path.splitext(source)[1].lower() # pylint: disable=redefined-outer-name
try:
new_config = imp.load_source('config_{}'.format(self.config_count), source)
except SyntaxError, e:
message = 'Syntax error in config: {}'.format(str(e))
raise ConfigError(message)
self._config = merge_dicts(self._config, vars(new_config),
list_duplicates='first', match_types=False, dict_type=OrderedDict)
self._loaded_files.append(source)
if ext in ['.py', '.pyo', '.pyc']:
new_config = load_struct_from_python(source)
elif ext == '.yaml':
new_config = load_struct_from_yaml(source)
else:
raise ConfigError('Unknown config format: {}'.format(source))
except LoadSyntaxError as e:
raise ConfigError(e)
self._config = merge_dicts(self._config, new_config,
list_duplicates='first',
match_types=False,
dict_type=OrderedDict)
self.loaded_files.append(source)
self._loaded = True
def update_from_dict(self, source):
@@ -123,7 +130,7 @@ class ConfigLoader(object):
self._loaded = True
def get_config_paths(self):
return [lf.rstrip('c') for lf in self._loaded_files]
return [lf.rstrip('c') for lf in self.loaded_files]
def _check_loaded(self):
if not self._loaded:
@@ -151,33 +158,44 @@ def init_environment(env_root, dep_dir, extension_paths, overwrite_existing=Fals
for path in extension_paths:
os.makedirs(path)
# If running with sudo on POSIX, change the ownership to the real user.
real_user = os.getenv('SUDO_USER')
if real_user:
import pwd # done here as module won't import on win32
user_entry = pwd.getpwnam(real_user)
uid, gid = user_entry.pw_uid, user_entry.pw_gid
os.chown(env_root, uid, gid)
# why, oh why isn't there a recursive=True option for os.chown?
for root, dirs, files in os.walk(env_root):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files: # pylint: disable=W0621
os.chown(os.path.join(root, f), uid, gid)
if os.getenv('USER') == 'root':
# If running with sudo on POSIX, change the ownership to the real user.
real_user = os.getenv('SUDO_USER')
if real_user:
import pwd # done here as module won't import on win32
user_entry = pwd.getpwnam(real_user)
uid, gid = user_entry.pw_uid, user_entry.pw_gid
os.chown(env_root, uid, gid)
# why, oh why isn't there a recursive=True option for os.chown?
for root, dirs, files in os.walk(env_root):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files: # pylint: disable=W0621
os.chown(os.path.join(root, f), uid, gid)
_env_root = os.getenv('WA_USER_DIRECTORY', os.path.join(_user_home, '.workload_automation'))
_dep_dir = os.path.join(_env_root, 'dependencies')
_extension_paths = [os.path.join(_env_root, ext.default_path) for ext in _extensions]
_extension_paths.extend(os.getenv('WA_EXTENSION_PATHS', '').split(os.pathsep))
_env_var_paths = os.getenv('WA_EXTENSION_PATHS', '')
if _env_var_paths:
_extension_paths.extend(_env_var_paths.split(os.pathsep))
_env_configs = []
for filename in ['config.py', 'config.yaml']:
filepath = os.path.join(_env_root, filename)
if os.path.isfile(filepath):
_env_configs.append(filepath)
if not os.path.isdir(_env_root):
init_environment(_env_root, _dep_dir, _extension_paths)
elif not os.path.isfile(os.path.join(_env_root, 'config.py')):
elif not _env_configs:
filepath = os.path.join(_env_root, 'config.py')
with open(os.path.join(_this_dir, '..', 'config_example.py')) as f:
f_text = re.sub(r'""".*?"""', '', f.read(), 1, re.DOTALL)
with open(os.path.join(_env_root, 'config.py'), 'w') as f:
with open(filepath, 'w') as f:
f.write(f_text)
_env_configs.append(filepath)
settings = ConfigLoader()
settings.environment_root = _env_root
@@ -190,6 +208,5 @@ if os.path.isfile(_packages_file):
with open(_packages_file) as fh:
settings.extension_packages = unique(fh.read().split())
_env_config = os.path.join(settings.environment_root, 'config.py')
settings.update(_env_config)
for config in _env_configs:
settings.update(config)

View File

@@ -45,12 +45,12 @@ class Command(Extension):
parser_params['formatter_class'] = self.formatter_class
self.parser = subparsers.add_parser(self.name, **parser_params)
init_argument_parser(self.parser) # propagate top-level options
self.initialize()
self.initialize(None)
def initialize(self):
def initialize(self, context):
"""
Perform command-specific initialisation (e.g. adding command-specific options to the command's
parser).
parser). ``context`` is always ``None``.
"""
pass

View File

@@ -77,6 +77,7 @@ class WorkloadRunSpec(object):
runtime_parameters=None,
instrumentation=None,
flash=None,
classifiers=None,
): # pylint: disable=W0622
self.id = id
self.number_of_iterations = number_of_iterations
@@ -88,6 +89,7 @@ class WorkloadRunSpec(object):
self.workload_parameters = workload_parameters or OrderedDict()
self.instrumentation = instrumentation or []
self.flash = flash or OrderedDict()
self.classifiers = classifiers or OrderedDict()
self._workload = None
self._section = None
self.enabled = True
@@ -96,7 +98,7 @@ class WorkloadRunSpec(object):
if param in ['id', 'section_id', 'number_of_iterations', 'workload_name', 'label']:
if value is not None:
setattr(self, param, value)
elif param in ['boot_parameters', 'runtime_parameters', 'workload_parameters', 'flash']:
elif param in ['boot_parameters', 'runtime_parameters', 'workload_parameters', 'flash', 'classifiers']:
setattr(self, param, merge_dicts(getattr(self, param), value, list_duplicates='last',
dict_type=OrderedDict, should_normalize=False))
elif param in ['instrumentation']:
@@ -155,6 +157,23 @@ class WorkloadRunSpec(object):
del d['_section']
return d
def copy(self):
other = WorkloadRunSpec()
other.id = self.id
other.number_of_iterations = self.number_of_iterations
other.workload_name = self.workload_name
other.label = self.label
other.section_id = self.section_id
other.boot_parameters = copy(self.boot_parameters)
other.runtime_parameters = copy(self.runtime_parameters)
other.workload_parameters = copy(self.workload_parameters)
other.instrumentation = copy(self.instrumentation)
other.flash = copy(self.flash)
other.classifiers = copy(self.classifiers)
other._section = self._section # pylint: disable=protected-access
other.enabled = self.enabled
return other
def __str__(self):
return '{} {}'.format(self.id, self.label)
@@ -224,6 +243,13 @@ class RebootPolicy(object):
else:
return cmp(self.policy, other)
def to_pod(self):
return self.policy
@staticmethod
def from_pod(pod):
return RebootPolicy(pod)
class RunConfigurationItem(object):
"""
@@ -292,6 +318,12 @@ def _combine_ids(*args):
return '_'.join(args)
class status_list(list):
def append(self, item):
list.append(self, str(item).upper())
class RunConfiguration(object):
"""
Loads and maintains the unified configuration for this run. This includes configuration
@@ -400,7 +432,7 @@ class RunConfiguration(object):
is validated (to make sure there are no missing settings, etc).
- Extensions are loaded through the run config object, which instantiates
them with appropriate parameters based on the "raw" config collected earlier. When an
Extension is instantiated in such a way, it's config is "officially" added to run configuration
Extension is instantiated in such a way, its config is "officially" added to run configuration
tracked by the run config object. Raw config is discarded at the end of the run, so
that any config that wasn't loaded in this way is not recorded (as it was not actually used).
- Extension parameters are validated individually (for type, value ranges, etc) as they are
@@ -454,6 +486,8 @@ class RunConfiguration(object):
RunConfigurationItem('reboot_policy', 'scalar', 'replace'),
RunConfigurationItem('device', 'scalar', 'replace'),
RunConfigurationItem('flashing_config', 'dict', 'replace'),
RunConfigurationItem('retry_on_status', 'list', 'replace'),
RunConfigurationItem('max_retries', 'scalar', 'replace'),
]
# Configuration specified for each workload spec. "workload_parameters"
@@ -468,11 +502,12 @@ class RunConfiguration(object):
RunConfigurationItem('runtime_parameters', 'dict', 'merge'),
RunConfigurationItem('instrumentation', 'list', 'merge'),
RunConfigurationItem('flash', 'dict', 'merge'),
RunConfigurationItem('classifiers', 'dict', 'merge'),
]
# List of names that may be present in configuration (and it is valid for
# them to be there) but are not handled buy RunConfiguration.
ignore_names = ['logging']
ignore_names = ['logging', 'remote_assets_mount_point']
def get_reboot_policy(self):
if not self._reboot_policy:
@@ -507,6 +542,8 @@ class RunConfiguration(object):
self.workload_specs = []
self.flashing_config = {}
self.other_config = {} # keeps track of used config for extensions other than of the four main kinds.
self.retry_on_status = status_list(['FAILED', 'PARTIAL'])
self.max_retries = 3
self._used_config_items = []
self._global_instrumentation = []
self._reboot_policy = None
@@ -639,7 +676,7 @@ class RunConfiguration(object):
for param, ext in ga.iteritems():
for name in [ext.name] + [a.name for a in ext.aliases]:
self._load_default_config_if_necessary(name)
self._raw_config[name][param.name] = value
self._raw_config[identifier(name)][param.name] = value
def _set_run_config_item(self, name, value):
item = self._general_config_map[name]
@@ -653,12 +690,12 @@ class RunConfiguration(object):
def _set_raw_dict(self, name, value, default_config=None):
existing_config = self._raw_config.get(name, default_config or {})
new_config = _merge_config_dicts(existing_config, value)
self._raw_config[name] = new_config
self._raw_config[identifier(name)] = new_config
def _set_raw_list(self, name, value):
old_value = self._raw_config.get(name, [])
new_value = merge_lists(old_value, value, duplicates='last')
self._raw_config[name] = new_value
self._raw_config[identifier(name)] = new_value
def _finalize_config_list(self, attr_name):
"""Note: the name is somewhat misleading. This finalizes a list
@@ -668,18 +705,21 @@ class RunConfiguration(object):
raw_list = self._raw_config.get(attr_name, [])
for extname in raw_list:
default_config = self.ext_loader.get_default_config(extname)
ext_config[extname] = self._raw_config.get(extname, default_config)
ext_config[extname] = self._raw_config.get(identifier(extname), default_config)
list_name = '_global_{}'.format(attr_name)
setattr(self, list_name, raw_list)
global_list = self._raw_config.get(list_name, [])
setattr(self, list_name, global_list)
setattr(self, attr_name, ext_config)
def _finalize_device_config(self):
self._load_default_config_if_necessary(self.device)
config = _merge_config_dicts(self._raw_config.get(self.device),
self._raw_config.get('device_config', {}))
config = _merge_config_dicts(self._raw_config.get(self.device, {}),
self._raw_config.get('device_config', {}),
list_duplicates='all')
self.device_config = config
def _load_default_config_if_necessary(self, name):
name = identifier(name)
if name not in self._raw_config:
self._raw_config[name] = self.ext_loader.get_default_config(name)

View File

@@ -1,418 +0,0 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Base classes for device interfaces.
:Device: The base class for all devices. This defines the interface that must be
implemented by all devices and that any workload and instrumentation
can therefore always rely on.
:AndroidDevice: Implements most of the :class:`Device` interface, and extends it
with a number of Android-specific methods.
:BigLittleDevice: Subclasses :class:`AndroidDevice` to implement big.LITTLE-specific
runtime parameters.
:SimpleMulticoreDevice: Subclasses :class:`AndroidDevice` to implement homogeneous-core
device runtime parameters.
"""
import os
import imp
import string
from collections import OrderedDict
from contextlib import contextmanager
from wlauto.core.extension import Extension, ExtensionMeta, AttributeCollection, Parameter
from wlauto.exceptions import DeviceError, ConfigError
from wlauto.utils.types import list_of_strings, list_of_integers
__all__ = ['RuntimeParameter', 'CoreParameter', 'Device', 'DeviceMeta']
class RuntimeParameter(object):
"""
A runtime parameter which has getter and setter methods associated
with it.
"""
def __init__(self, name, getter, setter,
getter_args=None, setter_args=None,
value_name='value', override=False):
"""
:param name: the name of the parameter.
:param getter: the getter method which returns the value of this parameter.
:param setter: the setter method which sets the value of this parameter. The setter
always expects to be passed one argument when it is called.
:param getter_args: keyword arguments to be used when invoking the getter.
:param setter_args: keyword arguments to be used when invoking the setter.
:param override: A ``bool`` that specifies whether a parameter of the same name further up the
hierarchy should be overridden. If this is ``False`` (the default), an exception
will be raised by the ``AttributeCollection`` instead.
"""
self.name = name
self.getter = getter
self.setter = setter
self.getter_args = getter_args or {}
self.setter_args = setter_args or {}
self.value_name = value_name
self.override = override
def __str__(self):
return self.name
__repr__ = __str__
class CoreParameter(RuntimeParameter):
"""A runtime parameter that will get expanded into a RuntimeParameter for each core type."""
def get_runtime_parameters(self, core_names):
params = []
for core in set(core_names):
name = string.Template(self.name).substitute(core=core)
getter = string.Template(self.getter).substitute(core=core)
setter = string.Template(self.setter).substitute(core=core)
getargs = dict(self.getter_args.items() + [('core', core)])
setargs = dict(self.setter_args.items() + [('core', core)])
params.append(RuntimeParameter(name, getter, setter, getargs, setargs, self.value_name, self.override))
return params
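For illustration, a sketch of the expansion this method performs (core names are hypothetical):

    cp = CoreParameter('${core}_governor', 'get_${core}_governor',
                       'set_${core}_governor', value_name='governor')
    expanded = cp.get_runtime_parameters(['a7', 'a7', 'a15'])
    sorted(p.name for p in expanded)  # ['a15_governor', 'a7_governor']
    # each expanded parameter also carries core=<name> in its getter/setter args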
class DeviceMeta(ExtensionMeta):
to_propagate = ExtensionMeta.to_propagate + [
('runtime_parameters', RuntimeParameter, AttributeCollection),
]
class Device(Extension):
"""
Base class for all devices supported by Workload Automation. Defines
the interface the rest of WA uses to interact with devices.
:name: Unique name used to identify the device.
:platform: The name of the device's platform (e.g. ``Android``) this may
be used by workloads and instrumentation to assess whether they
can run on the device.
:working_directory: a string containing the path of the directory that
will be used by workloads on the device.
:binaries_directory: a string containing the path of the directory holding
executable binaries on the device.
:has_gpu: Should be ``True`` if the device has a separate GPU, and
``False`` if graphics processing is done on a CPU.
.. note:: Pretty much all devices currently on the market
have GPUs; however, this may not be the case for some
development boards.
:path_module: The name of one of the modules implementing the os.path
interface, e.g. ``posixpath`` or ``ntpath``. You can provide
your own implementation rather than relying on one of the
standard library modules, in which case you need to specify
the *full* path to your module, e.g. '/home/joebloggs/mypathimp.py'
:parameters: A list of RuntimeParameter objects. The order of the objects
is very important as the setters and getters will be called
in the order the RuntimeParameter objects were inserted.
:active_cores: This should be a list of all the currently active cpus in
the device in ``'/sys/devices/system/cpu/online'``. The
returned list should be read from the device at the time
of read request.
"""
__metaclass__ = DeviceMeta
parameters = [
Parameter('core_names', kind=list_of_strings, mandatory=True, default=None,
description="""
This is a list of all cpu cores on the device with each
element being the core type, e.g. ``['a7', 'a7', 'a15']``. The
order of the cores must match the order they are listed in
``'/sys/devices/system/cpu'``. So in this case, ``'cpu0'`` must
be an A7 core, and ``'cpu2'`` an A15.
"""),
Parameter('core_clusters', kind=list_of_integers, mandatory=True, default=None,
description="""
This is a list indicating the cluster affinity of the CPU cores,
each element corresponding to the cluster ID of the core corresponding
to its index. E.g. ``[0, 0, 1]`` indicates that cpu0 and cpu1 are on
cluster 0, while cpu2 is on cluster 1.
"""),
]
runtime_parameters = []
# These must be overwritten by subclasses.
name = None
platform = None
default_working_directory = None
has_gpu = None
path_module = None
active_cores = None
def __init__(self, **kwargs): # pylint: disable=W0613
super(Device, self).__init__(**kwargs)
if not self.path_module:
raise NotImplementedError('path_module must be specified by the deriving classes.')
libpath = os.path.dirname(os.__file__)
modpath = os.path.join(libpath, self.path_module)
if not modpath.lower().endswith('.py'):
modpath += '.py'
try:
self.path = imp.load_source('device_path', modpath)
except IOError:
raise DeviceError('Unsupported path module: {}'.format(self.path_module))
def reset(self):
"""
Initiate rebooting of the device.
Added in version 2.1.3.
"""
raise NotImplementedError()
def boot(self, *args, **kwargs):
"""
Perform the steps necessary to boot the device to the point where it is ready
to accept other commands.
Changed in version 2.1.3: no longer expected to wait until boot completes.
"""
raise NotImplementedError()
def connect(self, *args, **kwargs):
"""
Establish a connection to the device that will be used for subsequent commands.
Added in version 2.1.3.
"""
raise NotImplementedError()
def disconnect(self):
""" Close the established connection to the device. """
raise NotImplementedError()
def initialize(self, context, *args, **kwargs):
"""
Default implementation just calls through to init(). May be overridden by specialised
abstract sub-classes to implement platform-specific initialization without requiring
concrete implementations to explicitly invoke parent's init().
Added in version 2.1.3.
"""
self.init(context, *args, **kwargs)
def init(self, context, *args, **kwargs):
"""
Initialize the device. This method *must* be called after a device reboot before
any other commands can be issued, however it may also be called without rebooting.
It is up to device-specific implementations to identify what initialisation needs
to be performed on a particular invocation. Bear in mind that no assumptions can be
made about the state of the device prior to the initiation of workload execution,
so full initialisation must be performed at least once, even if no reboot has occurred.
After that, the device-specific implementation may choose to skip initialization if
the device has not been rebooted; it is up to the implementation to keep track of
that, however.
All arguments are device-specific (see the documentation for your device).
"""
pass
def ping(self):
"""
This must return successfully if the device is able to receive commands, or must
raise :class:`wlauto.exceptions.DeviceUnresponsiveError` if the device cannot respond.
"""
raise NotImplementedError()
def get_runtime_parameter_names(self):
return [p.name for p in self._expand_runtime_parameters()]
def get_runtime_parameters(self):
""" returns the runtime parameters that have been set. """
# pylint: disable=cell-var-from-loop
runtime_parameters = OrderedDict()
for rtp in self._expand_runtime_parameters():
if not rtp.getter:
continue
getter = getattr(self, rtp.getter)
rtp_value = getter(**rtp.getter_args)
runtime_parameters[rtp.name] = rtp_value
return runtime_parameters
def set_runtime_parameters(self, params):
"""
The parameters are taken from the keyword arguments and are specific to
a particular device. See the device documentation.
"""
runtime_parameters = self._expand_runtime_parameters()
rtp_map = {rtp.name.lower(): rtp for rtp in runtime_parameters}
params = OrderedDict((k.lower(), v) for k, v in params.iteritems())
expected_keys = rtp_map.keys()
if not set(params.keys()) <= set(expected_keys):
unknown_params = list(set(params.keys()).difference(set(expected_keys)))
raise ConfigError('Unknown runtime parameter(s): {}'.format(unknown_params))
for param in params:
rtp = rtp_map[param]
setter = getattr(self, rtp.setter)
args = dict(rtp.setter_args.items() + [(rtp.value_name, params[rtp.name.lower()])])
setter(**args)
def capture_screen(self, filepath):
"""Captures the current device screen into the specified file in a PNG format."""
raise NotImplementedError()
def get_properties(self, output_path):
"""Captures and saves the device configuration properties version and
any other relevant information. Return them in a dict"""
raise NotImplementedError()
def listdir(self, path, **kwargs):
""" List the contents of the specified directory. """
raise NotImplementedError()
def push_file(self, source, dest):
""" Push a file from the host file system onto the device. """
raise NotImplementedError()
def pull_file(self, source, dest):
""" Pull a file from device system onto the host file system. """
raise NotImplementedError()
def delete_file(self, filepath):
""" Delete the specified file on the device. """
raise NotImplementedError()
def file_exists(self, filepath):
""" Check if the specified file or directory exist on the device. """
raise NotImplementedError()
def get_pids_of(self, process_name):
""" Returns a list of PIDs of the specified process name. """
raise NotImplementedError()
def kill(self, pid, as_root=False):
""" Kill the process with the specified PID. """
raise NotImplementedError()
def killall(self, process_name, as_root=False):
""" Kill all running processes with the specified name. """
raise NotImplementedError()
def install(self, filepath, **kwargs):
""" Install the specified file on the device. What "install" means is device-specific
and may possibly also depend on the type of file."""
raise NotImplementedError()
def uninstall(self, filepath):
""" Uninstall the specified file on the device. What "uninstall" means is device-specific
and may possibly also depend on the type of file."""
raise NotImplementedError()
def execute(self, command, timeout=None, **kwargs):
"""
Execute the specified command on the device and return the output.
:param command: Command to be executed on the device.
:param timeout: If the command does not return after the specified time,
execute() will abort with an error. If there is no timeout for
the command, this should be set to 0 or None.
Other device-specific keyword arguments may also be specified.
:returns: The stdout output from the command.
"""
raise NotImplementedError()
def set_sysfile_value(self, filepath, value, verify=True):
"""
Write the specified value to the specified file on the device
and verify that the value has actually been written.
:param file: The file to be modified.
:param value: The value to be written to the file. Must be
an int or a string convertible to an int.
:param verify: Specifies whether the value should be verified, once written.
Should raise DeviceError if the value could not be written.
"""
raise NotImplementedError()
def get_sysfile_value(self, sysfile, kind=None):
"""
Get the contents of the specified sysfile.
:param sysfile: The file whose contents will be returned.
:param kind: The type of value to be expected in the sysfile. This can
be any Python callable that takes a single str argument.
If not specified or is None, the contents will be returned
as a string.
"""
raise NotImplementedError()
def start(self):
"""
This gets invoked before an iteration is started and is intended to help the
device manage any internal supporting functions.
"""
pass
def stop(self):
"""
This gets invoked after iteration execution has completed and is intended to help the
device manage any internal supporting functions.
"""
pass
def __str__(self):
return 'Device<{}>'.format(self.name)
__repr__ = __str__
def _expand_runtime_parameters(self):
expanded_params = []
for param in self.runtime_parameters:
if isinstance(param, CoreParameter):
expanded_params.extend(param.get_runtime_parameters(self.core_names)) # pylint: disable=no-member
else:
expanded_params.append(param)
return expanded_params
@contextmanager
def _check_alive(self):
try:
yield
except Exception as e:
self.ping()
raise e

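Before its removal, implementing this interface looked roughly like the sketch below; the name and method bodies are placeholders, not a real backend:

    class DummyDevice(Device):
        name = 'dummy'
        platform = 'linux'
        path_module = 'posixpath'
        parameters = [
            Parameter('core_names', default=['a53'], override=True),
            Parameter('core_clusters', default=[0], override=True),
        ]

        def connect(self, *args, **kwargs):
            pass  # e.g. open an ssh/adb connection here

        def execute(self, command, timeout=None, **kwargs):
            return ''  # run the command on the target and return its stdout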
View File

@@ -0,0 +1,318 @@
import string
from collections import OrderedDict
from wlauto.core.extension import Extension, Parameter
from wlauto.exceptions import ConfigError
from wlauto.utils.types import list_of_integers, list_of, caseless_string
from devlib.platform import Platform
from devlib.target import AndroidTarget, Cpuinfo, KernelVersion, KernelConfig
__all__ = ['RuntimeParameter', 'CoreParameter', 'DeviceManager', 'TargetInfo']
class RuntimeParameter(object):
"""
A runtime parameter which has getter and setter methods associated
with it.
"""
def __init__(self, name, getter, setter,
getter_args=None, setter_args=None,
value_name='value', override=False):
"""
:param name: the name of the parameter.
:param getter: the getter method which returns the value of this parameter.
:param setter: the setter method which sets the value of this parameter. The setter
always expects to be passed one argument when it is called.
:param getter_args: keyword arguments to be used when invoking the getter.
:param setter_args: keyword arguments to be used when invoking the setter.
:param override: A ``bool`` that specifies whether a parameter of the same name further up the
hierarchy should be overridden. If this is ``False`` (the default), an exception
will be raised by the ``AttributeCollection`` instead.
"""
self.name = name
self.getter = getter
self.setter = setter
self.getter_args = getter_args or {}
self.setter_args = setter_args or {}
self.value_name = value_name
self.override = override
def __str__(self):
return self.name
__repr__ = __str__
class CoreParameter(RuntimeParameter):
"""A runtime parameter that will get expanded into a RuntimeParameter for each core type."""
def get_runtime_parameters(self, core_names):
params = []
for core in set(core_names):
name = string.Template(self.name).substitute(core=core)
getter = string.Template(self.getter).substitute(core=core)
setter = string.Template(self.setter).substitute(core=core)
getargs = dict(self.getter_args.items() + [('core', core)])
setargs = dict(self.setter_args.items() + [('core', core)])
params.append(RuntimeParameter(name, getter, setter, getargs, setargs, self.value_name, self.override))
return params
class TargetInfo(object):
@staticmethod
def from_pod(pod):
instance = TargetInfo()
instance.target = pod['target']
instance.abi = pod['abi']
instance.cpuinfo = Cpuinfo(pod['cpuinfo'])
instance.os = pod['os']
instance.os_version = pod['os_version']
instance.is_rooted = pod['is_rooted']
instance.kernel_version = KernelVersion(pod['kernel_version'])
instance.kernel_config = KernelConfig(pod['kernel_config'])
if pod["target"] == "AndroidTarget":
instance.screen_resolution = pod['screen_resolution']
instance.prop = pod['prop']
instance.android_id = pod['android_id']
return instance
def __init__(self, target=None):
if target:
self.target = target.__class__.__name__
self.cpuinfo = target.cpuinfo
self.os = target.os
self.os_version = target.os_version
self.abi = target.abi
self.is_rooted = target.is_rooted
self.kernel_version = target.kernel_version
self.kernel_config = target.config
if isinstance(target, AndroidTarget):
self.screen_resolution = target.screen_resolution
self.prop = target.getprop()
self.android_id = target.android_id
else:
self.target = None
self.cpuinfo = None
self.os = None
self.os_version = None
self.abi = None
self.is_rooted = None
self.kernel_version = None
self.kernel_config = None
self.screen_resolution = None
self.prop = None
self.android_id = None
def to_pod(self):
pod = {}
pod['target'] = self.target  # self.target already holds the class name string
pod['abi'] = self.abi
pod['cpuinfo'] = self.cpuinfo.text
pod['os'] = self.os
pod['os_version'] = self.os_version
pod['is_rooted'] = self.is_rooted
pod['kernel_version'] = self.kernel_version.version
pod['kernel_config'] = self.kernel_config.text
if self.target == "AndroidTarget":
pod['screen_resolution'] = self.screen_resolution
pod['prop'] = self.prop
pod['android_id'] = self.android_id
return pod
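Since the pod is a plain dict of primitives, TargetInfo round-trips through JSON; a sketch, assuming a connected devlib target:

    import json

    info = TargetInfo(target)  # target: a connected devlib Target instance
    restored = TargetInfo.from_pod(json.loads(json.dumps(info.to_pod())))
    assert restored.os == info.os and restored.abi == info.abi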
class DeviceManager(Extension):
name = None
target_type = None
platform_type = Platform
has_gpu = None
path_module = None
info = None
parameters = [
Parameter('core_names', kind=list_of(caseless_string),
description="""
This is a list of all cpu cores on the device with each
element being the core type, e.g. ``['a7', 'a7', 'a15']``. The
order of the cores must match the order they are listed in
``'/sys/devices/system/cpu'``. So in this case, ``'cpu0'`` must
be an A7 core, and ``'cpu2'`` an A15.
"""),
Parameter('core_clusters', kind=list_of_integers,
description="""
This is a list indicating the cluster affinity of the CPU cores,
each element corresponding to the cluster ID of the core corresponding
to its index. E.g. ``[0, 0, 1]`` indicates that cpu0 and cpu1 are on
cluster 0, while cpu2 is on cluster 1. If this is not specified, this
will be inferred from ``core_names`` if possible (assuming all cores with
the same name are on the same cluster).
"""),
Parameter('working_directory',
description='''
Working directory to be used by WA. This must be in a location where the specified user
has write permissions. This will default to /home/<username>/wa (or to /root/wa, if
username is 'root').
'''),
Parameter('binaries_directory',
description='Location of executable binaries on this device (must be in PATH).'),
]
modules = []
runtime_parameters = [
RuntimeParameter('sysfile_values', 'get_sysfile_values', 'set_sysfile_values', value_name='params'),
CoreParameter('${core}_cores', 'get_number_of_online_cpus', 'set_number_of_online_cpus',
value_name='number'),
CoreParameter('${core}_min_frequency', 'get_core_min_frequency', 'set_core_min_frequency',
value_name='freq'),
CoreParameter('${core}_max_frequency', 'get_core_max_frequency', 'set_core_max_frequency',
value_name='freq'),
CoreParameter('${core}_frequency', 'get_core_cur_frequency', 'set_core_cur_frequency',
value_name='freq'),
CoreParameter('${core}_governor', 'get_core_governor', 'set_core_governor',
value_name='governor'),
CoreParameter('${core}_governor_tunables', 'get_core_governor_tunables', 'set_core_governor_tunables',
value_name='tunables'),
]
# Framework
def connect(self):
raise NotImplementedError("connect method must be implemented for device managers")
def initialize(self, context):
super(DeviceManager, self).initialize(context)
self.info = TargetInfo(self.target)
self.target.setup()
def start(self):
pass
def stop(self):
pass
def validate(self):
pass
# Runtime Parameters
def get_runtime_parameter_names(self):
return [p.name for p in self._expand_runtime_parameters()]
def get_runtime_parameters(self):
""" returns the runtime parameters that have been set. """
# pylint: disable=cell-var-from-loop
runtime_parameters = OrderedDict()
for rtp in self._expand_runtime_parameters():
if not rtp.getter:
continue
getter = getattr(self, rtp.getter)
rtp_value = getter(**rtp.getter_args)
runtime_parameters[rtp.name] = rtp_value
return runtime_parameters
def set_runtime_parameters(self, params):
"""
The parameters are taken from the keyword arguments and are specific to
a particular device. See the device documentation.
"""
runtime_parameters = self._expand_runtime_parameters()
rtp_map = {rtp.name.lower(): rtp for rtp in runtime_parameters}
params = OrderedDict((k.lower(), v) for k, v in params.iteritems() if v is not None)
expected_keys = rtp_map.keys()
if not set(params.keys()).issubset(set(expected_keys)):
unknown_params = list(set(params.keys()).difference(set(expected_keys)))
raise ConfigError('Unknown runtime parameter(s): {}'.format(unknown_params))
for param in params:
self.logger.debug('Setting runtime parameter "{}"'.format(param))
rtp = rtp_map[param]
setter = getattr(self, rtp.setter)
args = dict(rtp.setter_args.items() + [(rtp.value_name, params[rtp.name.lower()])])
setter(**args)
def _expand_runtime_parameters(self):
expanded_params = []
for param in self.runtime_parameters:
if isinstance(param, CoreParameter):
expanded_params.extend(param.get_runtime_parameters(self.target.core_names)) # pylint: disable=no-member
else:
expanded_params.append(param)
return expanded_params
# Runtime parameter getters/setters
_written_sysfiles = []
def get_sysfile_values(self):
return self._written_sysfiles
def set_sysfile_values(self, params):
for sysfile, value in params.iteritems():
verify = not sysfile.endswith('!')
sysfile = sysfile.rstrip('!')
self._written_sysfiles.append((sysfile, value))
self.target.write_value(sysfile, value, verify=verify)
# pylint: disable=E1101
def _get_core_online_cpu(self, core):
try:
return self.target.list_online_core_cpus(core)[0]
except IndexError:
raise ValueError("No {} cores are online".format(core))
def get_number_of_online_cpus(self, core):
return len(self.target.list_online_core_cpus(core))
def set_number_of_online_cpus(self, core, number):
for cpu in self.target.core_cpus(core)[:number]:
self.target.hotplug.online(cpu)
def get_core_min_frequency(self, core):
return self.target.cpufreq.get_min_frequency(self._get_core_online_cpu(core))
def set_core_min_frequency(self, core, frequency):
self.target.cpufreq.set_min_frequency(self._get_core_online_cpu(core), frequency)
def get_core_max_frequency(self, core):
return self.target.cpufreq.get_max_frequency(self._get_core_online_cpu(core))
def set_core_max_frequency(self, core, frequency):
self.target.cpufreq.set_max_frequency(self._get_core_online_cpu(core), frequency)
def get_core_cur_frequency(self, core):
return self.target.cpufreq.get_frequency(self._get_core_online_cpu(core))
def set_core_cur_frequency(self, core, frequency):
self.target.cpufreq.set_frequency(self._get_core_online_cpu(core), frequency)
def get_core_governor(self, core):
return self.target.cpufreq.get_cpu_governor(self._get_core_online_cpu(core))
def set_core_governor(self, core, governor):
self.target.cpufreq.set_cpu_governor(self._get_core_online_cpu(core), governor)
def get_core_governor_tunables(self, core):
return self.target.cpufreq.get_governor_tunables(self._get_core_online_cpu(core))
def set_core_governor_tunables(self, core, tunables):
self.target.cpufreq.set_governor_tunables(self._get_core_online_cpu(core),
*tunables)

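To illustrate how the expanded runtime parameters are consumed (assuming a manager whose target reports core_names ['a7', 'a7', 'a15']; keys are matched case-insensitively and dispatched to the declared setters, with value_name used as the keyword):

    manager.set_runtime_parameters({
        'a15_cores': 2,             # -> set_number_of_online_cpus(core='a15', number=2)
        'a7_governor': 'ondemand',  # -> set_core_governor(core='a7', governor='ondemand')
        # a trailing '!' suppresses write verification (see set_sysfile_values):
        'sysfile_values': {'/proc/sys/kernel/sched_latency_ns!': 1000000},
    })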
View File

@@ -17,17 +17,19 @@
import sys
import argparse
import logging
import os
import subprocess
import warnings
from wlauto.core.bootstrap import settings
from wlauto.core.extension_loader import ExtensionLoader
from wlauto.exceptions import WAError
from wlauto.exceptions import WAError, ConfigError
from wlauto.utils.misc import get_traceback
from wlauto.utils.log import init_logging
from wlauto.utils.cli import init_argument_parser
from wlauto.utils.doc import format_body
import warnings
warnings.filterwarnings(action='ignore', category=UserWarning, module='zope')
@@ -55,6 +57,8 @@ def main():
settings.verbosity = args.verbose
settings.debug = args.debug
if args.config:
if not os.path.exists(args.config):
raise ConfigError("Config file {} not found".format(args.config))
settings.update(args.config)
init_logging(settings.verbosity)
@@ -64,12 +68,27 @@ def main():
except KeyboardInterrupt:
logging.info('Got CTRL-C. Aborting.')
sys.exit(3)
except WAError, e:
except WAError as e:
logging.critical(e)
sys.exit(1)
except Exception, e: # pylint: disable=broad-except
except subprocess.CalledProcessError as e:
tb = get_traceback()
logging.critical(tb)
command = e.cmd
if e.args:
command = '{} {}'.format(command, ' '.join(e.args))
message = 'Command \'{}\' returned non-zero exit status {}\nOUTPUT:\n{}\n'
logging.critical(message.format(command, e.returncode, e.output))
sys.exit(2)
except SyntaxError as e:
tb = get_traceback()
logging.critical(tb)
message = 'Syntax Error in {}, line {}, offset {}:'
logging.critical(message.format(e.filename, e.lineno, e.offset))
logging.critical('\t{}'.format(e.msg))
sys.exit(2)
except Exception as e: # pylint: disable=broad-except
tb = get_traceback()
logging.critical(tb)
logging.critical('{}({})'.format(e.__class__.__name__, e))
sys.exit(2)

View File

@@ -72,7 +72,7 @@ REBOOT_DELAY = 3
class RunInfo(object):
"""
Information about the current run, such as it's unique ID, run
Information about the current run, such as its unique ID, run
time, etc.
"""
@@ -85,7 +85,8 @@ class RunInfo(object):
self.duration = None
self.project = config.project
self.project_stage = config.project_stage
self.run_name = config.run_name
self.run_name = config.run_name or "{}_{}".format(os.path.split(settings.output_directory)[1],
datetime.utcnow().strftime("%Y-%m-%d_%H-%M-%S"))
self.notes = None
self.device_properties = {}
@@ -123,6 +124,12 @@ class ExecutionContext(object):
else:
return None
@property
def job_status(self):
if not self.current_job:
return None
return self.current_job.result.status
@property
def workload(self):
return getattr(self.spec, 'workload', None)
@@ -133,10 +140,11 @@ class ExecutionContext(object):
@property
def result(self):
return getattr(self.current_job, 'result', None)
return getattr(self.current_job, 'result', self.run_result)
def __init__(self, device, config):
self.device = device
def __init__(self, device_manager, config):
self.device_manager = device_manager
self.device = self.device_manager.target
self.config = config
self.reboot_policy = config.reboot_policy
self.output_directory = None
@@ -151,6 +159,7 @@ class ExecutionContext(object):
self.run_artifacts = copy(self.default_run_artifacts)
self.job_iteration_counts = defaultdict(int)
self.aborted = False
self.runner = None
if settings.agenda:
self.run_artifacts.append(Artifact('agenda',
os.path.join(self.host_working_directory,
@@ -158,10 +167,12 @@ class ExecutionContext(object):
'meta',
mandatory=True,
description='Agenda for this run.'))
for i in xrange(1, settings.config_count + 1):
self.run_artifacts.append(Artifact('config_{}'.format(i),
os.path.join(self.host_working_directory,
'config_{}.py'.format(i)),
for i, filepath in enumerate(settings.loaded_files, 1):
name = 'config_{}'.format(i)
path = os.path.join(self.host_working_directory,
name + os.path.splitext(filepath)[1])
self.run_artifacts.append(Artifact(name,
path,
kind='meta',
mandatory=True,
description='Config file used for the run.'))
@@ -172,17 +183,18 @@ class ExecutionContext(object):
self.output_directory = self.run_output_directory
self.resolver = ResourceResolver(self.config)
self.run_info = RunInfo(self.config)
self.run_result = RunResult(self.run_info)
self.run_result = RunResult(self.run_info, self.run_output_directory)
def next_job(self, job):
"""Invoked by the runner when starting a new iteration of workload execution."""
self.current_job = job
self.job_iteration_counts[self.spec.id] += 1
self.current_job.result.iteration = self.current_iteration
if not self.aborted:
outdir_name = '_'.join(map(str, [self.spec.label, self.spec.id, self.current_iteration]))
self.output_directory = _d(os.path.join(self.run_output_directory, outdir_name))
self.iteration_artifacts = [wa for wa in self.workload.artifacts]
self.current_job.result.iteration = self.current_iteration
self.current_job.result.output_directory = self.output_directory
def end_job(self):
if self.current_job.result.status == IterationResult.ABORTED:
@@ -190,6 +202,9 @@ class ExecutionContext(object):
self.current_job = None
self.output_directory = self.run_output_directory
def add_metric(self, *args, **kwargs):
self.result.add_metric(*args, **kwargs)
def add_artifact(self, name, path, kind, *args, **kwargs):
if self.current_job is None:
self.add_run_artifact(name, path, kind, *args, **kwargs)
@@ -244,6 +259,7 @@ class Executor(object):
self.warning_logged = False
self.config = None
self.ext_loader = None
self.device_manager = None
self.device = None
self.context = None
@@ -287,10 +303,11 @@ class Executor(object):
self.logger.debug('Initialising device configuration.')
if not self.config.device:
raise ConfigError('Make sure a device is specified in the config.')
self.device = self.ext_loader.get_device(self.config.device, **self.config.device_config)
self.device.validate()
self.device_manager = self.ext_loader.get_device_manager(self.config.device, **self.config.device_config)
self.device_manager.validate()
self.device = self.device_manager.target
self.context = ExecutionContext(self.device, self.config)
self.context = ExecutionContext(self.device_manager, self.config)
self.logger.debug('Loading resource discoverers.')
self.context.initialize()
@@ -370,7 +387,7 @@ class Executor(object):
runnercls = RandomRunner
else:
raise ConfigError('Unexpected execution order: {}'.format(self.config.execution_order))
return runnercls(self.device, self.context, result_manager)
return runnercls(self.device_manager, self.context, result_manager)
def _error_signalled_callback(self):
self.error_logged = True
@@ -388,8 +405,9 @@ class RunnerJob(object):
"""
def __init__(self, spec):
def __init__(self, spec, retry=0):
self.spec = spec
self.retry = retry
self.iteration = None
self.result = IterationResult(self.spec)
@@ -410,6 +428,10 @@ class Runner(object):
"""Internal runner error."""
pass
@property
def config(self):
return self.context.config
@property
def current_job(self):
if self.job_queue:
@@ -445,8 +467,9 @@ class Runner(object):
return True
return self.current_job.spec.id != self.next_job.spec.id
def __init__(self, device, context, result_manager):
self.device = device
def __init__(self, device_manager, context, result_manager):
self.device_manager = device_manager
self.device = device_manager.target
self.context = context
self.result_manager = result_manager
self.logger = logging.getLogger('Runner')
@@ -477,7 +500,7 @@ class Runner(object):
if self.context.reboot_policy.can_reboot and self.device.can('reset_power'):
self.logger.info('Attempting to hard-reset the device...')
try:
self.device.hard_reset()
self.device.boot(hard=True)
self.device.connect()
except DeviceError: # hard_boot not implemented for the device.
raise e
@@ -510,25 +533,42 @@ class Runner(object):
self._send(signal.RUN_END)
def _initialize_run(self):
self.context.runner = self
self.context.run_info.start_time = datetime.utcnow()
if self.context.reboot_policy.perform_initial_boot:
self.logger.info('\tBooting device')
with self._signal_wrap('INITIAL_BOOT'):
self._reboot_device()
else:
self.logger.info('Connecting to device')
self.device.connect()
self._connect_to_device()
self.logger.info('Initializing device')
self.device.initialize(self.context)
self.device_manager.initialize(self.context)
props = self.device.get_properties(self.context)
self.context.run_info.device_properties = props
self.logger.info('Initializing workloads')
for workload_spec in self.context.config.workload_specs:
workload_spec.workload.initialize(self.context)
self.context.run_info.device_properties = self.device_manager.info
self.result_manager.initialize(self.context)
self._send(signal.RUN_INIT)
if instrumentation.check_failures():
raise InstrumentError('Detected failure(s) during instrumentation initialization.')
def _connect_to_device(self):
if self.context.reboot_policy.perform_initial_boot:
try:
self.device_manager.connect()
except DeviceError: # device may be offline
if self.device.can('reset_power'):
with self._signal_wrap('INITIAL_BOOT'):
self.device.boot(hard=True)
else:
raise DeviceError('Cannot connect to device for initial reboot; '
'and device does not support hard reset.')
else: # successfully connected
self.logger.info('\tBooting device')
with self._signal_wrap('INITIAL_BOOT'):
self._reboot_device()
else:
self.logger.info('Connecting to device')
self.device_manager.connect()
def _init_job(self):
self.current_job.result.status = IterationResult.RUNNING
self.context.next_job(self.current_job)
@@ -560,7 +600,7 @@ class Runner(object):
instrumentation.disable_all()
instrumentation.enable(spec.instrumentation)
self.device.start()
self.device_manager.start()
if self.spec_changed:
self._send(signal.WORKLOAD_SPEC_START)
@@ -569,7 +609,7 @@ class Runner(object):
try:
setup_ok = False
with self._handle_errors('Setting up device parameters'):
self.device.set_runtime_parameters(spec.runtime_parameters)
self.device_manager.set_runtime_parameters(spec.runtime_parameters)
setup_ok = True
if setup_ok:
@@ -588,15 +628,27 @@ class Runner(object):
if self.spec_will_change or not spec.enabled:
self._send(signal.WORKLOAD_SPEC_END)
finally:
self.device.stop()
self.device_manager.stop()
def _finalize_job(self):
self.context.run_result.iteration_results.append(self.current_job.result)
self.job_queue[0].iteration = self.context.current_iteration
self.completed_jobs.append(self.job_queue.pop(0))
job = self.job_queue.pop(0)
job.iteration = self.context.current_iteration
if job.result.status in self.config.retry_on_status:
if job.retry >= self.config.max_retries:
self.logger.error('Exceeded maximum number of retries. Abandoning job.')
else:
self.logger.info('Job status was {}. Retrying...'.format(job.result.status))
retry_job = RunnerJob(job.spec, job.retry + 1)
self.job_queue.insert(0, retry_job)
self.completed_jobs.append(job)
self.context.end_job()
def _finalize_run(self):
self.logger.info('Finalizing workloads')
for workload_spec in self.context.config.workload_specs:
workload_spec.workload.finalize(self.context)
self.logger.info('Finalizing.')
self._send(signal.RUN_FIN)
@@ -688,7 +740,7 @@ class Runner(object):
except (KeyboardInterrupt, DeviceNotRespondingError):
raise
except (WAError, TimeoutError), we:
self.device.ping()
self.device.check_responsive()
if self.current_job:
self.current_job.result.status = on_error_status
self.current_job.result.add_event(str(we))

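The retry behaviour added to _finalize_job can be summarised as follows (a behavioural sketch, not the actual code):

    def should_retry(job, config):
        # a failed or partial job is re-queued at the front of the queue,
        # with an incremented retry count, until max_retries is exhausted
        return (job.result.status in config.retry_on_status and
                job.retry < config.max_retries)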
View File

@@ -24,7 +24,7 @@ from collections import OrderedDict
from wlauto.core.bootstrap import settings
from wlauto.exceptions import ValidationError, ConfigError
from wlauto.utils.misc import isiterable, ensure_directory_exists as _d, get_article
from wlauto.utils.types import identifier
from wlauto.utils.types import identifier, integer, boolean
class AttributeCollection(object):
@@ -41,9 +41,10 @@ class AttributeCollection(object):
def values(self):
return self._attrs.values()
def __init__(self, attrcls):
def __init__(self, attrcls, owner):
self._attrcls = attrcls
self._attrs = OrderedDict()
self.owner = owner
def add(self, p):
p = self._to_attrcls(p)
@@ -53,6 +54,8 @@ class AttributeCollection(object):
for a, v in p.__dict__.iteritems():
if v is not None:
setattr(newp, a, v)
if not hasattr(newp, "_overridden"):
newp._overridden = self.owner # pylint: disable=protected-access
self._attrs[p.name] = newp
else:
# Duplicate attribute condition is checked elsewhere.
@@ -82,7 +85,12 @@ class AttributeCollection(object):
return p
def __iadd__(self, other):
other = [self._to_attrcls(p) for p in other]
names = []
for p in other:
if p.name in names:
raise ValueError("Duplicate '{}' {}".format(p.name, p.__class__.__name__.split('.')[-1]))
names.append(p.name)
self.add(p)
return self
@@ -102,7 +110,7 @@ class AttributeCollection(object):
class AliasCollection(AttributeCollection):
def __init__(self):
super(AliasCollection, self).__init__(Alias)
super(AliasCollection, self).__init__(Alias, None)
def _to_attrcls(self, p):
if isinstance(p, tuple) or isinstance(p, list):
@@ -117,8 +125,9 @@ class AliasCollection(AttributeCollection):
class ListCollection(list):
def __init__(self, attrcls): # pylint: disable=unused-argument
def __init__(self, attrcls, owner): # pylint: disable=unused-argument
super(ListCollection, self).__init__()
self.owner = owner
class Param(object):
@@ -128,8 +137,14 @@ class Param(object):
"""
# Mapping for kind conversion; see docs for convert_types below
kind_map = {
int: integer,
bool: boolean,
}
def __init__(self, name, kind=None, mandatory=None, default=None, override=False,
allowed_values=None, description=None, constraint=None, global_alias=None):
allowed_values=None, description=None, constraint=None, global_alias=None, convert_types=True):
"""
Create a new Parameter object.
@@ -139,9 +154,7 @@ class Param(object):
:param kind: The type of parameter this is. This must be a callable that takes an arbitrary
object and converts it to the expected type, or raised ``ValueError`` if such
conversion is not possible. Most Python standard types -- ``str``, ``int``, ``bool``, etc. --
can be used here (though for ``bool``, ``wlauto.utils.misc.as_bool`` is preferred
as it intuitively handles strings like ``'false'``). This defaults to ``str`` if
not specified.
can be used here. This defaults to ``str`` if not specified.
:param mandatory: If set to ``True``, then a non-``None`` value for this parameter *must* be
provided on extension object construction, otherwise ``ConfigError`` will be
raised.
@@ -164,10 +177,17 @@ class Param(object):
that old extension settings names still work. This should not be used for
new parameters.
:param convert_types: If ``True`` (the default), will automatically convert ``kind`` values from
native Python types to WA equivalents. This allows more intuitive interpretation
of parameter values, e.g. the string ``"false"`` being interpreted as ``False``
when specified as the value for a boolean Parameter.
"""
self.name = identifier(name)
if kind is not None and not callable(kind):
raise ValueError('Kind must be callable.')
if convert_types and kind in self.kind_map:
kind = self.kind_map[kind]
self.kind = kind
self.mandatory = mandatory
self.default = default
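The effect of convert_types, sketched (assuming Param is what extensions see as Parameter):

    p = Param('enabled', kind=bool, default=True)
    p.kind('false')  # -> False: bool was swapped for wlauto.utils.types.boolean
    # with the plain built-in, bool('false') would have been True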
@@ -292,7 +312,7 @@ class Artifact(object):
network filer archiver may choose to archive them).
.. note: The kind parameter is intended to represent the logical function of a particular
artifact, not it's intended means of processing -- this is left entirely up to the
artifact, not its intended means of processing -- this is left entirely up to the
result processors.
"""
@@ -381,34 +401,42 @@ class ExtensionMeta(type):
('core_modules', str, ListCollection),
]
virtual_methods = ['validate']
virtual_methods = ['validate', 'initialize', 'finalize']
global_virtuals = ['initialize', 'finalize']
def __new__(mcs, clsname, bases, attrs):
mcs._propagate_attributes(bases, attrs)
mcs._propagate_attributes(bases, attrs, clsname)
cls = type.__new__(mcs, clsname, bases, attrs)
mcs._setup_aliases(cls)
mcs._implement_virtual(cls, bases)
return cls
@classmethod
def _propagate_attributes(mcs, bases, attrs):
def _propagate_attributes(mcs, bases, attrs, clsname):
"""
For attributes specified by to_propagate, their values will be a union of
that specified for cls and it's bases (cls values overriding those of bases
that specified for cls and its bases (cls values overriding those of bases
in case of conflicts).
"""
for prop_attr, attr_cls, attr_collector_cls in mcs.to_propagate:
should_propagate = False
propagated = attr_collector_cls(attr_cls)
propagated = attr_collector_cls(attr_cls, clsname)
for base in bases:
if hasattr(base, prop_attr):
propagated += getattr(base, prop_attr) or []
should_propagate = True
if prop_attr in attrs:
propagated += attrs[prop_attr] or []
pattrs = attrs[prop_attr] or []
propagated += pattrs
should_propagate = True
if should_propagate:
for p in propagated:
override = bool(getattr(p, "override", None))
overridden = bool(getattr(p, "_overridden", None))
if override != overridden:
msg = "Overriding non existing parameter '{}' inside '{}'"
raise ValueError(msg.format(p.name, clsname))
attrs[prop_attr] = propagated
@classmethod
@@ -430,13 +458,13 @@ class ExtensionMeta(type):
super(cls, self).vmname()
.. note:: current implementation imposes a restriction in that
parameters to the function *must* be passed as keyword
arguments. There *must not* be positional arguments on
virtual method invocation.
This also ensures that the methods that have been identified as
"globally virtual" are executed exactly once per WA execution, even if
invoked through instances of different subclasses.
"""
methods = {}
called_globals = set()
for vmname in mcs.virtual_methods:
clsmethod = getattr(cls, vmname, None)
if clsmethod:
@@ -444,11 +472,24 @@ class ExtensionMeta(type):
methods[vmname] = [bm for bm in basemethods if bm != clsmethod]
methods[vmname].append(clsmethod)
def wrapper(self, __name=vmname, **kwargs):
for dm in methods[__name]:
dm(self, **kwargs)
def generate_method_wrapper(vmname): # pylint: disable=unused-argument
# this creates a closure with the method name so that it
# does not need to be passed to the wrapper as an argument,
# leaving the wrapper to accept exactly the same set of
# arguments as the method it is wrapping.
name__ = vmname # pylint: disable=cell-var-from-loop
setattr(cls, vmname, wrapper)
def wrapper(self, *args, **kwargs):
for dm in methods[name__]:
if name__ in mcs.global_virtuals:
if dm not in called_globals:
dm(self, *args, **kwargs)
called_globals.add(dm)
else:
dm(self, *args, **kwargs)
return wrapper
setattr(cls, vmname, generate_method_wrapper(vmname))
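The factory closure guards against Python's late binding in loops; without it, every wrapper would dispatch on the last vmname. A standalone illustration:

    funcs = [lambda: name for name in ('validate', 'initialize', 'finalize')]
    [f() for f in funcs]   # late binding: ['finalize', 'finalize', 'finalize']

    def make(name__):      # the factory freezes the current value
        return lambda: name__

    funcs = [make(name) for name in ('validate', 'initialize', 'finalize')]
    [f() for f in funcs]   # ['validate', 'initialize', 'finalize']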
class Extension(object):
@@ -528,6 +569,12 @@ class Extension(object):
for param in self.parameters:
param.validate(self)
def initialize(self, context):
pass
def finalize(self, context):
pass
def check_artifacts(self, context, level):
"""
Make sure that all mandatory artifacts have been generated.
@@ -567,28 +614,8 @@ class Extension(object):
for module_spec in modules:
if not module_spec:
continue
if isinstance(module_spec, basestring):
name = module_spec
params = {}
elif isinstance(module_spec, dict):
if len(module_spec) != 1:
message = 'Invalid module spec: {}; dict must have exactly one key -- the module name.'
raise ValueError(message.format(module_spec))
name, params = module_spec.items()[0]
else:
message = 'Invalid module spec: {}; must be a string or a one-key dict.'
raise ValueError(message.format(module_spec))
if not isinstance(params, dict):
message = 'Invalid module spec: {}; dict value must also be a dict.'
raise ValueError(message.format(module_spec))
module = loader.get_module(name, owner=self, **params)
module.initialize()
for capability in module.capabilities:
if capability not in self.capabilities:
self.capabilities.append(capability)
self._modules.append(module)
module = self._load_module(loader, module_spec)
self._install_module(module)
def has(self, capability):
"""Check if this extension has the specified capability. The alternative method ``can`` is
@@ -598,6 +625,33 @@ class Extension(object):
can = has
def _load_module(self, loader, module_spec):
if isinstance(module_spec, basestring):
name = module_spec
params = {}
elif isinstance(module_spec, dict):
if len(module_spec) != 1:
message = 'Invalid module spec: {}; dict must have exactly one key -- the module name.'
raise ValueError(message.format(module_spec))
name, params = module_spec.items()[0]
else:
message = 'Invalid module spec: {}; must be a string or a one-key dict.'
raise ValueError(message.format(module_spec))
if not isinstance(params, dict):
message = 'Invalid module spec: {}; dict value must also be a dict.'
raise ValueError(message.format(module_spec))
module = loader.get_module(name, owner=self, **params)
module.initialize(None)
return module
def _install_module(self, module):
for capability in module.capabilities:
if capability not in self.capabilities:
self.capabilities.append(capability)
self._modules.append(module)
def __check_from_loader(self):
"""
There are a few things that need to happen in order to get a valid extension instance.
@@ -627,7 +681,7 @@ class Module(Extension):
In other words, a Module is roughly equivalent to a kernel module and its primary purpose is to
implement WA "drivers" for various peripherals that may or may not be present in a particular setup.
.. note:: A module is itself an Extension and can therefore have it's own modules.
.. note:: A module is itself an Extension and can therefore have its own modules.
"""
@@ -647,6 +701,5 @@ class Module(Extension):
if owner.name == self.name:
raise ValueError('Circular module import for {}'.format(self.name))
def initialize(self):
def initialize(self, context):
pass

View File

@@ -80,8 +80,8 @@ class GlobalParameterAlias(object):
other_param.kind != param.kind):
message = 'Duplicate global alias {} declared in {} and {} extensions with different types'
raise LoaderError(message.format(self.name, ext.name, other_ext.name))
if not param.name == other_param.name:
message = 'Two params {} in {} and {} in {} both declare global alias {}'
if param.kind != other_param.kind:
message = 'Two params {} in {} and {} in {} both declare global alias {}, and are of different kinds'
raise LoaderError(message.format(param.name, ext.name,
other_param.name, other_ext.name, self.name))
@@ -304,14 +304,14 @@ class ExtensionLoader(object):
for module in walk_modules(package):
self._load_module(module)
except ImportError as e:
message = 'Problem loading extensions from extra packages: {}'
raise LoaderError(message.format(e.message))
message = 'Problem loading extensions from package {}: {}'
raise LoaderError(message.format(package, e.message))
def _load_from_paths(self, paths, ignore_paths):
self.logger.debug('Loading from paths.')
for path in paths:
self.logger.debug('Checking path %s', path)
for root, _, files in os.walk(path):
for root, _, files in os.walk(path, followlinks=True):
should_skip = False
for igpath in ignore_paths:
if root.startswith(igpath):
@@ -320,7 +320,7 @@ class ExtensionLoader(object):
if should_skip:
continue
for fname in files:
if not os.path.splitext(fname)[1].lower() == '.py':
if os.path.splitext(fname)[1].lower() != '.py':
continue
filepath = os.path.join(root, fname)
try:
@@ -333,6 +333,9 @@ class ExtensionLoader(object):
self.logger.warn('Got: {}'.format(e))
else:
raise LoaderError('Failed to load {}'.format(filepath), sys.exc_info())
except Exception as e:
message = 'Problem loading extensions from {}: {}'
raise LoaderError(message.format(filepath, e))
def _load_module(self, module): # NOQA pylint: disable=too-many-branches
self.logger.debug('Checking module %s', module.__name__)
@@ -371,9 +374,10 @@ class ExtensionLoader(object):
store = self._get_store(ext)
store[key] = obj
for alias in obj.aliases:
if alias in self.extensions or alias in self.aliases:
alias_id = identifier(alias.name)
if alias_id in self.extensions or alias_id in self.aliases:
raise LoaderError('{} {} already exists.'.format(ext.name, obj.name))
self.aliases[alias.name] = alias
self.aliases[alias_id] = alias
# Update global aliases list. If a global alias is already in the list,
# then make sure this extension is in the same parent/child hierarchy
@@ -397,4 +401,3 @@ def _instantiate(cls, args=None, kwargs=None):
return cls(*args, **kwargs)
except Exception:
raise LoaderError('Could not load {}'.format(cls), sys.exc_info())

View File

@@ -61,7 +61,7 @@ we want to push the file to the target device and then change the file mode to
755 ::
def setup(self, context):
self.device.push_file(BINARY_FILE, self.device.working_directory)
self.device.push(BINARY_FILE, self.device.working_directory)
self.device.execute('chmod 755 {}'.format(self.trace_on_device))
Then we implemented the start method, which will simply run the file to start
@@ -85,7 +85,7 @@ are metric key, value, unit and lower_is_better, which is a boolean. ::
def update_result(self, context):
# pull the trace file to the device
result = os.path.join(self.device.working_directory, 'trace.txt')
self.device.pull_file(result, context.working_directory)
self.device.pull(result, context.working_directory)
# parse the file if needs to be parsed, or add result to
# context.result
@@ -94,7 +94,7 @@ At the end, we might want to delete any files generated by the instrumentation
and the code to clear these file goes in teardown method. ::
def teardown(self, context):
self.device.delete_file(os.path.join(self.device.working_directory, 'trace.txt'))
self.device.remove(os.path.join(self.device.working_directory, 'trace.txt'))
"""
@@ -106,6 +106,7 @@ import wlauto.core.signal as signal
from wlauto.core.extension import Extension
from wlauto.exceptions import WAError, DeviceNotRespondingError, TimeoutError
from wlauto.utils.misc import get_traceback, isiterable
from wlauto.utils.types import identifier
logger = logging.getLogger('instrumentation')
@@ -191,11 +192,23 @@ def is_installed(instrument):
if instrument in [i.__class__ for i in installed]:
return True
else: # assume string
if instrument in [i.name for i in installed]:
if identifier(instrument) in [identifier(i.name) for i in installed]:
return True
return False
def is_enabled(instrument):
if isinstance(instrument, Instrument) or isinstance(instrument, type):
name = instrument.name
else: # assume string
name = instrument
try:
installed_instrument = get_instrument(name)
return installed_instrument.is_enabled
except ValueError:
return False
failures_detected = False
@@ -275,9 +288,15 @@ def install(instrument):
attr = getattr(instrument, attr_name)
if not callable(attr):
raise ValueError('Attribute {} not callable in {}.'.format(attr_name, instrument))
arg_num = len(inspect.getargspec(attr).args)
if not arg_num == 2:
raise ValueError('{} must take exactly 2 arguments; {} given.'.format(attr_name, arg_num))
argspec = inspect.getargspec(attr)
arg_num = len(argspec.args)
# Instrument callbacks will be passed exactly two arguments: self
# (the instrument instance to which the callback is bound) and
# context. However, we also allow callbacks to capture the context
# in variable arguments (declared as "*args" in the definition).
if arg_num > 2 or (arg_num < 2 and argspec.varargs is None):
message = '{} must take exactly 2 positional arguments; {} given.'
raise ValueError(message.format(attr_name, arg_num))
logger.debug('\tConnecting %s to %s', attr.__name__, SIGNAL_MAP[stripped_attr_name])
mc = ManagedCallback(instrument, attr)
@@ -300,7 +319,7 @@ def get_instrument(inst):
if isinstance(inst, Instrument):
return inst
for installed_inst in installed:
if installed_inst.name == inst:
if identifier(installed_inst.name) == identifier(inst):
return installed_inst
raise ValueError('Instrument {} is not installed'.format(inst))
@@ -366,6 +385,12 @@ class Instrument(Extension):
self.is_enabled = True
self.is_broken = False
def initialize(self, context):
pass
def finalize(self, context):
pass
def __str__(self):
return self.name

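A minimal instrument consistent with the two-argument callback rule above; the name and metric are illustrative only:

    import os

    class ExampleTraceInstrument(Instrument):  # hypothetical
        name = 'example-trace'

        def setup(self, context):
            self.on_device = self.device.path.join(
                self.device.working_directory, 'trace.txt')

        def update_result(self, context):
            host_copy = os.path.join(context.output_directory, 'trace.txt')
            self.device.pull(self.on_device, host_copy)
            context.add_metric('trace_size_bytes', os.path.getsize(host_copy),
                               lower_is_better=True)

        def teardown(self, context):
            self.device.remove(self.on_device)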
View File

@@ -65,7 +65,8 @@ class ResourceResolver(object):
self.logger.debug('Trying {}'.format(getter))
result = getter.get(resource, *args, **kwargs)
if result is not None:
self.logger.debug('Resource {} found using {}'.format(resource, getter))
self.logger.debug('Resource {} found using {}:'.format(resource, getter))
self.logger.debug('\t{}'.format(result))
return result
if strict:
raise ResourceError('{} could not be found'.format(resource))

View File

@@ -13,6 +13,7 @@
# limitations under the License.
#
from wlauto.core.bootstrap import settings
from wlauto.core.extension import Extension
@@ -37,10 +38,10 @@ class GetterPriority(object):
"""
cached = 20
preferred = 10
remote = 5
environment = 0
external_package = -5
package = -10
remote = -20
class Resource(object):
@@ -81,7 +82,7 @@ class ResourceGetter(Extension):
Base class for implementing resolvers. Defines resolver interface. Resolvers are
responsible for discovering resources (such as particular kinds of files) they know
about based on the parameters that are passed to them. Each resolver also has a dict of
attributes that describe it's operation, and may be used to determine which get invoked.
attributes that describe its operation, and may be used to determine which get invoked.
There is no pre-defined set of attributes and resolvers may define their own.
Class attributes:
@@ -169,6 +170,7 @@ class __NullOwner(object):
"""Represents an owner for a resource not owned by anyone."""
name = 'noone'
dependencies_directory = settings.dependencies_directory
def __getattr__(self, name):
return None

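With the reordering above, remote sources are now tried after packaged ones; a hypothetical getter pinned to that priority:

    class MirrorGetter(ResourceGetter):  # illustrative only
        name = 'mirror_getter'
        priority = GetterPriority.remote  # -20: consulted after package (-10)

        def get(self, resource, **kwargs):
            return None  # return a host path when found, None otherwise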
View File

@@ -44,7 +44,7 @@ from datetime import datetime
from wlauto.core.extension import Extension
from wlauto.exceptions import WAError
from wlauto.utils.types import numeric
from wlauto.utils.misc import enum_metaclass
from wlauto.utils.misc import enum_metaclass, merge_dicts
class ResultManager(object):
@@ -191,12 +191,13 @@ class RunResult(object):
else:
return self.UNKNOWN # should never happen
def __init__(self, run_info):
def __init__(self, run_info, output_directory=None):
self.info = run_info
self.iteration_results = []
self.artifacts = []
self.events = []
self.non_iteration_errors = False
self.output_directory = output_directory
class RunEvent(object):
@@ -253,14 +254,18 @@ class IterationResult(object):
self.spec = spec
self.id = spec.id
self.workload = spec.workload
self.classifiers = copy(spec.classifiers)
self.iteration = None
self.status = self.NOT_STARTED
self.output_directory = None
self.events = []
self.metrics = []
self.artifacts = []
def add_metric(self, name, value, units=None, lower_is_better=False):
self.metrics.append(Metric(name, value, units, lower_is_better))
def add_metric(self, name, value, units=None, lower_is_better=False, classifiers=None):
classifiers = merge_dicts(self.classifiers, classifiers or {},
list_duplicates='last', should_normalize=False)
self.metrics.append(Metric(name, value, units, lower_is_better, classifiers))
def has_metric(self, name):
for metric in self.metrics:
@@ -298,14 +303,18 @@ class Metric(object):
has no units (e.g. it's a count or a standardised score).
:param lower_is_better: Boolean flag indicating where lower values are
better than higher ones. Defaults to False.
:param classifiers: A set of key-value pairs to further classify this metric
beyond the current iteration (e.g. this can be used to identify
sub-tests).
"""
def __init__(self, name, value, units=None, lower_is_better=False):
def __init__(self, name, value, units=None, lower_is_better=False, classifiers=None):
self.name = name
self.value = numeric(value)
self.units = units
self.lower_is_better = lower_is_better
self.classifiers = classifiers or {}
def to_dict(self):
return self.__dict__

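Per-metric classifiers are merged over those inherited from the spec (later values win); a sketch, assuming the spec contributed {'workload': 'lmbench'}:

    result.add_metric('latency', 1.73, units='us', lower_is_better=True,
                      classifiers={'sub_test': 'lat_mem_rd'})
    result.metrics[-1].classifiers
    # -> {'workload': 'lmbench', 'sub_test': 'lat_mem_rd'}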
View File

@@ -18,7 +18,7 @@ from collections import namedtuple
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision'])
version = VersionTuple(2, 3, 0)
version = VersionTuple(2, 4, 0)
def get_wa_version():

View File

@@ -47,24 +47,32 @@ class Workload(Extension):
super(Workload, self).__init__(**kwargs)
if self.supported_devices and device.name not in self.supported_devices:
raise WorkloadError('Workload {} does not support device {}'.format(self.name, device.name))
if self.supported_platforms and device.platform not in self.supported_platforms:
raise WorkloadError('Workload {} does not support platform {}'.format(self.name, device.platform))
if self.supported_platforms and device.os not in self.supported_platforms:
raise WorkloadError('Workload {} does not support platform {}'.format(self.name, device.os))
self.device = device
def init_resources(self, context):
"""
May be optionally overridden by concrete instances in order to discover and initialise
necessary resources. This method will be invoked at most once during the execution:
before running any workloads, and before invocation of ``validate()``, but after it is
clear that this workload will run (i.e. this method will not be invoked for workloads
that have been discovered but have not been scheduled to run in the agenda).
This method may be used to perform early resource discovery and initialization. This is invoked
during the initial loading stage and before the device is ready, so cannot be used for any
device-dependent initialization. This method is invoked before the workload instance is
validated.
"""
pass
def initialize(self, context):
"""
This method should be used to perform once-per-run initialization of a workload instance, i.e.,
unlike ``setup()`` it will not be invoked on each iteration.
"""
pass
def setup(self, context):
"""
Perform the setup necessary to run the workload, such as copying the necessry files
Perform the setup necessary to run the workload, such as copying the necessary files
to the device, configuring the environments, etc.
This is also the place to perform any on-device checks prior to attempting to execute
@@ -89,6 +97,8 @@ class Workload(Extension):
""" Perform any final clean up for the Workload. """
pass
def finalize(self, context):
pass
def __str__(self):
return '<Workload {}>'.format(self.name)

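A sketch of how the lifecycle hooks divide the work (the workload name and binary handling are hypothetical):

    class ExampleBench(Workload):
        name = 'example-bench'

        def initialize(self, context):
            # once per run: install the (assumed pre-resolved) binary
            self.binary = self.device.install(self.host_binary)

        def setup(self, context):
            # once per iteration: stage the command line
            self.command = '{} --quick'.format(self.binary)

        def run(self, context):
            self.output = self.device.execute(self.command, timeout=300)

        def update_result(self, context):
            context.add_metric('score', float(self.output.strip()))

        def finalize(self, context):
            pass  # once per run: undo whatever initialize set up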
View File

@@ -1,173 +0,0 @@
# Copyright 2014-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=E1101
import os
import re
import time
import pexpect
from wlauto import BigLittleDevice, Parameter
from wlauto.exceptions import DeviceError
from wlauto.utils.serial_port import open_serial_connection, pulse_dtr
from wlauto.utils.android import adb_connect, adb_disconnect, adb_list_devices
from wlauto.utils.uefi import UefiMenu
AUTOSTART_MESSAGE = 'Press Enter to stop auto boot...'
class Juno(BigLittleDevice):
name = 'juno'
description = """
ARM Juno next generation big.LITTLE development platform.
"""
capabilities = ['reset_power']
has_gpu = True
modules = [
'vexpress',
]
parameters = [
Parameter('retries', kind=int, default=2,
description="""Specifies the number of times the device will attempt to recover
(normally, with a hard reset) if it detects that something went wrong."""),
# VExpress flasher expects a device to have these:
Parameter('uefi_entry', default='WA',
description='The name of the entry to use (will be created if does not exist).'),
Parameter('microsd_mount_point', default='/media/JUNO',
description='Location at which the device\'s MicroSD card will be mounted.'),
Parameter('port', default='/dev/ttyS0', description='Serial port on which the device is connected.'),
Parameter('baudrate', kind=int, default=115200, description='Serial connection baud.'),
Parameter('timeout', kind=int, default=300, description='Serial connection timeout.'),
Parameter('core_names', default=['a53', 'a53', 'a53', 'a53', 'a57', 'a57'], override=True),
Parameter('core_clusters', default=[0, 0, 0, 0, 1, 1], override=True),
]
short_delay = 1
firmware_prompt = 'Cmd>'
# this is only used if there is no UEFI entry and one has to be created.
kernel_arguments = 'console=ttyAMA0,115200 earlyprintk=pl011,0x7ff80000 verbose debug init=/init root=/dev/sda1 rw ip=dhcp rootwait'
def boot(self, **kwargs):
self.logger.debug('Resetting the device.')
self.reset()
with open_serial_connection(port=self.port,
baudrate=self.baudrate,
timeout=self.timeout,
init_dtr=0) as target:
menu = UefiMenu(target)
self.logger.debug('Waiting for UEFI menu...')
menu.open(timeout=120)
try:
menu.select(self.uefi_entry)
except LookupError:
self.logger.debug('{} UEFI entry not found.'.format(self.uefi_entry))
self.logger.debug('Attempting to create one using default flasher configuration.')
self.flasher.image_args = self.kernel_arguments
self.flasher.create_uefi_enty(self, menu)
menu.select(self.uefi_entry)
self.logger.debug('Waiting for the Android prompt.')
target.expect(self.android_prompt, timeout=self.timeout)
def connect(self):
if not self._is_ready:
if not self.adb_name: # pylint: disable=E0203
with open_serial_connection(timeout=self.timeout,
port=self.port,
baudrate=self.baudrate,
init_dtr=0) as target:
target.sendline('')
self.logger.debug('Waiting for android prompt.')
target.expect(self.android_prompt)
self.logger.debug('Waiting for IP address...')
wait_start_time = time.time()
while True:
target.sendline('ip addr list eth0')
time.sleep(1)
try:
target.expect('inet ([1-9]\d*.\d+.\d+.\d+)', timeout=10)
self.adb_name = target.match.group(1) + ':5555' # pylint: disable=W0201
break
except pexpect.TIMEOUT:
pass # We have our own timeout -- see below.
if (time.time() - wait_start_time) > self.ready_timeout:
raise DeviceError('Could not acquire IP address.')
if self.adb_name in adb_list_devices():
adb_disconnect(self.adb_name)
adb_connect(self.adb_name, timeout=self.timeout)
super(Juno, self).connect() # wait for boot to complete etc.
self._is_ready = True
def disconnect(self):
if self._is_ready:
super(Juno, self).disconnect()
adb_disconnect(self.adb_name)
self._is_ready = False
def reset(self):
# Currently, reboot is not working in Android on Juno, so
# perform a hard reset instead.
self.hard_reset()
def get_cpuidle_states(self, cpu=0):
return {}
def hard_reset(self):
self.disconnect()
self.adb_name = None # Force re-acquire IP address on reboot. pylint: disable=attribute-defined-outside-init
with open_serial_connection(port=self.port,
baudrate=self.baudrate,
timeout=self.timeout,
init_dtr=0,
get_conn=True) as (target, conn):
pulse_dtr(conn, state=True, duration=0.1) # TRM specifies a pulse of >=100ms
i = target.expect([AUTOSTART_MESSAGE, self.firmware_prompt])
if i:
self.logger.debug('Saw firmware prompt.')
time.sleep(self.short_delay)
target.sendline('reboot')
else:
self.logger.debug('Saw auto boot message.')
def wait_for_microsd_mount_point(self, target, timeout=100):
attempts = 1 + self.retries
if os.path.exists(os.path.join(self.microsd_mount_point, 'config.txt')):
return
self.logger.debug('Waiting for VExpress MicroSD to mount...')
for i in xrange(attempts):
if i: # Do not reboot on the first attempt.
target.sendline('reboot')
for _ in xrange(timeout):
time.sleep(self.short_delay)
if os.path.exists(os.path.join(self.microsd_mount_point, 'config.txt')):
return
raise DeviceError('Did not detect MicroSD mount on {}'.format(self.microsd_mount_point))
def get_android_id(self):
# Android ID currently not set properly in Juno Android builds.
return 'abad1deadeadbeef'


@@ -1,48 +0,0 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
from wlauto import AndroidDevice, Parameter
class Nexus10Device(AndroidDevice):
name = 'Nexus10'
description = """
Nexus10 is a 10 inch tablet device with a dual-core A15 CPU.
To be able to use Nexus10 in WA, the following must be true:
- USB Debugging Mode is enabled.
- USB debugging authorisation has been generated for the host machine
"""
default_working_directory = '/sdcard/working'
has_gpu = True
max_cores = 2
parameters = [
Parameter('core_names', default=['A15', 'A15'], override=True),
Parameter('core_clusters', default=[0, 0], override=True),
]
def init(self, context, *args, **kwargs):
time.sleep(self.long_delay)
self.execute('svc power stayon true', check_exit_code=False)
time.sleep(self.long_delay)
self.execute('input keyevent 82')


@@ -1,40 +0,0 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from wlauto import AndroidDevice, Parameter
class Nexus5Device(AndroidDevice):
name = 'Nexus5'
description = """
Adapter for Nexus 5.
To be able to use Nexus5 in WA, the following must be true:
- USB Debugging Mode is enabled.
- USB debugging authorisation has been generated for the host machine
"""
default_working_directory = '/storage/sdcard0/working'
has_gpu = True
max_cores = 4
parameters = [
Parameter('core_names', default=['krait400', 'krait400', 'krait400', 'krait400'], override=True),
Parameter('core_clusters', default=[0, 0, 0, 0], override=True),
]


@@ -1,76 +0,0 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
from wlauto import AndroidDevice, Parameter
from wlauto.exceptions import TimeoutError
from wlauto.utils.android import adb_shell
class Note3Device(AndroidDevice):
name = 'Note3'
description = """
Adapter for Galaxy Note 3.
To be able to use Note3 in WA, the following must be true:
- USB Debugging Mode is enabled.
- USB debugging authorisation has been generated for the host machine
"""
parameters = [
Parameter('core_names', default=['A15', 'A15', 'A15', 'A15'], override=True),
Parameter('core_clusters', default=[0, 0, 0, 0], override=True),
Parameter('working_directory', default='/storage/sdcard0/wa-working', override=True),
]
def __init__(self, **kwargs):
super(Note3Device, self).__init__(**kwargs)
self._just_rebooted = False
def init(self, context):
self.execute('svc power stayon true', check_exit_code=False)
def reset(self):
super(Note3Device, self).reset()
self._just_rebooted = True
def hard_reset(self):
super(Note3Device, self).hard_reset()
self._just_rebooted = True
def connect(self): # NOQA pylint: disable=R0912
super(Note3Device, self).connect()
if self._just_rebooted:
self.logger.debug('Waiting for boot to complete...')
# On the Note 3, adb connection gets reset some time after booting.
# This causes errors during execution. To prevent this, open a shell
# session and wait for it to be killed. Once its killed, give adb
# enough time to restart, and then the device should be ready.
try:
adb_shell(self.adb_name, '', timeout=20) # pylint: disable=no-member
time.sleep(5) # give adb time to re-initialize
except TimeoutError:
pass # timed out waiting for the session to be killed -- assume not going to be.
self.logger.debug('Boot completed.')
self._just_rebooted = False
# Swipe upwards to unlock the screen.
time.sleep(self.long_delay)
self.execute('input touchscreen swipe 540 1600 560 800 ')


@@ -1,38 +0,0 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from wlauto import AndroidDevice, Parameter
class OdroidXU3(AndroidDevice):
name = "odroidxu3"
description = 'HardKernel Odroid XU3 development board.'
core_modules = [
'odroidxu3-fan',
]
parameters = [
Parameter('adb_name', default='BABABEEFBABABEEF', override=True),
Parameter('working_directory', default='/data/local/wa-working', override=True),
Parameter('core_names', default=['a7', 'a7', 'a7', 'a7', 'a15', 'a15', 'a15', 'a15'], override=True),
Parameter('core_clusters', default=[0, 0, 0, 0, 1, 1, 1, 1], override=True),
Parameter('port', default='/dev/ttyUSB0', kind=str,
description='Serial port on which the device is connected'),
Parameter('baudrate', default=115200, kind=int, description='Serial connection baud rate'),
]


@@ -1,847 +0,0 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import sys
import re
import string
import shutil
import time
from collections import Counter
import pexpect
from wlauto import BigLittleDevice, RuntimeParameter, Parameter, settings
from wlauto.exceptions import ConfigError, DeviceError
from wlauto.utils.android import adb_connect, adb_disconnect, adb_list_devices
from wlauto.utils.serial_port import open_serial_connection
from wlauto.utils.misc import merge_dicts
from wlauto.utils.types import boolean
BOOT_FIRMWARE = {
'uefi': {
'SCC_0x010': '0x000003E0',
'reboot_attempts': 0,
},
'bootmon': {
'SCC_0x010': '0x000003D0',
'reboot_attempts': 2,
},
}
MODES = {
'mp_a7_only': {
'images_file': 'images_mp.txt',
'dtb': 'mp_a7',
'initrd': 'init_mp',
'kernel': 'kern_mp',
'SCC_0x700': '0x1032F003',
'cpus': ['a7', 'a7', 'a7'],
},
'mp_a7_bootcluster': {
'images_file': 'images_mp.txt',
'dtb': 'mp_a7bc',
'initrd': 'init_mp',
'kernel': 'kern_mp',
'SCC_0x700': '0x1032F003',
'cpus': ['a7', 'a7', 'a7', 'a15', 'a15'],
},
'mp_a15_only': {
'images_file': 'images_mp.txt',
'dtb': 'mp_a15',
'initrd': 'init_mp',
'kernel': 'kern_mp',
'SCC_0x700': '0x0032F003',
'cpus': ['a15', 'a15'],
},
'mp_a15_bootcluster': {
'images_file': 'images_mp.txt',
'dtb': 'mp_a15bc',
'initrd': 'init_mp',
'kernel': 'kern_mp',
'SCC_0x700': '0x0032F003',
'cpus': ['a15', 'a15', 'a7', 'a7', 'a7'],
},
'iks_cpu': {
'images_file': 'images_iks.txt',
'dtb': 'iks',
'initrd': 'init_iks',
'kernel': 'kern_iks',
'SCC_0x700': '0x1032F003',
'cpus': ['a7', 'a7'],
},
'iks_a15': {
'images_file': 'images_iks.txt',
'dtb': 'iks',
'initrd': 'init_iks',
'kernel': 'kern_iks',
'SCC_0x700': '0x0032F003',
'cpus': ['a15', 'a15'],
},
'iks_a7': {
'images_file': 'images_iks.txt',
'dtb': 'iks',
'initrd': 'init_iks',
'kernel': 'kern_iks',
'SCC_0x700': '0x0032F003',
'cpus': ['a7', 'a7'],
},
'iks_ns_a15': {
'images_file': 'images_iks.txt',
'dtb': 'iks',
'initrd': 'init_iks',
'kernel': 'kern_iks',
'SCC_0x700': '0x0032F003',
'cpus': ['a7', 'a7', 'a7', 'a15', 'a15'],
},
'iks_ns_a7': {
'images_file': 'images_iks.txt',
'dtb': 'iks',
'initrd': 'init_iks',
'kernel': 'kern_iks',
'SCC_0x700': '0x0032F003',
'cpus': ['a7', 'a7', 'a7', 'a15', 'a15'],
},
}
A7_ONLY_MODES = ['mp_a7_only', 'iks_a7', 'iks_cpu']
A15_ONLY_MODES = ['mp_a15_only', 'iks_a15']
DEFAULT_A7_GOVERNOR_TUNABLES = {
'interactive': {
'above_hispeed_delay': 80000,
'go_hispeed_load': 85,
'hispeed_freq': 800000,
'min_sample_time': 80000,
'timer_rate': 20000,
},
'ondemand': {
'sampling_rate': 50000,
},
}
DEFAULT_A15_GOVERNOR_TUNABLES = {
'interactive': {
'above_hispeed_delay': 80000,
'go_hispeed_load': 85,
'hispeed_freq': 1000000,
'min_sample_time': 80000,
'timer_rate': 20000,
},
'ondemand': {
'sampling_rate': 50000,
},
}
ADB_SHELL_TIMEOUT = 30
class _TC2DeviceConfig(object):
name = 'TC2 Configuration'
device_name = 'TC2'
def __init__(self, # pylint: disable=R0914,W0613
root_mount='/media/VEMSD',
disable_boot_configuration=False,
boot_firmware=None,
mode=None,
fs_medium='usb',
device_working_directory='/data/local/usecase',
bm_image='bm_v519r.axf',
serial_device='/dev/ttyS0',
serial_baud=38400,
serial_max_timeout=600,
serial_log=sys.stdout,
init_timeout=120,
always_delete_uefi_entry=True,
psci_enable=True,
host_working_directory=None,
a7_governor_tunables=None,
a15_governor_tunables=None,
adb_name=None,
# Compatibility with other android devices.
enable_screen_check=None, # pylint: disable=W0613
**kwargs
):
self.root_mount = root_mount
self.disable_boot_configuration = disable_boot_configuration
if not disable_boot_configuration:
self.boot_firmware = boot_firmware or 'uefi'
self.default_mode = mode or 'mp_a7_bootcluster'
elif boot_firmware or mode:
raise ConfigError('boot_firmware and/or mode cannot be specified when disable_boot_configuration is enabled.')
self.mode = self.default_mode
self.working_directory = device_working_directory
self.serial_device = serial_device
self.serial_baud = serial_baud
self.serial_max_timeout = serial_max_timeout
self.serial_log = serial_log
self.bootmon_prompt = re.compile(r'^([KLM]:\\)?>', re.MULTILINE)  # matches prompts like 'K:\>'
self.fs_medium = fs_medium.lower()
self.bm_image = bm_image
self.init_timeout = init_timeout
self.always_delete_uefi_entry = always_delete_uefi_entry
self.psci_enable = psci_enable
self.resource_dir = os.path.join(os.path.dirname(__file__), 'resources')
self.board_dir = os.path.join(self.root_mount, 'SITE1', 'HBI0249A')
self.board_file = 'board.txt'
self.board_file_bak = 'board.bak'
self.images_file = 'images.txt'
self.host_working_directory = host_working_directory or settings.meta_directory
if not a7_governor_tunables:
self.a7_governor_tunables = DEFAULT_A7_GOVERNOR_TUNABLES
else:
self.a7_governor_tunables = merge_dicts(DEFAULT_A7_GOVERNOR_TUNABLES, a7_governor_tunables)
if not a15_governor_tunables:
self.a15_governor_tunables = DEFAULT_A15_GOVERNOR_TUNABLES
else:
self.a15_governor_tunables = merge_dicts(DEFAULT_A15_GOVERNOR_TUNABLES, a15_governor_tunables)
self.adb_name = adb_name
@property
def src_images_template_file(self):
return os.path.join(self.resource_dir, MODES[self.mode]['images_file'])
@property
def src_images_file(self):
return os.path.join(self.host_working_directory, 'images.txt')
@property
def src_board_template_file(self):
return os.path.join(self.resource_dir, 'board_template.txt')
@property
def src_board_file(self):
return os.path.join(self.host_working_directory, 'board.txt')
@property
def kernel_arguments(self):
kernel_args = ' console=ttyAMA0,38400 androidboot.console=ttyAMA0 selinux=0'
if self.fs_medium == 'usb':
kernel_args += ' androidboot.hardware=arm-versatileexpress-usb'
if 'iks' in self.mode:
kernel_args += ' no_bL_switcher=0'
return kernel_args
@property
def kernel(self):
return MODES[self.mode]['kernel']
@property
def initrd(self):
return MODES[self.mode]['initrd']
@property
def dtb(self):
return MODES[self.mode]['dtb']
@property
def SCC_0x700(self):
return MODES[self.mode]['SCC_0x700']
@property
def SCC_0x010(self):
return BOOT_FIRMWARE[self.boot_firmware]['SCC_0x010']
@property
def reboot_attempts(self):
return BOOT_FIRMWARE[self.boot_firmware]['reboot_attempts']
def validate(self):
valid_modes = MODES.keys()
if self.mode not in valid_modes:
message = 'Invalid mode: {}; must be in {}'.format(
self.mode, valid_modes)
raise ConfigError(message)
valid_boot_firmware = BOOT_FIRMWARE.keys()
if self.boot_firmware not in valid_boot_firmware:
message = 'Invalid boot_firmware: {}; must be in {}'.format(
self.boot_firmware,
valid_boot_firmware)
raise ConfigError(message)
if self.fs_medium not in ['usb', 'sdcard']:
message = 'Invalid filesystem medium: {}; allowed values: usb, sdcard'.format(self.fs_medium)
raise ConfigError(message)
class TC2Device(BigLittleDevice):
name = 'TC2'
description = """
TC2 is a development board, which has three A7 cores and two A15 cores.
TC2 has a number of boot parameters which are:
:root_mount: Defaults to '/media/VEMSD'
:boot_firmware: It has only two boot firmware options, which are
uefi and bootmon. Defaults to 'uefi'.
:fs_medium: Defaults to 'usb'.
:device_working_directory: The directory that WA will be using to copy
files to. Defaults to '/data/local/usecase'.
:serial_device: The serial device which TC2 is connected to. Defaults to
'/dev/ttyS0'.
:serial_baud: Defaults to 38400.
:serial_max_timeout: Serial timeout value in seconds. Defaults to 600.
:serial_log: Defaults to standard output.
:init_timeout: The timeout in seconds to init the device. Defaults
to 120.
:always_delete_uefi_entry: If true, the existing UEFI entry will be
deleted before booting. Defaults to True.
:psci_enable: Whether PSCI is enabled. Defaults to True.
:host_working_directory: The host working directory. Defaults to None.
:disable_boot_configuration: Disables boot configuration through images.txt and board.txt. When
this is ``True``, those two files will not be overwritten in VEMSD.
This option may be necessary if the firmware version in the ``TC2``
is not compatible with the templates in WA. Please note that enabling
this will prevent you from being able to set ``boot_firmware`` and
``mode`` parameters. Defaults to ``False``.
TC2 also has a number of different boot modes, which are:
:mp_a7_only: Only the A7 cluster.
:mp_a7_bootcluster: Both A7 and A15 clusters, but it boots on A7
cluster.
:mp_a15_only: Only the A15 cluster.
:mp_a15_bootcluster: Both A7 and A15 clusters, but it boots on A15
cluster.
:iks_cpu: Only A7 cluster with only 2 cpus.
:iks_a15: Only A15 cluster.
:iks_a7: Same as iks_cpu
:iks_ns_a15: Both A7 and A15 clusters.
:iks_ns_a7: Both A7 and A15 clusters.
The difference between mp and iks is the scheduling policy.
TC2 takes the following runtime parameters
:a7_cores: Number of active A7 cores.
:a15_cores: Number of active A15 cores.
:a7_governor: CPUFreq governor for the A7 cluster.
:a15_governor: CPUFreq governor for the A15 cluster.
:a7_min_frequency: Minimum CPU frequency for the A7 cluster.
:a15_min_frequency: Minimum CPU frequency for the A15 cluster.
:a7_max_frequency: Maximum CPU frequency for the A7 cluster.
:a15_max_frequency: Maximum CPU frequency for the A15 cluster.
:irq_affinity: Which cluster will receive IRQs.
:cpuidle: Whether idle states should be enabled.
:sysfile_values: A dict mapping a complete file path to the value that
should be echo'd into it. By default, the file will be
subsequently read to verify that the value was written
into it, with a DeviceError raised otherwise. For write-only
files, this check can be disabled by appending a ``!`` to
the end of the file path.
"""
has_gpu = False
a15_only_modes = A15_ONLY_MODES
a7_only_modes = A7_ONLY_MODES
not_configurable_modes = ['iks_a7', 'iks_cpu', 'iks_a15']
parameters = [
Parameter('core_names', mandatory=False, override=True,
description='This parameter will be ignored for TC2'),
Parameter('core_clusters', mandatory=False, override=True,
description='This parameter will be ignored for TC2'),
]
runtime_parameters = [
RuntimeParameter('irq_affinity', lambda d, x: d.set_irq_affinity(x.lower()), lambda: None),
RuntimeParameter('cpuidle', lambda d, x: d.enable_idle_states() if boolean(x) else d.disable_idle_states(),
lambda d: d.get_cpuidle())
]
def get_mode(self):
return self.config.mode
def set_mode(self, mode):
if self._has_booted:
raise DeviceError('Attempting to set boot mode when already booted.')
valid_modes = MODES.keys()
if mode is None:
mode = self.config.default_mode
if mode not in valid_modes:
message = 'Invalid mode: {}; must be in {}'.format(mode, valid_modes)
raise ConfigError(message)
self.config.mode = mode
mode = property(get_mode, set_mode)
def _get_core_names(self):
return MODES[self.mode]['cpus']
def _set_core_names(self, value):
pass
core_names = property(_get_core_names, _set_core_names)
def _get_core_clusters(self):
seen = set([])
core_clusters = []
cluster_id = -1
for core in MODES[self.mode]['cpus']:
if core not in seen:
seen.add(core)
cluster_id += 1
core_clusters.append(cluster_id)
return core_clusters
def _set_core_clusters(self, value):
pass
core_clusters = property(_get_core_clusters, _set_core_clusters)
@property
def cpu_cores(self):
return MODES[self.mode]['cpus']
@property
def max_a7_cores(self):
return Counter(MODES[self.mode]['cpus'])['a7']
@property
def max_a15_cores(self):
return Counter(MODES[self.mode]['cpus'])['a15']
@property
def a7_governor_tunables(self):
return self.config.a7_governor_tunables
@property
def a15_governor_tunables(self):
return self.config.a15_governor_tunables
def __init__(self, **kwargs):
super(TC2Device, self).__init__()
self.config = _TC2DeviceConfig(**kwargs)
self.working_directory = self.config.working_directory
self._serial = None
self._has_booted = None
def boot(self, **kwargs): # NOQA
mode = kwargs.get('os_mode', None)
self._is_ready = False
self._has_booted = False
self.mode = mode
self.logger.debug('Booting in {} mode'.format(self.mode))
with open_serial_connection(timeout=self.config.serial_max_timeout,
port=self.config.serial_device,
baudrate=self.config.serial_baud) as target:
if self.config.boot_firmware == 'bootmon':
self._boot_using_bootmon(target)
elif self.config.boot_firmware == 'uefi':
self._boot_using_uefi(target)
else:
message = 'Unexpected boot firmware: {}'.format(self.config.boot_firmware)
raise ConfigError(message)
try:
target.sendline('')
self.logger.debug('Waiting for the Android prompt.')
target.expect(self.android_prompt, timeout=40) # pylint: disable=E1101
except pexpect.TIMEOUT:
# Try a second time before giving up.
self.logger.debug('Did not get Android prompt, retrying...')
target.sendline('')
target.expect(self.android_prompt, timeout=10) # pylint: disable=E1101
self.logger.debug('Waiting for OS to initialize...')
started_waiting_time = time.time()
time.sleep(20) # we know it's not going to take less time than this.
boot_completed, got_ip_address = False, False
while True:
try:
if not boot_completed:
target.sendline('getprop sys.boot_completed')
boot_completed = target.expect(['0.*', '1.*'], timeout=10)
if not got_ip_address:
target.sendline('getprop dhcp.eth0.ipaddress')
# regexes are processed in order, so ip regex has to
# come first (as we only want to match new line if we
# don't match the IP). We do a "not" to make the logic
# consistent with boot_completed.
got_ip_address = not target.expect(['[1-9]\d*.\d+.\d+.\d+', '\n'], timeout=10)
except pexpect.TIMEOUT:
pass # We have our own timeout -- see below.
if boot_completed and got_ip_address:
break
time.sleep(5)
if (time.time() - started_waiting_time) > self.config.init_timeout:
raise DeviceError('Timed out waiting for the device to initialize.')
self._has_booted = True
def connect(self):
if not self._is_ready:
if self.config.adb_name:
self.adb_name = self.config.adb_name # pylint: disable=attribute-defined-outside-init
else:
with open_serial_connection(timeout=self.config.serial_max_timeout,
port=self.config.serial_device,
baudrate=self.config.serial_baud) as target:
# Get IP address and push the Gator and PMU logger.
target.sendline('su') # as of Android v5.0.2, Linux does not boot into root shell
target.sendline('netcfg')
ipaddr_re = re.compile('eth0 +UP +(.+)/.+', re.MULTILINE)
target.expect(ipaddr_re)
output = target.after
match = re.search('eth0 +UP +(.+)/.+', output)
if not match:
raise DeviceError('Could not get adb IP address.')
ipaddr = match.group(1)
# Connect to device using adb.
target.expect(self.android_prompt) # pylint: disable=E1101
self.adb_name = ipaddr + ":5555" # pylint: disable=W0201
if self.adb_name in adb_list_devices():
adb_disconnect(self.adb_name)
adb_connect(self.adb_name)
self._is_ready = True
self.execute("input keyevent 82", timeout=ADB_SHELL_TIMEOUT)
self.execute("svc power stayon true", timeout=ADB_SHELL_TIMEOUT)
def disconnect(self):
adb_disconnect(self.adb_name)
self._is_ready = False
# TC2-specific methods. You should avoid calling these in
# Workloads/Instruments as that would tie them to TC2 (and if that is
# the case, then you should set the supported_devices parameter in the
Workload/Instrument accordingly). Most of these can be replaced with a
# call to set_runtime_parameters.
def get_cpuidle(self):
return self.get_sysfile_value('/sys/devices/system/cpu/cpu0/cpuidle/state1/disable')
def enable_idle_states(self):
"""
Fully enables idle states on TC2.
See http://wiki.arm.com/Research/TC2SetupAndUsage ("Enabling Idle Modes" section)
and http://wiki.arm.com/ASD/ControllingPowerManagementInLinaroKernels
"""
# Enable C1 (cluster shutdown).
self.set_sysfile_value('/sys/devices/system/cpu/cpu0/cpuidle/state1/disable', 0, verify=False)
# Enable C0 on A15 cluster.
self.set_sysfile_value('/sys/kernel/debug/idle_debug/enable_idle', 0, verify=False)
# Enable C0 on A7 cluster.
self.set_sysfile_value('/sys/kernel/debug/idle_debug/enable_idle', 1, verify=False)
def disable_idle_states(self):
"""
Disable idle states on TC2.
See http://wiki.arm.com/Research/TC2SetupAndUsage ("Enabling Idle Modes" section)
and http://wiki.arm.com/ASD/ControllingPowerManagementInLinaroKernels
"""
# Disable C1 (cluster shutdown).
self.set_sysfile_value('/sys/devices/system/cpu/cpu0/cpuidle/state1/disable', 1, verify=False)
# Disable C0.
self.set_sysfile_value('/sys/kernel/debug/idle_debug/enable_idle', 0xFF, verify=False)
def set_irq_affinity(self, cluster):
"""
Sets IRQ affinity to the specified cluster.
This method will only work if the device mode is mp_a7_bootcluster or
mp_a15_bootcluster. This operation does not make sense if there is only one
cluster active (all IRQs will obviously go to that), and it will not work for
IKS kernel because clusters are not exposed to sysfs.
:param cluster: must be either 'a15' or 'a7'.
"""
if self.config.mode not in ('mp_a7_bootcluster', 'mp_a15_bootcluster'):
raise ConfigError('Cannot set IRQ affinity with mode {}'.format(self.config.mode))
if cluster == 'a7':
self.execute('/sbin/set_irq_affinity.sh 0xc07', check_exit_code=False)
elif cluster == 'a15':
self.execute('/sbin/set_irq_affinity.sh 0xc0f', check_exit_code=False)
else:
raise ConfigError('cluster must be either "a15" or "a7"; got {}'.format(cluster))
def _boot_using_uefi(self, target):
self.logger.debug('Booting using UEFI.')
self._wait_for_vemsd_mount(target)
self._setup_before_reboot()
self._perform_uefi_reboot(target)
# Get to the UEFI menu.
self.logger.debug('Waiting for UEFI default selection.')
target.sendline('reboot')
target.expect('The default boot selection will start in')
time.sleep(1)
target.sendline('')
# If delete every time is specified, try to delete entry.
if self.config.always_delete_uefi_entry:
self._delete_uefi_entry(target, entry='workload_automation_MP')
self.config.always_delete_uefi_entry = False
# Specify argument to be passed specifying that psci is (or is not) enabled
if self.config.psci_enable:
psci_enable = ' psci=enable'
else:
psci_enable = ''
# Identify the workload automation entry.
selection_pattern = r'\[([0-9]*)\] '
try:
target.expect(re.compile(selection_pattern + 'workload_automation_MP'), timeout=5)
wl_menu_item = target.match.group(1)
except pexpect.TIMEOUT:
self._create_uefi_entry(target, psci_enable, entry_name='workload_automation_MP')
# At this point the board should be rebooted so we need to retry to boot
self._boot_using_uefi(target)
else: # Did not time out.
try:
# Identify the boot manager menu item
target.expect(re.compile(selection_pattern + 'Boot Manager'))
boot_manager_menu_item = target.match.group(1)
# Update FDT
target.sendline(boot_manager_menu_item)
target.expect(re.compile(selection_pattern + 'Update FDT path'), timeout=15)
update_fdt_menu_item = target.match.group(1)
target.sendline(update_fdt_menu_item)
target.expect(re.compile(selection_pattern + 'NOR Flash .*'), timeout=15)
bootmonfs_menu_item = target.match.group(1)
target.sendline(bootmonfs_menu_item)
target.expect('File path of the FDT blob:')
target.sendline(self.config.dtb)
# Return to main menu and boot from the workload automation entry
target.expect(re.compile(selection_pattern + 'Return to main menu'), timeout=15)
return_to_main_menu_item = target.match.group(1)
target.sendline(return_to_main_menu_item)
target.sendline(wl_menu_item)
except pexpect.TIMEOUT:
raise DeviceError('Timed out')
def _setup_before_reboot(self):
if not self.config.disable_boot_configuration:
self.logger.debug('Performing pre-boot setup.')
substitution = {
'SCC_0x010': self.config.SCC_0x010,
'SCC_0x700': self.config.SCC_0x700,
}
with open(self.config.src_board_template_file, 'r') as fh:
template_board_txt = string.Template(fh.read())
with open(self.config.src_board_file, 'w') as wfh:
wfh.write(template_board_txt.substitute(substitution))
with open(self.config.src_images_template_file, 'r') as fh:
template_images_txt = string.Template(fh.read())
with open(self.config.src_images_file, 'w') as wfh:
wfh.write(template_images_txt.substitute({'bm_image': self.config.bm_image}))
shutil.copyfile(self.config.src_board_file,
os.path.join(self.config.board_dir, self.config.board_file))
shutil.copyfile(self.config.src_images_file,
os.path.join(self.config.board_dir, self.config.images_file))
os.system('sync') # make sure everything is flushed to microSD
else:
self.logger.debug('Boot configuration disabled, proceeding with existing board.txt and images.txt.')
def _delete_uefi_entry(self, target, entry): # pylint: disable=R0201
"""
Deletes the specified UEFI boot entry.

As a precondition, serial port input must have been parsed at most up to
the point before this entry is recognized (i.e. neither the entry nor the
Boot Manager menu item has been parsed yet).
"""
try:
selection_pattern = r'\[([0-9]+)\] *'
try:
target.expect(re.compile(selection_pattern + entry), timeout=5)
wl_menu_item = target.match.group(1)
except pexpect.TIMEOUT:
return # Entry does not exist, nothing to delete here...
# Identify and select boot manager menu item
target.expect(selection_pattern + 'Boot Manager', timeout=15)
bootmanager_item = target.match.group(1)
target.sendline(bootmanager_item)
# Identify and select 'Remove entry'
target.expect(selection_pattern + 'Remove Boot Device Entry', timeout=15)
new_entry_item = target.match.group(1)
target.sendline(new_entry_item)
# Delete entry
target.expect(re.compile(selection_pattern + entry), timeout=5)
wl_menu_item = target.match.group(1)
target.sendline(wl_menu_item)
# Return to main menu
target.expect(re.compile(selection_pattern + 'Return to main menu'), timeout=15)
return_to_main_menu_item = target.match.group(1)
target.sendline(return_to_main_menu_item)
except pexpect.TIMEOUT:
raise DeviceError('Timed out while deleting UEFI entry.')
def _create_uefi_entry(self, target, psci_enable, entry_name):
"""
Creates the default boot entry that is expected when booting in uefi mode.
"""
self._wait_for_vemsd_mount(target)
try:
selection_pattern = r'\[([0-9]+)\] *'
# Identify and select boot manager menu item.
target.expect(selection_pattern + 'Boot Manager', timeout=15)
bootmanager_item = target.match.group(1)
target.sendline(bootmanager_item)
# Identify and select 'add new entry'.
target.expect(selection_pattern + 'Add Boot Device Entry', timeout=15)
new_entry_item = target.match.group(1)
target.sendline(new_entry_item)
# Identify and select BootMonFs.
target.expect(selection_pattern + 'NOR Flash .*', timeout=15)
BootMonFs_item = target.match.group(1)
target.sendline(BootMonFs_item)
# Specify the parameters of the new entry.
target.expect('.+the kernel', timeout=5)
target.sendline(self.config.kernel) # kernel path
target.expect('Has FDT support\?.*\[y\/n\].*', timeout=5)
time.sleep(0.5)
target.sendline('y') # Has Fdt support? -> y
target.expect('Add an initrd.*\[y\/n\].*', timeout=5)
time.sleep(0.5)
target.sendline('y') # add an initrd? -> y
target.expect('.+the initrd.*', timeout=5)
time.sleep(0.5)
target.sendline(self.config.initrd) # initrd path
target.expect('.+to the binary.*', timeout=5)
time.sleep(0.5)
_slow_sendline(target, self.config.kernel_arguments + psci_enable) # arguments to pass to binary
time.sleep(0.5)
target.expect('.+new Entry.+', timeout=5)
_slow_sendline(target, entry_name) # Entry name
target.expect('Choice.+', timeout=15)
time.sleep(2)
except pexpect.TIMEOUT:
raise DeviceError('Timed out while creating UEFI entry.')
self._perform_uefi_reboot(target)
def _perform_uefi_reboot(self, target):
self._wait_for_vemsd_mount(target)
open(os.path.join(self.config.root_mount, 'reboot.txt'), 'a').close()
def _wait_for_vemsd_mount(self, target, timeout=100):
attempts = 1 + self.config.reboot_attempts
if os.path.exists(os.path.join(self.config.root_mount, 'config.txt')):
return
self.logger.debug('Waiting for VEMSD to mount...')
for i in xrange(attempts):
if i: # Do not reboot on the first attempt.
target.sendline('reboot')
target.sendline('usb_on')
for _ in xrange(timeout):
time.sleep(1)
if os.path.exists(os.path.join(self.config.root_mount, 'config.txt')):
return
raise DeviceError('Timed out waiting for VEMSD to mount.')
def _boot_using_bootmon(self, target):
"""
Boots TC2 using the bootmon interface.
"""
self.logger.debug('Booting using bootmon.')
try:
self._wait_for_vemsd_mount(target, timeout=20)
except DeviceError:
# OK, something's wrong. Reboot the board and try again.
self.logger.debug('VEMSD not mounted, attempting to power cycle device.')
target.sendline(' ')
state = target.expect(['Cmd> ', self.config.bootmon_prompt, self.android_prompt]) # pylint: disable=E1101
if state == 0 or state == 1:
# Reboot - Bootmon
target.sendline('reboot')
target.expect('Powering up system...')
elif state == 2:
target.sendline('reboot -n')
target.expect('Powering up system...')
else:
raise DeviceError('Unexpected board state {}; should be 0, 1 or 2'.format(state))
self._wait_for_vemsd_mount(target)
self._setup_before_reboot()
# Reboot - Bootmon
self.logger.debug('Rebooting into bootloader...')
open(os.path.join(self.config.root_mount, 'reboot.txt'), 'a').close()
target.expect('Powering up system...')
target.expect(self.config.bootmon_prompt)
# Wait for VEMSD to mount
self._wait_for_vemsd_mount(target)
# Boot Linux - Bootmon
target.sendline('fl linux fdt ' + self.config.dtb)
target.expect(self.config.bootmon_prompt)
target.sendline('fl linux initrd ' + self.config.initrd)
target.expect(self.config.bootmon_prompt)
target.sendline('fl linux boot ' + self.config.kernel + self.config.kernel_arguments)
# Utility functions.
def _slow_sendline(target, line):
for c in line:
target.send(c)
time.sleep(0.1)
target.sendline('')


@@ -1,96 +0,0 @@
BOARD: HBI0249
TITLE: V2P-CA15_A7 Configuration File
[DCCS]
TOTALDCCS: 1 ;Total Number of DCCS
M0FILE: dbb_v110.ebf ;DCC0 Filename
M0MODE: MICRO ;DCC0 Programming Mode
[FPGAS]
TOTALFPGAS: 0 ;Total Number of FPGAs
[TAPS]
TOTALTAPS: 3 ;Total Number of TAPs
T0NAME: STM32TMC ;TAP0 Device Name
T0FILE: NONE ;TAP0 Filename
T0MODE: NONE ;TAP0 Programming Mode
T1NAME: STM32CM3 ;TAP1 Device Name
T1FILE: NONE ;TAP1 Filename
T1MODE: NONE ;TAP1 Programming Mode
T2NAME: CORTEXA15 ;TAP2 Device Name
T2FILE: NONE ;TAP2 Filename
T2MODE: NONE ;TAP2 Programming Mode
[OSCCLKS]
TOTALOSCCLKS: 9 ;Total Number of OSCCLKS
OSC0: 50.0 ;CPUREFCLK0 A15 CPU (20:1 - 1.0GHz)
OSC1: 50.0 ;CPUREFCLK1 A15 CPU (20:1 - 1.0GHz)
OSC2: 40.0 ;CPUREFCLK0 A7 CPU (20:1 - 800MHz)
OSC3: 40.0 ;CPUREFCLK1 A7 CPU (20:1 - 800MHz)
OSC4: 40.0 ;HSBM AXI (40MHz)
OSC5: 23.75 ;HDLCD (23.75MHz - TC PLL is in bypass)
OSC6: 50.0 ;SMB (50MHz)
OSC7: 50.0 ;SYSREFCLK (20:1 - 1.0GHz, ACLK - 500MHz)
OSC8: 50.0 ;DDR2 (8:1 - 400MHz)
[SCC REGISTERS]
TOTALSCCS: 33 ;Total Number of SCC registers
;SCC: 0x010 0x000003D0 ;Remap to NOR0
SCC: 0x010 $SCC_0x010 ;Switch between NOR0/NOR1
SCC: 0x01C 0xFF00FF00 ;CFGRW3 - SMC CS6/7 N/U
SCC: 0x118 0x01CD1011 ;CFGRW17 - HDLCD PLL external bypass
;SCC: 0x700 0x00320003 ;CFGRW48 - [25:24]Boot CPU [28]Boot Cluster (default CA7_0)
SCC: 0x700 $SCC_0x700 ;CFGRW48 - [25:24]Boot CPU [28]Boot Cluster (default CA7_0)
; Bootmon configuration:
; [15]: A7 Event stream generation (default: disabled)
; [14]: A15 Event stream generation (default: disabled)
; [13]: Power down the non-boot cluster (default: disabled)
; [12]: Use per-cpu mailboxes for power management (default: disabled)
; [11]: A15 executes WFEs as nops (default: disabled)
SCC: 0x400 0x33330c00 ;CFGREG41 - A15 configuration register 0 (Default 0x33330c80)
; [29:28] SPNIDEN
; [25:24] SPIDEN
; [21:20] NIDEN
; [17:16] DBGEN
; [13:12] CFGTE
; [9:8] VINITHI_CORE
; [7] IMINLN
; [3:0] CLUSTER_ID
;Set the CPU clock PLLs
SCC: 0x120 0x022F1010 ;CFGRW19 - CA15_0 PLL control - 20:1 (lock OFF)
SCC: 0x124 0x0011710D ;CFGRW20 - CA15_0 PLL value
SCC: 0x128 0x022F1010 ;CFGRW21 - CA15_1 PLL control - 20:1 (lock OFF)
SCC: 0x12C 0x0011710D ;CFGRW22 - CA15_1 PLL value
SCC: 0x130 0x022F1010 ;CFGRW23 - CA7_0 PLL control - 20:1 (lock OFF)
SCC: 0x134 0x0011710D ;CFGRW24 - CA7_0 PLL value
SCC: 0x138 0x022F1010 ;CFGRW25 - CA7_1 PLL control - 20:1 (lock OFF)
SCC: 0x13C 0x0011710D ;CFGRW26 - CA7_1 PLL value
;Power management interface
SCC: 0xC00 0x00000005 ;Control: [0]PMI_EN [1]DBG_EN [2]SPC_SYSCFG
SCC: 0xC04 0x060E0356 ;Latency in uS max: [15:0]DVFS [31:16]PWRUP
SCC: 0xC08 0x00000000 ;Reserved
SCC: 0xC0C 0x00000000 ;Reserved
;CA15 performance values: 0xVVVFFFFF
SCC: 0xC10 0x384061A8 ;CA15 PERFVAL0, 900mV, 20,000*20= 500MHz
SCC: 0xC14 0x38407530 ;CA15 PERFVAL1, 900mV, 25,000*20= 600MHz
SCC: 0xC18 0x384088B8 ;CA15 PERFVAL2, 900mV, 30,000*20= 700MHz
SCC: 0xC1C 0x38409C40 ;CA15 PERFVAL3, 900mV, 35,000*20= 800MHz
SCC: 0xC20 0x3840AFC8 ;CA15 PERFVAL4, 900mV, 40,000*20= 900MHz
SCC: 0xC24 0x3840C350 ;CA15 PERFVAL5, 900mV, 45,000*20=1000MHz
SCC: 0xC28 0x3CF0D6D8 ;CA15 PERFVAL6, 975mV, 50,000*20=1100MHz
SCC: 0xC2C 0x41A0EA60 ;CA15 PERFVAL7, 1050mV, 55,000*20=1200MHz
;CA7 performance values: 0xVVVFFFFF
SCC: 0xC30 0x3840445C ;CA7 PERFVAL0, 900mV, 10,000*20= 350MHz
SCC: 0xC34 0x38404E20 ;CA7 PERFVAL1, 900mV, 15,000*20= 400MHz
SCC: 0xC38 0x384061A8 ;CA7 PERFVAL2, 900mV, 20,000*20= 500MHz
SCC: 0xC3C 0x38407530 ;CA7 PERFVAL3, 900mV, 25,000*20= 600MHz
SCC: 0xC40 0x384088B8 ;CA7 PERFVAL4, 900mV, 30,000*20= 700MHz
SCC: 0xC44 0x38409C40 ;CA7 PERFVAL5, 900mV, 35,000*20= 800MHz
SCC: 0xC48 0x3CF0AFC8 ;CA7 PERFVAL6, 975mV, 40,000*20= 900MHz
SCC: 0xC4C 0x41A0C350 ;CA7 PERFVAL7, 1050mV, 45,000*20=1000MHz


@@ -1,25 +0,0 @@
TITLE: Versatile Express Images Configuration File
[IMAGES]
TOTALIMAGES: 4 ;Number of Images (Max : 32)
NOR0UPDATE: AUTO ;Image Update:NONE/AUTO/FORCE
NOR0ADDRESS: BOOT ;Image Flash Address
NOR0FILE: \SOFTWARE\$bm_image ;Image File Name
NOR1UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR1ADDRESS: 0x00000000 ;Image Flash Address
NOR1FILE: \SOFTWARE\kern_iks.bin ;Image File Name
NOR1LOAD: 0x80008000
NOR1ENTRY: 0x80008000
NOR2UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR2ADDRESS: 0x00000000 ;Image Flash Address
NOR2FILE: \SOFTWARE\iks.dtb ;Image File Name for booting in A7 cluster
NOR2LOAD: 0x84000000
NOR2ENTRY: 0x84000000
NOR3UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR3ADDRESS: 0x00000000 ;Image Flash Address
NOR3FILE: \SOFTWARE\init_iks.bin ;Image File Name
NOR3LOAD: 0x90100000
NOR3ENTRY: 0x90100000


@@ -1,55 +0,0 @@
TITLE: Versatile Express Images Configuration File
[IMAGES]
TOTALIMAGES: 9 ;Number of Images (Max: 32)
NOR0UPDATE: AUTO ;Image Update:NONE/AUTO/FORCE
NOR0ADDRESS: BOOT ;Image Flash Address
NOR0FILE: \SOFTWARE\$bm_image ;Image File Name
NOR1UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR1ADDRESS: 0x0E000000 ;Image Flash Address
NOR1FILE: \SOFTWARE\kern_mp.bin ;Image File Name
NOR1LOAD: 0x80008000
NOR1ENTRY: 0x80008000
NOR2UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR2ADDRESS: 0x0E800000 ;Image Flash Address
NOR2FILE: \SOFTWARE\mp_a7.dtb ;Image File Name for booting in A7 cluster
NOR2LOAD: 0x84000000
NOR2ENTRY: 0x84000000
NOR3UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR3ADDRESS: 0x0E900000 ;Image Flash Address
NOR3FILE: \SOFTWARE\mp_a15.dtb ;Image File Name
NOR3LOAD: 0x84000000
NOR3ENTRY: 0x84000000
NOR4UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR4ADDRESS: 0x0EA00000 ;Image Flash Address
NOR4FILE: \SOFTWARE\mp_a7bc.dtb ;Image File Name
NOR4LOAD: 0x84000000
NOR4ENTRY: 0x84000000
NOR5UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR5ADDRESS: 0x0EB00000 ;Image Flash Address
NOR5FILE: \SOFTWARE\mp_a15bc.dtb ;Image File Name
NOR5LOAD: 0x84000000
NOR5ENTRY: 0x84000000
NOR6UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR6ADDRESS: 0x0EC00000 ;Image Flash Address
NOR6FILE: \SOFTWARE\init_mp.bin ;Image File Name
NOR6LOAD: 0x85000000
NOR6ENTRY: 0x85000000
NOR7UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR7ADDRESS: 0x0C000000 ;Image Flash Address
NOR7FILE: \SOFTWARE\tc2_sec.bin ;Image File Name
NOR7LOAD: 0
NOR7ENTRY: 0
NOR8UPDATE: AUTO ;IMAGE UPDATE:NONE/AUTO/FORCE
NOR8ADDRESS: 0x0D000000 ;Image Flash Address
NOR8FILE: \SOFTWARE\tc2_uefi.bin ;Image File Name
NOR8LOAD: 0
NOR8ENTRY: 0


@@ -1,16 +0,0 @@
# Copyright 2014-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


@@ -1,37 +0,0 @@
# Copyright 2014-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from wlauto import LinuxDevice, Parameter
class GenericDevice(LinuxDevice):
name = 'generic_linux'
description = """
Generic Linux device. Use this if you do not have a device file for
your device.
This implements the minimum functionality that should be supported by
all Linux devices.
"""
abi = 'armeabi'
has_gpu = True
parameters = [
Parameter('core_names', default=[], override=True),
Parameter('core_clusters', default=[], override=True),
]


@@ -1,35 +0,0 @@
# Copyright 2014-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from wlauto import LinuxDevice, Parameter
class OdroidXU3LinuxDevice(LinuxDevice):
name = "odroidxu3_linux"
description = 'HardKernel Odroid XU3 development board (Ubuntu image).'
core_modules = [
'odroidxu3-fan',
]
parameters = [
Parameter('core_names', default=['a7', 'a7', 'a7', 'a7', 'a15', 'a15', 'a15', 'a15'], override=True),
Parameter('core_clusters', default=[0, 0, 0, 0, 1, 1, 1, 1], override=True),
]
abi = 'armeabi'


@@ -141,3 +141,20 @@ class WorkerThreadError(WAError):
message = 'Exception of type {} occurred on thread {}:\n'.format(orig_name, thread)
message += '{}\n{}: {}'.format(get_traceback(self.exc_info), orig_name, orig)
super(WorkerThreadError, self).__init__(message)
class SerializerSyntaxError(Exception):
"""
Error encountered while parsing a serialized structure from a file handle.
"""
def __init__(self, message, line=None, column=None):
super(SerializerSyntaxError, self).__init__(message)
self.line = line
self.column = column
def __str__(self):
linestring = ' on line {}'.format(self.line) if self.line else ''
colstring = ' in column {}'.format(self.column) if self.column else ''
message = 'Syntax Error{}: {}'
return message.format(''.join([linestring, colstring]), self.message)
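
A brief sketch of how the new exception renders; the message and positions below are made up.

# Illustrative only.
try:
    raise SerializerSyntaxError('unexpected token', line=3, column=7)
except SerializerSyntaxError as e:
    print e  # -> Syntax Error on line 3 in column 7: unexpected token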




@@ -14,4 +14,4 @@
#
__version__ = '1.0.1'
__version__ = '1.0.5'


@@ -14,7 +14,7 @@
#
# pylint: disable=E1101,E1103
# pylint: disable=E1101,E1103,wrong-import-position
import os
import sys
@@ -128,10 +128,11 @@ class CommandExecutorProtocol(Protocol):
response.message = 'No ports were returned.'
def processDevicesResponse(self, response):
if 'devices' not in response.data:
self.errorOut('Response did not contain devices data: {} ({}).'.format(response, response.data))
ports = response.data['devices']
response.data = ports
if response.status == Status.OK:
if 'devices' not in response.data:
self.errorOut('Response did not contain devices data: {} ({}).'.format(response, response.data))
devices = response.data['devices']
response.data = devices
def sendPullRequest(self, port_id):
self.sendRequest('pull', port_id=port_id)
@@ -167,7 +168,7 @@ class CommandExecutorProtocol(Protocol):
class CommandExecutorFactory(ClientFactory):
protocol = CommandExecutorProtocol
wait_delay = 1
def __init__(self, config, command, timeout=10, retries=1):
self.config = config
@@ -186,19 +187,19 @@ class CommandExecutorFactory(ClientFactory):
os.makedirs(self.output_directory)
def buildProtocol(self, addr):
protocol = CommandExecutorProtocol(self.command, self.timeout, self.retries)
protocol.factory = self
return protocol
def initiateFileTransfer(self, filename, port):
log.debug('Downloading {} from port {}'.format(filename, port))
filepath = os.path.join(self.output_directory, filename)
session = FileReceiverFactory(filepath, self)
connector = reactor.connectTCP(self.config.host, port, session)
self.transfers_in_progress[session] = connector
def transferComplete(self, session):
connector = self.transfers_in_progress[session]
log.debug('Transfer on port {} complete.'.format(connector.port))
del self.transfers_in_progress[session]
@@ -321,7 +322,7 @@ def execute_command(server_config, command, **kwargs):
# so in the long run, we need to do this properly and get the FDs
# from the reactor.
after_fds = _get_open_fds()
for fd in (after_fds - before_fds):
for fd in after_fds - before_fds:
try:
os.close(int(fd[1:]))
except OSError:
@@ -338,8 +339,7 @@ def _get_open_fds():
if os.name == 'posix':
import subprocess
pid = os.getpid()
procs = subprocess.check_output(
[ "lsof", '-w', '-Ff', "-p", str( pid ) ] )
procs = subprocess.check_output(["lsof", '-w', '-Ff', "-p", str(pid)])
return set(procs.split())
else:
# TODO: Implement the Windows equivalent.
@@ -362,7 +362,7 @@ def run_send_command():
else:
log.start_logging('INFO', fmt='%(levelname)-8s %(message)s')
if args.command == 'configure':
args.device_config.validate()
command = Command(args.command, config=args.device_config)
elif args.command == 'get_data':


@@ -23,7 +23,7 @@ class Serializer(json.JSONEncoder):
def default(self, o): # pylint: disable=E0202
if isinstance(o, Serializable):
return o.serialize()
if isinstance(o, Enum.EnumEntry):
if isinstance(o, EnumEntry):
return o.name
return json.JSONEncoder.default(self, o)
@@ -58,6 +58,18 @@ class DaqServerResponse(Serializable):
return '{} {}'.format(self.status, self.message or '')
class EnumEntry(object):
def __init__(self, name):
self.name = name
def __str__(self):
return self.name
def __cmp__(self, other):
return cmp(self.name, str(other))
class Enum(object):
"""
Assuming MyEnum = Enum('A', 'B'),
@@ -74,17 +86,9 @@ class Enum(object):
"""
class EnumEntry(object):
def __init__(self, name):
self.name = name
def __str__(self):
return self.name
def __cmp__(self, other):
return cmp(self.name, str(other))
def __init__(self, *args):
for a in args:
setattr(self, a, self.EnumEntry(a))
setattr(self, a, EnumEntry(a))
def __call__(self, value):
if value not in self.__dict__:
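
Since the hunk above moves EnumEntry to module level, the JSON Serializer earlier in this file can now recognise entries from any Enum instance. A short sketch, using hypothetical entry names:

# Illustrative only.
Status = Enum('OK', 'ERROR')
assert isinstance(Status.OK, EnumEntry)   # now checkable at module level
assert Status.OK == 'OK'                  # __cmp__ compares by name
assert str(Status.ERROR) == 'ERROR'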

View File

@@ -44,9 +44,9 @@ class DeviceConfiguration(Serializable):
def __init__(self, **kwargs): # pylint: disable=W0231
try:
self.device_id = kwargs.pop('device_id') or self.default_device_id
self.v_range = float(kwargs.pop('v_range') or self.default_v_range)
self.dv_range = float(kwargs.pop('dv_range') or self.default_dv_range)
self.sampling_rate = int(kwargs.pop('sampling_rate') or self.default_sampling_rate)
self.resistor_values = kwargs.pop('resistor_values') or []
self.channel_map = kwargs.pop('channel_map') or self.default_channel_map
self.labels = (kwargs.pop('labels') or
@@ -59,7 +59,7 @@ class DeviceConfiguration(Serializable):
def validate(self):
if not self.number_of_ports:
raise ConfigurationError('No resistor values were specified.')
if not len(self.resistor_values) == len(self.labels):
if len(self.resistor_values) != len(self.labels):
message = 'The number of resistors ({}) does not match the number of labels ({})'
raise ConfigurationError(message.format(len(self.resistor_values), len(self.labels)))
@@ -98,7 +98,7 @@ class UpdateDeviceConfig(argparse.Action):
setting = option_string.strip('-').replace('-', '_')
if setting not in DeviceConfiguration.valid_settings:
raise ConfigurationError('Unknown option: {}'.format(option_string))
setattr(namespace._device_config, setting, values)
setattr(namespace._device_config, setting, values) # pylint: disable=protected-access
class UpdateServerConfig(argparse.Action):
@@ -151,4 +151,3 @@ def get_config_parser(server=True, device=True):
parser.add_argument('--host', action=UpdateServerConfig)
parser.add_argument('--port', action=UpdateServerConfig, type=int)
return parser


@@ -42,7 +42,7 @@ Port 0
:sampling_rate: The rate at which DAQ takes a sample from each channel.
"""
# pylint: disable=F0401,E1101,W0621
# pylint: disable=F0401,E1101,W0621,no-name-in-module,wrong-import-position,wrong-import-order
import os
import sys
import csv
@@ -52,23 +52,41 @@ from Queue import Queue, Empty
import numpy
from PyDAQmx import Task
from PyDAQmx.DAQmxFunctions import DAQmxGetSysDevNames
from PyDAQmx import Task, DAQError
try:
from PyDAQmx.DAQmxFunctions import DAQmxGetSysDevNames
CAN_ENUMERATE_DEVICES = True
except ImportError: # earlier driver version
DAQmxGetSysDevNames = None
CAN_ENUMERATE_DEVICES = False
from PyDAQmx.DAQmxTypes import int32, byref, create_string_buffer
from PyDAQmx.DAQmxConstants import (DAQmx_Val_Diff, DAQmx_Val_Volts, DAQmx_Val_GroupByScanNumber, DAQmx_Val_Auto,
DAQmx_Val_Acquired_Into_Buffer, DAQmx_Val_Rising, DAQmx_Val_ContSamps)
DAQmx_Val_Rising, DAQmx_Val_ContSamps)
try:
from PyDAQmx.DAQmxConstants import DAQmx_Val_Acquired_Into_Buffer
callbacks_supported = True
except ImportError: # earlier driver version
DAQmx_Val_Acquired_Into_Buffer = None
callbacks_supported = False
from daqpower import log
def list_available_devices():
"""Returns the list of DAQ devices visible to the driver."""
bufsize = 2048 # Should be plenty for all but the most pathological of situations.
buf = create_string_buffer('\000' * bufsize)
DAQmxGetSysDevNames(buf, bufsize)
return buf.value.split(',')
if DAQmxGetSysDevNames:
bufsize = 2048 # Should be plenty for all but the most pathological of situations.
buf = create_string_buffer('\000' * bufsize)
DAQmxGetSysDevNames(buf, bufsize)
return buf.value.split(',')
else:
return []
class ReadSamplesTask(Task):
class ReadSamplesBaseTask(Task):
def __init__(self, config, consumer):
Task.__init__(self)
@@ -93,11 +111,27 @@ class ReadSamplesTask(Task):
DAQmx_Val_Rising,
DAQmx_Val_ContSamps,
self.config.sampling_rate)
class ReadSamplesCallbackTask(ReadSamplesBaseTask):
"""
More recent versions of the driver (on Windows) support callbacks.
"""
def __init__(self, config, consumer):
ReadSamplesBaseTask.__init__(self, config, consumer)
# register callbacks
self.AutoRegisterEveryNSamplesEvent(DAQmx_Val_Acquired_Into_Buffer, self.config.sampling_rate // 2, 0)
self.AutoRegisterDoneEvent(0)
def EveryNCallback(self):
# Note to future self: do NOT try to "optimize" this by re-using the same array and just
# zeroing it out each time. The writes happen asynchronously, and if you zero it out too soon,
# you'll see a whole bunch of 0.0's in the output. If you want to go down that route, you'll need
# to cycle through several arrays and have the code that's actually doing the writing zero them out
# and mark them as available to be used by this call. But, honestly, numpy array allocation does not
# appear to be a bottleneck at the moment, so the current solution is "good enough".
samples_buffer = numpy.zeros((self.sample_buffer_size,), dtype=numpy.float64)
self.ReadAnalogF64(DAQmx_Val_Auto, 0.0, DAQmx_Val_GroupByScanNumber, samples_buffer,
self.sample_buffer_size, byref(self.samples_read), None)
@@ -107,6 +141,51 @@ class ReadSamplesTask(Task):
return 0 # The function should return an integer
class ReadSamplesThreadedTask(ReadSamplesBaseTask):
"""
Earlier versions of the driver (on CentOS) do not support callbacks, so we need
to create a thread to poll the buffer periodically.
"""
def __init__(self, config, consumer):
ReadSamplesBaseTask.__init__(self, config, consumer)
self.poller = DaqPoller(self)
def StartTask(self):
ReadSamplesBaseTask.StartTask(self)
self.poller.start()
def StopTask(self):
self.poller.stop()
ReadSamplesBaseTask.StopTask(self)
class DaqPoller(threading.Thread):
def __init__(self, task, wait_period=1):
super(DaqPoller, self).__init__()
self.task = task
self.wait_period = wait_period
self._stop_signal = threading.Event()
self.samples_buffer = numpy.zeros((self.task.sample_buffer_size,), dtype=numpy.float64)
def run(self):
while not self._stop_signal.is_set():
# Note to future self: see the comment inside EveryNCallback() above
samples_buffer = numpy.zeros((self.task.sample_buffer_size,), dtype=numpy.float64)
try:
self.task.ReadAnalogF64(DAQmx_Val_Auto, self.wait_period, DAQmx_Val_GroupByScanNumber, samples_buffer,
self.task.sample_buffer_size, byref(self.task.samples_read), None)
except DAQError:
pass
self.task.consumer.write((samples_buffer, self.task.samples_read.value))
def stop(self):
self._stop_signal.set()
self.join()
class AsyncWriter(threading.Thread):
def __init__(self, wait_period=1):
@@ -220,7 +299,10 @@ class DaqRunner(object):
def __init__(self, config, output_directory):
self.config = config
self.processor = SampleProcessor(config.resistor_values, output_directory, config.labels)
self.task = ReadSamplesTask(config, self.processor)
if callbacks_supported:
self.task = ReadSamplesCallbackTask(config, self.processor)
else:
self.task = ReadSamplesThreadedTask(config, self.processor) # pylint: disable=redefined-variable-type
self.is_running = False
def start(self):
@@ -252,13 +334,13 @@ if __name__ == '__main__':
resistor_values = [0.005]
labels = ['PORT_0']
dev_config = DeviceConfig('Dev1', channel_map, resistor_values, 2.5, 0.2, 10000, len(resistor_values), labels)
if not len(sys.argv) == 3:
if len(sys.argv) != 3:
print 'Usage: {} OUTDIR DURATION'.format(os.path.basename(__file__))
sys.exit(1)
output_directory = sys.argv[1]
duration = float(sys.argv[2])
print "Avialable devices:", list_availabe_devices()
print "Avialable devices:", list_available_devices()
runner = DaqRunner(dev_config, output_directory)
runner.start()
time.sleep(duration)


@@ -29,6 +29,11 @@ critical = lambda x: log.msg(x, logLevel=logging.CRITICAL)
class CustomLoggingObserver(log.PythonLoggingObserver):
def __init__(self, loggerName="twisted"):
super(CustomLoggingObserver, self).__init__(loggerName)
if hasattr(self, '_newObserver'): # new versions of Twisted
self.logger = self._newObserver.logger # pylint: disable=no-member
def emit(self, eventDict):
if 'logLevel' in eventDict:
level = eventDict['logLevel']


@@ -14,15 +14,15 @@
#
# pylint: disable=E1101,W0613
# pylint: disable=E1101,W0613,wrong-import-position
from __future__ import division
import os
import sys
import socket
import argparse
import shutil
import socket
import time
from datetime import datetime
from datetime import datetime, timedelta
from zope.interface import implements
from twisted.protocols.basic import LineReceiver
@@ -30,18 +30,20 @@ from twisted.internet.protocol import Factory, Protocol
from twisted.internet import reactor, interfaces
from twisted.internet.error import ConnectionLost, ConnectionDone
if __name__ == "__main__": # for debugging
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from daqpower import log
from daqpower.config import DeviceConfiguration
from daqpower.common import DaqServerRequest, DaqServerResponse, Status
try:
from daqpower.daq import DaqRunner, list_available_devices
except ImportError:
from daqpower.daq import DaqRunner, list_available_devices, CAN_ENUMERATE_DEVICES
__import_error = None
except ImportError as e:
# May be using debug mode.
__import_error = e
DaqRunner = None
list_available_devices = lambda : ['Dev1']
list_available_devices = lambda: ['Dev1']
class ProtocolError(Exception):
@@ -64,11 +66,11 @@ class DummyDaqRunner(object):
self.is_running = False
def start(self):
import csv, random
import csv, random # pylint: disable=multiple-imports
log.info('runner started')
for i in xrange(self.config.number_of_ports):
rows = [['power', 'voltage']] + [[random.gauss(1.0, 1.0), random.gauss(1.0, 0.1)]
for j in xrange(self.num_rows)]
for _ in xrange(self.num_rows)]
with open(self.get_port_file_path(self.config.labels[i]), 'wb') as wfh:
writer = csv.writer(wfh)
writer.writerows(rows)
@@ -139,7 +141,7 @@ class DaqServer(object):
else:
raise ProtocolError('Stop called before a session has been configured.')
def list_devices(self):
def list_devices(self): # pylint: disable=no-self-use
return list_available_devices()
def list_ports(self):
@@ -205,7 +207,10 @@ class DaqControlProtocol(LineReceiver): # pylint: disable=W0223
try:
request = DaqServerRequest.deserialize(line)
except Exception, e: # pylint: disable=W0703
self.sendError('Received bad request ({}: {})'.format(e.__class__.__name__, e.message))
# PyDAQmx exceptions use "mess" rather than the standard "message"
# to pass errors...
message = getattr(e, 'mess', e.message)
self.sendError('Received bad request ({}: {})'.format(e.__class__.__name__, message))
else:
self.processRequest(request)
@@ -230,7 +235,8 @@ class DaqControlProtocol(LineReceiver): # pylint: disable=W0223
else:
self.sendError('Received unknown command: {}'.format(request.command))
except Exception, e: # pylint: disable=W0703
self.sendError('{}: {}'.format(e.__class__.__name__, e.message))
message = getattr(e, 'mess', e.message)
self.sendError('{}: {}'.format(e.__class__.__name__, message))
def configure(self, request):
if 'config' in request.params:
@@ -269,8 +275,12 @@ class DaqControlProtocol(LineReceiver): # pylint: disable=W0223
self.sendError('Invalid pull request; port id not provided.')
def list_devices(self, request):
devices = self.daq_server.list_devices()
self.sendResponse(Status.OK, data={'devices': devices})
if CAN_ENUMERATE_DEVICES:
devices = self.daq_server.list_devices()
self.sendResponse(Status.OK, data={'devices': devices})
else:
message = "Server does not support DAQ device enumration"
self.sendResponse(Status.OKISH, message=message)
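On the client side, OKISH indicates the request was understood but could not be fully honoured. A hedged sketch of how a caller might handle the two statuses (assuming a DaqServerResponse-like object with status, message and data attributes, and the Status import shown above):

# Sketch only; not part of the server itself.
def devices_from_response(response):
    if response.status == Status.OK:
        return response.data['devices']
    elif response.status == Status.OKISH:
        # server is up but cannot enumerate devices; treat as empty
        log.info('DAQ server: {}'.format(response.message))
        return []
    else:
        raise ProtocolError(response.message)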
def list_ports(self, request):
port_labels = self.daq_server.list_ports()
@@ -303,7 +313,7 @@ class DaqControlProtocol(LineReceiver): # pylint: disable=W0223
def sendLine(self, line):
log.info('Responding: {}'.format(line))
LineReceiver.sendLine(self, line.replace('\r\n',''))
LineReceiver.sendLine(self, line.replace('\r\n', ''))
def _initiate_file_transfer(self, filepath):
sender_factory = FileSenderFactory(filepath, self.factory)
@@ -318,14 +328,17 @@ class DaqFactory(Factory):
check_alive_period = 5 * 60
max_transfer_lifetime = 30 * 60
def __init__(self, server):
def __init__(self, server, cleanup_period=24 * 60 * 60, cleanup_after_days=5):
self.server = server
self.cleanup_period = cleanup_period
self.cleanup_threshold = timedelta(cleanup_after_days)
self.transfer_sessions = {}
def buildProtocol(self, addr):
proto = DaqControlProtocol(self.server)
proto.factory = self
reactor.callLater(self.check_alive_period, self.pulse)
reactor.callLater(self.cleanup_period, self.perform_cleanup)
return proto
def clientConnectionLost(self, connector, reason):
@@ -355,6 +368,25 @@ class DaqFactory(Factory):
if self.transfer_sessions:
reactor.callLater(self.check_alive_period, self.pulse)
def perform_cleanup(self):
"""
Clean up old uncollected data files to recover disk space.
"""
log.msg('Performing cleanup of the output directory...')
base_directory = self.server.base_output_directory
current_time = datetime.now()
for entry in os.listdir(base_directory):
entry_path = os.path.join(base_directory, entry)
entry_ctime = datetime.fromtimestamp(os.path.getctime(entry_path))
existence_time = current_time - entry_ctime
if existence_time > self.cleanup_threshold:
log.debug('Removing {} (existed for {})'.format(entry, existence_time))
shutil.rmtree(entry_path)
else:
log.debug('Keeping {} (existed for {})'.format(entry, existence_time))
log.msg('Cleanup complete.')
def __str__(self):
return '<DAQ {}>'.format(self.server)
@@ -453,6 +485,13 @@ def run_server():
parser.add_argument('-d', '--directory', help='Working directory', metavar='DIR', default='.')
parser.add_argument('-p', '--port', help='port the server will listen on.',
metavar='PORT', default=45677, type=int)
parser.add_argument('-c', '--cleanup-after', type=int, default=5, metavar='DAYS',
help="""
Server will periodically clean up data files that are older than the number of
days specified by this parameter.
""")
parser.add_argument('--cleanup-period', type=int, default=1, metavar='DAYS',
help='Specifies how often the server will attempt to clean up old files.')
parser.add_argument('--debug', help='Run in debug mode (no DAQ connected).',
action='store_true', default=False)
parser.add_argument('--verbose', help='Produce verbose output.', action='store_true', default=False)
@@ -463,15 +502,22 @@ def run_server():
DaqRunner = DummyDaqRunner
else:
if not DaqRunner:
raise ImportError('DaqRunner')
raise __import_error # pylint: disable=raising-bad-type
if args.verbose or args.debug:
log.start_logging('DEBUG')
else:
log.start_logging('INFO')
# days to seconds
cleanup_period = args.cleanup_period * 24 * 60 * 60
server = DaqServer(args.directory)
reactor.listenTCP(args.port, DaqFactory(server)).getHost()
hostname = socket.gethostbyname(socket.gethostname())
factory = DaqFactory(server, cleanup_period, args.cleanup_after)
reactor.listenTCP(args.port, factory).getHost()
try:
hostname = socket.gethostbyname(socket.gethostname())
except socket.gaierror:
hostname = 'localhost'
log.info('Listening on {}:{}'.format(hostname, args.port))
reactor.run()

View File

@@ -14,6 +14,7 @@
*/
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
@@ -22,6 +23,8 @@
#include <limits.h>
#include <linux/input.h>
#include <sys/stat.h>
#include <signal.h>
#include <ctype.h>
#ifdef ANDROID
#include <android/log.h>
@@ -65,38 +68,39 @@ typedef enum {
typedef struct {
revent_mode_t mode;
int record_time;
int device_number;
int32_t record_time;
int32_t device_number;
char *file;
} revent_args_t;
typedef struct {
size_t id_pathc; /* Count of total paths so far. */
int32_t id_pathc; /* Count of total paths so far. */
char id_pathv[INPDEV_MAX_DEVICES][INPDEV_MAX_PATH]; /* List of paths matching pattern. */
} inpdev_t;
typedef struct {
int dev_idx;
int32_t dev_idx;
int32_t _padding;
struct input_event event;
} replay_event_t;
typedef struct {
int num_fds;
int num_events;
int32_t num_fds;
int32_t num_events;
int *fds;
replay_event_t *events;
} replay_buffer_t;
bool_t verbose = FALSE;
bool_t wait_for_stdin = TRUE;
bool_t is_numeric(char *string)
{
int len = strlen(string);
int i = 0;
while(i < len)
while(i < len)
{
if(!isdigit(string[i]))
return FALSE;
@@ -113,13 +117,13 @@ off_t get_file_size(const char *filename) {
return st.st_size;
die("Cannot determine size of %s: %s\n", filename, strerror(errno));
}
}
int inpdev_init(inpdev_t **inpdev, int devid)
{
int i;
int32_t i;
int fd;
int num_devices;
int32_t num_devices;
*inpdev = malloc(sizeof(inpdev_t));
(*inpdev)->id_pathc = 0;
@@ -193,8 +197,8 @@ void dump(const char *logfile)
int *fds = malloc(sizeof(int)*nfds);
if (!fds) die("out of memory\n");
int len;
int i;
int32_t len;
int32_t i;
char buf[INPDEV_MAX_PATH];
inpdev_t *inpdev = malloc(sizeof(inpdev_t));
@@ -213,7 +217,7 @@ void dump(const char *logfile)
struct input_event ev;
int count = 0;
while(1) {
int idx;
int32_t idx;
rb = read(fdin, &idx, sizeof(idx));
if (rb != sizeof(idx)) break;
rb = read(fdin, &ev, sizeof(ev));
@@ -240,27 +244,26 @@ int replay_buffer_init(replay_buffer_t **buffer, const char *logfile)
die("out of memory\n");
int fdin = open(logfile, O_RDONLY);
if (fdin < 0)
if (fdin < 0)
die("Could not open eventlog %s\n", logfile);
size_t rb = read(fdin, &(buff->num_fds), sizeof(buff->num_fds));
if (rb!=sizeof(buff->num_fds))
if (rb!=sizeof(buff->num_fds))
die("problems reading eventlog\n");
buff->fds = malloc(sizeof(int) * buff->num_fds);
if (!buff->fds)
if (!buff->fds)
die("out of memory\n");
int len, i;
int32_t len, i;
char path_buff[256]; // should be more than enough
for (i = 0; i < buff->num_fds; i++) {
memset(path_buff, 0, sizeof(path_buff));
rb = read(fdin, &len, sizeof(len));
if (rb!=sizeof(len))
if (rb!=sizeof(len))
die("problems reading eventlog\n");
rb = read(fdin, &path_buff[0], len);
if (rb != len)
if (rb != len)
die("problems reading eventlog\n");
buff->fds[i] = open(path_buff, O_WRONLY | O_NDELAY);
@@ -270,20 +273,20 @@ int replay_buffer_init(replay_buffer_t **buffer, const char *logfile)
struct timeval start_time;
replay_event_t rep_ev;
buff->num_events = 0;
i = 0;
while(1) {
int idx;
rb = read(fdin, &rep_ev, sizeof(rep_ev));
if (rb < (int)sizeof(rep_ev))
if (rb < (int)sizeof(rep_ev))
break;
if (buff->num_events == 0) {
if (i == 0) {
start_time = rep_ev.event.time;
}
timersub(&(rep_ev.event.time), &start_time, &(rep_ev.event.time));
memcpy(&(buff->events[buff->num_events]), &rep_ev, sizeof(rep_ev));
buff->num_events++;
memcpy(&(buff->events[i]), &rep_ev, sizeof(rep_ev));
i++;
}
buff->num_events = i - 1;
close(fdin);
return 0;
}
@@ -298,7 +301,7 @@ int replay_buffer_close(replay_buffer_t *buff)
int replay_buffer_play(replay_buffer_t *buff)
{
int i = 0, rb;
int32_t i = 0, rb;
struct timeval start_time, now, desired_time, last_event_delta, delta;
memset(&last_event_delta, 0, sizeof(struct timeval));
gettimeofday(&start_time, NULL);
@@ -316,11 +319,11 @@ int replay_buffer_play(replay_buffer_t *buff)
usleep(d);
}
int idx = (buff->events[i]).dev_idx;
int32_t idx = (buff->events[i]).dev_idx;
struct input_event ev = (buff->events[i]).event;
while((i < buff->num_events) && !timercmp(&ev.time, &last_event_delta, !=)) {
rb = write(buff->fds[idx], &ev, sizeof(ev));
if (rb!=sizeof(ev))
if (rb!=sizeof(ev))
die("problems writing\n");
dprintf("replayed event: type %d code %d value %d\n", ev.type, ev.code, ev.value);
@@ -346,97 +349,6 @@ void replay(const char *logfile)
replay_buffer_close(replay_buffer);
}
void record(inpdev_t *inpdev, int delay, const char *logfile)
{
fd_set readfds;
FILE* fdout;
struct input_event ev;
int i;
int maxfd = 0;
int keydev=0;
int* fds = malloc(sizeof(int)*inpdev->id_pathc);
if (!fds) die("out of memory\n");
fdout = fopen(logfile, "wb");
if (!fdout) die("Could not open eventlog %s\n", logfile);
fwrite(&inpdev->id_pathc, sizeof(inpdev->id_pathc), 1, fdout);
for (i=0; i<inpdev->id_pathc; i++) {
int len = strlen(inpdev->id_pathv[i]);
fwrite(&len, sizeof(len), 1, fdout);
fwrite(inpdev->id_pathv[i], len, 1, fdout);
}
for (i=0; i < inpdev->id_pathc; i++)
{
fds[i] = open(inpdev->id_pathv[i], O_RDONLY);
if (fds[i]>maxfd) maxfd = fds[i];
dprintf("opened %s with %d\n", inpdev->id_pathv[i], fds[i]);
if (fds[i]<0) die("could not open \%s\n", inpdev->id_pathv[i]);
}
int count =0;
struct timeval tout;
while(1)
{
FD_ZERO(&readfds);
FD_SET(STDIN_FILENO, &readfds);
for (i=0; i < inpdev->id_pathc; i++)
FD_SET(fds[i], &readfds);
/* wait for input */
tout.tv_sec = delay;
tout.tv_usec = 0;
int r = select(maxfd+1, &readfds, NULL, NULL, &tout);
/* dprintf("got %d (err %d)\n", r, errno); */
if (!r) break;
if (FD_ISSET(STDIN_FILENO, &readfds)) {
// in this case the key down for the return key will be recorded
// so we need to up the key up
memset(&ev, 0, sizeof(ev));
ev.type = EV_KEY;
ev.code = KEY_ENTER;
ev.value = 0;
gettimeofday(&ev.time, NULL);
fwrite(&keydev, sizeof(keydev), 1, fdout);
fwrite(&ev, sizeof(ev), 1, fdout);
memset(&ev, 0, sizeof(ev)); // SYN
gettimeofday(&ev.time, NULL);
fwrite(&keydev, sizeof(keydev), 1, fdout);
fwrite(&ev, sizeof(ev), 1, fdout);
dprintf("added fake return exiting...\n");
break;
}
for (i=0; i < inpdev->id_pathc; i++)
{
if (FD_ISSET(fds[i], &readfds))
{
dprintf("Got event from %s\n", inpdev->id_pathv[i]);
memset(&ev, 0, sizeof(ev));
size_t rb = read(fds[i], (void*) &ev, sizeof(ev));
dprintf("%d event: type %d code %d value %d\n",
(unsigned int)rb, ev.type, ev.code, ev.value);
if (ev.type == EV_KEY && ev.code == KEY_ENTER && ev.value == 1)
keydev = i;
fwrite(&i, sizeof(i), 1, fdout);
fwrite(&ev, sizeof(ev), 1, fdout);
count++;
}
}
}
for (i=0; i < inpdev->id_pathc; i++)
{
close(fds[i]);
}
fclose(fdout);
free(fds);
dprintf("Recorded %d events\n", count);
}
void usage()
{
printf("usage:\n revent [-h] [-v] COMMAND [OPTIONS] \n"
@@ -485,7 +397,7 @@ void revent_args_init(revent_args_t **rargs, int argc, char** argv)
revent_args->file = NULL;
int opt;
while ((opt = getopt(argc, argv, "ht:d:v")) != -1)
while ((opt = getopt(argc, argv, "ht:d:vs")) != -1)
{
switch (opt) {
case 'h':
@@ -511,6 +423,10 @@ void revent_args_init(revent_args_t **rargs, int argc, char** argv)
case 'v':
verbose = TRUE;
break;
case 's':
wait_for_stdin = FALSE;
break;
default:
die("Unexpected option: %c", opt);
}
@@ -521,13 +437,13 @@ void revent_args_init(revent_args_t **rargs, int argc, char** argv)
usage();
die("Must specify a command.\n");
}
if (!strcmp(argv[next_arg], "record"))
if (!strcmp(argv[next_arg], "record"))
revent_args->mode = RECORD;
else if (!strcmp(argv[next_arg], "replay"))
else if (!strcmp(argv[next_arg], "replay"))
revent_args->mode = REPLAY;
else if (!strcmp(argv[next_arg], "dump"))
else if (!strcmp(argv[next_arg], "dump"))
revent_args->mode = DUMP;
else if (!strcmp(argv[next_arg], "info"))
else if (!strcmp(argv[next_arg], "info"))
revent_args->mode = INFO;
else {
usage();
@@ -564,15 +480,138 @@ int revent_args_close(revent_args_t *rargs)
return 0;
}
int* fds = NULL;
FILE* fdout = NULL;
revent_args_t *rargs = NULL;
inpdev_t *inpdev = NULL;
int count;
void term_handler(int signum)
{
int32_t i;
for (i=0; i < inpdev->id_pathc; i++)
{
close(fds[i]);
}
fclose(fdout);
free(fds);
dprintf("Recorded %d events\n", count);
inpdev_close(inpdev);
revent_args_close(rargs);
exit(0);
}
void record(inpdev_t *inpdev, int delay, const char *logfile)
{
fd_set readfds;
struct input_event ev;
int32_t i;
int32_t _padding = 0xdeadbeef;
int32_t maxfd = 0;
int32_t keydev=0;
//signal handler
struct sigaction action;
memset(&action, 0, sizeof(struct sigaction));
action.sa_handler = term_handler;
sigaction(SIGTERM, &action, NULL);
fds = malloc(sizeof(int)*inpdev->id_pathc);
if (!fds) die("out of memory\n");
fdout = fopen(logfile, "wb");
if (!fdout) die("Could not open eventlog %s\n", logfile);
fwrite(&inpdev->id_pathc, sizeof(inpdev->id_pathc), 1, fdout);
for (i=0; i<inpdev->id_pathc; i++) {
int32_t len = strlen(inpdev->id_pathv[i]);
fwrite(&len, sizeof(len), 1, fdout);
fwrite(inpdev->id_pathv[i], len, 1, fdout);
}
for (i=0; i < inpdev->id_pathc; i++)
{
fds[i] = open(inpdev->id_pathv[i], O_RDONLY);
if (fds[i]>maxfd) maxfd = fds[i];
dprintf("opened %s with %d\n", inpdev->id_pathv[i], fds[i]);
if (fds[i]<0) die("could not open \%s\n", inpdev->id_pathv[i]);
}
count = 0;
struct timeval tout;
while(1)
{
FD_ZERO(&readfds);
if (wait_for_stdin)
{
FD_SET(STDIN_FILENO, &readfds);
}
for (i=0; i < inpdev->id_pathc; i++)
FD_SET(fds[i], &readfds);
/* wait for input */
tout.tv_sec = delay;
tout.tv_usec = 0;
int32_t r = select(maxfd+1, &readfds, NULL, NULL, &tout);
/* dprintf("got %d (err %d)\n", r, errno); */
if (!r) break;
if (wait_for_stdin && FD_ISSET(STDIN_FILENO, &readfds)) {
// in this case the key down for the return key will be recorded
// so we need to up the key up
memset(&ev, 0, sizeof(ev));
ev.type = EV_KEY;
ev.code = KEY_ENTER;
ev.value = 0;
gettimeofday(&ev.time, NULL);
fwrite(&keydev, sizeof(keydev), 1, fdout);
fwrite(&_padding, sizeof(_padding), 1, fdout);
fwrite(&ev, sizeof(ev), 1, fdout);
memset(&ev, 0, sizeof(ev)); // SYN
gettimeofday(&ev.time, NULL);
fwrite(&keydev, sizeof(keydev), 1, fdout);
fwrite(&_padding, sizeof(_padding), 1, fdout);
fwrite(&ev, sizeof(ev), 1, fdout);
dprintf("added fake return exiting...\n");
break;
}
for (i=0; i < inpdev->id_pathc; i++)
{
if (FD_ISSET(fds[i], &readfds))
{
dprintf("Got event from %s\n", inpdev->id_pathv[i]);
memset(&ev, 0, sizeof(ev));
size_t rb = read(fds[i], (void*) &ev, sizeof(ev));
dprintf("%d event: type %d code %d value %d\n",
(unsigned int)rb, ev.type, ev.code, ev.value);
if (ev.type == EV_KEY && ev.code == KEY_ENTER && ev.value == 1)
keydev = i;
fwrite(&i, sizeof(i), 1, fdout);
fwrite(&_padding, sizeof(_padding), 1, fdout);
fwrite(&ev, sizeof(ev), 1, fdout);
count++;
}
}
}
for (i=0; i < inpdev->id_pathc; i++)
{
close(fds[i]);
}
fclose(fdout);
free(fds);
dprintf("Recorded %d events\n", count);
}
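For reference, the file written by record() above consists of: an int32 device count; for each device an int32 path length followed by the path bytes; then repeated records of int32 device index, int32 padding and a struct input_event. A hedged Python sketch of a reader for this layout (the field widths are assumptions: a 32-bit target where the input_event timeval fields are 32-bit, making each event 16 bytes):

# Hypothetical reader; field widths are assumptions, see lead-in above.
import struct

def read_revent_log(path):
    with open(path, 'rb') as fh:
        num_fds, = struct.unpack('<i', fh.read(4))
        devices = []
        for _ in xrange(num_fds):
            length, = struct.unpack('<i', fh.read(4))
            devices.append(fh.read(length))
        events = []
        while True:
            header = fh.read(8)  # int32 dev_idx + int32 _padding
            if len(header) < 8:
                break
            idx, _ = struct.unpack('<ii', header)
            sec, usec, ev_type, code, value = struct.unpack('<iiHHi', fh.read(16))
            events.append((idx, sec, usec, ev_type, code, value))
    return devices, events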
int main(int argc, char** argv)
{
int i;
char *logfile = NULL;
revent_args_t *rargs;
revent_args_init(&rargs, argc, argv);
inpdev_t *inpdev;
inpdev_init(&inpdev, rargs->device_number);
switch(rargs->mode) {
@@ -595,4 +634,3 @@ int main(int argc, char** argv)
revent_args_close(rargs);
return 0;
}


@@ -23,5 +23,13 @@ def instrument_is_installed(instrument):
return instrumentation.is_installed(instrument)
def instrument_is_enabled(instrument):
"""Returns ``True`` if the specified instrument is installed and is currently
enabled, and ``False`` otherwise. The instrument may be specified either
as a name or a subclass (or instance of a subclass) of
:class:`wlauto.core.Instrument`."""
return instrumentation.is_enabled(instrument)
def clear_instrumentation():
instrumentation.installed = []


@@ -13,19 +13,25 @@
# limitations under the License.
#
# pylint: disable=W0613,E1101,access-member-before-definition,attribute-defined-outside-init
from __future__ import division
import os
import sys
import csv
from collections import OrderedDict
import shutil
import tempfile
from collections import OrderedDict, defaultdict
from string import ascii_lowercase
from multiprocessing import Process, Queue
from wlauto import Instrument, Parameter
from wlauto.exceptions import ConfigError, InstrumentError
from wlauto.core import signal
from wlauto.exceptions import ConfigError, InstrumentError, DeviceError
from wlauto.utils.misc import ensure_directory_exists as _d
from wlauto.utils.types import list_of_ints, list_of_strs
from wlauto.utils.types import list_of_ints, list_of_strs, boolean
# pylint: disable=wrong-import-position,wrong-import-order
daqpower_path = os.path.join(os.path.dirname(__file__), '..', '..', 'external', 'daq_server', 'src')
sys.path.insert(0, daqpower_path)
try:
@@ -38,11 +44,25 @@ sys.path.pop(0)
UNITS = {
'energy': 'Joules',
'power': 'Watts',
'voltage': 'Volts',
}
GPIO_ROOT = '/sys/class/gpio'
TRACE_MARKER_PATH = '/sys/kernel/debug/tracing/trace_marker'
def dict_or_bool(value):
"""
Ensures that either a dictionary or a boolean is used as a parameter.
"""
if isinstance(value, dict):
return value
return boolean(value)
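For example (illustrative values), dict_or_bool passes dictionaries through and coerces everything else with boolean():

dict_or_bool({'A15': ['A15dvfs', 'A15ram']})  # -> dict returned unchanged
dict_or_bool(True)                            # -> True
dict_or_bool('yes')                           # -> True (via boolean())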
class Daq(Instrument):
name = 'daq'
@@ -90,43 +110,103 @@ class Daq(Instrument):
parameters = [
Parameter('server_host', kind=str, default='localhost',
global_alias='daq_server_host',
description='The host address of the machine that runs the daq Server which the '
'instrument communicates with.'),
Parameter('server_port', kind=int, default=56788,
Parameter('server_port', kind=int, default=45677,
global_alias='daq_server_port',
description='The port number of the daq server with which the daq instrument '
'communicates.'),
Parameter('device_id', kind=str, default='Dev1',
global_alias='daq_device_id',
description='The ID under which the DAQ is registered with the driver.'),
Parameter('v_range', kind=float, default=2.5,
global_alias='daq_v_range',
description='Specifies the voltage range for the SOC voltage channel on the DAQ '
'(please refer to :ref:`daq_setup` for details).'),
Parameter('dv_range', kind=float, default=0.2,
global_alias='daq_dv_range',
description='Specifies the voltage range for the resistor voltage channel on '
'the DAQ (please refer to :ref:`daq_setup` for details).'),
Parameter('sampling_rate', kind=int, default=10000,
global_alias='daq_sampling_rate',
description='DAQ sampling rate. DAQ will take this many samples each '
'second. Please note that this may be limited by your DAQ model '
'and the number of ports you\'re measuring (again, see '
':ref:`daq_setup`)'),
Parameter('resistor_values', kind=list, mandatory=True,
global_alias='daq_resistor_values',
description='The values of resistors (in Ohms) across which the voltages are measured on '
'each port.'),
Parameter('channel_map', kind=list_of_ints, default=(0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23),
global_alias='daq_channel_map',
description='Represents mapping from logical AI channel number to physical '
'connector on the DAQ (varies between DAQ models). The default '
'assumes DAQ 6363 and similar with AI channels on connectors '
'0-7 and 16-23.'),
Parameter('labels', kind=list_of_strs,
global_alias='daq_labels',
description='List of port labels. If specified, the length of the list must match '
'the length of ``resistor_values``. Defaults to "PORT_<pnum>", where '
'"pnum" is the number of the port.')
'"pnum" is the number of the port.'),
Parameter('negative_samples', default='keep', allowed_values=['keep', 'zero', 'drop', 'abs'],
global_alias='daq_negative_samples',
description="""
Specifies how negative power samples should be handled. The following
methods are possible:
:keep: keep them as they are
:zero: turn negative values to zero
:drop: drop samples if they contain negative values. *warning:* this may result in
port files containing different numbers of samples
:abs: take the absolute value of negative samples
"""),
Parameter('gpio_sync', kind=int, constraint=lambda x: x > 0,
description="""
If specified, the instrument will simultaneously set the
specified GPIO pin high and put a marker into ftrace. This is
to facilitate syncing kernel trace events to DAQ power
trace.
"""),
Parameter('merge_channels', kind=dict_or_bool, default=False,
description="""
If set to ``True``, channels with consecutive letter suffixes will be summed.
e.g. If you have channels A7a, A7b, A7c, A15a, A15b they will be summed to A7, A15
You can also manually specify the name of channels to be merged and the name of the
result like so:
merge_channels:
A15: [A15dvfs, A15ram]
NonCPU: [GPU, RoS, Mem]
In the above examples, channels A15a and A15b would be summed automatically into 'A15',
while under the manual mapping A15dvfs and A15ram are summed into 'A15' and GPU, RoS and Mem into 'NonCPU'.
""")
]
def initialize(self, context):
devices = self._execute_command('list_devices')
if not devices:
status, devices = self._execute_command('list_devices')
if status == daq.Status.OK and not devices:
raise InstrumentError('DAQ: server did not report any devices registered with the driver.')
self._results = OrderedDict()
self.gpio_path = None
if self.gpio_sync:
if not self.device.file_exists(GPIO_ROOT):
raise InstrumentError('GPIO sysfs not enabled on the device.')
try:
export_path = self.device.path.join(GPIO_ROOT, 'export')
self.device.write_value(export_path, self.gpio_sync, verify=False)
pin_root = self.device.path.join(GPIO_ROOT, 'gpio{}'.format(self.gpio_sync))
direction_path = self.device.path.join(pin_root, 'direction')
self.device.write_value(direction_path, 'out')
self.gpio_path = self.device.path.join(pin_root, 'value')
self.device.write_value(self.gpio_path, 0, verify=False)
signal.connect(self.insert_start_marker, signal.BEFORE_WORKLOAD_EXECUTION, priority=11)
signal.connect(self.insert_stop_marker, signal.AFTER_WORKLOAD_EXECUTION, priority=11)
except DeviceError as e:
raise InstrumentError('Could not configure GPIO on device: {}'.format(e))
def setup(self, context):
self.logger.debug('Initialising session.')
@@ -144,6 +224,10 @@ class Daq(Instrument):
self.logger.debug('Downloading data files.')
output_directory = _d(os.path.join(context.output_directory, 'daq'))
self._execute_command('get_data', output_directory=output_directory)
if self.merge_channels:
self._merge_channels(context)
for entry in os.listdir(output_directory):
context.add_iteration_artifact('DAQ_{}'.format(os.path.splitext(entry)[0]),
path=os.path.join('daq', entry),
@@ -151,30 +235,56 @@ class Daq(Instrument):
description='DAQ power measurements.')
port = os.path.splitext(entry)[0]
path = os.path.join(output_directory, entry)
key = (context.spec.id, context.workload.name, context.current_iteration)
key = (context.spec.id, context.spec.label, context.current_iteration)
if key not in self._results:
self._results[key] = {}
temp_file = os.path.join(tempfile.gettempdir(), entry)
writer, wfh = None, None
with open(path) as fh:
if self.negative_samples != 'keep':
wfh = open(temp_file, 'wb')
writer = csv.writer(wfh)
reader = csv.reader(fh)
metrics = reader.next()
data = [map(float, d) for d in zip(*list(reader))]
if writer:
writer.writerow(metrics)
self._metrics |= set(metrics)
rows = _get_rows(reader, writer, self.negative_samples)
data = zip(*rows)
if writer:
wfh.close()
shutil.move(temp_file, os.path.join(output_directory, entry))
n = len(data[0])
means = [s / n for s in map(sum, data)]
for metric, value in zip(metrics, means):
metric_name = '{}_{}'.format(port, metric)
context.result.add_metric(metric_name, round(value, 3), UNITS[metric])
self._results[key][metric_name] = round(value, 3)
energy = sum(data[metrics.index('power')]) * (self.sampling_rate / 1000000)
context.result.add_metric('{}_energy'.format(port), round(energy, 3), UNITS['energy'])
def teardown(self, context):
self.logger.debug('Terminating session.')
self._execute_command('close')
def validate(self):
def finalize(self, context):
if self.gpio_path:
unexport_path = self.device.path.join(GPIO_ROOT, 'unexport')
self.device.write_value(unexport_path, self.gpio_sync, verify=False)
def validate(self): # pylint: disable=too-many-branches
if not daq:
raise ImportError(import_error_mesg)
self._results = None
self._metrics = set()
if self.labels:
if not (len(self.labels) == len(self.resistor_values)): # pylint: disable=superfluous-parens
if len(self.labels) != len(self.resistor_values):
raise ConfigError('Number of DAQ port labels does not match the number of resistor values.')
else:
self.labels = ['PORT_{}'.format(i) for i, _ in enumerate(self.resistor_values)]
@@ -192,11 +302,34 @@ class Daq(Instrument):
self.device_config.validate()
except ConfigurationError, ex:
raise ConfigError('DAQ configuration: ' + ex.message) # Re-raise as a WA error
self.grouped_suffixes = defaultdict(str)
if isinstance(self.merge_channels, bool):
if self.merge_channels:
# Create a dict of potential prefixes and a list of their suffixes
grouped_suffixes = defaultdict(list)
for label in sorted(self.labels):
    if len(label) > 1:
        grouped_suffixes[label[:-1]].append(label)
# Only merge channels if more than one channel has the same prefix and the prefixes
# are consecutive letters starting with 'a'.
self.label_map = {}
for channel, suffixes in grouped_suffixes.iteritems():
if len(suffixes) > 1:
if "".join([s[-1] for s in suffixes]) in ascii_lowercase[:len(suffixes)]:
self.label_map[channel] = suffixes
elif isinstance(self.merge_channels, dict):
# Check if given channel names match labels
for old_names in self.merge_channels.values():
for name in old_names:
if name not in self.labels:
raise ConfigError("No channel with label {} specified".format(name))
self.label_map = self.merge_channels # pylint: disable=redefined-variable-type
self.merge_channels = True
else: # Should never reach here
raise AssertionError("``merge_channels`` is of invalid type")
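As a worked example of the automatic grouping above (hypothetical labels):

labels = ['A15a', 'A15b', 'A7a', 'A7b', 'A7c']
# with merge_channels=True, validate() produces:
#   label_map == {'A15': ['A15a', 'A15b'], 'A7': ['A7a', 'A7b', 'A7c']}
# labels ['A7a', 'A7c'] alone would not merge: suffixes 'ac' are not a
# consecutive run of letters starting at 'a'.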
def before_overall_results_processing(self, context):
if self._results:
headers = ['id', 'workload', 'iteration']
metrics = sorted(self._results.iteritems().next()[1].keys())
metrics = ['{}_{}'.format(p, m) for p in self.labels for m in sorted(self._metrics)]
headers += metrics
rows = [headers]
for key, value in self._results.iteritems():
@@ -207,9 +340,23 @@ class Daq(Instrument):
writer = csv.writer(fh)
writer.writerows(rows)
def insert_start_marker(self, context):
if self.gpio_path:
command = 'echo DAQ_START_MARKER > {}; echo 1 > {}'.format(TRACE_MARKER_PATH, self.gpio_path)
self.device.execute(command, as_root=self.device.is_rooted)
def insert_stop_marker(self, context):
if self.gpio_path:
command = 'echo DAQ_STOP_MARKER > {}; echo 0 > {}'.format(TRACE_MARKER_PATH, self.gpio_path)
self.device.execute(command, as_root=self.device.is_rooted)
def _execute_command(self, command, **kwargs):
# pylint: disable=E1101
result = daq.execute_command(self.server_config, command, **kwargs)
q = Queue()
p = Process(target=_send_daq_command, args=(q, self.server_config, command), kwargs=kwargs)
p.start()
result = q.get()
p.join()
if result.status == daq.Status.OK:
pass # all good
elif result.status == daq.Status.OKISH:
@@ -218,4 +365,52 @@ class Daq(Instrument):
raise InstrumentError('DAQ: {}'.format(result.message))
else:
raise InstrumentError('DAQ: Unexpected result: {} - {}'.format(result.status, result.message))
return result.data
return (result.status, result.data)
def _merge_channels(self, context): # pylint: disable=r0914
output_directory = _d(os.path.join(context.output_directory, 'daq'))
for name, labels in self.label_map.iteritems():
summed = None
for label in labels:
path = os.path.join(output_directory, "{}.csv".format(label))
with open(path) as fh:
reader = csv.reader(fh)
metrics = reader.next()
rows = _get_rows(reader, None, self.negative_samples)
if summed:
summed = [[x + y for x, y in zip(a, b)] for a, b in zip(rows, summed)]
else:
summed = rows
output_path = os.path.join(output_directory, "{}.csv".format(name))
with open(output_path, 'wb') as wfh:
writer = csv.writer(wfh)
writer.writerow(metrics)
for row in summed:
writer.writerow(row)
def _send_daq_command(q, *args, **kwargs):
result = daq.execute_command(*args, **kwargs)
q.put(result)
def _get_rows(reader, writer, negative_samples):
rows = []
for row in reader:
row = map(float, row)
if negative_samples == 'keep':
rows.append(row)
elif negative_samples == 'zero':
def nonneg(v):
return v if v >= 0 else 0
rows.append([nonneg(v) for v in row])
elif negative_samples == 'drop':
if all(v >= 0 for v in row):
rows.append(row)
elif negative_samples == 'abs':
rows.append([abs(v) for v in row])
else:
raise AssertionError(negative_samples) # should never get here
if writer:
writer.writerow(row)
return rows
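The effect of each negative_samples policy on a single row (illustrative numbers):

row = [1.5, -0.2]
# 'keep' -> [1.5, -0.2]
# 'zero' -> [1.5, 0]
# 'drop' -> row omitted entirely (port files may end up with different lengths)
# 'abs'  -> [1.5, 0.2]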


@@ -96,6 +96,16 @@ class DelayInstrument(Instrument):
.. note:: This cannot be specified at the same time as ``temperature_between_iterations``
"""),
Parameter('fixed_before_start', kind=int, default=None,
global_alias='fixed_delay_before_start',
description="""
How long to sleep (in seconds) after setup for an iteration has been performed but
before running the workload.
.. note:: This cannot be specified at the same time as ``temperature_before_start``
"""),
Parameter('active_cooling', kind=boolean, default=False,
global_alias='thermal_active_cooling',
@@ -116,7 +126,7 @@ class DelayInstrument(Instrument):
self.logger.debug('Setting temperature threshold between workload specs to {}'.format(temp))
self.temperature_between_specs = temp
def slow_on_iteration_start(self, context):
def very_slow_on_iteration_start(self, context):
if self.active_cooling:
self.device.stop_active_cooling()
if self.fixed_between_iterations:
@@ -126,7 +136,7 @@ class DelayInstrument(Instrument):
self.logger.debug('Waiting for temperature drop before iteration...')
self.wait_for_temperature(self.temperature_between_iterations)
def slow_on_spec_start(self, context):
def very_slow_on_spec_start(self, context):
if self.active_cooling:
self.device.stop_active_cooling()
if self.fixed_between_specs:
@@ -139,7 +149,10 @@ class DelayInstrument(Instrument):
def very_slow_start(self, context):
if self.active_cooling:
self.device.stop_active_cooling()
if self.temperature_before_start:
if self.fixed_before_start:
self.logger.debug('Waiting for a fixed period before commencing execution...')
time.sleep(self.fixed_before_start)
elif self.temperature_before_start:
self.logger.debug('Waiting for temperature drop before commencing execution...')
self.wait_for_temperature(self.temperature_before_start)
@@ -171,8 +184,13 @@ class DelayInstrument(Instrument):
self.fixed_between_iterations is not None):
raise ConfigError('Both fixed delay and thermal threshold specified for iterations.')
if (self.temperature_before_start is not None and
self.fixed_before_start is not None):
raise ConfigError('Both fixed delay and thermal threshold specified before start.')
if not any([self.temperature_between_specs, self.fixed_between_specs, self.temperature_before_start,
self.temperature_between_iterations, self.fixed_between_iterations]):
self.temperature_between_iterations, self.fixed_between_iterations,
self.fixed_before_start]):
raise ConfigError('delay instrument is enabled, but no delay is specified.')
if self.active_cooling and not self.device.has('active_cooling'):


@@ -39,7 +39,7 @@ class DmesgInstrument(Instrument):
def setup(self, context):
if self.loglevel:
self.old_loglevel = self.device.get_sysfile_value(self.loglevel_file)
self.device.set_sysfile_value(self.loglevel_file, self.loglevel, verify=False)
self.device.write_value(self.loglevel_file, self.loglevel, verify=False)
self.before_file = _f(os.path.join(context.output_directory, 'dmesg', 'before'))
self.after_file = _f(os.path.join(context.output_directory, 'dmesg', 'after'))
@@ -57,6 +57,6 @@ class DmesgInstrument(Instrument):
def teardown(self, context): # pylint: disable=unused-argument
if self.loglevel:
self.device.set_sysfile_value(self.loglevel_file, self.old_loglevel, verify=False)
self.device.write_value(self.loglevel_file, self.old_loglevel, verify=False)


@@ -0,0 +1,850 @@
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#pylint: disable=attribute-defined-outside-init,access-member-before-definition,redefined-outer-name
from __future__ import division
import os
import math
import time
from tempfile import mktemp
from base64 import b64encode
from collections import Counter, namedtuple
try:
import jinja2
import pandas as pd
import matplotlib
matplotlib.use('AGG')
import matplotlib.pyplot as plt
import numpy as np
low_filter = np.vectorize(lambda x: x > 0 and x or 0) # pylint: disable=no-member
import_error = None
except ImportError as e:
import_error = e
jinja2 = None
pd = None
plt = None
np = None
low_filter = None
from wlauto import Instrument, Parameter, File
from wlauto.exceptions import ConfigError, InstrumentError, DeviceError
from wlauto.instrumentation import instrument_is_installed
from wlauto.utils.types import caseless_string, list_or_caseless_string, list_of_ints
from wlauto.utils.misc import list_to_mask
FREQ_TABLE_FILE = 'frequency_power_perf_data.csv'
CPUS_TABLE_FILE = 'projected_cap_power.csv'
MEASURED_CPUS_TABLE_FILE = 'measured_cap_power.csv'
IDLE_TABLE_FILE = 'idle_power_perf_data.csv'
REPORT_TEMPLATE_FILE = 'report.template'
EM_TEMPLATE_FILE = 'em.template'
IdlePowerState = namedtuple('IdlePowerState', ['power'])
CapPowerState = namedtuple('CapPowerState', ['cap', 'power'])
class EnergyModel(object):
def __init__(self):
self.big_cluster_idle_states = []
self.little_cluster_idle_states = []
self.big_cluster_cap_states = []
self.little_cluster_cap_states = []
self.big_core_idle_states = []
self.little_core_idle_states = []
self.big_core_cap_states = []
self.little_core_cap_states = []
def add_cap_entry(self, cluster, perf, clust_pow, core_pow):
if cluster == 'big':
self.big_cluster_cap_states.append(CapPowerState(perf, clust_pow))
self.big_core_cap_states.append(CapPowerState(perf, core_pow))
elif cluster == 'little':
self.little_cluster_cap_states.append(CapPowerState(perf, clust_pow))
self.little_core_cap_states.append(CapPowerState(perf, core_pow))
else:
raise ValueError('Unexpected cluster: {}'.format(cluster))
def add_cluster_idle(self, cluster, values):
for value in values:
if cluster == 'big':
self.big_cluster_idle_states.append(IdlePowerState(value))
elif cluster == 'little':
self.little_cluster_idle_states.append(IdlePowerState(value))
else:
raise ValueError('Unexpected cluster: {}'.format(cluster))
def add_core_idle(self, cluster, values):
for value in values:
if cluster == 'big':
self.big_core_idle_states.append(IdlePowerState(value))
elif cluster == 'little':
self.little_core_idle_states.append(IdlePowerState(value))
else:
raise ValueError('Unexpected cluster: {}'.format(cluster))
class PowerPerformanceAnalysis(object):
def __init__(self, data):
self.summary = {}
big_freqs = data[data.cluster == 'big'].frequency.unique()
little_freqs = data[data.cluster == 'little'].frequency.unique()
self.summary['frequency'] = max(set(big_freqs).intersection(set(little_freqs)))
big_sc = data[(data.cluster == 'big') &
(data.frequency == self.summary['frequency']) &
(data.cpus == 1)]
little_sc = data[(data.cluster == 'little') &
(data.frequency == self.summary['frequency']) &
(data.cpus == 1)]
self.summary['performance_ratio'] = big_sc.performance.item() / little_sc.performance.item()
self.summary['power_ratio'] = big_sc.power.item() / little_sc.power.item()
self.summary['max_performance'] = data[data.cpus == 1].performance.max()
self.summary['max_power'] = data[data.cpus == 1].power.max()
def build_energy_model(freq_power_table, cpus_power, idle_power, first_cluster_idle_state):
# pylint: disable=too-many-locals
em = EnergyModel()
idle_power_sc = idle_power[idle_power.cpus == 1]
perf_data = get_normalized_single_core_data(freq_power_table)
for cluster in ['little', 'big']:
cluster_cpus_power = cpus_power[cluster].dropna()
cluster_power = cluster_cpus_power['cluster'].apply(int)
core_power = (cluster_cpus_power['1'] - cluster_power).apply(int)
performance = (perf_data[perf_data.cluster == cluster].performance_norm * 1024 / 100).apply(int)
for perf, clust_pow, core_pow in zip(performance, cluster_power, core_power):
em.add_cap_entry(cluster, perf, clust_pow, core_pow)
all_idle_power = idle_power_sc[idle_power_sc.cluster == cluster].power.values
# CORE idle states
# We want the delta of each state w.r.t. the power
# consumption of the shallowest one at this level (core_ref)
idle_core_power = low_filter(all_idle_power[:first_cluster_idle_state] -
all_idle_power[first_cluster_idle_state - 1])
# CLUSTER idle states
# We want the absolute value of each idle state
idle_cluster_power = low_filter(all_idle_power[first_cluster_idle_state - 1:])
em.add_cluster_idle(cluster, idle_cluster_power)
em.add_core_idle(cluster, idle_core_power)
return em
def generate_em_c_file(em, big_core, little_core, em_template_file, outfile):
with open(em_template_file) as fh:
em_template = jinja2.Template(fh.read())
em_text = em_template.render(
big_core=big_core,
little_core=little_core,
em=em,
)
with open(outfile, 'w') as wfh:
wfh.write(em_text)
return em_text
def generate_report(freq_power_table, measured_cpus_table, cpus_table, idle_power_table, # pylint: disable=unused-argument
report_template_file, device_name, em_text, outfile):
# pylint: disable=too-many-locals
cap_power_analysis = PowerPerformanceAnalysis(freq_power_table)
single_core_norm = get_normalized_single_core_data(freq_power_table)
cap_power_plot = get_cap_power_plot(single_core_norm)
idle_power_plot = get_idle_power_plot(idle_power_table)
fig, axes = plt.subplots(1, 2)
fig.set_size_inches(16, 8)
for i, cluster in enumerate(reversed(cpus_table.columns.levels[0])):
projected = cpus_table[cluster].dropna(subset=['1'])
plot_cpus_table(projected, axes[i], cluster)
cpus_plot_data = get_figure_data(fig)
with open(report_template_file) as fh:
report_template = jinja2.Template(fh.read())
html = report_template.render(
device_name=device_name,
freq_power_table=freq_power_table.set_index(['cluster', 'cpus', 'frequency']).to_html(),
cap_power_analysis=cap_power_analysis,
cap_power_plot=get_figure_data(cap_power_plot),
idle_power_table=idle_power_table.set_index(['cluster', 'cpus', 'state']).to_html(),
idle_power_plot=get_figure_data(idle_power_plot),
cpus_table=cpus_table.to_html(),
cpus_plot=cpus_plot_data,
em_text=em_text,
)
with open(outfile, 'w') as wfh:
wfh.write(html)
return html
def wa_result_to_power_perf_table(df, performance_metric, index):
table = df.pivot_table(index=index + ['iteration'],
columns='metric', values='value').reset_index()
result_mean = table.groupby(index).mean()
result_std = table.groupby(index).std()
result_std.columns = [c + ' std' for c in result_std.columns]
result_count = table.groupby(index).count()
result_count.columns = [c + ' count' for c in result_count.columns]
count_sqrt = result_count.apply(lambda x: x.apply(math.sqrt))
count_sqrt.columns = result_std.columns # match column names for division
result_error = 1.96 * result_std / count_sqrt # 1.96 == 95% confidence interval
result_error.columns = [c + ' error' for c in result_mean.columns]
result = pd.concat([result_mean, result_std, result_count, result_error], axis=1)
del result['iteration']
del result['iteration std']
del result['iteration count']
del result['iteration error']
updated_columns = []
for column in result.columns:
if column == performance_metric:
updated_columns.append('performance')
elif column == performance_metric + ' std':
updated_columns.append('performance_std')
elif column == performance_metric + ' error':
updated_columns.append('performance_error')
else:
updated_columns.append(column.replace(' ', '_'))
result.columns = updated_columns
result = result[sorted(result.columns)]
result.reset_index(inplace=True)
return result
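The error columns computed above are the half-width of the 95% confidence interval for the mean, i.e. 1.96 * std / sqrt(n). A minimal check with illustrative numbers:

import math
std, n = 12.0, 9
error = 1.96 * std / math.sqrt(n)  # 7.84, i.e. mean +/- 7.84 at 95% confidence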
def get_figure_data(fig, fmt='png'):
tmp = mktemp()
fig.savefig(tmp, format=fmt, bbox_inches='tight')
with open(tmp, 'rb') as fh:
image_data = b64encode(fh.read())
os.remove(tmp)
return image_data
def get_normalized_single_core_data(data):
finite_power = np.isfinite(data.power) # pylint: disable=no-member
finite_perf = np.isfinite(data.performance) # pylint: disable=no-member
data_single_core = data[(data.cpus == 1) & finite_perf & finite_power].copy()
data_single_core['performance_norm'] = (data_single_core.performance /
data_single_core.performance.max() * 100).apply(int)
data_single_core['power_norm'] = (data_single_core.power /
data_single_core.power.max() * 100).apply(int)
return data_single_core
def get_cap_power_plot(data_single_core):
big_single_core = data_single_core[(data_single_core.cluster == 'big') &
(data_single_core.cpus == 1)]
little_single_core = data_single_core[(data_single_core.cluster == 'little') &
(data_single_core.cpus == 1)]
fig, axes = plt.subplots(1, 1, figsize=(12, 8))
axes.plot(big_single_core.performance_norm,
big_single_core.power_norm,
marker='o')
axes.plot(little_single_core.performance_norm,
little_single_core.power_norm,
marker='o')
axes.set_xlim(0, 105)
axes.set_ylim(0, 105)
axes.set_xlabel('Performance (Normalized)')
axes.set_ylabel('Power (Normalized)')
axes.grid()
axes.legend(['big cluster', 'little cluster'], loc=0)
return fig
def get_idle_power_plot(df):
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
for cluster, ax in zip(['little', 'big'], axes):
data = df[df.cluster == cluster].pivot_table(index=['state'], columns='cpus', values='power')
err = df[df.cluster == cluster].pivot_table(index=['state'], columns='cpus', values='power_error')
data.plot(kind='bar', ax=ax, rot=30, yerr=err)
ax.set_title('{} cluster'.format(cluster))
ax.set_xlim(-1, len(data.columns) - 0.5)
ax.set_ylabel('Power (mW)')
return fig
def fit_polynomial(s, n):
# pylint: disable=no-member
coeffs = np.polyfit(s.index, s.values, n)
poly = np.poly1d(coeffs)
return poly(s.index)
def get_cpus_power_table(data, index, opps, leak_factors): # pylint: disable=too-many-locals
# pylint: disable=no-member
power_table = data[[index, 'cluster', 'cpus', 'power']].pivot_table(index=index,
columns=['cluster', 'cpus'],
values='power')
bs_power_table = pd.DataFrame(index=power_table.index, columns=power_table.columns)
for cluster in power_table.columns.levels[0]:
power_table[cluster, 0] = (power_table[cluster, 1] -
(power_table[cluster, 2] -
power_table[cluster, 1]))
bs_power_table.loc[power_table[cluster, 1].notnull(), (cluster, 1)] = fit_polynomial(power_table[cluster, 1].dropna(), 2)
bs_power_table.loc[power_table[cluster, 2].notnull(), (cluster, 2)] = fit_polynomial(power_table[cluster, 2].dropna(), 2)
if opps[cluster] is None:
bs_power_table.loc[bs_power_table[cluster, 1].notnull(), (cluster, 0)] = \
(2 * power_table[cluster, 1] - power_table[cluster, 2]).values
else:
voltages = opps[cluster].set_index('frequency').sort_index()
leakage = leak_factors[cluster] * 2 * voltages['voltage']**3 / 0.9**3
leakage_delta = leakage - leakage[leakage.index[0]]
bs_power_table.loc[:, (cluster, 0)] = \
(2 * bs_power_table[cluster, 1] + leakage_delta - bs_power_table[cluster, 2])
# re-order columns and rename colum '0' to 'cluster'
power_table = power_table[sorted(power_table.columns,
cmp=lambda x, y: cmp(y[0], x[0]) or cmp(x[1], y[1]))]
bs_power_table = bs_power_table[sorted(bs_power_table.columns,
cmp=lambda x, y: cmp(y[0], x[0]) or cmp(x[1], y[1]))]
old_levels = power_table.columns.levels
power_table.columns.set_levels([old_levels[0], list(map(str, old_levels[1])[:-1]) + ['cluster']],
inplace=True)
bs_power_table.columns.set_levels([old_levels[0], list(map(str, old_levels[1])[:-1]) + ['cluster']],
inplace=True)
return power_table, bs_power_table
def plot_cpus_table(projected, ax, cluster):
projected.T.plot(ax=ax, marker='o')
ax.set_title('{} cluster'.format(cluster))
ax.set_xticklabels(projected.columns)
ax.set_xticks(range(0, 5))
ax.set_xlim(-0.5, len(projected.columns) - 0.5)
ax.set_ylabel('Power (mW)')
ax.grid(True)
def opp_table(d):
if d is None:
return None
return pd.DataFrame(d.items(), columns=['frequency', 'voltage'])
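So an OPP parameter supplied in the agenda as a frequency-to-voltage mapping (kHz to mV, illustrative values) becomes a two-column DataFrame:

opps = opp_table({450000: 820, 1100000: 1025})
# -> DataFrame with columns ['frequency', 'voltage'], one row per OPP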
class EnergyModelInstrument(Instrument):
name = 'energy_model'
desicription = """
Generates a power mode for the device based on specified workload.
This insturment will execute the workload specified by the agenda (currently, only ``sysbench`` is
supported) and will use the resulting performance and power measurments to generate a power mode for
the device.
This instrument requires certain features to be present in the kernel:
1. cgroups and cpusets must be enabled.
2. cpufreq and userspace governor must be enabled.
3. cpuidle must be enabled.
"""
parameters = [
Parameter('device_name', kind=caseless_string,
description="""The name of the device to be used in generating the model. If not specified,
``device.name`` will be used. """),
Parameter('big_core', kind=caseless_string,
description="""The name of the "big" core in the big.LITTLE system; must match
one of the values in ``device.core_names``. """),
Parameter('performance_metric', kind=caseless_string, mandatory=True,
description="""Metric to be used as the performance indicator."""),
Parameter('power_metric', kind=list_or_caseless_string,
description="""Metric to be used as the power indicator. The value may contain a
``{core}`` format specifier that will be replaced with names of big
and little cores to drive the name of the metric for that cluster.
Ether this or ``energy_metric`` must be specified but not both."""),
Parameter('energy_metric', kind=list_or_caseless_string,
description="""Metric to be used as the energy indicator. The value may contain a
``{core}`` format specifier that will be replaced with names of big
and little cores to drive the name of the metric for that cluster.
this metric will be used to derive power by dividing it by
execution time. Either this or ``power_metric`` must be specified, but
not both."""),
Parameter('power_scaling_factor', kind=float, default=1.0,
description="""Power model specfies power in milliWatts. This is a scaling factor that
power_metric values will be multiplied by to get milliWatts."""),
Parameter('big_frequencies', kind=list_of_ints,
description="""List of frequencies to be used for big cores. These frequencies must
be supported by the cores. If this is not specified, all available
frequencies for the core (as read from cpufreq) will be used."""),
Parameter('little_frequencies', kind=list_of_ints,
description="""List of frequencies to be used for little cores. These frequencies must
be supported by the cores. If this is not specified, all available
frequencies for the core (as read from cpufreq) will be used."""),
Parameter('idle_workload', kind=str, default='idle',
description="Workload to be used while measuring idle power."),
Parameter('idle_workload_params', kind=dict, default={},
description="Parameter to pass to the idle workload."),
Parameter('first_cluster_idle_state', kind=int, default=-1,
description='''The index of the first cluster idle state on the device. Previous states
are assumed to be core idles. The default is ``-1``, i.e. only the last
idle state is assumed to affect the entire cluster.'''),
Parameter('no_hotplug', kind=bool, default=False,
description='''This option allows running the instrument without hotplugging cores on and off.
Disabling hotplugging will most likely produce a less accurate power model.'''),
Parameter('num_of_freqs_to_thermal_adjust', kind=int, default=0,
description="""The number of frequencies begining from the highest, to be adjusted for
the thermal effect."""),
Parameter('big_opps', kind=opp_table,
description="""OPP table mapping frequency to voltage (kHz --> mV) for the big cluster."""),
Parameter('little_opps', kind=opp_table,
description="""OPP table mapping frequency to voltage (kHz --> mV) for the little cluster."""),
Parameter('big_leakage', kind=int, default=120,
description="""
Leakage factor for the big cluster (this is specific to a particular core implementation).
"""),
Parameter('little_leakage', kind=int, default=60,
description="""
Leakage factor for the little cluster (this is specific to a particular core implementation).
"""),
]
def validate(self):
if import_error:
message = 'energy_model instrument requires pandas, jinja2 and matplotlib Python packages to be installed; got: "{}"'
raise InstrumentError(message.format(import_error.message))
for capability in ['cgroups', 'cpuidle']:
if not self.device.has(capability):
message = 'The Device does not appear to support {}; does it have the right module installed?'
raise ConfigError(message.format(capability))
device_cores = set(self.device.core_names)
if (self.power_metric and self.energy_metric) or not (self.power_metric or self.energy_metric):
raise ConfigError('Either power_metric or energy_metric must be specified (but not both).')
if not device_cores:
raise ConfigError('The Device does not appear to have core_names configured.')
elif len(device_cores) != 2:
raise ConfigError('The Device does not appear to be a big.LITTLE device.')
if self.big_core and self.big_core not in self.device.core_names:
raise ConfigError('Specified big_core "{}" is not in device {}'.format(self.big_core, self.device.name))
if not self.big_core:
self.big_core = self.device.core_names[-1] # the last core is usually "big" in existing big.LITTLE devices
if not self.device_name:
self.device_name = self.device.name
if self.num_of_freqs_to_thermal_adjust and not instrument_is_installed('daq'):
self.logger.warn('Adjustment for thermal effect requires daq instrument. Disabling adjustment')
self.num_of_freqs_to_thermal_adjust = 0
def initialize(self, context):
self.number_of_cpus = {}
self.report_template_file = context.resolver.get(File(self, REPORT_TEMPLATE_FILE))
self.em_template_file = context.resolver.get(File(self, EM_TEMPLATE_FILE))
self.little_core = (set(self.device.core_names) - set([self.big_core])).pop()
self.perform_runtime_validation()
self.enable_all_cores()
self.configure_clusters()
self.discover_idle_states()
self.disable_thermal_management()
self.initialize_job_queue(context)
self.initialize_result_tracking()
def setup(self, context):
if not context.spec.label.startswith('idle_'):
return
for idle_state in self.get_device_idle_states(self.measured_cluster):
if idle_state.index > context.spec.idle_state_index:
idle_state.disable = 1
else:
idle_state.disable = 0
def fast_start(self, context): # pylint: disable=unused-argument
self.start_time = time.time()
def fast_stop(self, context): # pylint: disable=unused-argument
self.run_time = time.time() - self.start_time
def on_iteration_start(self, context):
self.setup_measurement(context.spec.cluster)
def thermal_correction(self, context):
if not self.num_of_freqs_to_thermal_adjust or self.num_of_freqs_to_thermal_adjust > len(self.big_frequencies):
return 0
freqs = self.big_frequencies[-self.num_of_freqs_to_thermal_adjust:]
spec = context.result.spec
if spec.frequency not in freqs:
return 0
data_path = os.path.join(context.output_directory, 'daq', '{}.csv'.format(self.big_core))
data = pd.read_csv(data_path)['power']
return _adjust_for_thermal(data, filt_method=lambda x: pd.rolling_median(x, 1000), thresh=0.9, window=5000)
# slow to make sure power results have been generated
def slow_update_result(self, context): # pylint: disable=too-many-branches
spec = context.result.spec
cluster = spec.cluster
is_freq_iteration = spec.label.startswith('freq_')
perf_metric = 0
power_metric = 0
thermal_adjusted_power = 0
if is_freq_iteration and cluster == 'big':
thermal_adjusted_power = self.thermal_correction(context)
for metric in context.result.metrics:
if metric.name == self.performance_metric:
perf_metric = metric.value
elif thermal_adjusted_power and metric.name in self.big_power_metrics:
power_metric += thermal_adjusted_power * self.power_scaling_factor
elif (cluster == 'big') and metric.name in self.big_power_metrics:
power_metric += metric.value * self.power_scaling_factor
elif (cluster == 'little') and metric.name in self.little_power_metrics:
power_metric += metric.value * self.power_scaling_factor
elif thermal_adjusted_power and metric.name in self.big_energy_metrics:
power_metric += thermal_adjusted_power / self.run_time * self.power_scaling_factor
elif (cluster == 'big') and metric.name in self.big_energy_metrics:
power_metric += metric.value / self.run_time * self.power_scaling_factor
elif (cluster == 'little') and metric.name in self.little_energy_metrics:
power_metric += metric.value / self.run_time * self.power_scaling_factor
if not (power_metric and (perf_metric or not is_freq_iteration)):
message = 'Incomplete results for {} iteration{}'
raise InstrumentError(message.format(context.result.spec.id, context.current_iteration))
if is_freq_iteration:
index_matter = [cluster, spec.num_cpus,
spec.frequency, context.result.iteration]
data = self.freq_data
else:
index_matter = [cluster, spec.num_cpus,
spec.idle_state_id, spec.idle_state_desc, context.result.iteration]
data = self.idle_data
if self.no_hotplug:
# because hotplugging was disabled, power has to be artificially scaled
# to the number of cores that would have been active had hotplugging taken place.
power_metric = spec.num_cpus * (power_metric / self.number_of_cpus[cluster])
data.append(index_matter + ['performance', perf_metric])
data.append(index_matter + ['power', power_metric])
def before_overall_results_processing(self, context):
# pylint: disable=too-many-locals
if not self.idle_data or not self.freq_data:
self.logger.warning('Run aborted early; not generating energy_model.')
return
output_directory = os.path.join(context.output_directory, 'energy_model')
os.makedirs(output_directory)
df = pd.DataFrame(self.idle_data, columns=['cluster', 'cpus', 'state_id',
'state', 'iteration', 'metric', 'value'])
idle_power_table = wa_result_to_power_perf_table(df, '', index=['cluster', 'cpus', 'state'])
idle_output = os.path.join(output_directory, IDLE_TABLE_FILE)
with open(idle_output, 'w') as wfh:
idle_power_table.to_csv(wfh, index=False)
context.add_artifact('idle_power_table', idle_output, 'export')
df = pd.DataFrame(self.freq_data,
columns=['cluster', 'cpus', 'frequency', 'iteration', 'metric', 'value'])
freq_power_table = wa_result_to_power_perf_table(df, self.performance_metric,
index=['cluster', 'cpus', 'frequency'])
freq_output = os.path.join(output_directory, FREQ_TABLE_FILE)
with open(freq_output, 'w') as wfh:
freq_power_table.to_csv(wfh, index=False)
context.add_artifact('freq_power_table', freq_output, 'export')
if self.big_opps is None or self.little_opps is None:
message = 'OPPs not specified for one or both clusters; cluster power will not be adjusted for leakage.'
self.logger.warning(message)
opps = {'big': self.big_opps, 'little': self.little_opps}
leakages = {'big': self.big_leakage, 'little': self.little_leakage}
try:
measured_cpus_table, cpus_table = get_cpus_power_table(freq_power_table, 'frequency', opps, leakages)
except (ValueError, KeyError, IndexError) as e:
self.logger.error('Could not create cpu power tables: {}'.format(e))
return
measured_cpus_output = os.path.join(output_directory, MEASURED_CPUS_TABLE_FILE)
with open(measured_cpus_output, 'w') as wfh:
measured_cpus_table.to_csv(wfh)
context.add_artifact('measured_cpus_table', measured_cpus_output, 'export')
cpus_output = os.path.join(output_directory, CPUS_TABLE_FILE)
with open(cpus_output, 'w') as wfh:
cpus_table.to_csv(wfh)
context.add_artifact('cpus_table', cpus_output, 'export')
em = build_energy_model(freq_power_table, cpus_table, idle_power_table, self.first_cluster_idle_state)
em_file = os.path.join(output_directory, '{}_em.c'.format(self.device_name))
em_text = generate_em_c_file(em, self.big_core, self.little_core,
self.em_template_file, em_file)
context.add_artifact('em', em_file, 'data')
report_file = os.path.join(output_directory, 'report.html')
generate_report(freq_power_table, measured_cpus_table, cpus_table,
idle_power_table, self.report_template_file,
self.device_name, em_text, report_file)
context.add_artifact('pm_report', report_file, 'export')
def initialize_result_tracking(self):
self.freq_data = []
self.idle_data = []
self.big_power_metrics = []
self.little_power_metrics = []
self.big_energy_metrics = []
self.little_energy_metrics = []
if self.power_metric:
self.big_power_metrics = [pm.format(core=self.big_core) for pm in self.power_metric]
self.little_power_metrics = [pm.format(core=self.little_core) for pm in self.power_metric]
else: # must be energy_metric
self.big_energy_metrics = [em.format(core=self.big_core) for em in self.energy_metric]
self.little_energy_metrics = [em.format(core=self.little_core) for em in self.energy_metric]
def configure_clusters(self):
self.measured_cores = None
self.measuring_cores = None
self.cpuset = self.device.get_cgroup_controller('cpuset')
self.cpuset.create_group('big', self.big_cpus, [0])
self.cpuset.create_group('little', self.little_cpus, [0])
for cluster in set(self.device.core_clusters):
self.device.set_cluster_governor(cluster, 'userspace')
def discover_idle_states(self):
online_cpu = self.device.get_online_cpus(self.big_core)[0]
self.big_idle_states = self.device.get_cpuidle_states(online_cpu)
online_cpu = self.device.get_online_cpus(self.little_core)[0]
self.little_idle_states = self.device.get_cpuidle_states(online_cpu)
if not (len(self.big_idle_states) >= 2 and len(self.little_idle_states) >= 2):
raise DeviceError('There do not appear to be at least two idle states '
'on at least one of the clusters.')
def setup_measurement(self, measured):
measuring = 'big' if measured == 'little' else 'little'
self.measured_cluster = measured
self.measuring_cluster = measuring
self.measured_cpus = self.big_cpus if measured == 'big' else self.little_cpus
self.measuring_cpus = self.little_cpus if measured == 'big' else self.big_cpus
self.reset()
def reset(self):
self.enable_all_cores()
self.enable_all_idle_states()
self.reset_cgroups()
self.cpuset.move_all_tasks_to(self.measuring_cluster)
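# Pin the connection daemon (adbd on Android, sshd on Linux) and its child
# processes to the measuring cluster as well, so that their activity does not
# perturb power readings taken on the measured cluster; taskset failures for
# short-lived children are deliberately ignored below.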
server_process = 'adbd' if self.device.os == 'android' else 'sshd'
server_pids = self.device.get_pids_of(server_process)
children_ps = [e for e in self.device.ps()
if e.ppid in server_pids and e.name != 'sshd']
children_pids = [e.pid for e in children_ps]
pids_to_move = server_pids + children_pids
self.cpuset.root.add_tasks(pids_to_move)
for pid in pids_to_move:
try:
self.device.execute('busybox taskset -p 0x{:x} {}'.format(list_to_mask(self.measuring_cpus), pid))
except DeviceError:
pass
def enable_all_cores(self):
counter = Counter(self.device.core_names)
for core, number in counter.iteritems():
self.device.set_number_of_online_cpus(core, number)
self.big_cpus = self.device.get_online_cpus(self.big_core)
self.little_cpus = self.device.get_online_cpus(self.little_core)
def enable_all_idle_states(self):
for cpu in self.device.online_cpus:
for state in self.device.get_cpuidle_states(cpu):
state.disable = 0
def reset_cgroups(self):
self.big_cpus = self.device.get_online_cpus(self.big_core)
self.little_cpus = self.device.get_online_cpus(self.little_core)
self.cpuset.big.set(self.big_cpus, 0)
self.cpuset.little.set(self.little_cpus, 0)
def perform_runtime_validation(self):
if not self.device.is_rooted:
raise InstrumentError('the device must be rooted to generate energy models')
if 'userspace' not in self.device.list_available_cluster_governors(0):
raise InstrumentError('userspace cpufreq governor must be enabled')
error_message = 'Frequency {} is not supported by {} cores'
available_frequencies = self.device.list_available_core_frequencies(self.big_core)
if self.big_frequencies:
for freq in self.big_frequencies:
if freq not in available_frequencies:
raise ConfigError(error_message.format(freq, self.big_core))
else:
self.big_frequencies = available_frequencies
available_frequencies = self.device.list_available_core_frequencies(self.little_core)
if self.little_frequencies:
for freq in self.little_frequencies:
if freq not in available_frequencies:
raise ConfigError(error_message.format(freq, self.little_core))
else:
self.little_frequencies = available_frequencies
def initialize_job_queue(self, context):
old_specs = []
for job in context.runner.job_queue:
if job.spec not in old_specs:
old_specs.append(job.spec)
new_specs = self.get_cluster_specs(old_specs, 'big', context)
new_specs.extend(self.get_cluster_specs(old_specs, 'little', context))
# Update config to reflect jobs that will actually run.
context.config.workload_specs = new_specs
config_file = os.path.join(context.host_working_directory, 'run_config.json')
with open(config_file, 'wb') as wfh:
context.config.serialize(wfh)
context.runner.init_queue(new_specs)
def get_cluster_specs(self, old_specs, cluster, context):
core = self.get_core_name(cluster)
self.number_of_cpus[cluster] = sum([1 for c in self.device.core_names if c == core])
cluster_frequencies = self.get_frequencies_param(cluster)
if not cluster_frequencies:
raise InstrumentError('Could not read available frequencies for {}'.format(core))
min_frequency = min(cluster_frequencies)
idle_states = self.get_device_idle_states(cluster)
new_specs = []
for state in idle_states:
for num_cpus in xrange(1, self.number_of_cpus[cluster] + 1):
spec = old_specs[0].copy()
spec.workload_name = self.idle_workload
spec.workload_parameters = self.idle_workload_params
spec.idle_state_id = state.id
spec.idle_state_desc = state.desc
spec.idle_state_index = state.index
if not self.no_hotplug:
spec.runtime_parameters['{}_cores'.format(core)] = num_cpus
spec.runtime_parameters['{}_frequency'.format(core)] = min_frequency
spec.runtime_parameters['ui'] = 'off'
spec.cluster = cluster
spec.num_cpus = num_cpus
spec.id = '{}_idle_{}_{}'.format(cluster, state.id, num_cpus)
spec.label = 'idle_{}'.format(cluster)
spec.number_of_iterations = old_specs[0].number_of_iterations
spec.load(self.device, context.config.ext_loader)
spec.workload.init_resources(context)
spec.workload.validate()
new_specs.append(spec)
for old_spec in old_specs:
if old_spec.workload_name not in ['sysbench', 'dhrystone']:
raise ConfigError('Only sysbench and dhrystone workloads currently supported for energy_model generation.')
for freq in cluster_frequencies:
for num_cpus in xrange(1, self.number_of_cpus[cluster] + 1):
spec = old_spec.copy()
spec.runtime_parameters['{}_frequency'.format(core)] = freq
if not self.no_hotplug:
spec.runtime_parameters['{}_cores'.format(core)] = num_cpus
spec.runtime_parameters['ui'] = 'off'
spec.id = '{}_{}_{}'.format(cluster, num_cpus, freq)
spec.label = 'freq_{}_{}'.format(cluster, spec.label)
spec.workload_parameters['taskset_mask'] = list_to_mask(self.get_cpus(cluster))
spec.workload_parameters['threads'] = num_cpus
if old_spec.workload_name == 'sysbench':
# max_requests is set to an arbitrarily high value to make sure
# sysbench runs for the full duration even on highly
# performant cores.
spec.workload_parameters['max_requests'] = 10000000
spec.cluster = cluster
spec.num_cpus = num_cpus
spec.frequency = freq
spec.load(self.device, context.config.ext_loader)
spec.workload.init_resources(context)
spec.workload.validate()
new_specs.append(spec)
return new_specs
def disable_thermal_management(self):
if self.device.file_exists('/sys/class/thermal/thermal_zone0'):
tzone_paths = self.device.execute('ls /sys/class/thermal/thermal_zone*')
for tzpath in tzone_paths.strip().split():
mode_file = '{}/mode'.format(tzpath)
if self.device.file_exists(mode_file):
self.device.write_value(mode_file, 'disabled')
def get_device_idle_states(self, cluster):
if cluster == 'big':
online_cpus = self.device.get_online_cpus(self.big_core)
else:
online_cpus = self.device.get_online_cpus(self.little_core)
idle_states = []
for cpu in online_cpus:
idle_states.extend(self.device.get_cpuidle_states(cpu))
return idle_states
def get_core_name(self, cluster):
if cluster == 'big':
return self.big_core
else:
return self.little_core
def get_cpus(self, cluster):
if cluster == 'big':
return self.big_cpus
else:
return self.little_cpus
def get_frequencies_param(self, cluster):
if cluster == 'big':
return self.big_frequencies
else:
return self.little_frequencies
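# _adjust_for_thermal locates the workload's start and end in the (optionally
# filtered) power trace via the largest positive and negative first
# differences, then compares the average power just after the start ("cool")
# with the average just before the end ("hot"). If the trace is long enough
# and shows thermal drift (hot above cool), the cool average is returned as
# the thermally-adjusted power; otherwise 0 is returned and no adjustment is
# applied.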
def _adjust_for_thermal(data, filt_method=lambda x: x, thresh=0.9, window=5000, tdiff_threshold=10000):
n = filt_method(data)
n = n[~np.isnan(n)] # pylint: disable=no-member
d = np.diff(n) # pylint: disable=no-member
d = d[~np.isnan(d)] # pylint: disable=no-member
dmin = min(d)
dmax = max(d)
index_up = np.max((d > dmax * thresh).nonzero()) # pylint: disable=no-member
index_down = np.min((d < dmin * thresh).nonzero()) # pylint: disable=no-member
low_average = np.average(n[index_up:index_up + window]) # pylint: disable=no-member
high_average = np.average(n[index_down - window:index_down]) # pylint: disable=no-member
if low_average > high_average or index_down - index_up < tdiff_threshold:
return 0
else:
return low_average
if __name__ == '__main__':
import sys # pylint: disable=wrong-import-position,wrong-import-order
indir, outdir = sys.argv[1], sys.argv[2]
device_name = 'odroidxu3'
big_core = 'a15'
little_core = 'a7'
first_cluster_idle_state = -1
this_dir = os.path.dirname(__file__)
report_template_file = os.path.join(this_dir, REPORT_TEMPLATE_FILE)
em_template_file = os.path.join(this_dir, EM_TEMPLATE_FILE)
freq_power_table = pd.read_csv(os.path.join(indir, FREQ_TABLE_FILE))
measured_cpus_table, cpus_table = pd.read_csv(os.path.join(indir, CPUS_TABLE_FILE), # pylint: disable=unbalanced-tuple-unpacking
header=range(2), index_col=0)
idle_power_table = pd.read_csv(os.path.join(indir, IDLE_TABLE_FILE))
if not os.path.exists(outdir):
os.makedirs(outdir)
report_file = os.path.join(outdir, 'report.html')
em_file = os.path.join(outdir, '{}_em.c'.format(device_name))
em = build_energy_model(freq_power_table, cpus_table,
idle_power_table, first_cluster_idle_state)
em_text = generate_em_c_file(em, big_core, little_core,
em_template_file, em_file)
generate_report(freq_power_table, measured_cpus_table, cpus_table,
idle_power_table, report_template_file, device_name,
em_text, report_file)

View File

@@ -0,0 +1,51 @@
static struct idle_state idle_states_cluster_{{ little_core|lower }}[] = {
{% for entry in em.little_cluster_idle_states -%}
{ .power = {{ entry.power }}, },
{% endfor %}
};
static struct idle_state idle_states_cluster_{{ big_core|lower }}[] = {
{% for entry in em.big_cluster_idle_states -%}
{ .power = {{ entry.power }}, },
{% endfor %}
};
static struct capacity_state cap_states_cluster_{{ little_core|lower }}[] = {
/* Power per cluster */
{% for entry in em.little_cluster_cap_states -%}
{ .cap = {{ entry.cap }}, .power = {{ entry.power }}, },
{% endfor %}
};
static struct capacity_state cap_states_cluster_{{ big_core|lower }}[] = {
/* Power per cluster */
{% for entry in em.big_cluster_cap_states -%}
{ .cap = {{ entry.cap }}, .power = {{ entry.power }}, },
{% endfor %}
};
static struct idle_state idle_states_core_{{ little_core|lower }}[] = {
{% for entry in em.little_core_idle_states -%}
{ .power = {{ entry.power }}, },
{% endfor %}
};
static struct idle_state idle_states_core_{{ big_core|lower }}[] = {
{% for entry in em.big_core_idle_states -%}
{ .power = {{ entry.power }}, },
{% endfor %}
};
static struct capacity_state cap_states_core_{{ little_core|lower }}[] = {
/* Power per cpu */
{% for entry in em.little_core_cap_states -%}
{ .cap = {{ entry.cap }}, .power = {{ entry.power }}, },
{% endfor %}
};
static struct capacity_state cap_states_core_{{ big_core|lower }}[] = {
/* Power per cpu */
{% for entry in em.big_core_cap_states -%}
{ .cap = {{ entry.cap }}, .power = {{ entry.power }}, },
{% endfor %}
};

View File

@@ -0,0 +1,123 @@
<html>
<body>
<style>
.toggle-box {
display: none;
}
.toggle-box + label {
cursor: pointer;
display: block;
font-weight: bold;
line-height: 21px;
margin-bottom: 5px;
}
.toggle-box + label + div {
display: none;
margin-bottom: 10px;
}
.toggle-box:checked + label + div {
display: block;
}
.toggle-box + label:before {
background-color: #4F5150;
-webkit-border-radius: 10px;
-moz-border-radius: 10px;
border-radius: 10px;
color: #FFFFFF;
content: "+";
display: block;
float: left;
font-weight: bold;
height: 20px;
line-height: 20px;
margin-right: 5px;
text-align: center;
width: 20px;
}
.toggle-box:checked + label:before {
content: "\2212";
}
.document {
width: 800px;
margin-left:auto;
margin-right:auto;
}
img {
margin-left:auto;
margin-right:auto;
}
h1.title {
text-align: center;
}
</style>
<div class="document">
<h1 class="title">{{ device_name }} Energy Model Report</h1>
<h2>Power/Performance Analysis</h2>
<div>
<h3>Summary</h3>
At {{ cap_power_analysis.summary['frequency']|round(2) }} Hz<br />
big is {{ cap_power_analysis.summary['performance_ratio']|round(2) }} times faster<br />
big consumes {{ cap_power_analysis.summary['power_ratio']|round(2) }} times more power<br />
<br />
max performance: {{ cap_power_analysis.summary['max_performance']|round(2) }}<br />
max power: {{ cap_power_analysis.summary['max_power']|round(2) }}<br />
</div>
<div>
<h3>Single Core Power/Performance Plot</h3>
These are the traditional power-performance curves for the single-core runs.
<img align="middle" width="600px" src="data:image/png;base64,{{ cap_power_plot }}" />
</div>
<div>
<input class="toggle-box" id="freq_table" type="checkbox" >
<label for="freq_table">Expand view all power/performance data</label>
<div>
{{ freq_power_table }}
</div>
</div>
<div>
<h3>CPUs Power Plot</h3>
Each line corresponds to the cluster running at a different OPP. Each
point corresponds to the average power with a certain number of CPUs
executing. To get the contribution of the cluster itself, the lines are
extended to the left: the extrapolated intercept estimates the average power
of just the cluster.
<img align="middle" width="600px" src="data:image/png;base64,{{ cpus_plot }}" />
</div>
<div>
<input class="toggle-box" id="cpus_table" type="checkbox" >
<label for="cpus_table">Expand view CPUS power data</label>
<div>
{{ cpus_table }}
</div>
</div>
<div>
<h3>Idle Power</h3>
<img align="middle" width="600px" src="data:image/png;base64,{{ idle_power_plot }}" />
</div>
<div>
<input class="toggle-box" id="idle_power_table" type="checkbox" >
<label for="idle_power_table">Expand view idle power data</label>
<div>
{{ idle_power_table }}
</div>
</div>
</div>
</body>
</html>
<!-- vim: ft=htmljinja
-->

View File

@@ -53,6 +53,8 @@ class EnergyProbe(Instrument):
description="""The value of shunt resistors. This is a mandatory parameter."""),
Parameter('labels', kind=list, default=[],
description="""Meaningful labels for each of the monitored rails."""),
Parameter('device_entry', kind=str, default='/dev/ttyACM0',
description="""Path to /dev entry for the energy probe (it should be /dev/ttyACMx)"""),
]
MAX_CHANNELS = 3
@@ -84,7 +86,7 @@ class EnergyProbe(Instrument):
rstring = ""
for i, rval in enumerate(self.resistor_values):
rstring += '-r {}:{} '.format(i, rval)
self.command = 'caiman -l {} {}'.format(rstring, self.output_directory)
self.command = 'caiman -d {} -l {} {}'.format(self.device_entry, rstring, self.output_directory)
os.makedirs(self.output_directory)
def start(self, context):

View File

@@ -27,6 +27,10 @@ import tempfile
from distutils.version import LooseVersion
try:
import pandas as pd
except ImportError:
pd = None
from wlauto import Instrument, Parameter, IterationResult
from wlauto.instrumentation import instrument_is_installed
@@ -34,13 +38,9 @@ from wlauto.exceptions import (InstrumentError, WorkerThreadError, ConfigError,
DeviceNotRespondingError, TimeoutError)
from wlauto.utils.types import boolean, numeric
try:
import pandas as pd
except ImportError:
pd = None
VSYNC_INTERVAL = 16666667
PAUSE_LATENCY = 20
EPSYLON = 0.0001
@@ -70,6 +70,7 @@ class FpsInstrument(Instrument):
vsync cycle.
"""
supported_platforms = ['android']
parameters = [
Parameter('drop_threshold', kind=numeric, default=5,
@@ -80,10 +81,16 @@ class FpsInstrument(Instrument):
'except on loading screens, menus, etc, which '
'should not contribute to FPS calculation. '),
Parameter('keep_raw', kind=boolean, default=False,
description='If set to True, this will keep the raw dumpsys output '
description='If set to ``True``, this will keep the raw dumpsys output '
'in the results directory (this is mainly used for debugging). '
'Note: frames.csv with collected frames data will always be '
'generated regardless of this setting.'),
Parameter('generate_csv', kind=boolean, default=True,
description='If set to ``True``, this will produce temporal fps data '
'in the results directory, in a file named fps.csv. '
'Note: fps data will appear as discrete step-like values; '
'in order to produce a more meaningful representation, '
'a rolling mean can be applied.'),
Parameter('crash_check', kind=boolean, default=True,
description="""
Specifies whether the instrument should check for crashed content by examining
@@ -111,6 +118,7 @@ class FpsInstrument(Instrument):
super(FpsInstrument, self).__init__(device, **kwargs)
self.collector = None
self.outfile = None
self.fps_outfile = None
self.is_enabled = True
def validate(self):
@@ -124,6 +132,7 @@ class FpsInstrument(Instrument):
def setup(self, context):
workload = context.workload
if hasattr(workload, 'view'):
self.fps_outfile = os.path.join(context.output_directory, 'fps.csv')
self.outfile = os.path.join(context.output_directory, 'frames.csv')
self.collector = LatencyCollector(self.outfile, self.device, workload.view or '', self.keep_raw, self.logger)
self.device.execute(self.clear_command)
@@ -145,7 +154,10 @@ class FpsInstrument(Instrument):
if self.is_enabled:
data = pd.read_csv(self.outfile)
if not data.empty: # pylint: disable=maybe-no-member
self._update_stats(context, data)
per_frame_fps = self._update_stats(context, data)
if self.generate_csv:
per_frame_fps.to_csv(self.fps_outfile, index=False, header=True)
context.add_artifact('fps', path='fps.csv', kind='data')
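# As noted in the generate_csv description, the per-frame values in fps.csv
# are quantised to whole vsync periods and so look step-like; applying a
# rolling mean on the host (e.g. pd.rolling_mean in the pandas API used
# elsewhere in this codebase) yields a smoother representation.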
else:
context.result.add_metric('FPS', float('nan'))
context.result.add_metric('frame_count', 0)
@@ -168,14 +180,17 @@ class FpsInstrument(Instrument):
result.status = IterationResult.FAILED
result.add_event('Content crash detected (actual/expected frames: {:.2}).'.format(ratio))
def _update_stats(self, context, data):
def _update_stats(self, context, data): # pylint: disable=too-many-locals
vsync_interval = self.collector.refresh_period
actual_present_time_deltas = (data.actual_present_time - data.actual_present_time.shift()).drop(0) # pylint: disable=E1103
# filter out bogus frames.
actual_present_times = data.actual_present_time[data.actual_present_time != 0x7fffffffffffffff]
actual_present_time_deltas = (actual_present_times - actual_present_times.shift()).drop(0) # pylint: disable=E1103
vsyncs_to_compose = (actual_present_time_deltas / vsync_interval).apply(lambda x: int(round(x, 0)))
# drop values lower than drop_threshold FPS as real in-game frame
# rate is unlikely to drop below that (except on loading screens
# etc, which should not be factored in frame rate calculation).
keep_filter = (1.0 / (vsyncs_to_compose * (vsync_interval / 1e9))) > self.drop_threshold
per_frame_fps = (1.0 / (vsyncs_to_compose * (vsync_interval / 1e9)))
keep_filter = per_frame_fps > self.drop_threshold
filtered_vsyncs_to_compose = vsyncs_to_compose[keep_filter]
if not filtered_vsyncs_to_compose.empty:
total_vsyncs = filtered_vsyncs_to_compose.sum()
@@ -191,7 +206,7 @@ class FpsInstrument(Instrument):
vtc_deltas = filtered_vsyncs_to_compose - filtered_vsyncs_to_compose.shift()
vtc_deltas.index = range(0, vtc_deltas.size)
vtc_deltas = vtc_deltas.drop(0).abs()
janks = vtc_deltas.apply(lambda x: (x > EPSYLON) and 1 or 0).sum()
janks = vtc_deltas.apply(lambda x: (PAUSE_LATENCY > x > 1.5) and 1 or 0).sum()
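# a jank is counted when vsyncs-to-compose changes by more than 1.5 between
# consecutive frames; deltas of PAUSE_LATENCY or more are treated as
# deliberate pauses rather than janks.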
not_at_vsync = vsyncs_to_compose.apply(lambda x: (abs(x - 1.0) > EPSYLON) and 1 or 0).sum()
context.result.add_metric('janks', janks)
context.result.add_metric('not_at_vsync', not_at_vsync)
@@ -200,6 +215,8 @@ class FpsInstrument(Instrument):
context.result.add_metric('frame_count', 0)
context.result.add_metric('janks', 0)
context.result.add_metric('not_at_vsync', 0)
per_frame_fps.name = 'fps'
return per_frame_fps
class LatencyCollector(threading.Thread):

View File

@@ -0,0 +1,151 @@
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=access-member-before-definition,attribute-defined-outside-init
import os
from collections import OrderedDict
from wlauto import Instrument, Parameter
from wlauto.exceptions import ConfigError, InstrumentError
from wlauto.utils.misc import merge_dicts
from wlauto.utils.types import caseless_string
class FreqSweep(Instrument):
name = 'freq_sweep'
description = """
Sweeps workloads through all available frequencies on a device.
When enabled this instrument will take all workloads specified in an agenda
and run them at all available frequencies for all clusters.
Recommendations:
- Setting the runner to 'by_spec' increases the chance of successfully
completing an agenda without encountering hotplug issues
- If possible disable dynamic hotplug on the target device
"""
parameters = [
Parameter('sweeps', kind=list,
description="""
By default this instrument will sweep across all available
frequencies for all available clusters. If you wish to only
sweep across certain frequencies on particular clusters you
can do so by specifying this parameter.
Sweeps should be a list of dictionaries that can contain:
- Cluster (mandatory): The name of the cluster this sweep will be
performed on, e.g. A7.
- Frequencies: A list of frequencies (in kHz) to use. If this is
not provided, all frequencies available for this
cluster will be used.
E.g.: [800000, 900000, 100000]
- label: Workload specs will be named '{spec id}_{label}_{frequency}'.
If a label is not provided, it will be named 'sweep{sweep No.}'.
Example sweep specification:
freq_sweep:
sweeps:
- cluster: A53
label: littles
frequencies: [800000, 900000, 100000]
- cluster: A57
label: bigs
"""),
]
def validate(self):
if not self.device.core_names:
raise ConfigError('The Device does not appear to have core_names configured.')
def initialize(self, context): # pylint: disable=r0912
if not self.device.is_rooted:
raise InstrumentError('The device must be rooted to sweep frequencies')
if 'userspace' not in self.device.list_available_cluster_governors(0):
raise InstrumentError("'userspace' cpufreq governor must be enabled")
# If no sweeps were specified, create a default sweep for each core type
if not self.sweeps:
self.sweeps = []
for core in set(self.device.core_names):
sweep_spec = {}
sweep_spec['cluster'] = core
sweep_spec['label'] = core
self.sweeps.append(sweep_spec)
new_specs = []
old_specs = []
for job in context.runner.job_queue:
if job.spec not in old_specs:
old_specs.append(job.spec)
# Validate sweeps, add missing sections and create workload specs
for i, sweep_spec in enumerate(self.sweeps):
if 'cluster' not in sweep_spec:
raise ConfigError('cluster must be defined for all sweeps')
# Check if cluster exists on device
if caseless_string(sweep_spec['cluster']) not in self.device.core_names:
raise ConfigError('Only {} cores are present on this device, you specified {}'
.format(", ".join(set(self.device.core_names)), sweep_spec['cluster']))
# Default to all available frequencies
if 'frequencies' not in sweep_spec:
self.device.enable_cpu(self.device.core_names.index(sweep_spec['cluster']))
sweep_spec['frequencies'] = self.device.list_available_core_frequencies(sweep_spec['cluster'])
# Check that the given frequencies are valid for the cluster
else:
self.device.enable_cpu(self.device.core_names.index(sweep_spec['cluster']))
available_freqs = self.device.list_available_core_frequencies(sweep_spec['cluster'])
for freq in sweep_spec['frequencies']:
if freq not in available_freqs:
raise ConfigError('Frequency {} is not supported by {} cores'.format(freq, sweep_spec['cluster']))
# Add default labels
if 'label' not in sweep_spec:
sweep_spec['label'] = "sweep{}".format(i + 1)
new_specs.extend(self.get_sweep_workload_specs(old_specs, sweep_spec, context))
# Update config to reflect jobs that will actually run.
context.config.workload_specs = new_specs
config_file = os.path.join(context.host_working_directory, 'run_config.json')
with open(config_file, 'wb') as wfh:
context.config.serialize(wfh)
context.runner.init_queue(new_specs)
def get_sweep_workload_specs(self, old_specs, sweep_spec, context):
new_specs = []
for old_spec in old_specs:
for freq in sweep_spec['frequencies']:
spec = old_spec.copy()
if 'runtime_params' in sweep_spec:
spec.runtime_parameters = merge_dicts(spec.runtime_parameters,
sweep_spec['runtime_params'],
dict_type=OrderedDict)
if 'workload_params' in sweep_spec:
spec.workload_parameters = merge_dicts(spec.workload_parameters,
sweep_spec['workload_params'],
dict_type=OrderedDict)
spec.runtime_parameters['{}_governor'.format(sweep_spec['cluster'])] = "userspace"
spec.runtime_parameters['{}_frequency'.format(sweep_spec['cluster'])] = freq
spec.id = '{}_{}_{}'.format(spec.id, sweep_spec['label'], freq)
spec.classifiers['core'] = sweep_spec['cluster']
spec.classifiers['freq'] = freq
spec.load(self.device, context.config.ext_loader)
spec.workload.init_resources(context)
spec.workload.validate()
new_specs.append(spec)
return new_specs

View File

@@ -63,6 +63,7 @@ class HwmonInstrument(Instrument):
parameters = [
Parameter('sensors', kind=list_of_strs, default=['energy', 'temp'],
global_alias='hwmon_sensors',
description='The kinds of sensors the hwmon instrument will look for.')
]
@@ -82,13 +83,15 @@ class HwmonInstrument(Instrument):
self.sensors = []
def setup(self, context):
def initialize(self, context):
self.sensors = []
self.logger.debug('Searching for HWMON sensors.')
discovered_sensors = discover_sensors(self.device, self.sensor_kinds.keys())
for sensor in sorted(discovered_sensors, key=lambda s: HWMON_SENSOR_PRIORITIES.index(s.kind)):
self.logger.debug('Adding {}'.format(sensor.filepath))
self.sensors.append(sensor)
def setup(self, context):
for sensor in self.sensors:
sensor.clear_readings()
@@ -110,6 +113,8 @@ class HwmonInstrument(Instrument):
context.result.add_metric(sensor.label, diff, units)
elif report_type == 'before/after':
before, after = sensor.readings
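# report the mid-point of the before/after readings as the headline metric,
# alongside the raw before and after values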
mean = conversion((after + before) / 2)
context.result.add_metric(sensor.label, mean, units)
context.result.add_metric(sensor.label + ' before', conversion(before), units)
context.result.add_metric(sensor.label + ' after', conversion(after), units)
else:

View File

@@ -25,6 +25,15 @@ from operator import itemgetter
from wlauto import Instrument, File, Parameter
from wlauto.exceptions import InstrumentError
UNIT_MAP = {
'curr': 'Amps',
'volt': 'Volts',
'cenr': 'Joules',
'pow': 'Watts',
}
JUNO_MAX_INT = 0x7fffffffffffffff
class JunoEnergy(Instrument):
@@ -43,7 +52,15 @@ class JunoEnergy(Instrument):
parameters = [
Parameter('period', kind=float, default=0.1,
description='Specifies the time, in Seconds, between polling energy counters.'),
description="""
Specifies the time, in Seconds, between polling energy counters.
"""),
Parameter('strict', kind=bool, default=True,
description="""
Setting this to ``False`` will omit the check that the ``device`` is
``"juno"``. This is useful if the underlying board is actually Juno
but WA connects via a different interface (e.g. ``generic_linux``).
"""),
]
def on_run_init(self, context):
@@ -64,14 +81,29 @@ class JunoEnergy(Instrument):
self.device.killall('readenergy', signal='TERM', as_root=True)
def update_result(self, context):
self.device.pull_file(self.device_output_file, self.host_output_file)
self.device.pull(self.device_output_file, self.host_output_file)
context.add_artifact('junoenergy', self.host_output_file, 'data')
with open(self.host_output_file) as fh:
reader = csv.reader(fh)
headers = reader.next()
columns = zip(*reader)
for header, data in zip(headers, columns):
data = map(float, data)
if header.endswith('cenr'):
value = data[-1] - data[0]
if value < 0: # counter wrapped
value = JUNO_MAX_INT + value
else: # not cumulative energy
value = sum(data) / len(data)
context.add_metric(header, value, UNIT_MAP[header.split('_')[-1]])
def teardown(self, context):
self.device.delete_file(self.device_output_file)
self.device.remove(self.device_output_file)
def validate(self):
if self.device.name.lower() != 'juno':
message = 'juno_energy instrument is only supported on juno devices; found {}'
raise InstrumentError(message.format(self.device.name))
if self.strict:
if self.device.name.lower() != 'juno':
message = 'juno_energy instrument is only supported on juno devices; found {}'
raise InstrumentError(message.format(self.device.name))

View File

@@ -33,9 +33,11 @@ import tarfile
from itertools import izip, izip_longest
from subprocess import CalledProcessError
from devlib.exception import TargetError
from wlauto import Instrument, Parameter
from wlauto.core import signal
from wlauto.exceptions import DeviceError
from wlauto.exceptions import ConfigError
from wlauto.utils.misc import diff_tokens, write_table, check_output, as_relative
from wlauto.utils.misc import ensure_file_directory_exists as _f
from wlauto.utils.misc import ensure_directory_exists as _d
@@ -58,12 +60,23 @@ class SysfsExtractor(Instrument):
mount_command = 'mount -t tmpfs -o size={} tmpfs {}'
extract_timeout = 30
tarname = 'sysfs.tar.gz'
DEVICE_PATH = 0
BEFORE_PATH = 1
AFTER_PATH = 2
DIFF_PATH = 3
parameters = [
Parameter('paths', kind=list_of_strings, mandatory=True,
description="""A list of paths to be pulled from the device. These could be directories
as well as files.""",
global_alias='sysfs_extract_dirs'),
Parameter('use_tmpfs', kind=bool, default=None,
description="""
Specifies whether tmpfs should be used to cache sysfile trees and then pull them down
as a tarball. This is significantly faster than just copying the directory trees from
the device directly, but requires root and may not work on all devices. Defaults to
``True`` if the device is rooted and ``False`` if it is not.
"""),
Parameter('tmpfs_mount_point', default=None,
description="""Mount point for tmpfs partition used to store snapshots of paths."""),
Parameter('tmpfs_size', default='32m',
@@ -71,7 +84,12 @@ class SysfsExtractor(Instrument):
]
def initialize(self, context):
if self.device.is_rooted:
if not self.device.is_rooted and self.use_tmpfs: # pylint: disable=access-member-before-definition
raise ConfigError('use_tmpfs must be False for an unrooted device.')
elif self.use_tmpfs is None: # pylint: disable=access-member-before-definition
self.use_tmpfs = self.device.is_rooted
if self.use_tmpfs:
self.on_device_before = self.device.path.join(self.tmpfs_mount_point, 'before')
self.on_device_after = self.device.path.join(self.tmpfs_mount_point, 'after')
@@ -81,20 +99,21 @@ class SysfsExtractor(Instrument):
as_root=True)
def setup(self, context):
self.before_dirs = [
before_dirs = [
_d(os.path.join(context.output_directory, 'before', self._local_dir(d)))
for d in self.paths
]
self.after_dirs = [
after_dirs = [
_d(os.path.join(context.output_directory, 'after', self._local_dir(d)))
for d in self.paths
]
self.diff_dirs = [
diff_dirs = [
_d(os.path.join(context.output_directory, 'diff', self._local_dir(d)))
for d in self.paths
]
self.device_and_host_paths = zip(self.paths, before_dirs, after_dirs, diff_dirs)
if self.device.is_rooted:
if self.use_tmpfs:
for d in self.paths:
before_dir = self.device.path.join(self.on_device_before,
self.device.path.dirname(as_relative(d)))
@@ -108,60 +127,67 @@ class SysfsExtractor(Instrument):
self.device.execute('mkdir -p {}'.format(after_dir), as_root=True)
def slow_start(self, context):
if self.device.is_rooted:
if self.use_tmpfs:
for d in self.paths:
dest_dir = self.device.path.join(self.on_device_before, as_relative(d))
if '*' in dest_dir:
dest_dir = self.device.path.dirname(dest_dir)
self.device.execute('busybox cp -Hr {} {}'.format(d, dest_dir),
self.device.execute('{} cp -Hr {} {}'.format(self.device.busybox, d, dest_dir),
as_root=True, check_exit_code=False)
else: # not rooted
for dev_dir, before_dir in zip(self.paths, self.before_dirs):
self.device.pull_file(dev_dir, before_dir)
for dev_dir, before_dir, _, _ in self.device_and_host_paths:
self.device.pull(dev_dir, before_dir)
def slow_stop(self, context):
if self.device.is_rooted:
if self.use_tmpfs:
for d in self.paths:
dest_dir = self.device.path.join(self.on_device_after, as_relative(d))
if '*' in dest_dir:
dest_dir = self.device.path.dirname(dest_dir)
self.device.execute('busybox cp -Hr {} {}'.format(d, dest_dir),
self.device.execute('{} cp -Hr {} {}'.format(self.device.busybox, d, dest_dir),
as_root=True, check_exit_code=False)
else: # not rooted
for dev_dir, after_dir in zip(self.paths, self.after_dirs):
self.device.pull_file(dev_dir, after_dir)
else: # not using tmpfs
for dev_dir, _, after_dir, _ in self.device_and_host_paths:
self.device.pull(dev_dir, after_dir)
def update_result(self, context):
if self.device.is_rooted:
if self.use_tmpfs:
on_device_tarball = self.device.path.join(self.device.working_directory, self.tarname)
on_host_tarball = self.device.path.join(context.output_directory, self.tarname)
self.device.execute('busybox tar czf {} -C {} .'.format(on_device_tarball, self.tmpfs_mount_point),
self.device.execute('{} tar czf {} -C {} .'.format(self.device.busybox,
on_device_tarball,
self.tmpfs_mount_point),
as_root=True)
self.device.execute('chmod 0777 {}'.format(on_device_tarball), as_root=True)
self.device.pull_file(on_device_tarball, on_host_tarball)
self.device.pull(on_device_tarball, on_host_tarball)
with tarfile.open(on_host_tarball, 'r:gz') as tf:
tf.extractall(context.output_directory)
self.device.delete_file(on_device_tarball)
self.device.remove(on_device_tarball)
os.remove(on_host_tarball)
for after_dir in self.after_dirs:
if not os.listdir(after_dir):
for paths in self.device_and_host_paths:
after_dir = paths[self.AFTER_PATH]
dev_dir = paths[self.DEVICE_PATH].strip('*') # remove potential trailing '*'
if (not os.listdir(after_dir) and
self.device.file_exists(dev_dir) and
self.device.listdir(dev_dir)):
self.logger.error('sysfs files were not pulled from the device.')
return
for diff_dir, before_dir, after_dir in zip(self.diff_dirs, self.before_dirs, self.after_dirs):
self.device_and_host_paths.remove(paths) # Path is removed to skip diffing it
for _, before_dir, after_dir, diff_dir in self.device_and_host_paths:
_diff_sysfs_dirs(before_dir, after_dir, diff_dir)
def teardown(self, context):
self._one_time_setup_done = []
def finalize(self, context):
if self.device.is_rooted:
if self.use_tmpfs:
try:
self.device.execute('umount {}'.format(self.tmpfs_mount_point), as_root=True)
except (DeviceError, CalledProcessError):
except (TargetError, CalledProcessError):
# assume a directory but not mount point
pass
self.device.execute('rm -rf {}'.format(self.tmpfs_mount_point), as_root=True)
self.device.execute('rm -rf {}'.format(self.tmpfs_mount_point),
as_root=True, check_exit_code=False)
def validate(self):
if not self.tmpfs_mount_point: # pylint: disable=access-member-before-definition
@@ -274,7 +300,7 @@ class DynamicFrequencyInstrument(SysfsExtractor):
def setup(self, context):
self.paths = ['/sys/devices/system/cpu']
if self.device.is_rooted:
if self.use_tmpfs:
self.paths.append('/sys/class/devfreq/*') # the '*' would cause problems for adb pull.
super(DynamicFrequencyInstrument, self).setup(context)
@@ -362,4 +388,3 @@ def _diff_sysfs_dirs(before, after, result): # pylint: disable=R0914
else:
dchunks = [diff_tokens(b, a) for b, a in zip(bchunks, achunks)]
dfh.write(''.join(dchunks))

View File

@@ -0,0 +1,192 @@
import os
import re
import csv
import tempfile
import logging
from datetime import datetime
from collections import defaultdict
from itertools import izip_longest
from wlauto import Instrument, Parameter
from wlauto import ApkFile
from wlauto.exceptions import DeviceError, HostError
from wlauto.utils.android import ApkInfo
from wlauto.utils.types import list_of_strings
THIS_DIR = os.path.dirname(__file__)
NETSTAT_REGEX = re.compile(r'I/(?P<tag>netstats-\d+)\(\s*\d*\): (?P<ts>\d+) '
r'"(?P<package>[^"]+)" TX: (?P<tx>\S+) RX: (?P<rx>\S+)')
def extract_netstats(filepath, tag=None):
netstats = []
with open(filepath) as fh:
for line in fh:
match = NETSTAT_REGEX.search(line)
if not match:
continue
if tag and match.group('tag') != tag:
continue
netstats.append((match.group('tag'),
match.group('ts'),
match.group('package'),
match.group('tx'),
match.group('rx')))
return netstats
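# Illustrative example (hypothetical logcat line, not from a real device):
#   I/netstats-20160101(  1234): 1451645800 "com.example.app" TX: 1024 RX: 2048
# would yield ('netstats-20160101', '1451645800', 'com.example.app', '1024', '2048').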
def netstats_to_measurements(netstats):
measurements = defaultdict(list)
for row in netstats:
tag, ts, package, tx, rx = row # pylint: disable=unused-variable
measurements[package + '_tx'].append(tx)
measurements[package + '_rx'].append(rx)
return measurements
def write_measurements_csv(measurements, filepath):
headers = sorted(measurements.keys())
columns = [measurements[h] for h in headers]
with open(filepath, 'wb') as wfh:
writer = csv.writer(wfh)
writer.writerow(headers)
writer.writerows(izip_longest(*columns))
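# The measurement lists can have different lengths (packages appear and
# disappear over time), so izip_longest pads the shorter columns with blanks.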
class NetstatsCollector(object):
def __init__(self, target, apk, service='.TrafficMetricsService'):
"""
Additional parameters:
:apk: Path to the APK file that contains ``com.arm.devlab.netstats``
package. If not specified, it will be assumed that an APK with
name "netstats.apk" is located in the same directory as the
Python module for the instrument.
:service: Name of the service to be launched. This service must be
present in the APK.
"""
self.target = target
self.apk = apk
self.logger = logging.getLogger('netstat')
self.package = ApkInfo(self.apk).package
self.service = service
self.tag = None
self.command = None
self.stop_command = 'am kill {}'.format(self.package)
def setup(self, force=False):
if self.target.package_is_installed(self.package):
if force:
self.logger.debug('Re-installing {} (forced)'.format(self.package))
self.target.uninstall(self.package)
self.target.install(self.apk, timeout=300)
else:
self.logger.debug('{} already present on target'.format(self.package))
else:
self.logger.debug('Deploying {} to target'.format(self.package))
self.target.install(self.apk)
def reset(self, sites=None, period=None):
period_arg, packages_arg = '', ''
self.tag = 'netstats-{}'.format(datetime.now().strftime('%Y%m%d%H%M%s'))
tag_arg = ' --es tag {}'.format(self.tag)
if sites:
packages_arg = ' --es packages {}'.format(','.join(sites))
if period:
period_arg = ' --ei period {}'.format(period)
self.command = 'am startservice{}{}{} {}/{}'.format(tag_arg,
period_arg,
packages_arg,
self.package,
self.service)
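# The assembled command looks like (illustrative values only):
#   am startservice --es tag netstats-20160101120000 --ei period 5 \
#       --es packages com.example.app com.arm.devlab.netstats/.TrafficMetricsService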
self.target.execute(self.stop_command) # ensure the service is not running.
def start(self):
if self.command is None:
raise RuntimeError('reset() must be called before start()')
self.target.execute(self.command)
def stop(self):
self.target.execute(self.stop_command)
def get_data(self, outfile):
raw_log_file = tempfile.mktemp()
self.target.dump_logcat(raw_log_file)
data = extract_netstats(raw_log_file)
measurements = netstats_to_measurements(data)
write_measurements_csv(measurements, outfile)
os.remove(raw_log_file)
def teardown(self):
self.target.uninstall(self.package)
class NetstatsInstrument(Instrument):
# pylint: disable=unused-argument
name = 'netstats'
description = """
Measures transmit/receive network traffic on an Android device on a
per-package basis.
"""
parameters = [
Parameter('packages', kind=list_of_strings,
description="""
List of Android packages whose traffic will be monitored. If
unspecified, all packages on the device will be monitored.
"""),
Parameter('period', kind=int, default=5,
description="""
Polling period for instrumentation on the device. Traffic statistics
will be updated every ``period`` seconds.
"""),
Parameter('force_reinstall', kind=bool, default=False,
description="""
If ``True``, the instrumentation APK will always be re-installed even if
it is already installed on the device.
"""),
Parameter('uninstall_on_completion', kind=bool, default=False,
global_alias='cleanup',
description="""
If ``True``, instrumentation will be uninstalled upon run completion.
"""),
]
def initialize(self, context):
if self.device.os != 'android':
raise DeviceError('netstats instrument only supports Android devices.')
apk = context.resolver.get(ApkFile(self))
self.collector = NetstatsCollector(self.device, apk) # pylint: disable=attribute-defined-outside-init
self.collector.setup(force=self.force_reinstall)
def setup(self, context):
self.collector.reset(sites=self.packages, period=self.period)
def start(self, context):
self.collector.start()
def stop(self, context):
self.collector.stop()
def update_result(self, context):
outfile = os.path.join(context.output_directory, 'netstats.csv')
self.collector.get_data(outfile)
context.add_artifact('netstats', outfile, kind='data')
with open(outfile, 'rb') as fh:
reader = csv.reader(fh)
metrics = reader.next()
data = [c for c in izip_longest(*list(reader))]
for name, values in zip(metrics, data):
value = sum(map(int, [v for v in values if v]))
context.add_metric(name, value, units='bytes')
def finalize(self, context):
if self.uninstall_on_completion:
self.collector.teardown()

Binary file not shown.

View File

@@ -30,7 +30,7 @@ PERF_COMMAND_TEMPLATE = '{} stat {} {} sleep 1000 > {} 2>&1 '
DEVICE_RESULTS_FILE = '/data/local/perf_results.txt'
HOST_RESULTS_FILE_BASENAME = 'perf.txt'
PERF_COUNT_REGEX = re.compile(r'^\s*(\d+)\s*(.*?)\s*(\[\s*\d+\.\d+%\s*\])?\s*$')
PERF_COUNT_REGEX = re.compile(r'^(CPU\d+)?\s*(\d+)\s*(.*?)\s*(\[\s*\d+\.\d+%\s*\])?\s*$')
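# The optional leading CPU group supports per-cpu output (perf stat -A), e.g.
# (illustrative lines): 'CPU0  1234  migrations' as well as aggregate lines
# such as '1234  cs  [ 100.00% ]'.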
class PerfInstrument(Instrument):
@@ -69,24 +69,33 @@ class PerfInstrument(Instrument):
parameters = [
Parameter('events', kind=list_of_strs, default=['migrations', 'cs'],
global_alias='perf_events',
constraint=(lambda x: x, 'must not be empty.'),
description="""Specifies the events to be counted."""),
Parameter('optionstring', kind=list_or_string, default='-a',
global_alias='perf_options',
description="""Specifies options to be used for the perf command. This
may be a list of option strings, in which case, multiple instances of perf
will be kicked off -- one for each option string. This may be used to, e.g.,
collect different events from different big.LITTLE clusters.
"""),
Parameter('labels', kind=list_of_strs, default=None,
global_alias='perf_labels',
description="""Provides labels for pref output. If specified, the number of
labels must match the number of ``optionstring``\ s.
"""),
Parameter('force_install', kind=bool, default=False,
description="""
Always install the perf binary even if perf is already present on the device.
"""),
]
def on_run_init(self, context):
if not self.device.is_installed('perf'):
binary = context.resolver.get(Executable(self, self.device.abi, 'perf'))
self.device.install(binary)
binary = context.resolver.get(Executable(self, self.device.abi, 'perf'))
if self.force_install:
self.binary = self.device.install(binary)
else:
self.binary = self.device.install_if_needed(binary)
self.commands = self._build_commands()
def setup(self, context):
@@ -97,14 +106,15 @@ class PerfInstrument(Instrument):
self.device.kick_off(command)
def stop(self, context):
self.device.killall('sleep')
as_root = self.device.os == 'android'
self.device.killall('sleep', as_root=as_root)
def update_result(self, context):
for label in self.labels:
device_file = self._get_device_outfile(label)
host_relpath = os.path.join('perf', os.path.basename(device_file))
host_file = _f(os.path.join(context.output_directory, host_relpath))
self.device.pull_file(device_file, host_file)
self.device.pull(device_file, host_file)
context.add_iteration_artifact(label, kind='raw', path=host_relpath)
with open(host_file) as fh:
in_results_section = False
@@ -120,9 +130,13 @@ class PerfInstrument(Instrument):
line = line.split('#')[0] # comment
match = PERF_COUNT_REGEX.search(line)
if match:
count = int(match.group(1))
metric = '{}_{}'.format(label, match.group(2))
context.result.add_metric(metric, count)
classifiers = {}
cpu = match.group(1)
if cpu is not None:
classifiers['cpu'] = int(cpu.replace('CPU', ''))
count = int(match.group(2))
metric = '{}_{}'.format(label, match.group(3))
context.result.add_metric(metric, count, classifiers=classifiers)
def teardown(self, context): # pylint: disable=R0201
self._clean_device()
@@ -138,7 +152,7 @@ class PerfInstrument(Instrument):
self.events = [self.events]
if not self.labels: # pylint: disable=E0203
self.labels = ['perf_{}'.format(i) for i in xrange(len(self.optionstrings))]
if not len(self.labels) == len(self.optionstrings):
if len(self.labels) != len(self.optionstrings):
raise ConfigError('The number of labels must match the number of optstrings provided for perf.')
def _build_commands(self):
@@ -151,26 +165,15 @@ class PerfInstrument(Instrument):
def _clean_device(self):
for label in self.labels:
filepath = self._get_device_outfile(label)
self.device.delete_file(filepath)
self.device.remove(filepath)
def _get_device_outfile(self, label):
return self.device.path.join(self.device.working_directory, '{}.out'.format(label))
def _build_perf_command(self, options, events, label):
event_string = ' '.join(['-e {}'.format(e) for e in events])
command = PERF_COMMAND_TEMPLATE.format('perf',
command = PERF_COMMAND_TEMPLATE.format(self.binary,
options or '',
event_string,
self._get_device_outfile(label))
return command
class CCIPerfEvent(object):
def __init__(self, name, config):
self.name = name
self.config = config
def __str__(self):
return 'CCI/config={config},name={name}/'.format(**self.__dict__)

View File

@@ -60,6 +60,7 @@ class CciPmuLogger(Instrument):
parameters = [
Parameter('events', kind=list, default=DEFAULT_EVENTS,
global_alias='cci_pmu_events',
description="""
A list of strings, each representing an event to be counted. The length
of the list cannot exceed the number of PMU counters available (4 in CCI-400).
@@ -67,14 +68,17 @@ class CciPmuLogger(Instrument):
clusters will be counted by default. E.g. ``['0x63', '0x83']``.
"""),
Parameter('event_labels', kind=list, default=[],
global_alias='cci_pmu_event_labels',
description="""
A list of labels to be used when reporting PMU counts. If specified,
this must be of the same length as ``cci_pmu_events``. If not specified,
events will be labeled "event_<event_number>".
"""),
Parameter('period', kind=int, default=10,
global_alias='cci_pmu_period',
description='The period (in jiffies) between counter reads.'),
Parameter('install_module', kind=boolean, default=True,
global_alias='cci_pmu_install_module',
description="""
Specifies whether pmu_logger has been compiled as a .ko module that needs
to be installed by the instrument. (.ko binary must be in {}). If this is set
@@ -87,21 +91,21 @@ class CciPmuLogger(Instrument):
if self.install_module:
self.device_driver_file = self.device.path.join(self.device.working_directory, DRIVER)
host_driver_file = os.path.join(settings.dependencies_directory, DRIVER)
self.device.push_file(host_driver_file, self.device_driver_file)
self.device.push(host_driver_file, self.device_driver_file)
def setup(self, context):
if self.install_module:
self.device.execute('insmod {}'.format(self.device_driver_file), check_exit_code=False)
self.device.set_sysfile_value(CPL_PERIOD_FILE, self.period)
self.device.write_value(CPL_PERIOD_FILE, self.period)
for i, event in enumerate(self.events):
counter = CPL_BASE + 'counter{}'.format(i)
self.device.set_sysfile_value(counter, event, verify=False)
self.device.write_value(counter, event, verify=False)
def start(self, context):
self.device.set_sysfile_value(CPL_CONTROL_FILE, 1, verify=False)
self.device.write_value(CPL_CONTROL_FILE, 1, verify=False)
def stop(self, context):
self.device.set_sysfile_value(CPL_CONTROL_FILE, 1, verify=False)
self.device.write_value(CPL_CONTROL_FILE, 1, verify=False)
# Doing result processing inside teardown because we need to make sure that
# trace-cmd has processed its results and generated the trace.txt
@@ -142,7 +146,7 @@ class CciPmuLogger(Instrument):
raise ConfigError('To use cci_pmu_logger, trace-cmd instrument must also be enabled.')
if not self.event_labels: # pylint: disable=E0203
self.event_labels = ['event_{}'.format(e) for e in self.events]
elif not len(self.events) == len(self.event_labels):
elif len(self.events) != len(self.event_labels):
raise ConfigError('cci_pmu_events and cci_pmu_event_labels must be of the same length.')
if len(self.events) > NUMBER_OF_CCI_PMU_COUNTERS:
raise ConfigError('The number of cci_pmu_events must be at most {}'.format(NUMBER_OF_CCI_PMU_COUNTERS))

View File

@@ -0,0 +1,80 @@
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=unused-argument
import time
import threading
from wlauto import Instrument, Parameter
from wlauto.exceptions import InstrumentError
class ScreenMonitor(threading.Thread):
def __init__(self, device, polling_period):
super(ScreenMonitor, self).__init__()
self.device = device
self.polling_period = polling_period
self.stop_event = threading.Event()
def run(self):
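# sleep in one-second slices so that stop() takes effect promptly, while
# only touching the device once every polling_period seconds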
last_poll = time.time()
while not self.stop_event.is_set():
time.sleep(1)
if (time.time() - last_poll) >= self.polling_period:
self.device.ensure_screen_is_on()
last_poll = time.time()
def stop(self):
self.stop_event.set()
self.join()
class ScreenOnInstrument(Instrument):
# pylint: disable=attribute-defined-outside-init
name = 'screenon'
description = """
Ensure screen is on before each iteration on Android devices.
A very basic instrument that checks that the screen is on on Android devices. Optionally,
it can poll the device periodically to ensure that the screen is still on.
"""
parameters = [
Parameter('polling_period', kind=int,
description="""
Set this to a non-zero value to enable periodic (every
``polling_period`` seconds) polling of the screen on
the device to ensure it is on during a run.
"""),
]
def initialize(self, context):
self.monitor = None
if self.device.os != 'android':
raise InstrumentError('screenon instrument currently only supports Android devices.')
def slow_setup(self, context): # slow to run before most other setups
self.device.ensure_screen_is_on()
if self.polling_period:
self.monitor = ScreenMonitor(self.device, self.polling_period)
self.monitor.start()
def teardown(self, context):
if self.polling_period:
self.monitor.stop()
