mirror of https://github.com/ARM-software/workload-automation.git synced 2025-07-14 19:13:37 +01:00

289 Commits
v3.1.1 ... v3.3

Author SHA1 Message Date
0e2a150170 fw/version: Bump release versions 2020-12-11 16:31:13 +00:00
69378b0873 Dockerfile: Bump release version 2020-12-11 16:31:13 +00:00
c543c49423 requirements.txt: Update to latest tested versions 2020-12-11 16:31:13 +00:00
dd07d2ec43 workloads/speedometer: Fix markdown formatting in docstring 2020-12-11 16:26:49 +00:00
94590e88ee docs/changes: Update changelog for v3.3 2020-12-11 16:26:49 +00:00
c2725ffaa2 fw/uiauto: Update to handle additional permissions screen
On the latest version of Android (currently Q), for applications that
are designed to run on older versions of Android, an additional
screen asking to confirm the required permissions can pop up.
Enable confirming of the granted permissions.
2020-12-10 15:57:30 +00:00
751bbb19fe docs: Update Signal Dispatch diagram
Update the signal dispatch diagram to include the new optional reboot stage.
2020-12-09 07:48:20 +00:00
ae1bc2c031 fw/config: Add additional run_completed reboot policy
Add an additional `run_completed` reboot policy for when a run
has finished.
This complements the `initial` reboot policy and aims to leave
the device in a fresh state after WA has finished executing.
2020-12-09 07:48:20 +00:00
91b791665a workloads/googleplaybooks: Update to handle updated IDs
Resource IDs and classes have been modified, so update the
workload to handle these cases.
Additionally, on some devices regex matches appear to fail,
so work around this by matching separately.
2020-11-29 19:42:30 +00:00
62c4f3837c workloads/googleslides: Update to accommodate newer versions
Add support for a newer version of the APK.
Also add support for differing screen sizes: on larger devices
the direction of the swipe to change slides differs, so perform
both horizontal and vertical swipes to satisfy both layouts.
2020-11-29 19:42:30 +00:00
3c5bece01e workloads/aitutu: Improve reliability of results extraction
Wait for the device to become idle before attempting to extract
the test scores.
2020-11-29 19:42:30 +00:00
cb51ef4d47 workloads/googlephotos: Update to handle new popup
Bump the minor version of the known working APK and handle a new
"missing out" popup.
2020-11-29 19:42:30 +00:00
8e56a4c831 utils/doc: Fix output for lambda function
The "name" can be in the format "<class>.<lambda>" so
update to allow correct function with the updated format.
2020-11-13 16:27:39 +00:00
76032c1d05 workloads/rt_app: Remove timeout in file transfer
Remove the explicit timeout when pushing to the device.
Allow the polling mechanism to monitor the transfer if required.
2020-11-13 15:42:00 +00:00
4c20fe814a workloads/exoplayer: Remove timeout in file transfer
Remove the explicit timeout when pushing a media file to the device.
Allow the polling mechanism to monitor the transfer.
2020-11-13 15:42:00 +00:00
92e253d838 workloads/aitutu: Handle additional popup on launch
Allow agreeing to an updated Terms agreement on launch
2020-11-13 11:31:50 +00:00
18439e3b31 workloads/youtube: Update Youtube workload
The previous known working version of the youtube apk appears
to have stopped working. Update to support the new format.
2020-11-12 15:03:01 +00:00
5cfe452a35 fw/version: Bump dev version.
We are relying on a newly available variable in devlib
so bump the version to remain in sync.
2020-11-09 17:56:16 +00:00
f1aff6b5a8 fw/descriptor: Update sudo_cmd default
The WA default `sudo_cmd` is out of date compared to devlib.
Update the parameter to use the value directly from devlib
to prevent these from being out of sync in the future.
2020-11-09 17:56:16 +00:00
5dd3abe564 fw/execution: Fix Typos 2020-11-04 18:27:34 +00:00
e3ab798f6e wl/speedometer: Ensure test package is installed.
Check that the package specified for the test is installed on the
device.
2020-11-04 18:27:34 +00:00
ed925938dc fw/version: Development version bump
Bump the development version to synchronise the parameters for transfer
polling.
2020-11-03 10:02:08 +00:00
ed4eb8af5d target/descriptor: Add connection config for polls
Adds parameters needed for WA to support file transfer polling.

``poll_transfers`` of type ``bool``, default ``True`` sets whether
transfers should be polled

``transfer_wait_no_poll`` controls the initial time in seconds that the
poller should wait for the transfer to complete before polling its
progress.
2020-11-03 10:01:52 +00:00
a1bdb7de45 config/core: merge params from different files
Ensure that runtime and workload parameters specified across multiple
config files and the config section of the agenda are merged rather than
overwritten.
2020-10-30 12:05:30 +00:00
fbe9460995 pylint: Remove unnecessary pass statements 2020-10-30 11:49:54 +00:00
aa4df95a69 pep8: Ignore line break before binary operator
PEP8 has switched its guidance [1] for where a line break should occur
in relation to a binary operator, so don't raise this warning for
new code and update the code base to follow the new style.

[1] https://www.python.org/dev/peps/pep-0008/#should-a-line-break-before-or-after-a-binary-operator
2020-10-30 11:49:54 +00:00
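For reference, the newly adopted convention keeps each operator next to its operand by breaking *before* the operator, as in PEP 8's own example:

```python
gross_wages = 1000
taxable_interest = 50
qualified_dividends = 20

# Preferred: break before the binary operator, so each operator lines
# up with the operand it applies to.
income = (gross_wages
          + taxable_interest
          - qualified_dividends)
print(income)
```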
fbb84eca72 Pylint Fixes
Update our version of pylint to use the latest version and update the
codebase to comply with the majority of the updates.

For now disable the additional checks for `super-with-arguments`,
`useless-object-inheritance`, `raise-missing-from`, `no-else-raise`,
`no-else-break`, `no-else-continue` to be consistent with the existing
codebase.
2020-10-30 11:49:54 +00:00
fbd6f4e90c fw/tm: Only finalize the assistant if instantiated 2020-10-30 11:47:56 +00:00
1c08360263 fw/runtime_config: Fix case where no available gov for cpu
Ensure there is a default iterable value in the case there is no
governor entry for a particular cpu.
2020-10-09 08:29:20 +01:00
ff220dfb44 pcmark: do not clear on reset
The PCMark Work2.0 data-set is cleared and downloaded before each run. This
operation is time-consuming and pollutes the benchmark instrumentation.
Disabling clear_data_on_reset for the PCMark workload bypasses this
per-run download.
2020-09-16 18:58:24 +01:00
7489b487e1 workloads/speedometer: offline version of speedometer
This version replaces the previous uiauto version of Speedometer with a new
version.

* Supports both chrome and chromium again; this is selected with the
  chrome_package parameter.
* No longer needs internet access.
* Version 1.0 of Speedometer is no longer supported.
* Requires root:
  - sometimes uiautomator dump doesn't capture the score if not run as root
  - need to modify the browser's XML preferences file to bypass T&C acceptance
    screen
2020-09-16 18:58:01 +01:00
ba5a65aad7 target/runtime_config: Add support for stay-on
Adds runtime config support for the android setting
``stay_on_while_plugged_in``.
2020-09-10 15:53:03 +01:00
7bea3a69bb tests: Add tests to check job finalized
Adds tests to check that jobs are finalized after being skipped or
failed, and that jobs are labelled as skipped.
2020-09-10 15:52:20 +01:00
971289698b core,execution: Add run skipping on job failure
Add a global configuration parameter ``bail_on_job_failure`` that
allows all remaining jobs in a run to be skipped should a job fail its
initial execution and its retries. This is by default disabled.
2020-09-10 15:52:20 +01:00
66e220d444 docs/installation: Rephrase dependency information
Reduce emphasis on the Android SDK requirements to prevent confusion when
running with Linux only targets.
2020-09-03 11:40:24 +01:00
ae8a7bdfb5 docs/installation: Update WA's tested platform
Update WA's tested platform to a later version.
2020-09-03 11:40:24 +01:00
b0355194bc docs/installation: Add note about python3 commands 2020-09-03 11:40:24 +01:00
7817308bf7 docs/user_information: Fix formatting and typos 2020-09-03 11:40:24 +01:00
ab9e29bdae framework/output: Speedup discover_wa_outputs()
Avoid recursing into subdirectories of folders containing __meta, since
they are not of interest and recursing can take a very large amount of
time if there are a lot of files, e.g. if there is a sysfs dump.
2020-08-05 13:17:15 +01:00
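The pruning idea can be sketched with `os.walk`, whose `dirnames` list may be modified in place to stop descent (a simplified illustration, not WA's exact code):

```python
import os

def find_output_dirs(root):
    """Yield WA output directories (those containing __meta) without
    recursing into their contents."""
    for dirpath, dirnames, _ in os.walk(root):
        if '__meta' in dirnames:
            yield dirpath
            # Prune the walk here: the output's subdirectories (e.g. a
            # large sysfs dump) are not of interest.
            dirnames[:] = []

for output_dir in find_output_dirs('.'):
    print(output_dir)
```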
9edb6b20f0 postgres_schemas: Add rules for cascading deletes
Add cascading deletes to foreign keys as well as a rule to delete large
objects when artifacts are deleted.

Deleting a run entry should delete all dependent data of that run.
2020-07-17 13:52:50 +01:00
879a491691 README: Update with more specific supported python version. 2020-07-16 12:20:02 +01:00
7086fa6b48 target: Force consistent logcat format
On some devices the default logcat format was inconsistent with what was
expected. This change explicitly sets the logcat format to be as
expected.
2020-07-16 11:38:06 +01:00
716e59daf5 framework/target: Add logcat buffer wrap detection
As WA currently supports either a single logcat dump after each job,
or fixed rate polling of logcat, it is possible that the fixed size
logcat buffer wraps around and overwrites data between each dump or
poll. This data may be used by output processors that should be
notified of the loss.

This change allows the detection of buffer wrapping by inserting a
known log entry into the buffer, although it cannot say how much data
was lost, and only applies to the "main" logcat buffer.

If buffer wrap is detected, a warning is logged by WA.
2020-07-16 11:38:06 +01:00
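A hedged sketch of the marker technique, assuming a devlib-style `target.execute()`; the tag name and helper functions here are hypothetical, not WA's actual identifiers:

```python
import uuid

MARKER_TAG = 'WA_MARKER'  # hypothetical tag name

def insert_marker(target):
    """Write a uniquely identifiable entry into the main logcat buffer
    using Android's standard `log` shell utility."""
    marker = str(uuid.uuid4())
    target.execute('log -t {} {}'.format(MARKER_TAG, marker))
    return marker

def buffer_wrapped(target, marker):
    """If the marker is absent from a dump, the buffer has wrapped and
    an unknown amount of data has been lost since it was inserted."""
    dump = target.execute('logcat -d -s {}'.format(MARKER_TAG))
    return marker not in dump
```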
08fcc7d30f fw/getters: Fix typo 2020-07-15 15:04:31 +01:00
684121e2e7 fw: Replace usage of file locking with atomic writes
To prevent long timeouts due to file locking on
both reads and writes, replace locking with
atomic writes.

While this may result in cache entries being overwritten,
the time spent in duplicated retrievals will likely
be outweighed by the prevention of stalls caused by waiting to
acquire the file lock.
2020-07-15 15:04:31 +01:00
0c1229df8c utils/misc: Implement atomic writes
To simulate atomic writes, use a context manager to write to
a temporary file location and then rename over the original
file.
This is performed using the `safe_move` method which performs
this operation and handles cases where the source and destination
are on separate file systems.
2020-07-15 15:04:31 +01:00
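A minimal sketch of the pattern, assuming a same-filesystem temporary file so the final rename is atomic (the real `safe_move` additionally handles cross-filesystem moves):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def atomic_write(path):
    """Write to a temporary file, then rename it over the destination,
    so readers never observe a partially written file."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    try:
        with os.fdopen(fd, 'w') as f:
            yield f
        os.replace(tmp_path, path)  # atomic within one filesystem
    except Exception:
        os.unlink(tmp_path)
        raise

with atomic_write('cache.json') as f:
    f.write('{"entries": []}')
```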
615cbbc94d fw/target_info: Prevent multiple parses of the target_info_cache
Instead of parsing the target_info_cache multiple times, allow
it to be read once and passed as a parameter to the corresponding
methods.
2020-07-15 15:04:31 +01:00
1425a6f6c9 Implement caching of ApkInfo
Allow caching of ApkInfo to avoid having to re-parse
APK files.
2020-07-15 15:04:31 +01:00
4557da2f80 utils/android: Implement a Podable wrapper of ApkInfo
Add a Podable wrapper to ApkInfo.
2020-07-15 15:04:31 +01:00
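In WA, a POD ("plain old data") form is a structure of dicts, lists, and primitives that serializes cleanly; a simplified sketch of what wrapping ApkInfo this way might look like (class name and fields reduced for illustration):

```python
class CachedApkInfo(object):
    """Illustrative stand-in for the Podable ApkInfo wrapper."""

    def __init__(self, package=None, version_name=None):
        self.package = package
        self.version_name = version_name

    def to_pod(self):
        # Plain dict: safe to dump into the JSON/YAML on-disk cache.
        return {'package': self.package, 'version_name': self.version_name}

    @staticmethod
    def from_pod(pod):
        return CachedApkInfo(pod.get('package'), pod.get('version_name'))
```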
7cf5fbd8af framework, tests: Correct signal disconnection
While the Louie system operated on weakrefs for the callback
functions, the priority list wrapper did not. This difference led to
weakrefs to callback functions being compared to strong references in
list element operations within Louie's disconnect method, so that
handler methods were not disconnected from signals.

Converting the receiver to a weakref then allowed Louie to operate as
normal, which may include deleting and re-appending the handler method
to the receivers list. As ``append`` is a dummy method that allows the
priority list implementation, the handler method is then never added
back to the list of connected functions, so we must ``add`` it after
``connect`` is called.

Also included is a testcase to confirm the proper disconnection of
signals.
2020-07-14 17:31:38 +01:00
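The class of bug is easy to reproduce in isolation: a weak reference never compares equal to the strong reference it wraps, so removal-by-value from a strongly referenced list silently fails (simplified; not WA's exact code):

```python
import weakref

class Receiver(object):
    def handle(self):
        pass

r = Receiver()
connected = [r.handle]             # priority list held strong references
wr = weakref.WeakMethod(r.handle)  # Louie compares using weak references

print(wr == r.handle)   # False: the weakref never matches the strong
                        # reference, so the handler is never removed.
```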
3f5a31de96 commands: Add report command
report provides a summary of a run and an optional list of all
jobs in the run, with any events that might have occurred during each
job and their current status.

report allows an output directory to be specified or will attempt to
discover possible output directories within the current directory.
2020-07-14 17:31:38 +01:00
7c6ebfb49c framework: Have Job hold its own JobState
JobState, previously handled by RunState, is now held in the
Job.

Changes and accesses to a Job's status access the Job's
JobState directly, so that there is only one place now that each Job's
state data is tracked.

This also means there is no use for update_job in RunState.
2020-07-14 17:31:38 +01:00
8640f4f69a framework: Add serializing Job status setter
When setting the job status through ExecutionContext, this change
should be accompanied by an update to the state file, so that the state
file accurately reflects execution state.

As Jobs should not be aware of the output, this method is added to
ExecutionContext, and couples setting job state with writing to the
state file.
2020-07-14 17:31:38 +01:00
460965363f framework: Fix serialized job retries set to 0
JobState serializations did not reflect the current state of
execution, as the 'retries' field was set to 0 instead of
JobState.retries.
2020-07-14 17:31:38 +01:00
d4057367d8 tests: Add run state testbench
Need to test:
- whether the state files properly track the state of wa runs
- the state of the jobs and whether they are correctly updated

State file consistency tests implemented for scenarios:
- job passes on first try
- job requires a retry
- job fails all retries

Job object state test implemented for:
- Job status should reset on job retry (from FAILED or PARTIAL
  to PENDING)
2020-07-14 17:31:38 +01:00
ef6cffd85a travis: Limit the maximum version of isort
Later versions of isort are not compatible with the version of
pylint we use, so ensure we use a compatible version in the Travis tests.
2020-07-09 14:45:28 +01:00
37f4d33015 WA/Jankbench: Update Pandas function to remove deprecated .ix access
Pandas removed .ix as a way to iterate the index; .loc is the replacement
in most cases. Jankbench as a workload fails on a clean install due to
this call.

Replacing this works for me on a native install of Lisa with Ubuntu 20.04

Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2020-07-03 12:13:40 +01:00
8c7320a1be workloads/gfxbench: Switch to the home screen before run
Alter the element used to check that popups have closed so that it is
present on any main screen, and ensure we switch to the home screen of
the app before performing setup steps.
2020-06-30 16:51:11 +01:00
6d72a242ce doc: fix callback priority table
Correctly specify the decorator name that should be used to give an
instrument method appropriate priority when it's being invoked via
signal dispatch.

Previously, the table was listing the priority enum names in the column
that was labeled "decorator". Now both are shown and the distinction has
been made clearer.
2020-06-30 15:04:20 +01:00
0c2613c608 fw/execution: Fix missing parameter 2020-06-29 16:22:13 +01:00
b8301640f7 docs/dev_ref: Fix incorrect attribute name 2020-06-25 13:01:28 +01:00
c473cfa8fe docs/user_information: Fix references to Python 2.7
Remove references to Python 2.7 and update example paths
to Python3.
2020-06-25 13:01:28 +01:00
1f0da5facf Dockerfile: Update to store environment variables in separate file
Instead of storing the environment variables in `.bashrc` store them
in `/home/wa/.wa_environment`. This allows for sourcing of this file
in non interactive environments.
2020-06-19 11:25:34 +01:00
39121caf66 workloads/gfxbench: Fix using the correct scrollable element.
On smaller devices there can be multiple scrollable elements, ensure
we scroll the correct one to identify tests.
2020-06-19 11:24:46 +01:00
83da20ce9f output_processor/postgres: Fix events SQL command 2020-06-15 15:30:56 +01:00
f664a00bdc config/core: Fix handling of deprecated parameters
Provide a warning to the user when attempting to set a deprecated
parameter instead of during validation, and only raise the warning
if a value has been explicitly provided.
2020-06-12 09:24:51 +01:00
443358f513 workloads/gfxbench: Rework score detection
Rework how the result matching is performed. Some tests from
gfxbench provide more than one score per test, and
some provide their output in a different format to others.
Update the matching to be more flexible, as well
as to deal with entries that do not fit on a single results screen.
2020-06-10 11:10:26 +01:00
586d95a4f0 Dockerfile: Add note about mounting volumes with selinux 2020-06-01 12:26:25 +01:00
58f3ea35ec workloads/gfxbench: Fix incorrect parameter name 2020-05-26 20:55:58 +01:00
7fe334b467 workloads/gfxbench: Fix incorrect parameter name 2020-05-26 20:38:58 +01:00
3967071a5e workloads/gfxbench: Fix incorrect parameter name 2020-05-26 20:13:53 +01:00
cd6f4541ca workloads/gfxbench: Move results extraction to the extraction stage 2020-05-21 12:39:25 +01:00
7e6eb089ab workloads/geekbench: Update result screen matching criteria
Update the element that is searched for as on some devices this can
match before all the tests are complete.
2020-05-21 12:39:25 +01:00
491dcd5b5b Dockerfile: Update with support for additional instruments
Ensure support is present in the Docker image for instruments that
require trace-cmd, monsoon or iio-capture.
2020-05-21 12:39:25 +01:00
7a085e586a workloads/gfxbench: Allow configuration of tests to be run.
Allow the user to customise which tests are to be run on the device.
2020-05-21 12:39:25 +01:00
0f47002e4e fw/getters: Use the assets_repository as the default for the filer 2020-05-21 12:39:25 +01:00
6ff5abdffe fw/config: Remove whitespace 2020-05-21 12:39:25 +01:00
82d09612cb fw/config: Add default to `assets_repository' 2020-05-21 12:39:25 +01:00
ecbfe32b9d docs: Update python2 style print statements 2020-05-21 12:39:25 +01:00
2d32d81acb utils/file_lock: Create lock files in system temp directory
Use the original file path to create a lock file in the system temp
directory. This prevents issues where we are attempting to lock a file
where wa does not have permission to create new files.
2020-05-19 17:55:40 +01:00
b9d593e578 fw/version: Development version bump
Bump dev version to synchronise interface for SSHConnection with devlib.
2020-05-13 16:43:03 +01:00
1f8be77331 Disable pep8 errors 2020-05-13 16:43:03 +01:00
66f0edec5b descriptor/SSHConnection: Expose use_scp parameter
Allow specifying the use of scp for file transfer rather than sftp, as
sftp is not supported by all targets.
2020-05-13 16:43:03 +01:00
e2489ea3a0 descriptor/ssh: Add note to password parameter for passwordless target
For a passwordless target the `password` parameter needs to be set to an
empty string to prevent attempting ssh key authentication.
2020-05-13 16:43:03 +01:00
16be8a70f5 Fix pcmark setup
* PCMark sometimes auto-installs without the need for clicking the
  button; in such cases the workload throws a UiObjectNotFound exception.
* Added logic to check for the existence of the installation button.
* Increased install wait time to 5 mins.
2020-04-28 09:57:41 +01:00
dce07e5095 Update gfxbench to log correct scores.
* Updated regex to reflect correct test name.

* Enabling/disabling tests on setup was missing tessellation on some
  devices, so changed the order of the toggle.

* Sometimes whilst collecting scores the workload grabs the wrong score.
  Updated to check the name of the test before grabbing the score.

* Tested on mate20, xperia, s9-exynos, s10-exynos, pixel-4
2020-04-27 13:54:48 +01:00
711bff6a60 docs/plugins: Fix formatting and typos 2020-04-27 13:53:29 +01:00
2a8454db6a docs/how_tos: Update example workload creation
Clarify the workload creation example.
2020-04-27 13:53:29 +01:00
9b19f33186 target/descriptor: Fix overwriting variable
Ensure we don't overwrite `conn_params` in the inner for loop.
2020-04-17 13:03:27 +01:00
53faf159e8 target/descriptor: Cosmetic fixes
Fix typo and choose more descriptive variable name.
2020-04-17 13:03:27 +01:00
84a9526dd3 target/descriptor: Fix handling of custom Targets 2020-04-17 13:03:27 +01:00
a3cf2e5650 descriptor: Fix overriding of parameters
Make sure we only override parameters that are present in the current
config. This allows for connection parameters to be supplied for a
platform but only overridden if required for the connection.
2020-04-16 09:44:17 +01:00
607cff4c54 framework: Lock files which could be read/written to concurrently
Add file locking to files that could be read and written to concurrently
by separate wa processes causing race conditions.
2020-04-09 09:14:39 +01:00
d56f0fbe20 utils/misc: Add file locking context manager
Enable automatic locking and unlocking of a provided file path. Used to
prevent synchronisation issues between multiple wa processes.
2020-04-09 09:14:39 +01:00
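A POSIX-only sketch of such a context manager using `fcntl.flock` (assumed for illustration; the helper's exact interface in utils/misc may differ):

```python
import fcntl
from contextlib import contextmanager

@contextmanager
def lock_file(path):
    """Block until an exclusive lock on `path` is acquired; release it
    when the with-block exits."""
    with open(path, 'a') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            yield
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

with lock_file('/tmp/wa_shared.lock'):
    pass  # read/modify/write the shared resource here
```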
0f9c20dc69 target/descriptor: Add support for connection parameter overriding.
Allow for overriding connection parameters on a per platform basis, and
make the `host` parameter for `Juno` optional as this can be auto
detected via the serial connection.
2020-04-09 09:10:11 +01:00
310bad3966 target/descriptor: Rework how parameter defaults are overridden.
Instead of supplying only the parameter name and value to be set as a
default, allow for replacing the entire parameter object, as this allows
more control over what needs overriding for a particular platform.
2020-04-09 09:10:11 +01:00
a8abf24db0 fw/descriptor: Add unsupported_platforms for a particular target
Allow for specifying a list of `Platforms` that a particular target does
not support, e.g. 'local_juno'
2020-04-09 09:10:11 +01:00
dad0a28b5e logcat_parsing: Replace errors when decoding logcat output
Some devices print non-standard characters to logcat. If an error
occurs when parsing the output, replace the offending character instead
of raising an error.
2020-04-07 14:15:24 +01:00
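The fix amounts to lenient decoding, substituting the Unicode replacement character for undecodable bytes rather than raising:

```python
raw = b'ActivityManager: caf\xe9 \xff'   # bytes that are not valid UTF-8
text = raw.decode('utf-8', errors='replace')
print(text)  # undecodable bytes become U+FFFD instead of raising
```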
2cd4bf7e31 Add initial issue templates. 2020-04-02 10:18:47 +01:00
5049e3663b Force speedometer to use chrome and change to ApkUiAutoWorkload
* Workload was failing when Chrome was not set as the default browser,
  so altered to use Chrome every time.

* Changed workload to an ApkUiautoWorkload since Chrome is now a
  dependency.

* Refactored opening speedometer to new method.

* Added wait time for scores to show up when test finished.
2020-03-31 10:49:25 +01:00
c9ddee761a Update framework to wait for object before dismissing chrome popup
* Added a wait-for-exists check for the Google terms accept button.

* Reduced wait time for the device sync negative button to reduce
  workload run time.
2020-03-31 10:49:25 +01:00
3be00b296d Androbench: Handle storage permissions prompt.
Updating the workload to handle the storage permission prompts that present themselves on certain devices.
2020-03-25 18:21:43 +00:00
9a931f42ee Handle the common chrome browser popup messages.
The Chrome browser presents a number of popups when run for
the first time. This update handles those popup messages.
2020-03-25 16:35:04 +00:00
06ba8409c1 target/descriptor: Make strict_host_check default to False
The majority of users will not benefit from the additional
check, so make this parameter default to `False` instead.
2020-03-12 11:21:07 +00:00
2da9370920 target/descriptor: Ensure we set a default SSH port. 2020-03-06 19:16:47 +00:00
ef9b4c8919 fw/version: Dev version bump
Bump the dev version of WA and required devlib version to ensure
that both repos stay in sync to accommodate the SSH interface
change.
2020-03-06 17:34:30 +00:00
31f4c0fd5f fw/descriptor: Add parameter list for Telnet connections.
`TelnetConnection` no longer uses the same parameter list as
`SSHConnection`, so create its own parameter list.
2020-03-06 17:34:30 +00:00
62ca7c0c36 fw/SSHConnection: Deprecate parameters for Paramiko implementation
Deprecate parameters for the new implementation of the SSHConnection
based on Paramiko.
d0f099700a fw/ConfigurationPoints: Add support for deprecated parameters
Allow specifying that a ConfigurationPoint is deprecated. This means that any
supplied configuration will not be used; however, execution will continue
with a warning displayed to the user.
2020-03-06 17:34:30 +00:00
5f00a94121 utils/types: fix toggle_set creation
Correctly handle the presence of both an element and its toggle in the
input, and handle them based on order, e.g.

toggle_set(['x', 'y', '~x']) --> {'y', '~x'}
toggle_set(['~x', 'y', 'x']) --> {'y', 'x'}
2020-02-19 17:02:58 +00:00
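A sketch of the order-sensitive semantics (illustrative, not WA's implementation): a later entry evicts its toggle counterpart before being added, so the last occurrence wins.

```python
def make_toggle_set(values):
    result = []
    for value in values:
        # The toggle counterpart of 'x' is '~x' and vice versa.
        toggle = value[1:] if value.startswith('~') else '~' + value
        if toggle in result:
            result.remove(toggle)  # the later entry wins
        if value not in result:
            result.append(value)
    return set(result)

print(make_toggle_set(['x', 'y', '~x']))   # {'y', '~x'}
print(make_toggle_set(['~x', 'y', 'x']))   # {'y', 'x'}
```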
0f2de5f951 util/exec_control: add once_per_attribute_value
Add a decorator to run a method once for all instances that share the
value of the specified attribute.
2020-02-07 16:49:48 +00:00
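A minimal sketch of such a decorator (names simplified; the real helper lives in utils/exec_control and may differ in detail):

```python
from functools import wraps

def once_per_attribute_value(attr):
    """Run the decorated method only once among all instances sharing
    the same value of the named attribute."""
    def decorator(method):
        seen = set()
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            value = getattr(self, attr)
            if value in seen:
                return None
            seen.add(value)
            return method(self, *args, **kwargs)
        return wrapper
    return decorator

class Backend(object):
    def __init__(self, host):
        self.host = host

    @once_per_attribute_value('host')
    def initialize(self):
        print('initializing {}'.format(self.host))

Backend('devboard-1').initialize()   # runs
Backend('devboard-1').initialize()   # skipped: same host value
Backend('devboard-2').initialize()   # runs
```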
51ffd60c06 instruments: add proc_stat
Add an instrument that monitors CPU load using data from /proc/stat
2020-02-07 14:11:31 +00:00
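Load from /proc/stat is derived by sampling the aggregate counters twice and taking the non-idle share of the delta; a Linux-only sketch of the idea (not the instrument's actual code):

```python
import time

def read_cpu_times():
    with open('/proc/stat') as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]   # idle + iowait
    return idle, sum(fields)

def cpu_load(interval=1.0):
    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

print('{:.1f}% busy'.format(cpu_load()))
```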
0a4164349b Merge pull request from setrofim/doc-fix
doc: fix Instrument documentation
2020-02-04 13:28:57 +00:00
fe50d75858 fw/instrument: derive Instrument from TargetedPlugin
Change Instrument to derive from TargetedPlugin rather than Plugin,
which it should have been all along.
2020-02-04 13:28:48 +00:00
b93a8cbbd6 doc: fix Instrument documentation
Remove reference to `extract_results`, which does not exist for
Instruments (this only exists for Workloads.)
2020-02-04 12:49:14 +00:00
79dec810f3 fw/plugin: move cleanup_assets to TargetedPlugin
Move cleanup_assets from Workload up into TargetedPlugin. This way,
Instruments may also utilize it if they deploy assets.

More generally, it makes sense for it to be inside TargetedPlugin, as
any plugin that interacts with the target may conceivably need to clean
up.
2020-02-04 10:43:26 +00:00
44cead2f76 fw/workload: Prefix TestPackages with "test_"
Previously, when pulling an apk from the target to the host, the default
package name was used for both regular apks and test apks. This could
result in one overwriting the other. To prevent this, ensure
`TestPackages` have "test_" prefixed to their filenames.
2020-02-03 14:55:39 +00:00
c6d23ab01f workloads/exoplayer: Support Android 10
Android 10 appears to use a new format in logcat when displaying the
PlayerActivity. Update the regex to support both formats.
2020-01-27 15:00:33 +00:00
6f9856cf2e pcmark: Update popup dismissal to be case insensitive.
Some devices use different capitalisation, so ignore case when matching.
2020-01-21 15:46:14 +00:00
0f9331dafe fw/job: copy classifiers from the spec
Now that classifiers may be added to the job during execution, its
classifiers dict should be unique to each job rather than just returning
them from spec (which may be shared between multiple jobs).
2020-01-17 17:07:52 +00:00
659e60414f fw/exec: Add add_classifier() method
Add add_classifier() method to context. Allow plugins to add classifiers
to the current job, or the run as a whole. This will ensure that the new
classifiers are propagated to all relevant current and future artifacts
and metrics.
2020-01-17 16:38:58 +00:00
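A hedged usage sketch from a hypothetical plugin callback (`context` being the ExecutionContext WA passes into plugin callbacks):

```python
# Hypothetical instrument sketch illustrating the new API.
class ThermalTagger(object):

    def update_output(self, context):
        # Tag the current job's artifacts and metrics with an extra
        # classifier; it propagates to future artifacts as well.
        context.add_classifier('thermally_throttled', True)
```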
796f62d924 commands/process: partial results + write info
- Correct handling of skipped jobs -- the output directory would not
  have been generated, so do not try to write it.
- Do not attempt to process runs that are in progress, unless forced,
  and do not try to process jobs that have not completed yet.
- Write the run info as well as the result, allowing output processors
  to modify it (e.g. adjusting run names).
2020-01-17 10:59:56 +00:00
f60032a59d fw/target/manager: Update to use module_name_set
Allow for comparing which modules are installed on a Target when
additional module configuration is present.
2020-01-16 15:55:29 +00:00
977ce4995d utils/types: Add module_name_set type
The list of modules retrieved from a `Target` may include configuration
as a dictionary. This helper function will produce a set of only the
module names allowing for comparison.
2020-01-16 15:55:29 +00:00
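A sketch of the helper's behaviour, given that a target's module list may mix plain names with single-entry configuration dicts:

```python
def module_name_set(modules):
    """Reduce a module list like ['cpufreq', {'devfreq': {...}}] to a
    set of module names for easy comparison."""
    names = set()
    for module in modules:
        if isinstance(module, dict):
            names.update(module.keys())
        else:
            names.add(module)
    return names

print(module_name_set(['cpufreq', {'devfreq': {'devices': 'all'}}]))
# {'cpufreq', 'devfreq'}
```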
a66251dd60 Antutu: Updating to work with major version 8.
The update to Antutu major version 8 has changed a lot of element names.
There have also been changes to the tests run in three of the four categories.

This commit handles those updates while also retaining backwards compatibility
with major version 7.
2020-01-15 12:59:43 +00:00
d3adfa1af9 fw/getters: Pylint fix 2020-01-14 13:24:51 +00:00
39a294ddbe utils/types: Update version_tuple to allow splitting on "-"
Some APKs use "-" characters to separate their version and identifier,
so treat it as a separator value.
2020-01-14 13:24:51 +00:00
164095e664 utils/types: Update version_tuple to use strings
The versionName field of an apk allows for containing non-numerical
characters so update the type to be a string.
2020-01-14 13:24:51 +00:00
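Taken together, these two version_tuple changes make the type behave roughly like this (an illustrative sketch):

```python
import re

def version_tuple(version):
    """Split a versionName on '.' and '-' into a tuple of strings;
    versionName may legitimately contain non-numeric parts."""
    return tuple(re.split(r'[.\-]', version))

print(version_tuple('19.7.1-beta'))  # ('19', '7', '1', 'beta')
```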
24a4a032db fw/getters: update Executable resolution
Use Executable.match() rather than just checking the path inside
get_from_location(); this allows for alternative matching semantics
(e.g. globbing) inside derived implementations.
2020-01-10 13:56:11 +00:00
05857ec2bc utils/cpustates: update idle state naming
If idle state names for a cpu could not be discovered, use "idle[N]",
where N is the state number, instead of just marking them all as
"unknown".
2020-01-10 13:32:40 +00:00
fd8a7e442c utils/trace_cmd: update for Python 3
re._pattern_type became re.Pattern in Python 3.
2020-01-10 13:31:30 +00:00
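A compatibility shim for code that must run on both sides of the rename (`re._pattern_type` was removed in Python 3.7; `re.Pattern` is the public name from 3.8):

```python
import re

# Falls back to deriving the type directly, which works everywhere.
Pattern = getattr(re, 'Pattern', None) or type(re.compile(''))

print(isinstance(re.compile(r'\d+'), Pattern))  # True
```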
dfb4737e51 Development version bump 2019-12-20 16:25:01 +00:00
06518ad40a Version bump for release 2019-12-20 16:07:10 +00:00
009fd831b8 docs/changelog: Update changelog for version 3.2 release. 2019-12-20 16:07:10 +00:00
88284750e7 Dockerfile: Update to reference new release of WA and devlib 2019-12-20 16:07:10 +00:00
8b337768a3 Dockerfile: Update ubuntu base to 19.10 2019-12-20 16:07:10 +00:00
38aa9d12bd fw/entrypoint: Fix devlib version check
The absence of a value in the "dev" version field indicates a release
version; ensure this is taken into account when comparing version numbers.
2019-12-20 16:07:10 +00:00
769c883a3a requirements: Update to latest known working package versions 2019-12-20 16:07:10 +00:00
90db655959 instrument/perf: Fix incorrect argument 2019-12-20 16:07:10 +00:00
817d98ed72 wa/instruments: Refactor collectors to use Collector Interface
Update the WA instruments which rely on the refactored devlib collectors
to reflect the new API.
2019-12-20 15:17:01 +00:00
d67668621c Geekbench: Adding 4.4.2 as a supported version
There have been no UI changes to the application so simply adding the
new supported version to the list of accepted versions.
2019-12-19 14:18:10 +00:00
1531ddcdef workloads/speedometer: Only close tabs on supported devices
Some devices don't have the option to close all tabs so don't error if
this element cannot be found.
2019-12-18 10:07:11 +00:00
322f9be2d3 workloads/googleplaybooks: Fix book selector
Do not try to use a parent element of the book entry; search for the entry
directly.
2019-12-18 10:07:11 +00:00
494424c8ea utils/types: Fix ParameterDict update method.
When updating a ParameterDict with another ParameterDict, the unencoded
values were being merged. Ensure consistent behaviour by implicitly
iterating via `__iter__`, which will cause ParameterDict values to be
decoded before being re-encoded as expected.
2019-12-18 10:07:11 +00:00
ee54a68b65 Updating the framework to include the app package data
As part of our continuous integration system it has become
clear that gathering the app package data as well as the
version name can prove useful.

Adding this functionality to mainline as it could prove
useful to other developers.
2019-12-03 17:48:59 +00:00
cc1cc6f77f tests: fix pytest warnings
Fix warnings reported when running unit tests via pytest.

- Rename TestDevice to MockDevice, so that it is not interpreted as a
  test case.
- Fix collections abstract base class imports.
- Add an ini file to ignore the same from "past".
2019-12-03 14:03:18 +00:00
da0ceab027 PCMark: Updating to handle new Android 10 permission warnings
Android 10 has introduced a new permission warning and a
separate warning about the APK being built for an older version
of the Android OS. Added two checks to accept these
permissions and continue with the workload and the given APK
rather than attempting to update.
2019-11-28 16:02:20 +00:00
683eec2377 PCMark: Editing the screen orientation
Some devices have proved to have a natural orientation
that does not lend itself well to this workload. Therefore
I have edited the orientation lock to portrait instead of
natural.
2019-11-22 16:29:59 +00:00
07e47de807 Geekbench: Updating supported versions
Adding support for Geekbench 4.3.4 and 4.4.0.

Adding support for Geekbench Corporate 5.0.1 and 5.0.3.

There are no changes required to the functional workload.
2019-11-22 12:07:50 +00:00
5906bca6b3 instruments/acme_cape: Fix missing parameter to get_instruments
The signature of `get_instruments` was missing the `keep_raw` parameter
so fix this and use it as part of the subsequent common invocation.
2019-11-18 16:09:09 +00:00
9556c3a004 docs: Fix typos 2019-11-05 08:35:57 +00:00
1f4bae92bf docs/device_setup: Explicitly mention load_default_modules
This is becoming a commonly used parameter in the `device_config` so
explicitly list its functionality.
2019-11-05 08:35:57 +00:00
dcbc00addd docs/faq: Add workaround for module initialisation failure 2019-11-05 08:35:57 +00:00
4ee75be7ab docs/userguide: Update reference to outdated output_processor 2019-11-05 08:35:57 +00:00
796dfb1de6 Update googlephotos workload to work against most recent version
* Updated googlephotos workload to work against version 4.28.0
2019-10-29 16:17:20 +00:00
f3e7b14b28 Update googlemaps workload to work against most recent version
* Updated google maps workload to work against apk version 10.19.1
2019-10-29 13:34:08 +00:00
e9839d52c4 output_processor/postgres: Fix out of range for hostid
Change the field type of `hostid` as part of `TargetInfo` from `Int` to
Bigint to prevent some ids from exceeding the maximum value.
2019-10-23 15:45:56 +01:00
7ebbb05934 target/info: Fix missing return statement
Add missing return statement when upgrading a `TargetInfo` POD to v5.
2019-10-23 15:45:56 +01:00
13166f66d1 Update gmail workload to work against most recent version
* Updated gmail to work against 2019.05.26.252424914.release.
2019-10-17 14:17:56 +01:00
ab5d12be72 output_processors/cpustates: Improve handling of missing cpuinfo data
Improve checking of whether cpu idle state information is available for
processing.
Add debug message to inform user if the cpuidle module is not detected
on the target.
2019-10-15 15:17:02 +01:00
298bc3a7f3 output_processors/cpustates: Deal with cpufreq data being unavailable
If the `cpufreq` module is not detected as present, warn the user
and process the remaining data instead of crashing.
2019-10-15 15:17:02 +01:00
09d6f4dea1 utils/cpustates: Fix inverted no_idle check
If there is no information about idle states then `no_idle` should be
set to `True` instead of `False`.
2019-10-15 15:17:02 +01:00
d7c95fa844 instrument/energy_measurement: Fix typo and formatting 2019-10-03 12:48:24 +01:00
0efd20cf59 uiauto: Update all applications to target SDK version 28
On devices running android 9 with google play services, PlayProtect
blocks the installation of our automation apks due to targeting a lower
SDK version. Update all apk builds to target SDK version 28 (Android 9)
however do not change the minimum version to maintain backwards
compatibility.
2019-10-03 12:18:48 +01:00
e41aa3c967 instruments/energy_measurement: Add a keep_raw parameter
Add a `keep_raw` parameter to control whether raw files should be
deleted during teardown.
2019-10-03 11:38:29 +01:00
3bef4fc92d instrument/energy_measurement: Invoke teardown method for backends
Forward the teardown method invocation to the instrument backend.
2019-10-03 08:39:53 +01:00
0166180f30 PCMark: Fixing click co-ordinates
The workload is clicking the run button in the centre
of the element but this is no longer starting the
run operation.

Refactoring the code to click in the top left of the
object seems to rectify the issue.
2019-09-24 16:01:37 +01:00
a9f3ee9752 Update adobe reader workload to work with most recent version
Updated adobe reader workload to work with version 19.7.1.10709
2019-09-24 12:57:26 +01:00
35ce87068c Antutu: Updating to handle the new shortcut popup
Added logic to dismiss the new popup message which asks
to add a shortcut to the android homescreen.
2019-09-19 14:07:50 +01:00
6beac11ee2 Add simpleperf type to perf instrument
* Added simpleperf type to perf instrument as it's more stable
  on Android devices.
* Added record command to instrument
* Added output processing for simpleperf
2019-09-18 12:55:59 +01:00
2f231b5ce5 fw/target: detect module variations in TargetInfo
- Add modules entry to TargetInfo
- When retrieving TargetInfo from cache, make sure info modules match
  those for the current target, otherwise mark info as stale and
  re-generate.
2019-09-12 15:27:23 +01:00
75878e2f27 uiauto/build_scripts: Update to use python3
Ensure we are invoking python3 when attempting to import `wa` and
update the printing syntax to be compatible.
2019-09-06 11:02:13 +01:00
023cb88ab1 templates/setup: Update package setup template to specify python3 2019-09-06 11:02:13 +01:00
d27443deb5 workloads/rt_app: Update to use python3
Update the workload generation script to be python3 compatible and
invoke with python3.
2019-09-06 11:02:13 +01:00
1a15f5c761 Geekbench: Adding support for Version 5 of Geekbench Corporate 2019-09-06 10:34:41 +01:00
d3af4e7515 setup.py: Update pandas version restrictions
Pandas versions 0.25+ require Python 3.5.3 as a minimum, so ensure that
an older version of pandas is installed for older versions of Python.
2019-08-30 14:03:03 +01:00
73b0b0d709 readthedocs: Add ReadtheDocs configuration
Provide configuration file for building the documentation. We need to
specify to use python3 explicitly.
2019-08-28 15:03:38 +01:00
bb18a1a51c travis: Remove tests for python2.7 2019-08-28 11:46:26 +01:00
062be6d544 output_processors/postgresql: Don't reconnect if not initialised
Update check to clarify that we should not attempt to reconnect to
the database if the initialisation of the output processor failed.
2019-08-28 11:33:46 +01:00
c1e095be51 output_processors/postgresql: Ensure still connected to the database
Before exporting output, ensure that we are still connected to the
database. The connection may have been dropped, so reconnect if necessary;
this is more of an issue with longer-running jobs.
2019-08-28 11:33:46 +01:00
eeebd010b9 output_processors/postgresql: Group database connection operations
Refactors connection operations into the `connect_to_database`
method.
2019-08-28 11:33:46 +01:00
e387e3d9b7 Update to remove Python2 as supported version. 2019-07-19 17:07:46 +01:00
6042fa374a fw/version: Version Bump 2019-07-19 17:07:46 +01:00
050329a5ee fw/version: Update version for WA and required devlib 2019-07-19 16:37:00 +01:00
d9e7aa9af0 Dockerfile: Update to use the latest versions of WA and devlib 2019-07-19 16:37:00 +01:00
125cd3bb41 docs/changes: Changelog for v3.1.4 2019-07-19 16:37:00 +01:00
75ea78ea4f docs/faq: Fix formatting 2019-07-19 16:37:00 +01:00
12bb21045e instruments/SysfsExtractor: Add extracted directories as artifacts
Add the directories that have been extracted by the `SysfsExtractor` and
derived instruments as artifacts.
2019-07-19 16:36:11 +01:00
4bb1f4988f fw/DatabaseOutput: Only attempt to extract config if available
Do not try to parse `kernel_config` if no data is present.
2019-07-19 16:36:11 +01:00
0ff6b4842a fw/DatabaseOutput: Fix the retrieval of job level artifacts 2019-07-19 16:36:11 +01:00
98b787e326 fw/DatabaseOutput: Enabled retrieving of directory artifacts
To provide the same user experience as accessing a directory
artifact from a standard `wa_output`, when attempting to retrieve the
path of the artifact, extract the stored tar file to a
temporary location on the host and return that path.
2019-07-19 16:36:11 +01:00
e915436661 commands/postgres: Upgrade the database schema to v1.3
Upgrade the database schema to reflect the additions of directory
artifacts and the missing TargetInfo property.
2019-07-19 16:36:11 +01:00
68e1806c07 output_processors/postgresql: Add support for new directory Artifacts
Reflecting the addition of being able to store directories as Artifacts,
enable uploading of a directory as a compressed tar file rather than
storing the file directly.
2019-07-19 16:36:11 +01:00
f19ebb79ee output_processors/postgresql: Add missing system_id field
When storing the `TargetInfo` the `system_id` attribute was omitted.
2019-07-19 16:36:11 +01:00
c950f5ec8f utils/postgres: Fix formatting 2019-07-19 16:36:11 +01:00
6aaa28781b fw/Artifact: Allows adding directories as artifacts
Adds a `is_dir` property to an `Artifact` to indicate that the
artifact represents a directory rather than an individual file.
2019-07-19 16:36:11 +01:00
d87025ad3a output_processors/postgres: Fix empty iterable
In the case of an empty iterable an empty string would be returned;
however, this was not a valid value, so ensure that the brackets are
always inserted into the output.
2019-07-19 16:36:11 +01:00
ac5819da8e output_processors/postgres: Fix incorrect dict keys 2019-07-19 16:36:11 +01:00
31e08a6477 instruments/interrupts: Add interrupt files as artifacts
Ensure that the interrupt files pulled and diffed from the device are
added as artifacts.
2019-07-19 16:36:11 +01:00
47769cf28d Add a workload for Motionmark tests 2019-07-19 14:31:04 +01:00
d8601880ac setup.py: Add README as package description
Add the project README to be displayed as the project description on
PyPI.
2019-07-18 15:17:24 +01:00
0efc9b9ccd setup.py: Clean up extra dependencies
- Remove unused dependencies left over from WA2.
- Allow installing of the `daqpower` module as an optional dependency.
2019-07-18 15:17:24 +01:00
501d3048a5 requirements: Add initial version
Adds a "requirements.txt" to the project. This will not be used during a
standard installation, but will be used to indicate which are known
working packages in cases of conflicts.

Update README and documentation to reflect this.
2019-07-18 15:17:24 +01:00
c4daccd800 README: Update installation instruction to match documentation.
When installing from github we recommend installing with setup.py, as
installing with pip does not always resolve dependencies correctly.
2019-07-18 15:17:24 +01:00
db944629f3 setup.py: Update classifiers 2019-07-18 15:17:24 +01:00
564738a2ad workloads/monoperf: Fix typos 2019-07-18 15:17:24 +01:00
c092128e94 workloads: Sets requires_network attribute for workloads
Both speedometer and aitutu require internet to function; however, this
attribute was missing from the workloads.
2019-07-18 15:17:24 +01:00
463840d2b7 docs/faq: Add question about non UTF-8 environments. 2019-07-12 13:32:28 +01:00
43633ab362 extras/Dockerfile: Ensure we are using utf-8 in our docker container
For compatibility we want to be using utf-8 by default when we interact
with files within WA so ensure that our environment is configured
accordingly.
2019-07-12 13:32:28 +01:00
a6f0ab31e4 fw/entrypoint: Add check for system default encoding
Check what the default encoding for the system is set to. If this is not
configured to use 'UTF-8', log a warning to the user, as this is known
to cause issues when attempting to parse non-ASCII files during operation.
2019-07-12 13:32:28 +01:00
72fd5b5139 setup.py: Set maximum package version for python2.7 support
In the latest versions of pandas and numpy, Python 2.7 support has been
dropped; therefore restrict the maximum version of these packages.
2019-07-08 13:46:35 +01:00
766bb4da1a fw/uiauto: Allow specifying landscape and portrait orientation
Previously the `setScreenOrientation` function only accepted relative
orientations; this causes issues when attempting to use it across tablets
and phones with different natural orientations. Now take into account the
current orientation and screen resolution to allow specifying portrait vs
landscape across different types of devices.
2019-07-04 13:18:48 +01:00
a5f0521353 utils/types: Fix typos 2019-06-28 17:56:13 +01:00
3435c36b98 fw/workload: Improve version matching and error propagation
Ensure that the appropriate error message is returned to the user to
outline what caused the version matching to fail.
Additionally fix the case where, if specifying a package name directly,
the version matching result would be ignored.
2019-06-28 17:56:13 +01:00
bd252a6471 fw/workload: Introduce max / min versions for apks
Allow specifying a maximum and minimum version of an APK to be used for
a workload.
2019-06-28 17:56:13 +01:00
f46851a3b4 utils/types: Add version_tuple
Allow for `version_tuple` to be used more generically to enable
natural comparing of versions encoded as strings.
2019-06-28 17:56:13 +01:00
8910234448 fw/workload: Don't override the package manager for ApkRevent workloads
`ApkRevent` workloads should be able to use the same APK selection
criteria as `ApkWorkloads`; therefore rely on the superclass to
instantiate the `PackageHandler`.
2019-06-28 17:56:13 +01:00
1108c5701e workloads: Update to better utilize cleanup_assets and uninstall
Update the workload classes to standardize the use of the
`cleanup_assets` parameter and the newly added `uninstall` parameter.
2019-06-28 17:54:04 +01:00
f5d1a9e94a fw/workload: Add the uninstall parameter to the base workload class
In addition to being able to specify whether the APK should be
uninstalled as part of an `ApkWorkload`'s teardown, add the `uninstall`
parameter to the base `Workload` class in order to specify whether any
binaries installed for a workload should be uninstalled again.
2019-06-28 17:54:04 +01:00
959106d61b fw/workload: Update description of cleanup_assets parameter
Improve the description of the parameter as the parameter may be used in
other places aside from the teardown method.
2019-06-28 17:54:04 +01:00
0aea3abcaf workloads: Add support for UIBench Jank Tests
Add a workload that launches UIBenchJankTests. This differs from the
UIBench application as it adds automation and instrumentation to that
APK. This therefore requires a different implementation than classical
ApkWorkloads as 2 APKs are required (UIBench and UIBenchJankTests) and
the main APK is invoked through `am instrument` (as opposed to `am
start`).
2019-06-28 09:27:56 +01:00
24ccc024f8 framework.workload: am instrument APK manager
Add support for Android applications that are invoked through `am
instrument` (as opposed to `am start`) _i.e._ that have been
instrumented. See AOSP `/platform_testing/tests/` for examples of such
applications.
2019-06-28 09:27:56 +01:00
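Illustrative only: an instrumented APK is driven via `am instrument` rather than `am start`; the package and runner names below are hypothetical, and a devlib-style `target.execute()` is assumed:

```python
PACKAGE = 'com.android.uibench.janktests'            # hypothetical
RUNNER = 'androidx.test.runner.AndroidJUnitRunner'   # hypothetical

def run_instrumented_test(target, test_class):
    # -w waits for the instrumentation to finish and returns its output.
    command = 'am instrument -w -e class {} {}/{}'.format(
        test_class, PACKAGE, RUNNER)
    return target.execute(command)
```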
42ab811032 workloads/lmbench: Fix missing run method declaration 2019-06-19 11:28:28 +01:00
832ed797e1 fw/config/execution: Raise error if no jobs are available for running
If no jobs have been generated that are available for running then WA
will crash when trying to access the job queue. Add an explicit check to
ensure that a sensible error is raised in this case, for example if
attempting to run a specific job ID that is not found.
2019-06-06 15:17:42 +01:00
31b44e447e setup.py: Add missing dependency for building documentation
In later versions of sphinx the rtd theme needs installing explicitly
as it is no longer included in the main package.
2019-06-04 14:53:59 +01:00
179b2e2264 Dockerfile: Update to install all available extras for WA and devlib
Install all extras of WA and devlib to be able to use all available
features within the docker container.
2019-06-04 14:53:59 +01:00
22437359b6 setup.py: Change short hand to install all extras to all
In our documentation we detail being able to install the `all` extra
as a shorthand for installing all the available extra packages that WA
may require; however, this was actually implemented as `everything`.
2019-06-04 14:53:59 +01:00
2347c8c007 setup.py: Add postgres dependency in extras list 2019-06-04 14:53:59 +01:00
52a0a79012 build_plugin_docs: Pylint fix
Fix various pylint warnings.
2019-06-04 14:53:20 +01:00
60693e1b65 doc: Fix build_instrument_method_map script
Fix a wrong call to a function in the script execution path.
2019-06-04 14:53:20 +01:00
8ddf16dfea doc: Patch for doc generation under Py3
Patch scripts with methods that are supported under Py2.7 and Py3.
2019-06-04 14:53:20 +01:00
9aec4850c2 workloads/uibench: Pylint Fix 2019-05-28 09:33:15 +01:00
bdaa26d772 Geekbench: Updating supported versions to include 4.3.2 2019-05-24 17:47:49 +01:00
d7aedae69c workloads/uibench: Initial commit
Add support for Android's UIBench suite of tests as a WA workload.
2019-05-24 17:47:35 +01:00
45af8c69b8 ApkWorkload: Support implicit activity path
If the activity field of an instance of ApkWorkload does not contain the
'.' character, it is assumed that it is in the Java namespace of the
application. This is similar to how activities can be referred to with
relative paths:
    com.domain.app/.activity
instead of
    com.domain.app/com.domain.app.activity
2019-05-24 17:47:35 +01:00
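The resolution rule sketched in code:

```python
def resolve_activity(package, activity):
    """Treat an activity with no '.' as living in the application's
    own Java namespace."""
    if '.' not in activity:
        return '{}.{}'.format(package, activity)
    return activity

print(resolve_activity('com.domain.app', 'MainActivity'))
# com.domain.app.MainActivity
```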
e398083f6e PCMark: Removing hard coded index to make the workload more robust 2019-05-22 11:07:43 +01:00
4ce41407e9 tests/test_agenda_parser: Ensure anchors can be used as part of agenda
Ensure that yaml anchors and aliases can be used within a WA agenda.
2019-05-17 20:04:33 +01:00
aa0564e8f3 tests/test_agenda_parser: Use custom yaml loader for test cases
Instead of using the default yaml loader, make sure to use our customised
loader. Also move the loading stage into our test cases, as this should
be part of each test case to ensure that it functions for the individual
case.
2019-05-17 20:04:33 +01:00
83f826d6fe utils/serializer: Re-fix support for YAML anchors
Include missing `flatten_mapping` call in our implementation of
`construct_mapping`. This is performed by a subclass in the default
implementation which was missing in our previous fix.
2019-05-17 20:04:33 +01:00
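The essence of the fix, sketched with PyYAML: an order-preserving `construct_mapping` must still call `flatten_mapping` so that anchors and merge keys are resolved before the pairs are constructed.

```python
from collections import OrderedDict
import yaml

class OrderedLoader(yaml.SafeLoader):
    pass

def construct_mapping(loader, node):
    loader.flatten_mapping(node)  # resolves merge keys such as '<<'
    return OrderedDict(loader.construct_pairs(node))

OrderedLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, construct_mapping)

doc = yaml.load('base: &b {x: 1}\nderived:\n  <<: *b\n  y: 2',
                Loader=OrderedLoader)
print(doc['derived'])  # OrderedDict([('x', 1), ('y', 2)])
```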
1599b59770 workloads: add aitutu
Add a workload to execute the Aitutu benchmark.
2019-05-17 13:26:36 +01:00
8cd9862e32 workloads/geekbench: Clean up version handling
The workload could attempt to use the version attribute to assess the
workload activity before it was discovered, causing an error; however,
the whole process can be simplified using newer discovery features.
2019-05-17 09:15:23 +01:00
b4ea2798dd tests/test_agenda_parser: Remove attribute assignment
Do not try and assign the name attribute of the yaml loaded agenda as
this is not used.
2019-05-15 19:48:39 +01:00
76e6f14212 utils/serializer: pylint fixes 2019-05-15 19:48:39 +01:00
ce59318e66 utils/serializer: Fix using incorrect loader and imports
- Ensure that the new loader is used when opening files to ensure that our
custom constructors are used.
- Fix missing imports
2019-05-15 19:48:39 +01:00
5652057adb utils/serializer: fix support for YAML anchors.
Change the way maps get processed by YAML constructor to support YAML
features, such as anchors, while still maintaining dict ordering.
2019-05-15 09:59:14 +01:00
e9f5577237 utils/serializer: fix error reporting for YAML
When attempting to access the message of an exception, check not only that
e.args is populated, but also that e.args[0] actually contains
something, before defaulting to str(e).
2019-05-15 09:57:52 +01:00
ec3d928b3b docs: Fix incorrect environment variable name 2019-04-26 08:05:51 +01:00
ee8bab365b docs/revent: Clarify the naming conventions for revent recordings
As per https://github.com/ARM-software/workload-automation/issues/968
the current documentation detailing the naming scheme for a revent
recording is unclear. Reword the descriptions, focusing on the typical
use case rather than basing them on a customized target class.
2019-04-26 08:05:51 +01:00
e3406bdb74 instruments/delay: Convert module name to identifier
- Ensure cooling module name is converted to identifier when resolving
- Fix typo
2019-04-26 08:04:45 +01:00
55d983ecaf workloads/vellamo: Fix initialization order
Ensure that uiauto parameters are set before calling the super method.
2019-04-26 08:04:45 +01:00
f8908e8194 Dockerfile: Update to newer base and Python version
- Update the base ubuntu image to 18.10 and switch to using Python3 for
installing WA.
- Fix typo in documentation.
2019-04-18 10:48:00 +01:00
dd44d6fa16 docs/api/workload: Update documentation for activity attribute 2019-04-18 10:44:50 +01:00
753786a45c fw/workload: Add activity attribute to APK workloads
Allow specifying an `activity` attribute for an APK based workload which
will override the automatically detected activity from the resolved APK.
2019-04-18 10:44:50 +01:00
8647ceafd8 workloads/meabo: Fix incorrect add_metric call 2019-04-03 11:33:27 +01:00
2c2118ad23 fw/resource: Fix attempting to match against empty values
Update checking of attributes to allow for empty structures, as they can
be set to empty lists etc., and therefore should not be checked for being
explicitly `None`.
2019-04-02 07:54:05 +01:00
0ec8427d05 fw/output: Implement retrieving "augmentations" for JobDatabaseOutputs
Enable retrieving augmentations on a per-job basis when using a Postgres
database backend.
2019-03-18 15:26:19 +00:00
cf5c3a2723 fw/output: Add missing "augmentation" attribute to JobOutput
Add attribute to `JobOutput` to allow easy listing of enabled augmentations
for individual jobs rather than just the overall run level.
2019-03-18 15:26:19 +00:00
8ddc1c1eba fw/version: Bump to development version 2019-03-04 15:50:13 +00:00
b5db4afc05 fw/version: Version Bump
Bump to the next revision release.
2019-03-04 15:50:13 +00:00
f977c3dfc8 setup.py: Update PyYaml Dependency 2019-03-04 15:50:13 +00:00
769aae3047 utils/serializer: Explicitly state yaml loader
In newer versions of PyYAML we need to manually specify the `Loader` to
be used as per https://msg.pyyaml.org/load.
`FullLoader` is now the default loader, which attempts to avoid arbitrary
code execution; however, if we are running an older version where this is
not available, fall back to the original Loader.
2019-03-04 15:50:13 +00:00
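The fallback pattern described above:

```python
import yaml

try:
    Loader = yaml.FullLoader   # PyYAML >= 5.1: safe-by-default loader
except AttributeError:
    Loader = yaml.Loader       # older PyYAML

print(yaml.load('answer: 42', Loader=Loader))  # {'answer': 42}
```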
a1ba3c6f69 utils/misc: Update load structure to use WA's yaml wrapper 2019-03-04 15:50:13 +00:00
536fc7eb92 workloads/stress_ng: Update to use WA's yaml wrapper 2019-03-04 15:50:13 +00:00
de36dacb82 fw/version: Bump to development versions 2019-03-04 10:37:39 +00:00
637bf57cbc fw/version: Bump revision versions
Bump the revision version for WA and the required version for
devlib.
2019-03-04 10:37:39 +00:00
60ffd27bba extras/Dockerfile: Update to use the latest release version 2019-03-04 10:37:39 +00:00
984a74a6ca doc/changes: Update changelog for v3.1.2 2019-03-04 10:37:39 +00:00
5b8dc1779c setup.py: Limit the maximum version of PyYAML
Specify the latest stable release of PyYAML should be installed rather
than the latest pre-release.
2019-03-04 10:37:39 +00:00
ba0cd7f842 fw/target/info: Bump target info version
Due to mismatches in WA and devlib versions, this previous upgrade method
could have been triggered before it was needed and would not be called a
second time. Now that we can be sure WA and devlib are updated together,
bump the version number again to ensure the upgrade method is called a
second time and the POD is upgraded correctly.
2019-03-04 10:37:39 +00:00
adb3ffa6aa fw/version: Introduce required version for devlib
To ensure that a compatible version of devlib is installed on the system
keep track of the version of devlib that is required by WA and provide a
more useful error message if this is not satisfied.
2019-03-04 10:37:39 +00:00
bedd3bf062 docs/faq: Add entry about missing kernel config errors
Although WA supports automatic updating during parsing of a serialized
`kernel_config` from devlib, if the installed versions of WA and devlib
have become out of sync, such that WA has "updated" the old implementation,
it will not attempt to update it again when devlib is later updated to use
the new implementation, and therefore will not trigger the existing
checks that are in place.
2019-02-21 11:57:32 +00:00
03e463ad4a docs/installation: Add warning about using pip to install from github 2019-02-20 16:30:53 +00:00
2ce8d6fc95 output_processors/postgresql: Add missing default
In the case of no screen resolution being present ensure that a default
is used instead of `None`.
2019-02-14 10:51:38 +00:00
1415f61e36 workloads/chrome: Fix for tablet devices
Some tablet devices use an alternate tab switching method due to the
larger screen space. Add support for adding new tabs via the menu
instead of via the tab switcher.
2019-02-08 14:32:58 +00:00
6ab1ae74a6 wa/apk_workloads: Update to not specify a default apk version.
No longer specify a default version, to allow any available apks to be
detected, and then choose the appropriate automation based on the
detected version.
Refactor to support the new supported_versions attribute, and since APK
resolution needs to have happened before setting the uiauto parameter,
move assignments to ``initialize``.
2019-02-08 13:56:55 +00:00
a1cecc0002 fw/workload: Add "supported_versions" attribute to workloads
Allow for specifying a list of supported APK versions for a workload. If
a specific version is not specified, then attempt to resolve any valid
version for the workload.
2019-02-08 13:56:55 +00:00
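A hedged sketch of a workload declaring the attribute (the class, package, and version strings here are hypothetical):

```python
from wa import ApkUiautoWorkload

class Example(ApkUiautoWorkload):  # hypothetical workload

    name = 'example'
    package_names = ['com.example.benchmark']
    # Any of these APK versions is acceptable; with no explicit
    # 'version' parameter, WA resolves whichever is available.
    supported_versions = ['7', '8']
```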
0cba3c68dc fw/resource: Support matching APKs on multiple versions.
In the case where a range of apk versions are valid allow for the matching
process to accommodate a list of versions instead of a single value.
2019-02-08 13:56:55 +00:00
f267fc9277 fw/workload: Use apk version for workload if not set.
If a workload's `version` attribute is not set, and an APK file is
found, use this as the version number. This allows for workloads to not
specify a default version via parameters and for an available APK to be
automatically chosen.
2019-02-08 13:56:55 +00:00
462a5b651a fw/output: add label property to Metric
Add a "label" property to Metric that combines its name with its
classifiers into a single string.
2019-02-05 10:27:06 +00:00
7cd7b73f58 Fixed an error emptying the reading buffer of the poller
Fixed indentation
2019-02-04 09:46:13 +00:00
4a9a2ad105 fw/target/info: Fix for KernelConfig refactor
The Devlib KernelConfig object was refactored in commit
f65130b7c7
therefore update the way KernelConfig objects are deserialized to reflect the new
implementation and provide a conversion for PODs.
2019-01-31 09:44:30 +00:00
9f88459f56 fw/workload: Fix Typo 2019-01-30 15:46:54 +00:00
a2087ea467 workloads/manual: Fix incorrect attribute used to access target 2019-01-30 15:46:54 +00:00
31a5a95803 output_processors/postgresql: Ensure screen resolution is a list
Ensure that the screen resolution is converted to a list to prevent
casting errors.
2019-01-30 15:46:54 +00:00
3f202205a5 doc/faq: Add answer on how to fall back to surfaceflinger 2019-01-28 12:45:10 +00:00
ce7720b26d instruments/fps: Fix Typo 2019-01-28 12:45:10 +00:00
766b96e2ad fw/workload: Add a 'View' parameter to ApkWorkloads
Allow for easy configuration of a view for a particular workload, as this
can vary depending on the device; it can be used with certain
instruments, for example `fps`.
2019-01-11 10:12:42 +00:00
3c9de98a4b setup: Update devlib requirements to development versions. 2019-01-11 10:12:26 +00:00
5263cfd6f8 fw/version: Add development tag to version
Add a development tag to the version format instead of using the
revision field.
2019-01-11 10:12:26 +00:00
237 changed files with 5902 additions and 1310 deletions
.github/ISSUE_TEMPLATE
.readthedocs.yml
.travis.yml
README.rst
dev_scripts
doc
extras
pytest.ini
requirements.txt
setup.py
tests
wa
__init__.py
commands
framework
instruments
output_processors
utils
workloads
adobereader
aitutu
androbench
antutu
applaunch
benchmarkpi
chrome
dhrystone
exoplayer
geekbench
gfxbench
glbenchmark
gmail
googlemaps
googlephotos
googleplaybooks
googleslides
hackbench
hwuitest
idle.py
jankbench
lmbench
manual
meabo
memcpy
mongoperf
motionmark
openssl
pcmark
rt_app
shellscript
speedometer
stress_ng
sysbench
uibench
uibenchjanktests
vellamo
youtube

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@ -0,0 +1,16 @@
---
name: Bug report
about: Create a report to help resolve an issue.
title: ''
labels: bug
assignees: ''
---
**Describe the issue**
A clear and concise description of what the bug is.
**Run Log**
Please attach your `run.log` detailing the issue.
**Other comments (optional)**

@ -0,0 +1,17 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context about the feature request here.

@ -0,0 +1,10 @@
---
name: 'Question / Support '
about: Ask a question or request support
title: ''
labels: question
assignees: ''
---
**

.github/ISSUE_TEMPLATE/question.md vendored Normal file

@ -0,0 +1,11 @@
---
name: Question
about: Ask a question
title: ''
labels: question
assignees: ''
---
**Describe your query**
What would you like to know / what are you trying to achieve?

.readthedocs.yml Normal file

@ -0,0 +1,21 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
sphinx:
builder: html
configuration: doc/source/conf.py
# Build the docs in additional formats such as PDF and ePub
formats: all
# Set the version of Python and requirements required to build your docs
python:
version: 3.7
install:
- method: setuptools
path: .

@ -17,13 +17,12 @@ language: python
python:
- "3.6"
- "2.7"
install:
- pip install nose
- pip install nose2
- pip install flake8
- pip install pylint==1.9.2
- pip install pylint==2.6.0
- git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && cd /tmp/devlib && python setup.py install
- cd $TRAVIS_BUILD_DIR && python setup.py install
@ -45,10 +44,3 @@ env:
- TEST="$PROCESS_CMD && $SHOW_CMD && $LIST_CMD && $CREATE_CMD"
script:
- echo $TEST && eval $TEST
matrix:
exclude:
- python: "2.7"
env: TEST=$PYLINT
- python: "2.7"
env: TEST=$PEP8

@ -18,7 +18,7 @@ workloads, instruments or output processing.
Requirements
============
- Python 2.7 or Python 3
- Python 3.5+
- Linux (should work on other Unixes, but untested)
- Latest Android SDK (ANDROID_HOME must be set) for Android devices, or
- SSH for Linux devices
@ -30,7 +30,11 @@ Installation
To install::
git clone git@github.com:ARM-software/workload-automation.git workload-automation
sudo -H pip install ./workload-automation
sudo -H python setup.py [install|develop]
Note: A `requirements.txt` is included, however this is designed to be used as a
reference for known working versions rather than as part of a standard
installation.
Please refer to the `installation section <http://workload-automation.readthedocs.io/en/latest/user_information.html#install>`_
in the documentation for more details.

@ -6,7 +6,7 @@ DEFAULT_DIRS=(
EXCLUDE=wa/tests,wa/framework/target/descriptor.py
EXCLUDE_COMMA=
IGNORE=E501,E265,E266,W391,E401,E402,E731,W504,W605,F401
IGNORE=E501,E265,E266,W391,E401,E402,E731,W503,W605,F401
if ! hash flake8 2>/dev/null; then
echo "flake8 not found in PATH"

@ -1,5 +1,5 @@
#!/usr/bin/env python
# Copyright 2015-2015 ARM Limited
# Copyright 2015-2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -26,10 +26,11 @@ OUTPUT_TEMPLATE_FILE = os.path.join(os.path.dirname(__file__), 'source', 'instr
def generate_instrument_method_map(outfile):
signal_table = format_simple_table([(k, v) for k, v in SIGNAL_MAP.iteritems()],
signal_table = format_simple_table([(k, v) for k, v in SIGNAL_MAP.items()],
headers=['method name', 'signal'], align='<<')
priority_table = format_simple_table(zip(CallbackPriority.names, CallbackPriority.values),
headers=['decorator', 'priority'], align='<>')
decorator_names = map(lambda x: x.replace('high', 'fast').replace('low', 'slow'), CallbackPriority.names)
priority_table = format_simple_table(zip(decorator_names, CallbackPriority.names, CallbackPriority.values),
headers=['decorator', 'CallbackPriority name', 'CallbackPriority value'], align='<>')
with open(OUTPUT_TEMPLATE_FILE) as fh:
template = string.Template(fh.read())
with open(outfile, 'w') as wfh:
@ -37,4 +38,4 @@ def generate_instrument_method_map(outfile):
if __name__ == '__main__':
generate_instrumentation_method_map(sys.argv[1])
generate_instrument_method_map(sys.argv[1])

@ -1,5 +1,5 @@
#!/usr/bin/env python
# Copyright 2014-2015 ARM Limited
# Copyright 2014-2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -25,7 +25,12 @@ from wa.utils.doc import (strip_inlined_text, get_rst_from_plugin,
get_params_rst, underline, line_break)
from wa.utils.misc import capitalize
GENERATE_FOR_PACKAGES = ['wa.workloads', 'wa.instruments', 'wa.output_processors']
GENERATE_FOR_PACKAGES = [
'wa.workloads',
'wa.instruments',
'wa.output_processors',
]
def insert_contents_table(title='', depth=1):
"""
@ -41,6 +46,7 @@ def insert_contents_table(title='', depth=1):
def generate_plugin_documentation(source_dir, outdir, ignore_paths):
# pylint: disable=unused-argument
pluginloader.clear()
pluginloader.update(packages=GENERATE_FOR_PACKAGES)
if not os.path.exists(outdir):
@ -57,7 +63,7 @@ def generate_plugin_documentation(source_dir, outdir, ignore_paths):
exts = pluginloader.list_plugins(ext_type)
sorted_exts = iter(sorted(exts, key=lambda x: x.name))
try:
wfh.write(get_rst_from_plugin(sorted_exts.next()))
wfh.write(get_rst_from_plugin(next(sorted_exts)))
except StopIteration:
return
for ext in sorted_exts:
@ -73,9 +79,11 @@ def generate_target_documentation(outdir):
'juno_linux',
'juno_android']
intro = '\nThis is a list of commonly used targets and their device '\
'parameters, to see a complete for a complete reference please use the '\
'WA :ref:`list command <list-command>`.\n\n\n'
intro = (
'\nThis is a list of commonly used targets and their device '
'parameters, to see a complete reference please use the'
' WA :ref:`list command <list-command>`.\n\n\n'
)
pluginloader.clear()
pluginloader.update(packages=['wa.framework.target.descriptor'])
@ -112,7 +120,8 @@ def generate_config_documentation(config, outdir):
if not os.path.exists(outdir):
os.mkdir(outdir)
outfile = os.path.join(outdir, '{}.rst'.format('_'.join(config.name.split())))
config_name = '_'.join(config.name.split())
outfile = os.path.join(outdir, '{}.rst'.format(config_name))
with open(outfile, 'w') as wfh:
wfh.write(get_params_rst(config.config_points))

@ -284,6 +284,13 @@ methods
:return: A list of `str` labels of workloads that were part of this run.
.. method:: RunOutput.add_classifier(name, value, overwrite=False)
Add a classifier to the run as a whole. If a classifier with the specified
``name`` already exists, a ``ValueError`` will be raised, unless
`overwrite=True` is specified.
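A minimal usage sketch (assuming ``RunOutput`` is importable from ``wa`` and an existing ``wa_output`` directory; the classifier name and value are illustrative):

.. code-block:: python

    from wa import RunOutput

    ro = RunOutput('./wa_output')
    ro.add_classifier('device', 'juno')
    # Adding the same name again raises ValueError unless overwrite is requested:
    ro.add_classifier('device', 'tc2', overwrite=True)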
:class:`RunDatabaseOutput`
---------------------------
@ -315,9 +322,12 @@ methods
.. method:: RunDatabaseOutput.get_artifact_path(name)
Returns a `StringIO` object containing the contents of the artifact
specified by ``name``. This will only look at the run artifacts; this will
not search the artifacts of the individual jobs.
If the artifact is a file this method returns a `StringIO` object containing
the contents of the artifact specified by ``name``. If the artifact is a
directory, the method returns a path to a locally extracted version of the
directory which is left to the user to remove after use. This will only look
at the run artifacts; this will not search the artifacts of the individual
jobs.
:param name: The name of the artifact whose path to retrieve.
:return: A `StringIO` object with the contents of the artifact
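Since extracted directories are left for the caller to remove, a sketch of safe usage (continuing from a ``RunDatabaseOutput`` instance ``ro``; ``'sysfs'`` is an illustrative artifact name):

.. code-block:: python

    import os
    import shutil

    artifact = ro.get_artifact_path('sysfs')
    if isinstance(artifact, str) and os.path.isdir(artifact):
        try:
            pass  # inspect the locally extracted directory here
        finally:
            shutil.rmtree(artifact)  # caller is responsible for cleanup
    else:
        contents = artifact.read()  # file artifacts are returned as StringIO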
@ -399,7 +409,7 @@ artifacts, metadata, and configuration. It has the following attributes:
methods
~~~~~~~
.. method:: RunOutput.get_artifact(name)
.. method:: JobOutput.get_artifact(name)
Return the :class:`Artifact` specified by ``name`` associated with this job.
@ -407,7 +417,7 @@ methods
:return: The :class:`Artifact` with that name
:raises HostError: If the artifact with the specified name does not exist.
.. method:: RunOutput.get_artifact_path(name)
.. method:: JobOutput.get_artifact_path(name)
Return the path to the file backing the artifact specified by ``name``,
associated with this job.
@ -416,13 +426,20 @@ methods
:return: The path to the artifact
:raises HostError: If the artifact with the specified name does not exist.
.. method:: RunOutput.get_metric(name)
.. method:: JobOutput.get_metric(name)
Return the :class:`Metric` associated with this job with the specified
`name`.
:return: The :class:`Metric` object for the metric with the specified name.
.. method:: JobOutput.add_classifier(name, value, overwrite=False)
Add a classifier to the job. The classifier will be propagated to all
existing artifacts and metrics, as well as those added afterwards. If a
classifier with the specified ``name`` already exists, a ``ValueError`` will
be raised, unless `overwrite=True` is specified.
:class:`JobDatabaseOutput`
---------------------------
@ -452,8 +469,11 @@ methods
.. method:: JobDatabaseOutput.get_artifact_path(name)
Returns a ``StringIO`` object containing the contents of the artifact
specified by ``name`` associated with this job.
If the artifact is a file this method returns a `StringIO` object containing
the contents of the artifact specified by ``name`` associated with this job.
If the artifact is a directory, the method returns a path to a locally
extracted version of the directory which is left to the user to remove after
use.
:param name: The name of the artifact whose path to retrieve.
:return: A `StringIO` object with the contents of the artifact
@ -497,6 +517,11 @@ A :class:`Metric` has the following attributes:
or they may have been added by the workload to help distinguish between
otherwise identical metrics.
``label``
This is a string constructed from the name and classifiers, to provide a
more unique identifier, e.g. for grouping values across iterations. The
format is in the form ``name/classifier1=value1/classifier2=value2/...``.
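A sketch of how such a label might be constructed (not WA's actual implementation):

.. code-block:: python

    def metric_label(name, classifiers):
        # e.g. metric_label('score', {'test': 'Atomic'}) -> 'score/test=Atomic'
        parts = ['{}={}'.format(k, v) for k, v in classifiers.items()]
        return '/'.join([name] + parts)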
:class:`Artifact`
-----------------
@ -597,6 +622,12 @@ The available attributes of the class are as follows:
The name of the target class that was used to interact with the device
during the run, e.g. ``"AndroidTarget"``, ``"LinuxTarget"`` etc.
``modules``
A list of names of modules that have been loaded by the target. Modules
provide additional functionality, such as access to ``cpufreq``, and which
modules are installed may impact how much of the ``TargetInfo`` has been
populated.
``cpus``
A list of :class:`CpuInfo` objects describing the capabilities of each CPU.

@ -178,6 +178,16 @@ methods.
locations) and device will be searched for an application with a matching
package name.
``supported_versions``
This attribute should be a list of apk versions that are suitable for this
workload. If a specific apk version is not specified then any available
supported version may be chosen.
``activity``
This attribute can be optionally set to override the default activity that
will be extracted from the selected APK file which will be used when
launching the APK.
``view``
This is the "view" associated with the application. This is used by
instruments like ``fps`` to monitor the current framerate being generated by
the application.
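To illustrate the attributes above, a hypothetical ``ApkWorkload`` subclass might declare (all names and versions below are made up):

.. code-block:: python

    from wa import ApkWorkload

    class Demo(ApkWorkload):
        name = 'demo'
        package_names = ['com.example.demo']       # hypothetical package
        supported_versions = ['4.1.0', '4.2.3']    # any listed version may be used
        activity = '.MainActivity'                 # overrides the auto-detected activity
        view = 'SurfaceView - com.example.demo#0'  # used by e.g. the fps instrument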

@ -2,6 +2,268 @@
What's New in Workload Automation
=================================
***********
Version 3.3
***********
New Features:
==============
Commands:
---------
- Add ``report`` command to provide a summary of a run.
Instruments:
------------
- Add ``proc_stat`` instrument to monitor CPU load using data from ``/proc/stat``.
Framework:
----------
- Add support for simulating atomic writes to prevent race conditions when running concurrent instances of WA.
- Add support for file transfers over SSH connections via SFTP, falling back to an SCP implementation.
- Support detection of logcat buffer overflow and present a warning if this occurs.
- Allow skipping all remaining jobs if a job has exhausted all of its retries.
- Add polling mechanism for file transfers rather than relying on timeouts.
- Add `run_completed` reboot policy to enable rebooting a target after a run has been completed.
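For example, the new policy might be selected in an agenda's config section like so (a sketch):

.. code-block:: yaml

    config:
        reboot_policy: run_completed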
Android Devices:
----------------
- Enable configuration of whether to keep the screen on while the device is plugged in.
Output Processors:
------------------
- Enable the use of cascading deletion in Postgres databases to clean up after deletion of a run entry.
Fixes/Improvements
==================
Framework:
----------
- Improvements to the ``process`` command to correctly handle skipped and in-process jobs.
- Add support for deprecated parameters allowing for a warning to be raised when providing
a parameter that will no longer have an effect.
- Switch implementation of SSH connections to use Paramiko for greater stability.
- By default use SFTP for file transfers with SSH connections; allow falling back to SCP
by setting ``use_scp``.
- Fix callbacks not being disconnected correctly when requested.
- ``ApkInfo`` objects are now cached to reduce re-parsing of APK files.
- Speed up discovery of wa output directories.
- Fix merge handling of parameters from multiple files.
Dockerfile:
-----------
- Install additional instruments for use in the docker environment.
- Fix environment variables not being defined in non-interactive environments.
Instruments:
------------
- ``trace_cmd``: Additional fixes for Python 3 support.
Output Processors:
------------------
- ``postgres``: Fixed SQL command when creating a new event.
Workloads:
----------
- ``aitutu``: Improve reliability of results extraction.
- ``androbench``: Enable dismissing of additional popups on some devices.
- ``antutu``: Now supports major version 8 in addition to version 7.X.
- ``exoplayer``: Add support for Android 10.
- ``googlephotos``: Support newer apk version.
- ``gfxbench``: Allow user configuration of which tests should be run.
- ``gfxbench``: Improved score detection for a wider range of devices.
- ``gfxbench``: Moved results extraction out of run stage.
- ``jankbench``: Support newer versions of Pandas for processing.
- ``pcmark``: Add support for handling additional popups and installation flows.
- ``pcmark``: No longer clear and re-download test data before each execution.
- ``speedometer``: Enable the workload to run offline and drop the requirement for
UiAutomator. To support this, root access is now required to run the workload.
- ``youtube``: Update to support later versions of the apk.
Other:
------
- ``cpustates``: Improved name handling for unknown idle states.
***********
Version 3.2
***********
.. warning:: This release only supports Python 3.5+. Python 2 support has now
been dropped.
Fixes/Improvements
==================
Framework:
----------
- ``TargetInfo`` now tracks installed modules and will ensure the cache is
also updated on module change.
- Migrated the build scripts for uiauto based workloads to Python 3.
- Uiauto applications now target SDK version 28 to prevent PlayProtect
blocking the installation of the automation apks on some devices.
- The workload metadata now includes the apk package name if applicable.
Instruments:
------------
- ``energy_instruments`` will now have their ``teardown`` method called
correctly.
- ``energy_instruments``: Added a ``keep_raw`` parameter to control whether
raw files generated during execution should be deleted upon teardown.
- Update relevant instruments to make use of the new devlib collector
interface, for more information please see the
`devlib documentation <https://devlib.readthedocs.io/en/latest/collectors.html>`_.
Output Processors:
------------------
- ``postgres``: If initialisation fails then the output processor will no
longer attempt to reconnect at a later point during the run.
- ``postgres``: Will now ensure that the connection to the database is
re-established if it is dropped e.g. due to a long-running workload.
- ``postgres``: Change the type of the ``hostid`` field to ``Bigint`` to
allow a larger range of ids.
- ``postgres``: Bump schema version to 1.5.
- ``perf``: Added support for the ``simpleperf`` profiling tool for android
devices.
- ``perf``: Added support for the perf ``record`` command.
- ``cpustates``: Improve handling of situations where cpufreq and/or cpuinfo
data is unavailable.
Workloads:
----------
- ``adobereader``: Now supports apk version 19.7.1.10709.
- ``antutu``: Supports dismissing of popup asking to create a shortcut on
the homescreen.
- ``gmail``: Now supports apk version 2019.05.26.252424914.
- ``googlemaps``: Now supports apk version 10.19.1.
- ``googlephotos``: Now supports apk version 4.28.0.
- ``geekbench``: Added support for versions 4.3.4, 4.4.0 and 4.4.2.
- ``geekbench-corporate``: Added support for versions 5.0.1 and 5.0.3.
- ``pcmark``: Now locks device orientation to portrait to increase
compatibility.
- ``pcmark``: Supports dismissing new Android 10 permission warnings.
Other:
------
- Improve documentation to help debugging module installation errors.
*************
Version 3.1.4
*************
.. warning:: This is the last release that supports Python 2. Subsequent versions
will support Python 3.5+ only.
New Features:
==============
Framework:
----------
- ``ApkWorkload``: Allow specifying a maximum and minimum version of an APK
instead of requiring a specific version.
- ``TestPackageHandler``: Added to support running android applications that
are invoked via ``am instrument``.
- Directories can now be added as ``Artifacts``.
Workloads:
----------
- ``aitutu``: Executes the Aitutu Image Speed/Accuracy and Object
Speed/Accuracy tests.
- ``uibench``: Run a configurable activity of the UIBench workload suite.
- ``uibenchjanktests``: Run an automated and instrumented version of the
UIBench JankTests.
- ``motionmark``: Run a browser graphical benchmark.
Other:
------
- Added ``requirements.txt`` as a reference for known working package versions.
Fixes/Improvements
==================
Framework:
----------
- ``JobOutput``: Added an ``augmentation`` attribute to allow listing of
enabled augmentations for individual jobs.
- Better error handling for misconfiguration in job selection.
- All ``Workload`` classes now have an ``uninstall`` parameter to control whether
any binaries installed to the target should be uninstalled again once the
run has completed.
- The ``cleanup_assets`` parameter is now more consistently utilized across
workloads.
- ``ApkWorkload``: Added an ``activity`` attribute to allow for overriding the
automatically detected activity from the APK.
- ``ApkWorkload``: Added support for providing an implicit activity path.
- Fixed retrieving job level artifacts from a database backend.
Output Processors:
------------------
- ``SysfsExtractor``: Ensure that the extracted directories are added as
``Artifacts``.
- ``InterruptStatsInstrument``: Ensure that the output files are added as
``Artifacts``.
- ``Postgres``: Fix missing ``system_id`` field from ``TargetInfo``.
- ``Postgres``: Support uploading directory ``Artifacts``.
- ``Postgres``: Bump the schema version to v1.3.
Workloads:
----------
- ``geekbench``: Improved apk version handling.
- ``geekbench``: Now supports apk version 4.3.2.
Other:
------
- ``Dockerfile``: Now installs all optional extras for use with WA.
- Fixed support for YAML anchors.
- Fixed building of documentation with Python 3.
- Changed shorthand of installing all of WA extras to `all` as per
the documentation.
- Upgraded the Dockerfile to use Ubuntu 18.10 and Python 3.
- Restricted maximum versions of ``numpy`` and ``pandas`` for Python 2.7.
*************
Version 3.1.3
*************
Fixes/Improvements
==================
Other:
------
- Security update for PyYAML to attempt prevention of arbitrary code execution
during parsing.
*************
Version 3.1.2
*************
Fixes/Improvements
==================
Framework:
----------
- Implement an explicit check for Devlib versions to ensure that versions
are kept in sync with each other.
- Added a ``View`` parameter to ApkWorkloads for use with certain instruments
for example ``fps``.
- Added ``"supported_versions"`` attribute to workloads to allow specifying a
list of supported versions for a particular workload.
- Change default behaviour to run any available version of a workload if a
specific version is not specified.
Output Processors:
------------------
- ``Postgres``: Fix handling of ``screen_resolution`` during processing.
Other
-----
- Added additional information to the documentation.
- Added a fix for Devlib's ``KernelConfig`` refactor.
- Added a ``"label"`` property to ``Metrics``.
*************
Version 3.1.1
*************

File diff suppressed because one or more lines are too long

(binary image diff: 63 KiB before, 74 KiB after)

@ -37,8 +37,8 @@ This section contains reference information common to plugins of all types.
The Context
~~~~~~~~~~~
.. note:: For clarification on the meaning of "workload specification" ("spec"), "job"
and "workload" and the distiction between them, please see the :ref:`glossary <glossary>`.
.. note:: For clarification on the meaning of "workload specification" ("spec"), "job"
and "workload" and the distinction between them, please see the :ref:`glossary <glossary>`.
The majority of methods in plugins accept a context argument. This is an
instance of :class:`wa.framework.execution.ExecutionContext`. It contains
@ -119,7 +119,7 @@ context.output_directory
This is the output directory for the current iteration. This will be an
iteration-specific subdirectory under the main results location. If
there is no current iteration (e.g. when processing overall run results)
this will point to the same location as ``root_output_directory``.
this will point to the same location as ``run_output_directory``.
Additionally, the global ``wa.settings`` object exposes one other location:
@ -158,7 +158,7 @@ irrespective of the host's path notation. For example:
.. note:: Output processors, unlike workloads and instruments, do not have their
own target attribute as they are designed to be able to be run offline.
.. _plugin-parmeters:
.. _plugin-parameters:
Parameters
~~~~~~~~~~~

@ -5,10 +5,12 @@ Convention for Naming revent Files for Revent Workloads
-------------------------------------------------------------------------------
There is a convention for naming revent files which you should follow if you
want to record your own revent files. Each revent file must start with the
device name(case sensitive) then followed by a dot '.' then the stage name
then '.revent'. All your custom revent files should reside at
``'~/.workload_automation/dependencies/WORKLOAD NAME/'``. These are the current
want to record your own revent files. Each revent file must be called (case sensitive)
``<device name>.<stage>.revent``,
where ``<device name>`` is the name of your device (as defined by the model
name of your device which can be retrieved with
``adb shell getprop ro.product.model`` or by the ``name`` attribute of your
customized device class), and ``<stage>`` is one of the following currently
supported stages:
:setup: This stage is where the application is loaded (if present). It is
@ -26,10 +28,12 @@ Only the run stage is mandatory, the remaining stages will be replayed if a
recording is present otherwise no actions will be performed for that particular
stage.
For instance, to add a custom revent files for a device named "mydevice" and
a workload name "myworkload", you need to add the revent files to the directory
``/home/$WA_USER_HOME/dependencies/myworkload/revent_files`` creating it if
necessary. ::
All your custom revent files should reside at
``'$WA_USER_DIRECTORY/dependencies/WORKLOAD NAME/'``. So
typically to add a custom revent files for a device named "mydevice" and a
workload name "myworkload", you would need to add the revent files to the
directory ``~/.workload_automation/dependencies/myworkload/revent_files``
creating the directory structure if necessary. ::
mydevice.setup.revent
mydevice.run.revent
@ -332,6 +336,6 @@ recordings in scripts. Here is an example:
from wa.utils.revent import ReventRecording
with ReventRecording('/path/to/recording.revent') as recording:
print "Recording: {}".format(recording.filepath)
print "There are {} input events".format(recording.num_events)
print "Over a total of {} seconds".format(recording.duration)
print("Recording: {}".format(recording.filepath))
print("There are {} input events".format(recording.num_events))
print("Over a total of {} seconds".format(recording.duration))

will automatically generate a workload in your ``WA_CONFIG_DIR/plugins``. If
you wish to specify a custom location this can be provided with ``-p
<path>``
A typical invocation of the :ref:`create <create-command>` command would be in
the form::
wa create workload -k <workload_kind> <workload_name>
.. _adding-a-basic-workload-example:
Adding a Basic Workload
-----------------------
To add a basic workload you can simply use the command::
To add a ``basic`` workload template for our example workload we can simply use the
command::
wa create workload basic
wa create workload -k basic ziptest
This will generate a very basic workload with dummy methods for the workload
interface and it is left to the developer to add any required functionality to
the workload.
This will generate a very basic workload with dummy methods for each method in
the workload interface, and it is left to the developer to add any required functionality.
Not all the methods are required to be implemented, this example shows how a
subset might be used to implement a simple workload that times how long it takes
to compress a file of a particular size on the device.
Not all the methods from the interface are required to be implemented, this
example shows how a subset might be used to implement a simple workload that
times how long it takes to compress a file of a particular size on the device.
.. note:: This is intended as an example of how to implement the Workload
@ -87,14 +93,15 @@ in this example we are implementing a very simple workload and do not
require any additional features so shall inherit directly from the base
:class:`Workload` class. We then need to provide a ``name`` for our workload
which is what will be used to identify your workload for example in an
agenda or via the show command.
agenda or via the show command. If you used the `create` command this will
already be populated for you.
.. code-block:: python
import os
from wa import Workload, Parameter
class ZipTestWorkload(Workload):
class ZipTest(Workload):
name = 'ziptest'
@ -113,7 +120,7 @@ separated by a new line.
'''
In order to allow for additional configuration of the workload from a user a
list of :ref:`parameters <plugin-parmeters>` can be supplied. These can be
list of :ref:`parameters <plugin-parameters>` can be supplied. These can be
configured in a variety of different ways. For example here we are ensuring that
the value of the parameter is an integer and larger than 0 using the ``kind``
and ``constraint`` options; also, if no value is provided we are providing a default.
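A parameter along these lines might look like the following sketch (the default value is illustrative):

.. code-block:: python

    from wa import Parameter

    parameters = [
        Parameter('file_size', kind=int, default=2000000,
                  constraint=lambda x: x > 0,
                  description='Size (in bytes) of the file to compress.'),
    ]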
@ -533,21 +540,19 @@ again decorated the method. ::
Once we have generated our result data we need to retrieve it from the device
for further processing or adding directly to WA's output for that job. For
example for trace data we will want to pull it to the device and add it as a
:ref:`artifact <artifact>` to WA's :ref:`context <context>` as shown below::
:ref:`artifact <artifact>` to WA's :ref:`context <context>`. Once we have
retrieved the data, we can now do any further processing and add any relevant
:ref:`Metrics <metrics>` to the :ref:`context <context>`. For this we will use
the ``add_metric`` method to add the results to the final output for that
workload. The method can be passed 4 params, which are the metric `key`,
`value`, `unit` and `lower_is_better`. ::
def extract_results(self, context):
def update_output(self, context):
# pull the trace file from the target
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
Once we have retrieved the data we can now do any further processing and add any
relevant :ref:`Metrics <metrics>` to the :ref:`context <context>`. For this we
will use the ``add_metric`` method to add the results to the final output
for that workload. The method can be passed 4 params, which are the metric
`key`, `value`, `unit` and `lower_is_better`. ::
def update_output(self, context):
# parse the file if needs to be parsed, or add result directly to
# context.
@ -588,12 +593,11 @@ So the full example would look something like::
def stop(self, context):
self.target.execute('{} stop'.format(self.trace_on_target))
def extract_results(self, context):
def update_output(self, context):
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
def update_output(self, context):
metric = # ..
context.add_metric('number_of_errors', metric, lower_is_better=True)

@ -69,7 +69,72 @@ WA3 config file.
**Q:** My Juno board keeps resetting upon starting WA even if it hasn't crashed.
--------------------------------------------------------------------------------
Please ensure that you do not have any other terminals (e.g. ``screen``
**A:** Please ensure that you do not have any other terminals (e.g. ``screen``
sessions) connected to the board's UART. When WA attempts to open the connection
for its own use this can cause the board to reset if a connection is already
present.
**Q:** I'm using the FPS instrument but I do not get any/correct results for my workload
-----------------------------------------------------------------------------------------
**A:** If your device is running Android 6.0+ then the default utility for
collecting fps metrics will be ``gfxinfo`` however this does not seem to be able
to extract any meaningful information for some workloads. In this case please
try setting the ``force_surfaceflinger`` parameter for the ``fps`` augmentation
to ``True``. This will attempt to guess the "View" for the workload
automatically however this is device specific and therefore may need
customizing. If this is required please open the application and execute
``dumpsys SurfaceFlinger --list`` on the device via adb. This will provide a
list of all views available for measuring.
As an example, when trying to find the view for the AngryBirds Rio workload you
may get something like:
.. code-block:: none
...
AppWindowToken{41dfe54 token=Token{77819a7 ActivityRecord{a151266 u0 com.rovio.angrybirdsrio/com.rovio.fusion.App t506}}}#0
a3d001c com.rovio.angrybirdsrio/com.rovio.fusion.App#0
Background for -SurfaceView - com.rovio.angrybirdsrio/com.rovio.fusion.App#0
SurfaceView - com.rovio.angrybirdsrio/com.rovio.fusion.App#0
com.rovio.angrybirdsrio/com.rovio.fusion.App#0
boostedAnimationLayer#0
mAboveAppWindowsContainers#0
...
From these ``"SurfaceView - com.rovio.angrybirdsrio/com.rovio.fusion.App#0"`` is
the mostly likely the View that needs to be set as the ``view`` workload
parameter and will be picked up be the ``fps`` augmentation.
**Q:** I am getting an error which looks similar to ``'CONFIG_SND_BT87X is not exposed in kernel config'...``
-------------------------------------------------------------------------------------------------------------
**A:** If you are receiving this under normal operation this can be caused by a
mismatch of your WA and devlib versions. Please update both to their latest
versions and delete your ``$USER_HOME/.workload_automation/cache/targets.json``
(or equivalent) file.
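For example, on a typical Linux setup::

    rm ~/.workload_automation/cache/targets.json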
**Q:** I get an error which looks similar to ``UnicodeDecodeError('ascii' codec can't decode byte...``
------------------------------------------------------------------------------------------------------
**A:** If you receive this error or a similar warning about your environment,
please ensure that you configure your environment to use a locale which supports
UTF-8. Otherwise this can cause issues when attempting to parse files containing
non-ASCII characters.
**Q:** I get the error ``Module "X" failed to install on target``
------------------------------------------------------------------------------------------------------
**A:** By default a set of devlib modules will be automatically loaded onto the
target designed to add additional functionality. If the functionality provided
by the module is not required then the module can be safely disabled by setting
``load_default_modules`` to ``False`` in the ``device_config`` entry of the
:ref:`agenda <config-agenda-entry>` and then re-enabling any specific modules
that are still required. An example agenda snippet is shown below:
.. code-block:: none
config:
device: generic_android
device_config:
load_default_modules: False
modules: ['list', 'of', 'modules', 'to', 'enable']

@ -13,10 +13,11 @@ these signals are dispatched during execution please see the
$signal_names
The methods above may be decorated with one of the listed decorators to set the
priority of the Instrument method relative to other callbacks registered for the
signal (within the same priority level, callbacks are invoked in the order they
were registered). The table below shows the mapping of the decorator to the
corresponding priority:
priority (a value in the ``wa.framework.signal.CallbackPriority`` enum) of the
Instrument method relative to other callbacks registered for the signal (within
the same priority level, callbacks are invoked in the order they were
registered). The table below shows the mapping of the decorator to the
corresponding priority name and level:
$priority_prefixes
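As an illustration, an instrument method might be decorated as in the sketch below (assuming the priority decorators are exposed from ``wa.framework.instrument``):

.. code-block:: python

    from wa import Instrument
    from wa.framework.instrument import fast, slow

    class ExampleInstrument(Instrument):
        name = 'example'

        @fast
        def start(self, context):
            pass  # invoked before lower-priority callbacks for the same signal

        @slow
        def update_output(self, context):
            pass  # invoked after higher-priority callbacks for the same signal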

@ -16,7 +16,7 @@ Configuration
Default configuration file change
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Instead of the standard ``config.py`` file located at
``$WA_USER_HOME/config.py`` WA now uses a ``config.yaml`` file (at the same
``$WA_USER_DIRECTORY/config.py`` WA now uses a ``config.yaml`` file (at the same
location) which is written in the YAML format instead of python. Additionally
upon first invocation WA3 will automatically try and detect whether a WA2 config
file is present and convert it to use the new WA3 format. During this process

@ -17,6 +17,8 @@ further configuration will be required.
Android
-------
.. _android-general-device-setup:
General Device Setup
^^^^^^^^^^^^^^^^^^^^
@ -44,12 +46,15 @@ common parameters you might want to change are outlined below.
Android builds. If this is not the case for your device, you will need to
specify an alternative working directory (e.g. under ``/data/local``).
:load_default_modules: A number of "default" modules (e.g. for cpufreq
subsystem) are loaded automatically, unless explicitly disabled. If you
encounter an issue with one of the modules then this setting can be set to
``False`` and any specific modules that you require can be requested via the
``modules`` entry.
:modules: A list of additional modules to be installed for the target. Devlib
implements functionality for particular subsystems as modules. A number of
"default" modules (e.g. for cpufreq subsystem) are loaded automatically,
unless explicitly disabled. If additional modules need to be loaded, they
may be specified using this parameter.
implements functionality for particular subsystems as modules. If additional
modules need to be loaded, they may be specified using this parameter.
Please see the `devlib documentation <http://devlib.readthedocs.io/en/latest/modules.html>`_
for information on the available modules.
@ -83,6 +88,7 @@ or a more specific config could be:
device_config:
device: 0123456789ABCDEF
working_directory: '/sdcard/wa-working'
load_default_modules: True
modules: ['hotplug', 'cpufreq']
core_names : ['a7', 'a7', 'a7', 'a15', 'a15']
# ...

@ -14,9 +14,9 @@ Using revent with workloads
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Some workloads (pretty much all games) rely on recorded revents for their
execution. ReventWorkloads will require between 1 and 4 revent files be be ran.
There is one mandatory recording ``run`` for performing the actual execution of
the workload and the remaining are optional. ``setup`` can be used to perform
execution. ReventWorkloads require between 1 and 4 revent files to be run.
There is one mandatory recording, ``run``, for performing the actual execution of
the workload and the remaining stages are optional. ``setup`` can be used to perform
the initial setup (navigating menus, selecting game modes, etc).
``extract_results`` can be used to perform any actions after the main stage of
the workload for example to navigate a results or summary screen of the app. And
@ -26,17 +26,21 @@ exiting the app.
Because revents are very device-specific\ [*]_, these files would need to
be recorded for each device.
The files must be called ``<device name>.(setup|run|extract_results|teardown).revent``
, where ``<device name>`` is the name of your device (as defined by the ``name``
attribute of your device's class). WA will look for these files in two
places: ``<install dir>/wa/workloads/<workload name>/revent_files``
and ``~/.workload_automation/dependencies/<workload name>``. The first
location is primarily intended for revent files that come with WA (and if
The files must be called ``<device name>.(setup|run|extract_results|teardown).revent``,
where ``<device name>`` is the name of your device (as defined by the model
name of your device which can be retrieved with
``adb shell getprop ro.product.model`` or by the ``name`` attribute of your
customized device class).
WA will look for these files in two places:
``<installdir>/wa/workloads/<workload name>/revent_files`` and
``$WA_USER_DIRECTORY/dependencies/<workload name>``. The
first location is primarily intended for revent files that come with WA (and if
you did a system-wide install, you'll need sudo to add files there), so it's
probably easier to use the second location for the files you record. Also,
if revent files for a workload exist in both locations, the files under
``~/.workload_automation/dependencies`` will be used in favour of those
installed with WA.
probably easier to use the second location for the files you record. Also, if
revent files for a workload exist in both locations, the files under
``$WA_USER_DIRECTORY/dependencies`` will be used in favour
of those installed with WA.
.. [*] It's not just about screen resolution -- the event codes may be different
even if devices use the same screen.

@ -12,8 +12,9 @@ Installation
.. module:: wa
This page describes the 3 methods of installing Workload Automation 3. The first
option is to use :ref:`pip` which
will install the latest release of WA, the latest development version from :ref:`github <github>` or via a :ref:`dockerfile`.
option is to use :ref:`pip`, which will install the latest release of WA;
alternatively, install the latest development version from
:ref:`github <github>`, or use the :ref:`dockerfile`.
Prerequisites
@ -22,11 +23,11 @@ Prerequisites
Operating System
----------------
WA runs on a native Linux install. It was tested with Ubuntu 14.04,
but any recent Linux distribution should work. It should run on either
32-bit or 64-bit OS, provided the correct version of Android (see below)
was installed. Officially, **other environments are not supported**. WA
has been known to run on Linux Virtual machines and in Cygwin environments,
WA runs on a native Linux install. It has been tested on recent Ubuntu releases,
but other recent Linux distributions should work as well. It should run on
either 32-bit or 64-bit OS, provided the correct version of dependencies (see
below) are installed. Officially, **other environments are not supported**.
WA has been known to run on Linux Virtual machines and in Cygwin environments,
though additional configuration may be required in both cases (known issues
include making sure USB/serial connections are passed to the VM, and wrong
python/pip binaries being picked up in Cygwin). WA *should* work on other
@ -45,7 +46,8 @@ possible to get limited functionality with minimal porting effort).
Android SDK
-----------
You need to have the Android SDK with at least one platform installed.
To interact with Android devices you will need to have the Android SDK
with at least one platform installed.
To install it, download the ADT Bundle from here_. Extract it
and add ``<path_to_android_sdk>/sdk/platform-tools`` and ``<path_to_android_sdk>/sdk/tools``
to your ``PATH``. To test that you've installed it properly, run ``adb
@ -72,7 +74,11 @@ the install location of the SDK (i.e. ``<path_to_android_sdk>/sdk``).
Python
------
Workload Automation 3 currently supports both Python 2.7 and Python 3.
Workload Automation 3 currently supports Python 3.5+.
.. note:: If your system's default python version is still Python 2, please
replace the commands listed here with their Python3 equivalent
(e.g. python3, pip3 etc.)
.. _pip:
@ -94,11 +100,11 @@ similar distributions, this may be done with APT::
sudo -H pip install --upgrade pip
sudo -H pip install --upgrade setuptools
If you do run into this issue after already installing some packages,
If you do run into this issue after already installing some packages,
you can resolve it by running ::
sudo chmod -R a+r /usr/local/lib/python2.7/dist-packagessudo
find /usr/local/lib/python2.7/dist-packages -type d -exec chmod a+x {} \;
sudo chmod -R a+r /usr/local/lib/python3.X/dist-packages
sudo find /usr/local/lib/python3.X/dist-packages -type d -exec chmod a+x {} \;
(The paths above will work for Ubuntu; they may need to be adjusted
for other distros).
@ -171,9 +177,11 @@ install them upfront (e.g. if you're planning to use WA in an environment that
may not always have Internet access).
* nose
* PyDAQmx
* pymongo
* jinja2
* mock
* daqpower
* sphinx
* sphinx_rtd_theme
* psycopg2-binary
@ -199,6 +207,18 @@ Alternatively, you can also install the latest development version from GitHub
cd workload-automation
sudo -H python setup.py install
.. note:: Please note that if using pip to install from github this will most
likely result in an older and incompatible version of devlib being
installed alongside WA. If you wish to use pip please also manually
install the latest version of
`devlib <https://github.com/ARM-software/devlib>`_.
.. note:: Please note that while a `requirements.txt` is included, this is
designed to be a reference of known working packages rather than to be
used as part of a standard installation. The version restrictions
in place as part of `setup.py` should automatically ensure the correct
packages are installed, however if you encounter issues please try
updating/downgrading to the package versions listed within.
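For example, from a source checkout::

    sudo -H pip install -r requirements.txt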
If the above succeeds, try ::
@ -222,7 +242,7 @@ image in a container.
The Dockerfile can be found in the "extras" directory or online at
`<https://github.com/ARM-software/workload-automation/blob/next/extras/Dockerfile>`_
which contains addional information about how to build and to use the file.
which contains additional information about how to build and to use the file.
(Optional) Post Installation

@ -20,7 +20,7 @@ Install
.. note:: This is a quick summary. For more detailed instructions, please see
the :ref:`installation` section.
Make sure you have Python 2.7 or Python 3 and a recent Android SDK with API
Make sure you have Python 3.5+ and a recent Android SDK with API
level 18 or above installed on your system. A complete install of the Android
SDK is required, as WA uses a number of its utilities, not just adb. For the
SDK, make sure that either ``ANDROID_HOME`` environment variable is set, or that
@ -125,7 +125,7 @@ There are multiple options for configuring your device depending on your
particular use case.
You can either add your configuration to the default configuration file
``config.yaml``, under the ``$WA_USER_HOME/`` directory or you can specify it in
``config.yaml``, under the ``$WA_USER_DIRECTORY/`` directory or you can specify it in
the ``config`` section of your agenda directly.
Alternatively if you are using multiple devices, you may want to create separate
@ -318,7 +318,7 @@ like this:
config:
augmentations:
- ~execution_time
- json
- targz
iterations: 2
workloads:
- memcpy
@ -332,7 +332,7 @@ This agenda:
- Specifies two workloads: memcpy and dhrystone.
- Specifies that dhrystone should run in one thread and execute five million loops.
- Specifies that each of the two workloads should be run twice.
- Enables json output processor, in addition to the output processors enabled in
- Enables the targz output processor, in addition to the output processors enabled in
the config.yaml.
- Disables execution_time instrument, if it is enabled in the config.yaml
@ -352,13 +352,13 @@ in-depth information please see the :ref:`Create Command <create-command>` docum
In order to populate the agenda with relevant information you can supply all of
the plugins you wish to use as arguments to the command, for example if we want
to create an agenda file for running ``dhystrone`` on a 'generic android' device and we
to create an agenda file for running ``dhrystone`` on a `generic_android` device and we
want to enable the ``execution_time`` and ``trace-cmd`` instruments and display the
metrics using the ``csv`` output processor. We would use the following command::
wa create agenda generic_android dhrystone execution_time trace-cmd csv -o my_agenda.yaml
This will produce a `my_agenda.yaml` file containing all the relevant
This will produce a ``my_agenda.yaml`` file containing all the relevant
configuration for the specified plugins along with their default values as shown
below:
@ -483,14 +483,14 @@ that parses the contents of the output directory:
>>> ro = RunOutput('./wa_output')
>>> for job in ro.jobs:
... if job.status != 'OK':
... print 'Job "{}" did not complete successfully: {}'.format(job, job.status)
... print('Job "{}" did not complete successfully: {}'.format(job, job.status))
... continue
... print 'Job "{}":'.format(job)
... print('Job "{}":'.format(job))
... for metric in job.metrics:
... if metric.units:
... print '\t{}: {} {}'.format(metric.name, metric.value, metric.units)
... print('\t{}: {} {}'.format(metric.name, metric.value, metric.units))
... else:
... print '\t{}: {}'.format(metric.name, metric.value)
... print('\t{}: {}'.format(metric.name, metric.value))
...
Job "wk1-dhrystone-1":
thread 0 score: 20833333

@ -30,7 +30,7 @@ An example agenda can be seen here:
device: generic_android
device_config:
device: R32C801B8XY # Th adb name of our device we want to run on
device: R32C801B8XY # The adb name of our device we want to run on
disable_selinux: true
load_default_modules: true
package_data_directory: /data/data
@ -116,7 +119,9 @@ whole will behave. The most common options that you may want to specify are
to connect to (e.g. ``host`` for an SSH connection or
``device`` to specific an ADB name) as well as configure other
options for the device for example the ``working_directory``
or the list of ``modules`` to be loaded onto the device.
or the list of ``modules`` to be loaded onto the device. (For
more information please see
:ref:`here <android-general-device-setup>`)
:execution_order: Defines the order in which the agenda spec will be executed.
:reboot_policy: Defines when during execution of a run a Device will be rebooted.
:max_retries: The maximum number of times failed jobs will be retried before giving up.
@ -124,7 +126,7 @@ whole will behave. The most common options that you may want
For more information and a full list of these configuration options please see
:ref:`Run Configuration <run-configuration>` and
:ref:`"Meta Configuration" <meta-configuration>`.
:ref:`Meta Configuration <meta-configuration>`.
Plugins

@ -40,7 +40,7 @@ Will display help for this subcommand that will look something like this:
AGENDA Agenda for this workload automation run. This defines
which workloads will be executed, how many times, with
which tunables, etc. See example agendas in
/usr/local/lib/python2.7/dist-packages/wa for an
/usr/local/lib/python3.X/dist-packages/wa for an
example of how this file should be structured.
optional arguments:

@ -6,7 +6,7 @@
#
# docker build -t wa .
#
# This will create an image called wadocker, which is preconfigured to
# This will create an image called wa, which is preconfigured to
# run WA and devlib. Please note that the build process automatically
# accepts the licenses for the Android SDK, so please be sure that you
# are willing to accept these prior to building and running the image
@ -17,6 +17,13 @@
#
# docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb --volume ${PWD}:/workspace --workdir /workspace wa
#
# If using selinux you may need to add the `z` option when mounting
# volumes e.g.:
# --volume ${PWD}:/workspace:z
# Warning: Please ensure you do not use this option when mounting
# system directories. For more information please see:
# https://docs.docker.com/storage/bind-mounts/#configure-the-selinux-label
#
# The above command starts the container in privileged mode, with
# access to USB devices. The current directory is mounted into the
# image, allowing you to work from there. Any files written to this
@ -32,27 +39,76 @@
#
# When you are finished, please run `exit` to leave the container.
#
# The relevant environment variables are stored in a separate
# file which is automatically sourced in an interactive shell.
# If running from a non-interactive environment this can
# be manually sourced with `source /home/wa/.wa_environment`
#
# NOTE: Please make sure that the ADB server is NOT running on the
# host. If in doubt, run `adb kill-server` before running the docker
# container.
#
# We want to make sure to base this on a recent ubuntu release
FROM ubuntu:17.10
FROM ubuntu:19.10
# Please update the references below to use different versions of
# devlib, WA or the Android SDK
ARG DEVLIB_REF=v1.1.0
ARG WA_REF=v3.1.1
ARG DEVLIB_REF=v1.3
ARG WA_REF=v3.3
ARG ANDROID_SDK_URL=https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
RUN apt-get update
RUN apt-get install -y python-pip git wget zip openjdk-8-jre-headless vim emacs nano curl sshpass ssh usbutils
RUN pip install pandas
RUN apt-get update && apt-get install -y \
apache2-utils \
bison \
cmake \
curl \
emacs \
flex \
git \
libcdk5-dev \
libiio-dev \
libxml2 \
libxml2-dev \
locales \
nano \
openjdk-8-jre-headless \
python3 \
python3-pip \
ssh \
sshpass \
sudo \
trace-cmd \
usbutils \
vim \
wget \
zip
# Clone and download iio-capture
RUN git clone -v https://github.com/BayLibre/iio-capture.git /tmp/iio-capture && \
cd /tmp/iio-capture && \
make && \
make install
RUN pip3 install pandas
# Ensure we're using utf-8 as our default encoding
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# Let's get the two repos we need, and install them
RUN git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && cd /tmp/devlib && git checkout $DEVLIB_REF && python setup.py install
RUN git clone -v https://github.com/ARM-software/workload-automation.git /tmp/wa && cd /tmp/wa && git checkout $WA_REF && python setup.py install
RUN git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && \
cd /tmp/devlib && \
git checkout $DEVLIB_REF && \
python3 setup.py install && \
pip3 install .[full]
RUN git clone -v https://github.com/ARM-software/workload-automation.git /tmp/wa && \
cd /tmp/wa && \
git checkout $WA_REF && \
python3 setup.py install && \
pip3 install .[all]
# Clean-up
RUN rm -R /tmp/devlib /tmp/wa
@ -66,10 +122,19 @@ RUN mkdir -p /home/wa/.android
RUN mkdir -p /home/wa/AndroidSDK && cd /home/wa/AndroidSDK && wget $ANDROID_SDK_URL -O sdk.zip && unzip sdk.zip
RUN cd /home/wa/AndroidSDK/tools/bin && yes | ./sdkmanager --licenses && ./sdkmanager platform-tools && ./sdkmanager 'build-tools;27.0.3'
# Update the path
RUN echo 'export PATH=/home/wa/AndroidSDK/platform-tools:${PATH}' >> /home/wa/.bashrc
RUN echo 'export PATH=/home/wa/AndroidSDK/build-tools:${PATH}' >> /home/wa/.bashrc
RUN echo 'export ANDROID_HOME=/home/wa/AndroidSDK' >> /home/wa/.bashrc
# Download Monsoon
RUN mkdir -p /home/wa/monsoon
RUN curl https://android.googlesource.com/platform/cts/+/master/tools/utils/monsoon.py\?format\=TEXT | base64 --decode > /home/wa/monsoon/monsoon.py
RUN chmod +x /home/wa/monsoon/monsoon.py
# Update WA's required environment variables.
RUN echo 'export PATH=/home/wa/monsoon:${PATH}' >> /home/wa/.wa_environment
RUN echo 'export PATH=/home/wa/AndroidSDK/platform-tools:${PATH}' >> /home/wa/.wa_environment
RUN echo 'export PATH=/home/wa/AndroidSDK/build-tools:${PATH}' >> /home/wa/.wa_environment
RUN echo 'export ANDROID_HOME=/home/wa/AndroidSDK' >> /home/wa/.wa_environment
# Source WA environment variables in an interactive environment
RUN echo 'source /home/wa/.wa_environment' >> /home/wa/.bashrc
# Generate some ADB keys. These will change each time the image is built but will otherwise persist.
RUN /home/wa/AndroidSDK/platform-tools/adb keygen /home/wa/.android/adbkey

@ -43,7 +43,7 @@ ignore=external
# https://bitbucket.org/logilab/pylint/issue/232/wrong-hanging-indentation-false-positive
# TODO: disabling no-value-for-parameter and logging-format-interpolation, as they appear to be broken
# in version 1.4.1 and return a lot of false postives; should be re-enabled once fixed.
disable=C0301,C0103,C0111,W0142,R0903,R0904,R0922,W0511,W0141,I0011,R0921,W1401,C0330,no-value-for-parameter,logging-format-interpolation,no-else-return,inconsistent-return-statements,keyword-arg-before-vararg,consider-using-enumerate,no-member
disable=C0301,C0103,C0111,W0142,R0903,R0904,R0922,W0511,W0141,I0011,R0921,W1401,C0330,no-value-for-parameter,logging-format-interpolation,no-else-return,inconsistent-return-statements,keyword-arg-before-vararg,consider-using-enumerate,no-member,super-with-arguments,useless-object-inheritance,raise-missing-from,no-else-raise,no-else-break,no-else-continue
[FORMAT]
max-module-lines=4000

pytest.ini Normal file

@ -0,0 +1,3 @@
[pytest]
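# Silence DeprecationWarnings raised from the "past" package (pulled in via "future").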
filterwarnings=
ignore::DeprecationWarning:past[.*]

requirements.txt Normal file

@ -0,0 +1,29 @@
bcrypt==3.2.0
certifi==2020.12.5
cffi==1.14.4
chardet==3.0.4
colorama==0.4.4
cryptography==3.3.1
devlib==1.3.0
future==0.18.2
idna==2.10
Louie-latest==1.3.1
nose==1.3.7
numpy==1.19.4
pandas==1.1.5
paramiko==2.7.2
pexpect==4.8.0
pkg-resources==0.0.0
ptyprocess==0.6.0
pycparser==2.20
PyNaCl==1.4.0
pyserial==3.5
python-dateutil==2.8.1
pytz==2020.4
PyYAML==5.3.1
requests==2.25.0
scp==0.13.3
six==1.15.0
urllib3==1.26.2
wlauto==3.0.0
wrapt==1.12.1

@ -29,7 +29,8 @@ except ImportError:
wa_dir = os.path.join(os.path.dirname(__file__), 'wa')
sys.path.insert(0, os.path.join(wa_dir, 'framework'))
from version import get_wa_version, get_wa_version_with_commit
from version import (get_wa_version, get_wa_version_with_commit,
format_version, required_devlib_version)
# happens if falling back to distutils
warnings.filterwarnings('ignore', "Unknown distribution option: 'install_requires'")
@ -61,9 +62,14 @@ for root, dirs, files in os.walk(wa_dir):
scripts = [os.path.join('scripts', s) for s in os.listdir('scripts')]
with open("README.rst", "r") as fh:
long_description = fh.read()
devlib_version = format_version(required_devlib_version)
params = dict(
name='wlauto',
description='A framework for automating workload execution and measurement collection on ARM devices.',
long_description=long_description,
version=get_wa_version_with_commit(),
packages=packages,
package_data=data_files,
@ -74,41 +80,43 @@ params = dict(
maintainer='ARM Architecture & Technology Device Lab',
maintainer_email='workload-automation@arm.com',
setup_requires=[
'numpy'
'numpy<=1.16.4; python_version<"3"',
'numpy; python_version>="3"',
],
install_requires=[
'python-dateutil', # converting between UTC and local time.
'pexpect>=3.3', # Send/receive to/from device
'pyserial', # Serial port interface
'colorama', # Printing with colors
'pyYAML', # YAML-formatted agenda parsing
'pyYAML>=5.1b3', # YAML-formatted agenda parsing
'requests', # Fetch assets over HTTP
'devlib>=1.1.0', # Interacting with devices
'devlib>={}'.format(devlib_version), # Interacting with devices
'louie-latest', # callbacks dispatch
'wrapt', # better decorators
'pandas>=0.23.0', # Data analysis and manipulation
'pandas>=0.23.0,<=0.24.2; python_version<"3.5.3"', # Data analysis and manipulation
'pandas>=0.23.0; python_version>="3.5.3"', # Data analysis and manipulation
'future', # Python 2-3 compatibility
],
dependency_links=['https://github.com/ARM-software/devlib/tarball/master#egg=devlib-{}'.format(devlib_version)],
extras_require={
'other': ['jinja2'],
'test': ['nose', 'mock'],
'mongodb': ['pymongo'],
'notify': ['notify2'],
'doc': ['sphinx'],
'doc': ['sphinx', 'sphinx_rtd_theme'],
'postgres': ['psycopg2-binary'],
'daq': ['daqpower'],
},
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
'Development Status :: 4 - Beta',
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'License :: OSI Approved :: Apache Software License',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
],
)
all_extras = list(chain(iter(params['extras_require'].values())))
params['extras_require']['everything'] = all_extras
params['extras_require']['all'] = all_extras
class sdist(orig_sdist):
@ -122,7 +130,6 @@ class sdist(orig_sdist):
orig_sdist.initialize_options(self)
self.strip_commit = False
def run(self):
if self.strip_commit:
self.distribution.get_version = get_wa_version

@ -17,7 +17,7 @@
from wa import Plugin
class TestDevice(Plugin):
class MockDevice(Plugin):
name = 'test-device'
kind = 'device'

@ -18,7 +18,6 @@
# pylint: disable=R0201
import os
import sys
import yaml
from collections import defaultdict
from unittest import TestCase
@ -31,6 +30,7 @@ os.environ['WA_USER_DIRECTORY'] = os.path.join(DATA_DIR, 'includes')
from wa.framework.configuration.execution import ConfigManager
from wa.framework.configuration.parsers import AgendaParser
from wa.framework.exception import ConfigError
from wa.utils.serializer import yaml
from wa.utils.types import reset_all_counters
@ -44,8 +44,6 @@ workloads:
workload_parameters:
test: 1
"""
invalid_agenda = yaml.load(invalid_agenda_text)
invalid_agenda.name = 'invalid1'
duplicate_agenda_text = """
global:
@ -58,14 +56,10 @@ workloads:
- id: "1"
workload_name: benchmarkpi
"""
duplicate_agenda = yaml.load(duplicate_agenda_text)
duplicate_agenda.name = 'invalid2'
short_agenda_text = """
workloads: [antutu, dhrystone, benchmarkpi]
"""
short_agenda = yaml.load(short_agenda_text)
short_agenda.name = 'short'
default_ids_agenda_text = """
workloads:
@ -78,8 +72,6 @@ workloads:
cpus: 1
- vellamo
"""
default_ids_agenda = yaml.load(default_ids_agenda_text)
default_ids_agenda.name = 'default_ids'
sectioned_agenda_text = """
sections:
@ -102,8 +94,6 @@ sections:
workloads:
- memcpy
"""
sectioned_agenda = yaml.load(sectioned_agenda_text)
sectioned_agenda.name = 'sectioned'
dup_sectioned_agenda_text = """
sections:
@ -116,8 +106,22 @@ sections:
workloads:
- memcpy
"""
dup_sectioned_agenda = yaml.load(dup_sectioned_agenda_text)
dup_sectioned_agenda.name = 'dup-sectioned'
yaml_anchors_agenda_text = """
workloads:
- name: dhrystone
params: &dhrystone_single_params
cleanup_assets: true
cpus: 0
delay: 3
duration: 0
mloops: 10
threads: 1
- name: dhrystone
params:
<<: *dhrystone_single_params
threads: 4
"""
class AgendaTest(TestCase):
@ -132,6 +136,8 @@ class AgendaTest(TestCase):
assert_equal(len(self.config.jobs_config.root_node.workload_entries), 4)
def test_duplicate_id(self):
duplicate_agenda = yaml.load(duplicate_agenda_text)
try:
self.parser.load(self.config, duplicate_agenda, 'test')
except ConfigError as e:
@ -140,6 +146,8 @@ class AgendaTest(TestCase):
raise Exception('ConfigError was not raised for an agenda with duplicate ids.')
def test_yaml_missing_field(self):
invalid_agenda = yaml.load(invalid_agenda_text)
try:
self.parser.load(self.config, invalid_agenda, 'test')
except ConfigError as e:
@ -148,20 +156,26 @@ class AgendaTest(TestCase):
raise Exception('ConfigError was not raised for an invalid agenda.')
def test_defaults(self):
short_agenda = yaml.load(short_agenda_text)
self.parser.load(self.config, short_agenda, 'test')
workload_entries = self.config.jobs_config.root_node.workload_entries
assert_equal(len(workload_entries), 3)
assert_equal(workload_entries[0].config['workload_name'], 'antutu')
assert_equal(workload_entries[0].id, 'wk1')
def test_default_id_assignment(self):
default_ids_agenda = yaml.load(default_ids_agenda_text)
self.parser.load(self.config, default_ids_agenda, 'test2')
workload_entries = self.config.jobs_config.root_node.workload_entries
assert_equal(workload_entries[0].id, 'wk2')
assert_equal(workload_entries[3].id, 'wk3')
def test_sections(self):
sectioned_agenda = yaml.load(sectioned_agenda_text)
self.parser.load(self.config, sectioned_agenda, 'test')
root_node_workload_entries = self.config.jobs_config.root_node.workload_entries
leaves = list(self.config.jobs_config.root_node.leaves())
section1_workload_entries = leaves[0].workload_entries
@ -171,8 +185,22 @@ class AgendaTest(TestCase):
assert_true(section1_workload_entries[0].config['workload_parameters']['markers_enabled'])
assert_equal(section2_workload_entries[0].config['workload_name'], 'antutu')
def test_yaml_anchors(self):
yaml_anchors_agenda = yaml.load(yaml_anchors_agenda_text)
self.parser.load(self.config, yaml_anchors_agenda, 'test')
workload_entries = self.config.jobs_config.root_node.workload_entries
assert_equal(len(workload_entries), 2)
assert_equal(workload_entries[0].config['workload_name'], 'dhrystone')
assert_equal(workload_entries[0].config['workload_parameters']['threads'], 1)
assert_equal(workload_entries[0].config['workload_parameters']['delay'], 3)
assert_equal(workload_entries[1].config['workload_name'], 'dhrystone')
assert_equal(workload_entries[1].config['workload_parameters']['threads'], 4)
assert_equal(workload_entries[1].config['workload_parameters']['delay'], 3)
@raises(ConfigError)
def test_dup_sections(self):
dup_sectioned_agenda = yaml.load(dup_sectioned_agenda_text)
self.parser.load(self.config, dup_sectioned_agenda, 'test')
@raises(ConfigError)

@ -16,6 +16,7 @@
import unittest
from nose.tools import assert_equal
from wa.framework.configuration.execution import ConfigManager
from wa.utils.misc import merge_config_values
@ -38,3 +39,21 @@ class TestConfigUtils(unittest.TestCase):
if v2 is not None:
assert_equal(type(result), type(v2))
class TestConfigParser(unittest.TestCase):
def test_param_merge(self):
config = ConfigManager()
config.load_config({'workload_params': {'one': 1, 'three': {'ex': 'x'}}, 'runtime_params': {'aye': 'a'}}, 'file_one')
config.load_config({'workload_params': {'two': 2, 'three': {'why': 'y'}}, 'runtime_params': {'bee': 'b'}}, 'file_two')
assert_equal(
config.jobs_config.job_spec_template['workload_parameters'],
{'one': 1, 'two': 2, 'three': {'why': 'y'}},
)
assert_equal(
config.jobs_config.job_spec_template['runtime_parameters'],
{'aye': 'a', 'bee': 'b'},
)
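# Note the merge is shallow at the parameter level: 'one' and 'two' from the
# two files are combined, but the later definition of 'three' replaces the
# earlier one wholesale ({'ex': 'x'} is gone).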

@ -21,9 +21,10 @@ from nose.tools import assert_equal, assert_raises
from wa.utils.exec_control import (init_environment, reset_environment,
activate_environment, once,
once_per_class, once_per_instance)
once_per_class, once_per_instance,
once_per_attribute_value)
class TestClass(object):
class MockClass(object):
called = 0
@ -32,7 +33,7 @@ class TestClass(object):
@once
def called_once(self):
TestClass.called += 1
MockClass.called += 1
@once
def initilize_once(self):
@ -50,7 +51,7 @@ class TestClass(object):
return '{}: Called={}'.format(self.__class__.__name__, self.called)
class SubClass(TestClass):
class SubClass(MockClass):
def __init__(self):
super(SubClass, self).__init__()
@ -110,7 +111,19 @@ class AnotherClass(object):
self.count += 1
class AnotherSubClass(TestClass):
class NamedClass:
count = 0
def __init__(self, name):
self.name = name
@once_per_attribute_value('name')
def initilize(self):
NamedClass.count += 1
class AnotherSubClass(MockClass):
def __init__(self):
super(AnotherSubClass, self).__init__()
@ -142,7 +155,7 @@ class EnvironmentManagementTest(TestCase):
def test_reset_current_environment(self):
activate_environment('CURRENT_ENVIRONMENT')
t1 = TestClass()
t1 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -152,7 +165,7 @@ class EnvironmentManagementTest(TestCase):
def test_switch_environment(self):
activate_environment('ENVIRONMENT1')
t1 = TestClass()
t1 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -166,7 +179,7 @@ class EnvironmentManagementTest(TestCase):
def test_reset_environment_name(self):
activate_environment('ENVIRONMENT')
t1 = TestClass()
t1 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -195,7 +208,7 @@ class OnlyOnceEnvironmentTest(TestCase):
reset_environment('TEST_ENVIRONMENT')
def test_single_instance(self):
t1 = TestClass()
t1 = MockClass()
ac = AnotherClass()
t1.initilize_once()
@ -209,8 +222,8 @@ class OnlyOnceEnvironmentTest(TestCase):
def test_mulitple_instances(self):
t1 = TestClass()
t2 = TestClass()
t1 = MockClass()
t2 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -220,7 +233,7 @@ class OnlyOnceEnvironmentTest(TestCase):
def test_sub_classes(self):
t1 = TestClass()
t1 = MockClass()
sc = SubClass()
ss = SubSubClass()
asc = AnotherSubClass()
@ -250,7 +263,7 @@ class OncePerClassEnvironmentTest(TestCase):
reset_environment('TEST_ENVIRONMENT')
def test_single_instance(self):
t1 = TestClass()
t1 = MockClass()
ac = AnotherClass()
t1.initilize_once_per_class()
@ -264,8 +277,8 @@ class OncePerClassEnvironmentTest(TestCase):
def test_mulitple_instances(self):
t1 = TestClass()
t2 = TestClass()
t1 = MockClass()
t2 = MockClass()
t1.initilize_once_per_class()
assert_equal(t1.count, 1)
@ -275,7 +288,7 @@ class OncePerClassEnvironmentTest(TestCase):
def test_sub_classes(self):
t1 = TestClass()
t1 = MockClass()
sc1 = SubClass()
sc2 = SubClass()
ss1 = SubSubClass()
@ -308,7 +321,7 @@ class OncePerInstanceEnvironmentTest(TestCase):
reset_environment('TEST_ENVIRONMENT')
def test_single_instance(self):
t1 = TestClass()
t1 = MockClass()
ac = AnotherClass()
t1.initilize_once_per_instance()
@ -322,8 +335,8 @@ class OncePerInstanceEnvironmentTest(TestCase):
def test_mulitple_instances(self):
t1 = TestClass()
t2 = TestClass()
t1 = MockClass()
t2 = MockClass()
t1.initilize_once_per_instance()
assert_equal(t1.count, 1)
@ -333,7 +346,7 @@ class OncePerInstanceEnvironmentTest(TestCase):
def test_sub_classes(self):
t1 = TestClass()
t1 = MockClass()
sc = SubClass()
ss = SubSubClass()
asc = AnotherSubClass()
@ -352,3 +365,30 @@ class OncePerInstanceEnvironmentTest(TestCase):
asc.initilize_once_per_instance()
asc.initilize_once_per_instance()
assert_equal(asc.count, 2)
class OncePerAttributeValueTest(TestCase):
def setUp(self):
activate_environment('TEST_ENVIRONMENT')
def tearDown(self):
reset_environment('TEST_ENVIRONMENT')
def test_once_attribute_value(self):
classes = [
NamedClass('Rick'),
NamedClass('Morty'),
NamedClass('Rick'),
NamedClass('Morty'),
NamedClass('Morty'),
NamedClass('Summer'),
]
for c in classes:
c.initilize()
for c in classes:
c.initilize()
assert_equal(NamedClass.count, 3)
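For reference, a minimal sketch of a decorator with these once-per-attribute-value semantics (an illustration only, not the actual exec_control implementation, which also has to respect the active environment):

import functools

def once_per_attribute_value(attr_name):
    # Run the decorated method at most once per distinct value of the named
    # attribute, shared across all instances (illustrative sketch only).
    def decorator(method):
        seen = set()
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            value = getattr(self, attr_name)
            if value in seen:
                return None
            seen.add(value)
            return method(self, *args, **kwargs)
        return wrapper
    return decorator

With the NamedClass instances above, only the first 'Rick', 'Morty' and 'Summer' calls run, giving a final count of 3.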

tests/test_execution.py Normal file

@ -0,0 +1,315 @@
# Copyright 2020 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import tempfile
from unittest import TestCase
from mock.mock import Mock
from nose.tools import assert_equal
from datetime import datetime
from wa.framework.configuration import RunConfiguration
from wa.framework.configuration.core import JobSpec, Status
from wa.framework.execution import ExecutionContext, Runner
from wa.framework.job import Job
from wa.framework.output import RunOutput, init_run_output
from wa.framework.output_processor import ProcessorManager
import wa.framework.signal as signal
from wa.framework.run import JobState
from wa.framework.exception import ExecutionError
class MockConfigManager(Mock):
@property
def jobs(self):
return self._joblist
@property
def loaded_config_sources(self):
return []
@property
def plugin_cache(self):
return MockPluginCache()
def __init__(self, *args, **kwargs):
super(MockConfigManager, self).__init__(*args, **kwargs)
self._joblist = None
self.run_config = RunConfiguration()
def to_pod(self):
return {}
class MockPluginCache(Mock):
def list_plugins(self, kind=None):
return []
class MockProcessorManager(Mock):
def __init__(self, *args, **kwargs):
super(MockProcessorManager, self).__init__(*args, **kwargs)
def get_enabled(self):
return []
class JobState_force_retry(JobState):
@property
def status(self):
return self._status
@status.setter
def status(self, value):
if (self.retries != self.times_to_retry) and (value == Status.RUNNING):
self._status = Status.FAILED
if self.output:
self.output.status = Status.FAILED
else:
self._status = value
if self.output:
self.output.status = value
def __init__(self, to_retry, *args, **kwargs):
self.retries = 0
self._status = Status.NEW
self.times_to_retry = to_retry
self.output = None
super(JobState_force_retry, self).__init__(*args, **kwargs)
class Job_force_retry(Job):
'''This class imitates a job that retries as many times as specified by
``to_retry`` in its constructor.'''
def __init__(self, to_retry, *args, **kwargs):
super(Job_force_retry, self).__init__(*args, **kwargs)
self.state = JobState_force_retry(to_retry, self.id, self.label, self.iteration, Status.NEW)
self.initialized = False
self.finalized = False
def initialize(self, context):
self.initialized = True
return super().initialize(context)
def finalize(self, context):
self.finalized = True
return super().finalize(context)
class TestRunState(TestCase):
def setUp(self):
self.path = tempfile.mkstemp()[1]
os.remove(self.path)
self.initialise_signals()
self.context = get_context(self.path)
self.job_spec = get_jobspec()
def tearDown(self):
signal.disconnect(self._verify_serialized_state, signal.RUN_INITIALIZED)
signal.disconnect(self._verify_serialized_state, signal.JOB_STARTED)
signal.disconnect(self._verify_serialized_state, signal.JOB_RESTARTED)
signal.disconnect(self._verify_serialized_state, signal.JOB_COMPLETED)
signal.disconnect(self._verify_serialized_state, signal.JOB_FAILED)
signal.disconnect(self._verify_serialized_state, signal.JOB_ABORTED)
signal.disconnect(self._verify_serialized_state, signal.RUN_FINALIZED)
def test_job_state_transitions_pass(self):
'''Tests state equality when the job passes first try'''
job = Job(self.job_spec, 1, self.context)
job.workload = Mock()
self.context.cm._joblist = [job]
self.context.run_state.add_job(job)
runner = Runner(self.context, MockProcessorManager())
runner.run()
def test_job_state_transitions_fail(self):
'''Tests state equality when job fails completely'''
job = Job_force_retry(3, self.job_spec, 1, self.context)
job.workload = Mock()
self.context.cm._joblist = [job]
self.context.run_state.add_job(job)
runner = Runner(self.context, MockProcessorManager())
runner.run()
def test_job_state_transitions_retry(self):
'''Tests state equality when job fails initially'''
job = Job_force_retry(1, self.job_spec, 1, self.context)
job.workload = Mock()
self.context.cm._joblist = [job]
self.context.run_state.add_job(job)
runner = Runner(self.context, MockProcessorManager())
runner.run()
def initialise_signals(self):
signal.connect(self._verify_serialized_state, signal.RUN_INITIALIZED)
signal.connect(self._verify_serialized_state, signal.JOB_STARTED)
signal.connect(self._verify_serialized_state, signal.JOB_RESTARTED)
signal.connect(self._verify_serialized_state, signal.JOB_COMPLETED)
signal.connect(self._verify_serialized_state, signal.JOB_FAILED)
signal.connect(self._verify_serialized_state, signal.JOB_ABORTED)
signal.connect(self._verify_serialized_state, signal.RUN_FINALIZED)
def _verify_serialized_state(self, _):
fs_state = RunOutput(self.path).state
ex_state = self.context.run_output.state
assert_equal(fs_state.status, ex_state.status)
fs_js_zip = zip(
[value for key, value in fs_state.jobs.items()],
[value for key, value in ex_state.jobs.items()]
)
for fs_jobstate, ex_jobstate in fs_js_zip:
assert_equal(fs_jobstate.iteration, ex_jobstate.iteration)
assert_equal(fs_jobstate.retries, ex_jobstate.retries)
assert_equal(fs_jobstate.status, ex_jobstate.status)
class TestJobState(TestCase):
def test_job_retry_status(self):
job_spec = get_jobspec()
context = get_context()
job = Job_force_retry(2, job_spec, 1, context)
job.workload = Mock()
context.cm._joblist = [job]
context.run_state.add_job(job)
verifier = lambda _: assert_equal(job.status, Status.PENDING)
signal.connect(verifier, signal.JOB_RESTARTED)
runner = Runner(context, MockProcessorManager())
runner.run()
signal.disconnect(verifier, signal.JOB_RESTARTED)
def test_skipped_job_state(self):
# Test that, if the first job fails and the bail parameter is set,
# the remaining jobs have status SKIPPED
job_spec = get_jobspec()
context = get_context()
context.cm.run_config.bail_on_job_failure = True
job1 = Job_force_retry(3, job_spec, 1, context)
job2 = Job(job_spec, 1, context)
job1.workload = Mock()
job2.workload = Mock()
context.cm._joblist = [job1, job2]
context.run_state.add_job(job1)
context.run_state.add_job(job2)
runner = Runner(context, MockProcessorManager())
try:
runner.run()
except ExecutionError:
assert_equal(job2.status, Status.SKIPPED)
else:
assert False, "ExecutionError not raised"
def test_normal_job_finalized(self):
# Test that a job is initialized then finalized normally
job_spec = get_jobspec()
context = get_context()
job = Job_force_retry(0, job_spec, 1, context)
job.workload = Mock()
context.cm._joblist = [job]
context.run_state.add_job(job)
runner = Runner(context, MockProcessorManager())
runner.run()
assert_equal(job.initialized, True)
assert_equal(job.finalized, True)
def test_skipped_job_finalized(self):
# Test that a skipped job has been finalized
job_spec = get_jobspec()
context = get_context()
context.cm.run_config.bail_on_job_failure = True
job1 = Job_force_retry(3, job_spec, 1, context)
job2 = Job_force_retry(0, job_spec, 1, context)
job1.workload = Mock()
job2.workload = Mock()
context.cm._joblist = [job1, job2]
context.run_state.add_job(job1)
context.run_state.add_job(job2)
runner = Runner(context, MockProcessorManager())
try:
runner.run()
except ExecutionError:
assert_equal(job2.finalized, True)
else:
assert False, "ExecutionError not raised"
def test_failed_job_finalized(self):
# Test that a failed job, while the bail parameter is set,
# is finalized
job_spec = get_jobspec()
context = get_context()
context.cm.run_config.bail_on_job_failure = True
job1 = Job_force_retry(3, job_spec, 1, context)
job1.workload = Mock()
context.cm._joblist = [job1]
context.run_state.add_job(job1)
runner = Runner(context, MockProcessorManager())
try:
runner.run()
except ExecutionError:
assert_equal(job1.finalized, True)
else:
assert False, "ExecutionError not raised"
def get_context(path=None):
if not path:
path = tempfile.mkstemp()[1]
os.remove(path)
config = MockConfigManager()
output = init_run_output(path, config)
return ExecutionContext(config, Mock(), output)
def get_jobspec():
job_spec = JobSpec()
job_spec.augmentations = {}
job_spec.finalize()
return job_spec

@ -30,6 +30,27 @@ class Callable(object):
return self.val
class TestSignalDisconnect(unittest.TestCase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.callback_ctr = 0
def setUp(self):
signal.connect(self._call_me_once, 'first')
signal.connect(self._call_me_once, 'second')
def test_handler_disconnected(self):
signal.send('first')
signal.send('second')
def _call_me_once(self):
assert_equal(self.callback_ctr, 0)
self.callback_ctr += 1
signal.disconnect(self._call_me_once, 'first')
signal.disconnect(self._call_me_once, 'second')
class TestPriorityDispatcher(unittest.TestCase):
def setUp(self):
@ -61,12 +82,16 @@ class TestPriorityDispatcher(unittest.TestCase):
def test_wrap_propagate(self):
d = {'before': False, 'after': False, 'success': False}
def before():
d['before'] = True
def after():
d['after'] = True
def success():
d['success'] = True
signal.connect(before, signal.BEFORE_WORKLOAD_SETUP)
signal.connect(after, signal.AFTER_WORKLOAD_SETUP)
signal.connect(success, signal.SUCCESSFUL_WORKLOAD_SETUP)
@ -76,7 +101,7 @@ class TestPriorityDispatcher(unittest.TestCase):
with signal.wrap('WORKLOAD_SETUP'):
raise RuntimeError()
except RuntimeError:
caught=True
caught = True
assert_true(d['before'])
assert_true(d['after'])

@ -190,3 +190,10 @@ class TestToggleSet(TestCase):
ts6 = ts2.merge_into(ts3).merge_with(ts1)
assert_equal(ts6, toggle_set(['one', 'two', 'three', 'four', 'five', '~~']))
def test_order_on_create(self):
ts1 = toggle_set(['one', 'two', 'three', '~one'])
assert_equal(ts1, toggle_set(['~one', 'two', 'three']))
ts1 = toggle_set(['~one', 'two', 'three', 'one'])
assert_equal(ts1, toggle_set(['one', 'two', 'three']))
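# I.e. when a value and its negation are both supplied, the later entry wins.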

@ -33,7 +33,7 @@ from wa.framework.target.descriptor import (TargetDescriptor, TargetDescription,
create_target_description, add_description_for_target)
from wa.framework.workload import (Workload, ApkWorkload, ApkUiautoWorkload,
ApkReventWorkload, UIWorkload, UiautoWorkload,
ReventWorkload)
PackageHandler, ReventWorkload, TestPackageHandler)
from wa.framework.version import get_wa_version, get_wa_version_with_commit

@ -106,8 +106,8 @@ class CreateDatabaseSubcommand(SubCommand):
def execute(self, state, args): # pylint: disable=too-many-branches
if not psycopg2:
raise CommandError(
'The module psycopg2 is required for the wa ' +
'create database command.')
'The module psycopg2 is required for the wa '
+ 'create database command.')
if args.dbname == 'postgres':
raise ValueError('Databasename to create cannot be postgres.')
@ -131,8 +131,8 @@ class CreateDatabaseSubcommand(SubCommand):
config = yaml.load(config_file)
if 'postgres' in config and not args.force_update_config:
raise CommandError(
"The entry 'postgres' already exists in the config file. " +
"Please specify the -F flag to force an update.")
"The entry 'postgres' already exists in the config file. "
+ "Please specify the -F flag to force an update.")
possible_connection_errors = [
(
@ -261,8 +261,8 @@ class CreateDatabaseSubcommand(SubCommand):
else:
if not self.force:
raise CommandError(
"Database {} already exists. ".format(self.dbname) +
"Please specify the -f flag to create it from afresh."
"Database {} already exists. ".format(self.dbname)
+ "Please specify the -f flag to create it from afresh."
)
def _create_database_postgres(self):
@ -400,14 +400,14 @@ class CreateWorkloadSubcommand(SubCommand):
self.parser.add_argument('name', metavar='NAME',
help='Name of the workload to be created')
self.parser.add_argument('-p', '--path', metavar='PATH', default=None,
help='The location at which the workload will be created. If not specified, ' +
'this defaults to "~/.workload_automation/plugins".')
help='The location at which the workload will be created. If not specified, '
+ 'this defaults to "~/.workload_automation/plugins".')
self.parser.add_argument('-f', '--force', action='store_true',
help='Create the new workload even if a workload with the specified ' +
'name already exists.')
help='Create the new workload even if a workload with the specified '
+ 'name already exists.')
self.parser.add_argument('-k', '--kind', metavar='KIND', default='basic', choices=list(create_funcs.keys()),
help='The type of workload to be created. The available options ' +
'are: {}'.format(', '.join(list(create_funcs.keys()))))
help='The type of workload to be created. The available options '
+ 'are: {}'.format(', '.join(list(create_funcs.keys()))))
def execute(self, state, args): # pylint: disable=R0201
where = args.path or 'local'
@ -430,8 +430,8 @@ class CreatePackageSubcommand(SubCommand):
self.parser.add_argument('name', metavar='NAME',
help='Name of the package to be created')
self.parser.add_argument('-p', '--path', metavar='PATH', default=None,
help='The location at which the new package will be created. If not specified, ' +
'current working directory will be used.')
help='The location at which the new package will be created. If not specified, '
+ 'current working directory will be used.')
self.parser.add_argument('-f', '--force', action='store_true',
help='Create the new package even if a file or directory with the same name '
'already exists at the specified location.')

@ -1,4 +1,4 @@
--!VERSION!1.2!ENDVERSION!
--!VERSION!1.6!ENDVERSION!
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "lo";
@ -61,7 +61,7 @@ CREATE TABLE Runs (
CREATE TABLE Jobs (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
status status_enum,
retry int,
label text,
@ -76,12 +76,13 @@ CREATE TABLE Jobs (
CREATE TABLE Targets (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
target text,
modules text[],
cpus text[],
os text,
os_version jsonb,
hostid int,
hostid bigint,
hostname text,
abi text,
is_rooted boolean,
@ -96,12 +97,13 @@ CREATE TABLE Targets (
android_id text,
_pod_version int,
_pod_serialization_version int,
system_id text,
PRIMARY KEY (oid)
);
CREATE TABLE Events (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
timestamp timestamp,
message text,
@ -112,28 +114,28 @@ CREATE TABLE Events (
CREATE TABLE Resource_Getters (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
name text,
PRIMARY KEY (oid)
);
CREATE TABLE Augmentations (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
name text,
PRIMARY KEY (oid)
);
CREATE TABLE Jobs_Augs (
oid uuid NOT NULL,
job_oid uuid NOT NULL references Jobs(oid),
augmentation_oid uuid NOT NULL references Augmentations(oid),
job_oid uuid NOT NULL references Jobs(oid) ON DELETE CASCADE,
augmentation_oid uuid NOT NULL references Augmentations(oid) ON DELETE CASCADE,
PRIMARY KEY (oid)
);
CREATE TABLE Metrics (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
name text,
value double precision,
@ -156,7 +158,7 @@ CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON LargeObjects
CREATE TABLE Artifacts (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
name text,
large_object_uuid uuid NOT NULL references LargeObjects(oid),
@ -164,15 +166,22 @@ CREATE TABLE Artifacts (
kind text,
_pod_version int,
_pod_serialization_version int,
is_dir boolean,
PRIMARY KEY (oid)
);
CREATE RULE del_lo AS
ON DELETE TO Artifacts
DO DELETE FROM LargeObjects
WHERE LargeObjects.oid = old.large_object_uuid
;
CREATE TABLE Classifiers (
oid uuid NOT NULL,
artifact_oid uuid references Artifacts(oid),
metric_oid uuid references Metrics(oid),
job_oid uuid references Jobs(oid),
run_oid uuid references Runs(oid),
artifact_oid uuid references Artifacts(oid) ON DELETE CASCADE,
metric_oid uuid references Metrics(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid) ON DELETE CASCADE,
run_oid uuid references Runs(oid) ON DELETE CASCADE,
key text,
value text,
PRIMARY KEY (oid)
@ -180,7 +189,7 @@ CREATE TABLE Classifiers (
CREATE TABLE Parameters (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
augmentation_oid uuid references Augmentations(oid),
resource_getter_oid uuid references Resource_Getters(oid),

@ -0,0 +1,3 @@
ALTER TABLE targets ADD COLUMN system_id text;
ALTER TABLE artifacts ADD COLUMN is_dir boolean;

@ -0,0 +1,2 @@
ALTER TABLE targets ADD COLUMN modules text[];

@ -0,0 +1 @@
ALTER TABLE targets ALTER hostid TYPE BIGINT;

@ -0,0 +1,109 @@
ALTER TABLE jobs
DROP CONSTRAINT jobs_run_oid_fkey,
ADD CONSTRAINT jobs_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE targets
DROP CONSTRAINT targets_run_oid_fkey,
ADD CONSTRAINT targets_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE events
DROP CONSTRAINT events_run_oid_fkey,
ADD CONSTRAINT events_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE resource_getters
DROP CONSTRAINT resource_getters_run_oid_fkey,
ADD CONSTRAINT resource_getters_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE augmentations
DROP CONSTRAINT augmentations_run_oid_fkey,
ADD CONSTRAINT augmentations_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE jobs_augs
DROP CONSTRAINT jobs_augs_job_oid_fkey,
DROP CONSTRAINT jobs_augs_augmentation_oid_fkey,
ADD CONSTRAINT jobs_augs_job_oid_fkey
FOREIGN KEY (job_oid)
REFERENCES Jobs(oid)
ON DELETE CASCADE,
ADD CONSTRAINT jobs_augs_augmentation_oid_fkey
FOREIGN KEY (augmentation_oid)
REFERENCES Augmentations(oid)
ON DELETE CASCADE
;
ALTER TABLE metrics
DROP CONSTRAINT metrics_run_oid_fkey,
ADD CONSTRAINT metrics_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE artifacts
DROP CONSTRAINT artifacts_run_oid_fkey,
ADD CONSTRAINT artifacts_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
CREATE RULE del_lo AS
ON DELETE TO Artifacts
DO DELETE FROM LargeObjects
WHERE LargeObjects.oid = old.large_object_uuid
;
ALTER TABLE classifiers
DROP CONSTRAINT classifiers_artifact_oid_fkey,
DROP CONSTRAINT classifiers_metric_oid_fkey,
DROP CONSTRAINT classifiers_job_oid_fkey,
DROP CONSTRAINT classifiers_run_oid_fkey,
ADD CONSTRAINT classifiers_artifact_oid_fkey
FOREIGN KEY (artifact_oid)
REFERENCES artifacts(oid)
ON DELETE CASCADE,
ADD CONSTRAINT classifiers_metric_oid_fkey
FOREIGN KEY (metric_oid)
REFERENCES metrics(oid)
ON DELETE CASCADE,
ADD CONSTRAINT classifiers_job_oid_fkey
FOREIGN KEY (job_oid)
REFERENCES jobs(oid)
ON DELETE CASCADE,
ADD CONSTRAINT classifiers_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE parameters
DROP CONSTRAINT parameters_run_oid_fkey,
ADD CONSTRAINT parameters_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;

@ -17,6 +17,7 @@ import os
from wa import Command
from wa import discover_wa_outputs
from wa.framework.configuration.core import Status
from wa.framework.exception import CommandError
from wa.framework.output import RunOutput
from wa.framework.output_processor import ProcessorManager
@ -57,8 +58,9 @@ class ProcessCommand(Command):
""")
self.parser.add_argument('-f', '--force', action='store_true',
help="""
Run processors that have already been
run. By default these will be skipped.
Run processors that have already been run. By
default these will be skipped. Also, forces
processing of in-progress runs.
""")
self.parser.add_argument('-r', '--recursive', action='store_true',
help="""
@ -76,10 +78,15 @@ class ProcessCommand(Command):
if not args.recursive:
output_list = [RunOutput(process_directory)]
else:
output_list = [output for output in discover_wa_outputs(process_directory)]
output_list = list(discover_wa_outputs(process_directory))
pc = ProcessContext()
for run_output in output_list:
if run_output.status < Status.OK and not args.force:
msg = 'Skipping {} as it has not completed -- {}'
self.logger.info(msg.format(run_output.basepath, run_output.status))
continue
pc.run_output = run_output
pc.target_info = run_output.target_info
@ -112,6 +119,12 @@ class ProcessCommand(Command):
pm.initialize(pc)
for job_output in run_output.jobs:
if job_output.status < Status.OK or job_output.status in [Status.SKIPPED, Status.ABORTED]:
msg = 'Skipping job {} {} iteration {} -- {}'
self.logger.info(msg.format(job_output.id, job_output.label,
job_output.iteration, job_output.status))
continue
pc.job_output = job_output
pm.enable_all()
if not args.force:
@ -142,5 +155,6 @@ class ProcessCommand(Command):
pm.export_run_output(pc)
pm.finalize(pc)
run_output.write_info()
run_output.write_result()
self.logger.info('Done.')

wa/commands/report.py Normal file

@ -0,0 +1,288 @@
from collections import Counter
from datetime import datetime, timedelta
import logging
import os
from wa import Command, settings
from wa.framework.configuration.core import Status
from wa.framework.output import RunOutput, discover_wa_outputs
from wa.utils.doc import underline
from wa.utils.log import COLOR_MAP, RESET_COLOR
from wa.utils.terminalsize import get_terminal_size
class ReportCommand(Command):
name = 'report'
description = '''
Monitor an ongoing run and provide information on its progress.
Specify the output directory of the run you would like to monitor;
alternatively, report will attempt to discover WA output directories
within the current directory. The output includes run information such as
the UUID, start time, duration, project name and a short summary of the
run's progress (number of completed jobs, the number of jobs in each
different status).
If verbose output is specified, the output includes a list of all events
labelled as not specific to any job, followed by a list of the jobs in the
order executed, with their retries (if any), current status and, if the job
is finished, a list of events that occurred during that job's execution.
This is an example of a job status line:
wk1 (exoplayer) [1] - 2, PARTIAL
It contains two entries delimited by a comma: the job's descriptor followed
by its completion status (``PARTIAL``, in this case). The descriptor
consists of the following elements:
- the job ID (``wk1``)
- the job label (which defaults to the workload name) in parentheses
- job iteration number in square brackets (``1`` in this case)
- a hyphen followed by the retry attempt number.
(note: this will only be shown if the job has been retried at least
once. If the job has not yet run, or if it completed on the first
attempt, the hyphen and retry count -- which in that case would be
zero -- will not appear).
'''
def initialize(self, context):
self.parser.add_argument('-d', '--directory',
help='''
Specify the WA output path. report will
otherwise attempt to discover output
directories in the current directory.
''')
def execute(self, state, args):
if args.directory:
output_path = args.directory
run_output = RunOutput(output_path)
else:
possible_outputs = list(discover_wa_outputs(os.getcwd()))
num_paths = len(possible_outputs)
if num_paths > 1:
print('More than one possible output directory found,'
' please choose a path from the following:'
)
for i in range(num_paths):
print("{}: {}".format(i, possible_outputs[i].basepath))
while True:
try:
select = int(input())
except ValueError:
print("Please select a valid path number")
continue
if select not in range(num_paths):
print("Please select a valid path number")
continue
break
run_output = possible_outputs[select]
else:
run_output = possible_outputs[0]
rm = RunMonitor(run_output)
print(rm.generate_output(args.verbose))
class RunMonitor:
@property
def elapsed_time(self):
if self._elapsed is None:
if self.ro.info.duration is None:
self._elapsed = datetime.utcnow() - self.ro.info.start_time
else:
self._elapsed = self.ro.info.duration
return self._elapsed
@property
def job_outputs(self):
if self._job_outputs is None:
self._job_outputs = {
(j_o.id, j_o.label, j_o.iteration): j_o for j_o in self.ro.jobs
}
return self._job_outputs
@property
def projected_duration(self):
elapsed = self.elapsed_time.total_seconds()
proj = timedelta(seconds=elapsed * (len(self.jobs) / len(self.segmented['finished'])))
return proj - self.elapsed_time
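# Example: with 8 jobs in total and 2 finished after an elapsed 10 minutes,
# the projection is 10 * (8 / 2) - 10 = 30 minutes remaining.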
def __init__(self, ro):
self.ro = ro
self._elapsed = None
self._p_duration = None
self._job_outputs = None
self._termwidth = None
self._fmt = _simple_formatter()
self.get_data()
def get_data(self):
self.jobs = [state for label_id, state in self.ro.state.jobs.items()]
if self.jobs:
rc = self.ro.run_config
self.segmented = segment_jobs_by_state(self.jobs,
rc.max_retries,
rc.retry_on_status
)
def generate_run_header(self):
info = self.ro.info
header = underline('Run Info')
header += "UUID: {}\n".format(info.uuid)
if info.run_name:
header += "Run name: {}\n".format(info.run_name)
if info.project:
header += "Project: {}\n".format(info.project)
if info.project_stage:
header += "Project stage: {}\n".format(info.project_stage)
if info.start_time:
duration = _seconds_as_smh(self.elapsed_time.total_seconds())
header += ("Start time: {}\n"
"Duration: {:02}:{:02}:{:02}\n"
).format(info.start_time,
duration[2], duration[1], duration[0],
)
if self.segmented['finished'] and not info.end_time:
p_duration = _seconds_as_smh(self.projected_duration.total_seconds())
header += "Projected time remaining: {:02}:{:02}:{:02}\n".format(
p_duration[2], p_duration[1], p_duration[0]
)
elif self.ro.info.end_time:
header += "End time: {}\n".format(info.end_time)
return header + '\n'
def generate_job_summary(self):
total = len(self.jobs)
num_fin = len(self.segmented['finished'])
summary = underline('Job Summary')
summary += 'Total: {}, Completed: {} ({}%)\n'.format(
total, num_fin, (num_fin / total) * 100
) if total > 0 else 'No jobs created\n'
ctr = Counter()
for run_state, jobs in ((k, v) for k, v in self.segmented.items() if v):
if run_state == 'finished':
ctr.update([job.status.name.lower() for job in jobs])
else:
ctr[run_state] += len(jobs)
return summary + ', '.join(
[str(count) + ' ' + self._fmt.highlight_keyword(status) for status, count in ctr.items()]
) + '\n\n'
def generate_job_detail(self):
detail = underline('Job Detail')
for job in self.jobs:
detail += ('{} ({}) [{}]{}, {}\n').format(
job.id,
job.label,
job.iteration,
' - ' + str(job.retries) if job.retries else '',
self._fmt.highlight_keyword(str(job.status))
)
job_output = self.job_outputs[(job.id, job.label, job.iteration)]
for event in job_output.events:
detail += self._fmt.fit_term_width(
'\t{}\n'.format(event.summary)
)
return detail
def generate_run_detail(self):
detail = underline('Run Events') if self.ro.events else ''
for event in self.ro.events:
detail += '{}\n'.format(event.summary)
return detail + '\n'
def generate_output(self, verbose):
if not self.jobs:
return 'No jobs found in output directory\n'
output = self.generate_run_header()
output += self.generate_job_summary()
if verbose:
output += self.generate_run_detail()
output += self.generate_job_detail()
return output
def _seconds_as_smh(seconds):
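# Split a number of seconds into a (seconds, minutes, hours) tuple, e.g.
# _seconds_as_smh(3725) == (5, 2, 1); callers index the tuple in reverse
# to print hours first.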
seconds = int(seconds)
hours = seconds // 3600
minutes = (seconds % 3600) // 60
seconds = seconds % 60
return seconds, minutes, hours
def segment_jobs_by_state(jobstates, max_retries, retry_status):
finished_states = [
Status.PARTIAL, Status.FAILED,
Status.ABORTED, Status.OK, Status.SKIPPED
]
segmented = {
'finished': [], 'other': [], 'running': [],
'pending': [], 'uninitialized': []
}
for jobstate in jobstates:
if (jobstate.status in retry_status) and jobstate.retries < max_retries:
segmented['running'].append(jobstate)
elif jobstate.status in finished_states:
segmented['finished'].append(jobstate)
elif jobstate.status == Status.RUNNING:
segmented['running'].append(jobstate)
elif jobstate.status == Status.PENDING:
segmented['pending'].append(jobstate)
elif jobstate.status == Status.NEW:
segmented['uninitialized'].append(jobstate)
else:
segmented['other'].append(jobstate)
return segmented
class _simple_formatter:
color_map = {
'running': COLOR_MAP[logging.INFO],
'partial': COLOR_MAP[logging.WARNING],
'failed': COLOR_MAP[logging.CRITICAL],
'aborted': COLOR_MAP[logging.ERROR]
}
def __init__(self):
self.termwidth = get_terminal_size()[0]
self.color = settings.logging['color']
def fit_term_width(self, text):
text = text.expandtabs()
if len(text) <= self.termwidth:
return text
else:
return text[0:self.termwidth - 4] + " ...\n"
def highlight_keyword(self, kw):
if not self.color or kw not in _simple_formatter.color_map:
return kw
color = _simple_formatter.color_map[kw.lower()]
return '{}{}{}'.format(color, kw, RESET_COLOR)

@ -96,8 +96,8 @@ class RecordCommand(Command):
if args.workload and args.output:
self.logger.error("Output file cannot be specified with Workload")
sys.exit()
if not args.workload and (args.setup or args.extract_results or
args.teardown or args.all):
if not args.workload and (args.setup or args.extract_results
or args.teardown or args.all):
self.logger.error("Cannot specify a recording stage without a Workload")
sys.exit()
if args.workload and not any([args.all, args.teardown, args.extract_results, args.run, args.setup]):

@ -7,3 +7,22 @@
was done following an extended discussion and tests that verified
that the savings in processing power were not enough to warrant
the creation of a dedicated server or file handler.
## 1.2
- Rename the `resourcegetters` table to `resource_getters` for consistency.
- Add Job and Run level classifiers.
- Add missing android specific properties to targets.
- Add new POD meta data to relevant tables.
- Correct job column name from `retires` to `retry`.
- Add missing run information.
## 1.3
- Add missing "system_id" field from TargetInfo.
- Enable support for uploading Artifact that represent directories.
## 1.4
- Add "modules" field to TargetInfo to list the modules loaded by the target
during the run.
## 1.5
- Change the type of the "hostid" in TargetInfo from Int to Bigint.
## 1.6
- Add cascading deletes to most tables to allow easy deletion of a run
and its associated data.
- Add rule to delete the associated large object on deletion of an artifact.

@ -59,7 +59,7 @@ params = dict(
'Environment :: Console',
'License :: Other/Proprietary License',
'Operating System :: Unix',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
],
)

@ -1,12 +1,12 @@
apply plugin: 'com.android.application'
android {
compileSdkVersion 18
buildToolsVersion '25.0.0'
compileSdkVersion 28
buildToolsVersion '28.0.0'
defaultConfig {
applicationId "${package_name}"
minSdkVersion 18
targetSdkVersion 25
targetSdkVersion 28
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {

@ -16,7 +16,7 @@ fi
# Copy base class library from wlauto dist
libs_dir=app/libs
base_class=`python -c "import os, wa; print os.path.join(os.path.dirname(wa.__file__), 'framework', 'uiauto', 'uiauto.aar')"`
base_class=`python3 -c "import os, wa; print(os.path.join(os.path.dirname(wa.__file__), 'framework', 'uiauto', 'uiauto.aar'))"`
mkdir -p $$libs_dir
cp $$base_class $$libs_dir

@ -65,7 +65,6 @@ class SubCommand(object):
options to the command's parser). ``context`` is always ``None``.
"""
pass
def execute(self, state, args):
"""

@ -13,6 +13,7 @@
# limitations under the License.
import os
import logging
from copy import copy, deepcopy
from collections import OrderedDict, defaultdict
@ -36,6 +37,8 @@ Status = enum(['UNKNOWN', 'NEW', 'PENDING',
'STARTED', 'CONNECTED', 'INITIALIZED', 'RUNNING',
'OK', 'PARTIAL', 'FAILED', 'ABORTED', 'SKIPPED'])
logger = logging.getLogger('config')
##########################
### CONFIG POINT TYPES ###
@ -55,10 +58,11 @@ class RebootPolicy(object):
executing the first workload spec.
:each_spec: The device will be rebooted before running a new workload spec.
:each_iteration: The device will be rebooted before each new iteration.
:run_completion: The device will be rebooted after the run has been completed.
"""
valid_policies = ['never', 'as_needed', 'initial', 'each_spec', 'each_job']
valid_policies = ['never', 'as_needed', 'initial', 'each_spec', 'each_job', 'run_completion']
@staticmethod
def from_pod(pod):
@ -89,6 +93,10 @@ class RebootPolicy(object):
def reboot_on_each_spec(self):
return self.policy == 'each_spec'
@property
def reboot_on_run_completion(self):
return self.policy == 'run_completion'
def __str__(self):
return self.policy
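A minimal illustration of the new policy value (assuming, as for the existing policies, that a RebootPolicy is constructed from the policy string):

policy = RebootPolicy('run_completion')
assert str(policy) == 'run_completion'
assert policy.reboot_on_run_completion
assert not policy.reboot_on_each_spec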
@ -192,7 +200,8 @@ class ConfigurationPoint(object):
constraint=None,
merge=False,
aliases=None,
global_alias=None):
global_alias=None,
deprecated=False):
"""
Create a new Parameter object.
@ -243,10 +252,12 @@ class ConfigurationPoint(object):
:param global_alias: An alias for this parameter that can be specified at
the global level. A global_alias can map onto many
ConfigurationPoints.
:param deprecated: Specify that this parameter is deprecated and its
config should be ignored. If supplied, WA will display
a warning to the user but will continue execution.
"""
self.name = identifier(name)
if kind in KIND_MAP:
kind = KIND_MAP[kind]
kind = KIND_MAP.get(kind, kind)
if kind is not None and not callable(kind):
raise ValueError('Kind must be callable.')
self.kind = kind
@ -266,6 +277,7 @@ class ConfigurationPoint(object):
self.merge = merge
self.aliases = aliases or []
self.global_alias = global_alias
self.deprecated = deprecated
if self.default is not None:
try:
@ -281,6 +293,11 @@ class ConfigurationPoint(object):
return False
def set_value(self, obj, value=None, check_mandatory=True):
if self.deprecated:
if value is not None:
msg = 'Deprecated parameter supplied for "{}" in "{}". The value will be ignored.'
logger.warning(msg.format(self.name, obj.name))
return
if value is None:
if self.default is not None:
value = self.kind(self.default)
@ -302,6 +319,8 @@ class ConfigurationPoint(object):
setattr(obj, self.name, value)
def validate(self, obj, check_mandatory=True):
if self.deprecated:
return
value = getattr(obj, self.name, None)
if value is not None:
self.validate_value(obj.name, value)
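A sketch of how the new flag might be declared, using the constructor arguments shown above (the parameter name here is hypothetical):

ConfigurationPoint(
    'old_setting',      # hypothetical deprecated parameter
    kind=str,
    deprecated=True,
    description='''
    No longer used; retained so that existing agendas do not break.
    Any value supplied here is ignored with a warning.
    ''',
)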
@ -450,6 +469,7 @@ class MetaConfiguration(Configuration):
description="""
The local mount point for the filer hosting WA assets.
""",
default=''
),
ConfigurationPoint(
'logging',
@ -466,7 +486,6 @@ class MetaConfiguration(Configuration):
contain bash color escape codes. Set this to ``False`` if
console output will be piped somewhere that does not know
how to handle those.
""",
),
ConfigurationPoint(
@ -523,6 +542,10 @@ class MetaConfiguration(Configuration):
def target_info_cache_file(self):
return os.path.join(self.cache_directory, 'targets.json')
@property
def apk_info_cache_file(self):
return os.path.join(self.cache_directory, 'apk_info.json')
def __init__(self, environ=None):
super(MetaConfiguration, self).__init__()
if environ is None:
@ -646,6 +669,9 @@ class RunConfiguration(Configuration):
.. note:: this acts the same as each_job when execution order
is set to by_iteration
''"run_completion"''
The device will be rebooted after the run has been completed.
'''),
ConfigurationPoint(
'device',
@ -706,6 +732,17 @@ class RunConfiguration(Configuration):
failed, but continue attempting to run others.
'''
),
ConfigurationPoint(
'bail_on_job_failure',
kind=bool,
default=False,
description='''
When a job fails during its run phase, WA will attempt to retry the
job, then continue with the remaining jobs. Setting this to
``True`` means WA will skip the remaining jobs and end the run if a
job has been retried the maximum number of times and still fails.
'''
),
ConfigurationPoint(
'allow_phone_home',
kind=bool, default=True,
@ -793,12 +830,12 @@ class JobSpec(Configuration):
description='''
The name of the workload to run.
'''),
ConfigurationPoint('workload_parameters', kind=obj_dict,
ConfigurationPoint('workload_parameters', kind=obj_dict, merge=True,
aliases=["params", "workload_params", "parameters"],
description='''
Parameter to be passed to the workload
'''),
ConfigurationPoint('runtime_parameters', kind=obj_dict,
ConfigurationPoint('runtime_parameters', kind=obj_dict, merge=True,
aliases=["runtime_params"],
description='''
Runtime parameters to be set prior to running

@ -24,7 +24,7 @@ from wa.framework.configuration.core import (MetaConfiguration, RunConfiguration
JobGenerator, settings)
from wa.framework.configuration.parsers import ConfigParser
from wa.framework.configuration.plugin_cache import PluginCache
from wa.framework.exception import NotFoundError
from wa.framework.exception import NotFoundError, ConfigError
from wa.framework.job import Job
from wa.utils import log
from wa.utils.serializer import Podable
@ -148,6 +148,9 @@ class ConfigManager(object):
def generate_jobs(self, context):
job_specs = self.jobs_config.generate_job_specs(context.tm)
if not job_specs:
msg = 'No jobs available for running.'
raise ConfigError(msg)
exec_order = self.run_config.execution_order
log.indent()
for spec, i in permute_iterations(job_specs, exec_order):

@ -297,7 +297,7 @@ def merge_augmentations(raw):
raise ConfigError(msg.format(value, n, exc))
# Make sure none of the specified aliases conflict with each other
to_check = [e for e in entries]
to_check = list(entries)
while len(to_check) > 1:
check_entry = to_check.pop()
for e in to_check:

@ -84,9 +84,9 @@ class PluginCache(object):
'defined in a config file, move the entry content into the top level'
raise ConfigError(msg.format((plugin_name)))
if (not self.loader.has_plugin(plugin_name) and
plugin_name not in self.targets and
plugin_name not in GENERIC_CONFIGS):
if (not self.loader.has_plugin(plugin_name)
and plugin_name not in self.targets
and plugin_name not in GENERIC_CONFIGS):
msg = 'configuration provided for unknown plugin "{}"'
raise ConfigError(msg.format(plugin_name))
@ -95,8 +95,8 @@ class PluginCache(object):
raise ConfigError(msg.format(plugin_name, repr(values), type(values)))
for name, value in values.items():
if (plugin_name not in GENERIC_CONFIGS and
name not in self.get_plugin_parameters(plugin_name)):
if (plugin_name not in GENERIC_CONFIGS
and name not in self.get_plugin_parameters(plugin_name)):
msg = "'{}' is not a valid parameter for '{}'"
raise ConfigError(msg.format(name, plugin_name))

@ -33,6 +33,7 @@ class JobSpecSource(object):
def id(self):
return self.config['id']
@property
def name(self):
raise NotImplementedError()

@ -16,19 +16,25 @@
import sys
import argparse
import locale
import logging
import os
import warnings
import devlib
try:
from devlib.utils.version import version as installed_devlib_version
except ImportError:
installed_devlib_version = None
from wa.framework import pluginloader
from wa.framework.command import init_argument_parser
from wa.framework.configuration import settings
from wa.framework.configuration.execution import ConfigManager
from wa.framework.host import init_user_directory, init_config
from wa.framework.exception import ConfigError
from wa.framework.version import get_wa_version_with_commit
from wa.framework.exception import ConfigError, HostError
from wa.framework.version import (get_wa_version_with_commit, format_version,
required_devlib_version)
from wa.utils import log
from wa.utils.doc import format_body
@ -64,6 +70,27 @@ def split_joined_options(argv):
return output
# Instead of presenting an obscure error due to a version mismatch, explicitly warn the user.
def check_devlib_version():
if not installed_devlib_version or installed_devlib_version[:-1] <= required_devlib_version[:-1]:
# Check the 'dev' field separately to account for comparing with release versions.
if installed_devlib_version.dev and installed_devlib_version.dev < required_devlib_version.dev:
msg = 'WA requires Devlib version >={}. Please update the currently installed version {}'
raise HostError(msg.format(format_version(required_devlib_version), devlib.__version__))
# If the default encoding is not UTF-8, warn the user, as this may cause compatibility issues
# when parsing files.
def check_system_encoding():
system_encoding = locale.getpreferredencoding()
msg = 'System Encoding: {}'.format(system_encoding)
if 'UTF-8' not in system_encoding:
logger.warning(msg)
logger.warning('To prevent encoding issues please use a locale setting which supports UTF-8')
else:
logger.debug(msg)
def main():
if not os.path.exists(settings.user_directory):
init_user_directory()
@ -102,6 +129,8 @@ def main():
logger.debug('Version: {}'.format(get_wa_version_with_commit()))
logger.debug('devlib version: {}'.format(devlib.__full_version__))
logger.debug('Command Line: {}'.format(' '.join(sys.argv)))
check_devlib_version()
check_system_encoding()
# each command will add its own subparser
subparsers = parser.add_subparsers(dest='command')

@ -30,60 +30,49 @@ class WAError(Exception):
class NotFoundError(WAError):
"""Raised when the specified item is not found."""
pass
class ValidationError(WAError):
"""Raised on failure to validate an extension."""
pass
class ExecutionError(WAError):
"""Error encountered by the execution framework."""
pass
class WorkloadError(WAError):
"""General Workload error."""
pass
class JobError(WAError):
"""Job execution error."""
pass
class InstrumentError(WAError):
"""General Instrument error."""
pass
class OutputProcessorError(WAError):
"""General OutputProcessor error."""
pass
class ResourceError(WAError):
"""General Resolver error."""
pass
class CommandError(WAError):
"""Raised by commands when they have encountered an error condition
during execution."""
pass
class ToolError(WAError):
"""Raised by tools when they have encountered an error condition
during execution."""
pass
class ConfigError(WAError):
"""Raised when configuration provided is invalid. This error suggests that
the user should modify their config and try again."""
pass
class SerializerSyntaxError(Exception):

@ -25,7 +25,7 @@ from datetime import datetime
import wa.framework.signal as signal
from wa.framework import instrument as instrumentation
from wa.framework.configuration.core import Status
from wa.framework.exception import TargetError, HostError, WorkloadError
from wa.framework.exception import TargetError, HostError, WorkloadError, ExecutionError
from wa.framework.exception import TargetNotRespondingError, TimeoutError # pylint: disable=redefined-builtin
from wa.framework.job import Job
from wa.framework.output import init_job_output
@ -128,8 +128,8 @@ class ExecutionContext(object):
self.run_state.status = status
self.run_output.status = status
self.run_output.info.end_time = datetime.utcnow()
self.run_output.info.duration = (self.run_output.info.end_time -
self.run_output.info.start_time)
self.run_output.info.duration = (self.run_output.info.end_time
- self.run_output.info.start_time)
self.write_output()
def finalize(self):
@ -141,21 +141,24 @@ class ExecutionContext(object):
self.current_job = self.job_queue.pop(0)
job_output = init_job_output(self.run_output, self.current_job)
self.current_job.set_output(job_output)
self.update_job_state(self.current_job)
return self.current_job
def end_job(self):
if not self.current_job:
raise RuntimeError('No jobs in progress')
self.completed_jobs.append(self.current_job)
self.update_job_state(self.current_job)
self.output.write_result()
self.current_job = None
def set_status(self, status, force=False):
def set_status(self, status, force=False, write=True):
if not self.current_job:
raise RuntimeError('No jobs in progress')
self.current_job.set_status(status, force)
self.set_job_status(self.current_job, status, force, write)
def set_job_status(self, job, status, force=False, write=True):
job.set_status(status, force)
if write:
self.run_output.write_state()
def extract_results(self):
self.tm.extract_results(self)
@ -163,13 +166,8 @@ class ExecutionContext(object):
def move_failed(self, job):
self.run_output.move_failed(job.output)
def update_job_state(self, job):
self.run_state.update_job(job)
self.run_output.write_state()
def skip_job(self, job):
job.status = Status.SKIPPED
self.run_state.update_job(job)
self.set_job_status(job, Status.SKIPPED, force=True)
self.completed_jobs.append(job)
def skip_remaining_jobs(self):
@ -249,6 +247,11 @@ class ExecutionContext(object):
def add_event(self, message):
self.output.add_event(message)
def add_classifier(self, name, value, overwrite=False):
self.output.add_classifier(name, value, overwrite)
if self.current_job:
self.current_job.add_classifier(name, value, overwrite)
def add_metadata(self, key, *args, **kwargs):
self.output.add_metadata(key, *args, **kwargs)
@ -288,7 +291,7 @@ class ExecutionContext(object):
try:
job.initialize(self)
except WorkloadError as e:
job.set_status(Status.FAILED)
self.set_job_status(job, Status.FAILED, write=False)
log.log_error(e, self.logger)
failed_ids.append(job.id)
@ -298,6 +301,7 @@ class ExecutionContext(object):
new_queue.append(job)
self.job_queue = new_queue
self.write_state()
def _load_resource_getters(self):
self.logger.debug('Loading resource discoverers')
@ -333,7 +337,7 @@ class Executor(object):
returning.
The initial context set up involves combining configuration from various
sources, loading of requided workloads, loading and installation of
sources, loading of required workloads, loading and installation of
instruments and output processors, etc. Static validation of the combined
configuration is also performed.
@ -349,7 +353,7 @@ class Executor(object):
def execute(self, config_manager, output):
"""
Execute the run specified by an agenda. Optionally, selectors may be
used to only selecute a subset of the specified agenda.
used to only execute a subset of the specified agenda.
Params::
@ -399,7 +403,7 @@ class Executor(object):
attempts = context.cm.run_config.max_retries
while attempts:
try:
self.target_manager.reboot()
self.target_manager.reboot(context)
except TargetError as e:
if attempts:
attempts -= 1
@ -445,7 +449,7 @@ class Executor(object):
for status in reversed(Status.levels):
if status in counter:
parts.append('{} {}'.format(counter[status], status))
self.logger.info(status_summary + ', '.join(parts))
self.logger.info('{}{}'.format(status_summary, ', '.join(parts)))
self.logger.info('Results can be found in {}'.format(output.basepath))
@ -533,6 +537,9 @@ class Runner(object):
self.pm.process_run_output(self.context)
self.pm.export_run_output(self.context)
self.pm.finalize(self.context)
if self.context.reboot_policy.reboot_on_run_completion:
self.logger.info('Rebooting target on run completion.')
self.context.tm.reboot(self.context)
signal.disconnect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.disconnect(self._warning_signalled_callback, signal.WARNING_LOGGED)
@ -552,15 +559,15 @@ class Runner(object):
with signal.wrap('JOB', self, context):
context.tm.start()
self.do_run_job(job, context)
job.set_status(Status.OK)
context.set_job_status(job, Status.OK)
except (Exception, KeyboardInterrupt) as e: # pylint: disable=broad-except
log.log_error(e, self.logger)
if isinstance(e, KeyboardInterrupt):
context.run_interrupted = True
job.set_status(Status.ABORTED)
context.set_job_status(job, Status.ABORTED)
raise e
else:
job.set_status(Status.FAILED)
context.set_job_status(job, Status.FAILED)
if isinstance(e, TargetNotRespondingError):
raise e
elif isinstance(e, TargetError):
@ -583,7 +590,7 @@ class Runner(object):
self.context.skip_job(job)
return
job.set_status(Status.RUNNING)
context.set_job_status(job, Status.RUNNING)
self.send(signal.JOB_STARTED)
job.configure_augmentations(context, self.pm)
@ -594,7 +601,7 @@ class Runner(object):
try:
job.setup(context)
except Exception as e:
job.set_status(Status.FAILED)
context.set_job_status(job, Status.FAILED)
log.log_error(e, self.logger)
if isinstance(e, (TargetError, TimeoutError)):
context.tm.verify_target_responsive(context)
@ -607,10 +614,10 @@ class Runner(object):
job.run(context)
except KeyboardInterrupt:
context.run_interrupted = True
job.set_status(Status.ABORTED)
context.set_job_status(job, Status.ABORTED)
raise
except Exception as e:
job.set_status(Status.FAILED)
context.set_job_status(job, Status.FAILED)
log.log_error(e, self.logger)
if isinstance(e, (TargetError, TimeoutError)):
context.tm.verify_target_responsive(context)
@ -623,7 +630,7 @@ class Runner(object):
self.pm.process_job_output(context)
self.pm.export_job_output(context)
except Exception as e:
job.set_status(Status.PARTIAL)
context.set_job_status(job, Status.PARTIAL)
if isinstance(e, (TargetError, TimeoutError)):
context.tm.verify_target_responsive(context)
self.context.record_ui_state('output-error')
@ -631,7 +638,7 @@ class Runner(object):
except KeyboardInterrupt:
context.run_interrupted = True
job.set_status(Status.ABORTED)
context.set_status(Status.ABORTED)
raise
finally:
# If setup was successfully completed, teardown must
@ -653,6 +660,9 @@ class Runner(object):
self.logger.error(msg.format(job.id, job.iteration, job.status))
self.context.failed_jobs += 1
self.send(signal.JOB_FAILED)
if rc.bail_on_job_failure:
raise ExecutionError('Job {} failed, bailing.'.format(job.id))
else: # status not in retry_on_status
self.logger.info('Job completed with status {}'.format(job.status))
if job.status != 'ABORTED':
@ -664,8 +674,9 @@ class Runner(object):
def retry_job(self, job):
retry_job = Job(job.spec, job.iteration, self.context)
retry_job.workload = job.workload
retry_job.state = job.state
retry_job.retries = job.retries + 1
retry_job.set_status(Status.PENDING)
self.context.set_job_status(retry_job, Status.PENDING, force=True)
self.context.job_queue.insert(0, retry_job)
self.send(signal.JOB_RESTARTED)

@ -31,7 +31,7 @@ import requests
from wa import Parameter, settings, __file__ as _base_filepath
from wa.framework.resource import ResourceGetter, SourcePriority, NO_ONE
from wa.framework.exception import ResourceError
from wa.utils.misc import (ensure_directory_exists as _d,
from wa.utils.misc import (ensure_directory_exists as _d, atomic_write_path,
ensure_file_directory_exists as _f, sha256, urljoin)
from wa.utils.types import boolean, caseless_string
@ -78,15 +78,20 @@ def get_path_matches(resource, files):
return matches
# pylint: disable=too-many-return-statements
def get_from_location(basepath, resource):
if resource.kind == 'file':
path = os.path.join(basepath, resource.path)
if os.path.exists(path):
return path
elif resource.kind == 'executable':
path = os.path.join(basepath, 'bin', resource.abi, resource.filename)
if os.path.exists(path):
return path
bin_dir = os.path.join(basepath, 'bin', resource.abi)
if not os.path.exists(bin_dir):
return None
for entry in os.listdir(bin_dir):
path = os.path.join(bin_dir, entry)
if resource.match(path):
return path
elif resource.kind == 'revent':
path = os.path.join(basepath, 'revent_files')
if os.path.exists(path):
@ -234,7 +239,7 @@ class Http(ResourceGetter):
index_url = urljoin(self.url, 'index.json')
response = self.geturl(index_url)
if response.status_code != http.client.OK:
message = 'Could not fetch "{}"; recieved "{} {}"'
message = 'Could not fetch "{}"; received "{} {}"'
self.logger.error(message.format(index_url,
response.status_code,
response.reason))
@ -249,6 +254,7 @@ class Http(ResourceGetter):
url = urljoin(self.url, owner_name, asset['path'])
local_path = _f(os.path.join(settings.dependencies_directory, '__remote',
owner_name, asset['path'].replace('/', os.sep)))
if os.path.exists(local_path) and not self.always_fetch:
local_sha = sha256(local_path)
if local_sha == asset['sha256']:
@ -257,14 +263,15 @@ class Http(ResourceGetter):
self.logger.debug('Downloading {}'.format(url))
response = self.geturl(url, stream=True)
if response.status_code != http.client.OK:
message = 'Could not download asset "{}"; recieved "{} {}"'
message = 'Could not download asset "{}"; received "{} {}"'
self.logger.warning(message.format(url,
response.status_code,
response.reason))
return
with open(local_path, 'wb') as wfh:
for chunk in response.iter_content(chunk_size=self.chunk_size):
wfh.write(chunk)
with atomic_write_path(local_path) as at_path:
with open(at_path, 'wb') as wfh:
for chunk in response.iter_content(chunk_size=self.chunk_size):
wfh.write(chunk)
return local_path
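The point of the change above is that a partially-written download can never replace a good cached copy. A stand-alone sketch of the pattern, assuming (as wa.utils.misc.atomic_write_path provides) write-to-temporary-then-rename semantics; this is a simplified stand-in, not WA's implementation:

# Simplified stand-in for the write-to-temporary-then-rename pattern.
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def atomic_write_path(path):
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    os.close(fd)
    try:
        yield tmp_path
        os.rename(tmp_path, path)  # atomic on POSIX within one filesystem
    finally:
        if os.path.exists(tmp_path):  # clean up if the writer raised
            os.remove(tmp_path)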
def geturl(self, url, stream=False):
@ -322,7 +329,8 @@ class Filer(ResourceGetter):
"""
parameters = [
Parameter('remote_path', global_alias='remote_assets_path', default='',
Parameter('remote_path', global_alias='remote_assets_path',
default=settings.assets_repository,
description="""
Path, on the local system, where the assets are located.
"""),

@ -50,6 +50,7 @@ def init_user_directory(overwrite_existing=False): # pylint: disable=R0914
# If running with sudo on POSIX, change the ownership to the real user.
real_user = os.getenv('SUDO_USER')
if real_user:
# pylint: disable=import-outside-toplevel
import pwd # done here as module won't import on win32
user_entry = pwd.getpwnam(real_user)
uid, gid = user_entry.pw_uid, user_entry.pw_gid

@ -104,7 +104,7 @@ import inspect
from collections import OrderedDict
from wa.framework import signal
from wa.framework.plugin import Plugin
from wa.framework.plugin import TargetedPlugin
from wa.framework.exception import (TargetNotRespondingError, TimeoutError, # pylint: disable=redefined-builtin
WorkloadError, TargetError)
from wa.utils.log import log_error
@ -421,14 +421,13 @@ def get_disabled():
return [i for i in installed if not i.is_enabled]
class Instrument(Plugin):
class Instrument(TargetedPlugin):
"""
Base class for instrument implementations.
"""
kind = "instrument"
def __init__(self, target, **kwargs):
super(Instrument, self).__init__(**kwargs)
self.target = target
def __init__(self, *args, **kwargs):
super(Instrument, self).__init__(*args, **kwargs)
self.is_enabled = True
self.is_broken = False

@ -23,6 +23,7 @@ from datetime import datetime
from wa.framework import pluginloader, signal, instrument
from wa.framework.configuration.core import Status
from wa.utils.log import indentcontext
from wa.framework.run import JobState
class Job(object):
@ -37,24 +38,29 @@ class Job(object):
def label(self):
return self.spec.label
@property
def classifiers(self):
return self.spec.classifiers
@property
def status(self):
return self._status
return self.state.status
@property
def has_been_initialized(self):
return self._has_been_initialized
@property
def retries(self):
return self.state.retries
@status.setter
def status(self, value):
self._status = value
self.state.status = value
self.state.timestamp = datetime.utcnow()
if self.output:
self.output.status = value
@retries.setter
def retries(self, value):
self.state.retries = value
def __init__(self, spec, iteration, context):
self.logger = logging.getLogger('job')
self.spec = spec
@ -63,9 +69,9 @@ class Job(object):
self.workload = None
self.output = None
self.run_time = None
self.retries = 0
self.classifiers = copy(self.spec.classifiers)
self._has_been_initialized = False
self._status = Status.NEW
self.state = JobState(self.id, self.label, self.iteration, Status.NEW)
def load(self, target, loader=pluginloader):
self.logger.info('Loading job {}'.format(self))
@ -91,7 +97,6 @@ class Job(object):
self.workload.initialize(context)
self.set_status(Status.PENDING)
self._has_been_initialized = True
context.update_job_state(self)
def configure_augmentations(self, context, pm):
self.logger.info('Configuring augmentations')
@ -181,6 +186,11 @@ class Job(object):
if force or self.status < status:
self.status = status
def add_classifier(self, name, value, overwrite=False):
if name in self.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier.'.format(name))
self.classifiers[name] = value
def __str__(self):
return '{} ({}) [{}]'.format(self.id, self.label, self.iteration)

@ -23,6 +23,8 @@ except ImportError:
import logging
import os
import shutil
import tarfile
import tempfile
from collections import OrderedDict, defaultdict
from copy import copy, deepcopy
from datetime import datetime
@ -145,9 +147,10 @@ class Output(object):
if not os.path.exists(path):
msg = 'Attempting to add non-existing artifact: {}'
raise HostError(msg.format(path))
is_dir = os.path.isdir(path)
path = os.path.relpath(path, self.basepath)
self.result.add_artifact(name, path, kind, description, classifiers)
self.result.add_artifact(name, path, kind, description, classifiers, is_dir)
def add_event(self, message):
self.result.add_event(message)
@ -162,6 +165,9 @@ class Output(object):
artifact = self.get_artifact(name)
return self.get_path(artifact.path)
def add_classifier(self, name, value, overwrite=False):
self.result.add_classifier(name, value, overwrite)
def add_metadata(self, key, *args, **kwargs):
self.result.add_metadata(key, *args, **kwargs)
@ -262,8 +268,8 @@ class RunOutput(Output, RunOutputCommon):
self._combined_config = None
self.jobs = []
self.job_specs = []
if (not os.path.isfile(self.statefile) or
not os.path.isfile(self.infofile)):
if (not os.path.isfile(self.statefile)
or not os.path.isfile(self.infofile)):
msg = '"{}" does not exist or is not a valid WA output directory.'
raise ValueError(msg.format(self.basepath))
self.reload()
@ -346,6 +352,13 @@ class JobOutput(Output):
self.spec = None
self.reload()
@property
def augmentations(self):
job_augs = set([])
for aug in self.spec.augmentations:
job_augs.add(aug)
return list(job_augs)
class Result(Podable):
@ -378,9 +391,10 @@ class Result(Podable):
logger.debug('Adding metric: {}'.format(metric))
self.metrics.append(metric)
def add_artifact(self, name, path, kind, description=None, classifiers=None):
def add_artifact(self, name, path, kind, description=None, classifiers=None,
is_dir=False):
artifact = Artifact(name, path, kind, description=description,
classifiers=classifiers)
classifiers=classifiers, is_dir=is_dir)
logger.debug('Adding artifact: {}'.format(artifact))
self.artifacts.append(artifact)
@ -399,6 +413,21 @@ class Result(Podable):
return artifact
raise HostError('Artifact "{}" not found'.format(name))
def add_classifier(self, name, value, overwrite=False):
if name in self.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier.'.format(name))
self.classifiers[name] = value
for metric in self.metrics:
if name in metric.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier; clashes with {}.'.format(name, metric))
metric.classifiers[name] = value
for artifact in self.artifacts:
if name in artifact.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier; clashes with {}.'.format(name, artifact))
artifact.classifiers[name] = value
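A self-contained sketch of the clash rule above, using stand-ins rather than WA's Result: a classifier added at result level fans out to metrics and artifacts, and an existing key raises unless overwrite is passed.

# Stand-in objects only; illustrates the overwrite semantics above.
class _Tagged:
    def __init__(self):
        self.classifiers = {}

def add_classifier(obj, name, value, overwrite=False):
    if name in obj.classifiers and not overwrite:
        raise ValueError('Cannot overwrite "{}" classifier.'.format(name))
    obj.classifiers[name] = value

result, metric = _Tagged(), _Tagged()
for obj in (result, metric):  # result-level classifiers fan out to metrics
    add_classifier(obj, 'board', 'juno')
add_classifier(metric, 'board', 'tc2', overwrite=True)  # explicit overwrite
print(metric.classifiers)  # {'board': 'tc2'}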
def add_metadata(self, key, *args, **kwargs):
force = kwargs.pop('force', False)
if kwargs:
@ -516,7 +545,7 @@ class Artifact(Podable):
"""
_pod_serialization_version = 1
_pod_serialization_version = 2
@staticmethod
def from_pod(pod):
@ -525,9 +554,11 @@ class Artifact(Podable):
pod['kind'] = ArtifactType(pod['kind'])
instance = Artifact(**pod)
instance._pod_version = pod_version # pylint: disable =protected-access
instance.is_dir = pod.pop('is_dir')
return instance
def __init__(self, name, path, kind, description=None, classifiers=None):
def __init__(self, name, path, kind, description=None, classifiers=None,
is_dir=False):
""""
:param name: Name that uniquely identifies this artifact.
:param path: The *relative* path of the artifact. Depending on the
@ -543,7 +574,6 @@ class Artifact(Podable):
:param classifiers: A set of key-value pairs to further classify this
metric beyond current iteration (e.g. this can be
used to identify sub-tests).
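        :param is_dir: If ``True``, this artifact is a directory rather
                       than a single file.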
"""
super(Artifact, self).__init__()
self.name = name
@ -555,11 +585,13 @@ class Artifact(Podable):
raise ValueError(msg.format(kind, ARTIFACT_TYPES))
self.description = description
self.classifiers = classifiers or {}
self.is_dir = is_dir
def to_pod(self):
pod = super(Artifact, self).to_pod()
pod.update(self.__dict__)
pod['kind'] = str(self.kind)
pod['is_dir'] = self.is_dir
return pod
@staticmethod
@ -567,11 +599,17 @@ class Artifact(Podable):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
@staticmethod
def _pod_upgrade_v2(pod):
pod['is_dir'] = pod.get('is_dir', False)
return pod
def __str__(self):
return self.path
def __repr__(self):
return '{} ({}): {}'.format(self.name, self.kind, self.path)
ft = 'dir' if self.is_dir else 'file'
return '{} ({}) ({}): {}'.format(self.name, ft, self.kind, self.path)
class Metric(Podable):
@ -602,6 +640,12 @@ class Metric(Podable):
instance._pod_version = pod_version # pylint: disable =protected-access
return instance
@property
def label(self):
parts = ['{}={}'.format(n, v) for n, v in self.classifiers.items()]
parts.insert(0, self.name)
return '/'.join(parts)
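A usage sketch for the new label property (import path per this module; the metric values are hypothetical):

# Hypothetical metric; label joins the name with 'key=value' classifier pairs.
from wa.framework.output import Metric

m = Metric('execution_time', 12.3, units='seconds',
           classifiers={'workload': 'dhrystone', 'iteration': 2})
print(m.label)  # execution_time/workload=dhrystone/iteration=2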
def __init__(self, name, value, units=None, lower_is_better=False,
classifiers=None):
super(Metric, self).__init__()
@ -732,9 +776,13 @@ def init_job_output(run_output, job):
def discover_wa_outputs(path):
for root, dirs, _ in os.walk(path):
# Use topdown=True to allow pruning dirs
for root, dirs, _ in os.walk(path, topdown=True):
if '__meta' in dirs:
yield RunOutput(root)
            # Avoid recursing into the artifacts, as this can be very slow if a
            # large number of files is present (e.g. a sysfs dump)
dirs.clear()
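The pruning relies on a documented property of os.walk: with topdown=True, mutating dirs in place controls which subdirectories are visited. A minimal demonstration:

# Minimal demonstration of in-place pruning with os.walk(topdown=True).
import os

for root, dirs, _ in os.walk('.', topdown=True):
    if '__meta' in dirs:
        print('found WA output at', root)
        dirs.clear()  # skip the (potentially huge) contents of this output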
def _save_raw_config(meta_dir, state):
@ -798,6 +846,19 @@ class DatabaseOutput(Output):
def get_artifact_path(self, name):
artifact = self.get_artifact(name)
if artifact.is_dir:
return self._read_dir_artifact(artifact)
else:
return self._read_file_artifact(artifact)
def _read_dir_artifact(self, artifact):
artifact_path = tempfile.mkdtemp(prefix='wa_')
with tarfile.open(fileobj=self.conn.lobject(int(artifact.path), mode='b'), mode='r|gz') as tar_file:
tar_file.extractall(artifact_path)
self.conn.commit()
return artifact_path
def _read_file_artifact(self, artifact):
artifact = StringIO(self.conn.lobject(int(artifact.path)).read())
self.conn.commit()
return artifact
@ -886,13 +947,15 @@ class DatabaseOutput(Output):
def _get_artifacts(self):
columns = ['artifacts.name', 'artifacts.description', 'artifacts.kind',
('largeobjects.lo_oid', 'path'), 'artifacts.oid',
('largeobjects.lo_oid', 'path'), 'artifacts.oid', 'artifacts.is_dir',
'artifacts._pod_version', 'artifacts._pod_serialization_version']
tables = ['largeobjects', 'artifacts']
joins = [('classifiers', 'classifiers.artifact_oid = artifacts.oid')]
conditions = ['artifacts.{}_oid = \'{}\''.format(self.kind, self.oid),
'artifacts.large_object_uuid = largeobjects.oid',
'artifacts.job_oid IS NULL']
'artifacts.large_object_uuid = largeobjects.oid']
# If retrieving run level artifacts we want those that don't also belong to a job
if self.kind == 'run':
conditions.append('artifacts.job_oid IS NULL')
pod = self._read_db(columns, tables, conditions, joins)
for artifact in pod:
artifact['path'] = str(artifact['path'])
@ -907,8 +970,9 @@ class DatabaseOutput(Output):
def kernel_config_from_db(raw):
kernel_config = {}
for k, v in zip(raw[0], raw[1]):
kernel_config[k] = v
if raw:
for k, v in zip(raw[0], raw[1]):
kernel_config[k] = v
return kernel_config
@ -942,9 +1006,10 @@ class RunDatabaseOutput(DatabaseOutput, RunOutputCommon):
@property
def _db_targetfile(self):
columns = ['os', 'is_rooted', 'target', 'abi', 'cpus', 'os_version',
columns = ['os', 'is_rooted', 'target', 'modules', 'abi', 'cpus', 'os_version',
'hostid', 'hostname', 'kernel_version', 'kernel_release',
'kernel_sha1', 'kernel_config', 'sched_features',
'kernel_sha1', 'kernel_config', 'sched_features', 'page_size_kb',
'system_id', 'screen_resolution', 'prop', 'android_id',
'_pod_version', '_pod_serialization_version']
tables = ['targets']
conditions = ['targets.run_oid = \'{}\''.format(self.oid)]
@ -997,6 +1062,7 @@ class RunDatabaseOutput(DatabaseOutput, RunOutputCommon):
jobs = self._read_db(columns, tables, conditions)
for job in jobs:
job['augmentations'] = self._get_job_augmentations(job['oid'])
job['workload_parameters'] = workload_params.pop(job['oid'], {})
job['runtime_parameters'] = runtime_params.pop(job['oid'], {})
job.pop('oid')
@ -1160,6 +1226,15 @@ class RunDatabaseOutput(DatabaseOutput, RunOutputCommon):
logger.debug('Failed to deserialize job_oid:{}-"{}":"{}"'.format(job_oid, k, v))
return parm_dict
def _get_job_augmentations(self, job_oid):
columns = ['jobs_augs.augmentation_oid', 'augmentations.name',
'augmentations.oid', 'jobs_augs.job_oid']
tables = ['jobs_augs', 'augmentations']
conditions = ['jobs_augs.job_oid = \'{}\''.format(job_oid),
'jobs_augs.augmentation_oid = augmentations.oid']
augmentations = self._read_db(columns, tables, conditions)
return [aug['name'] for aug in augmentations]
def _list_runs(self):
columns = ['runs.run_uuid', 'runs.run_name', 'runs.project',
'runs.project_stage', 'runs.status', 'runs.start_time', 'runs.end_time']
@ -1211,3 +1286,11 @@ class JobDatabaseOutput(DatabaseOutput):
def __str__(self):
return '{}-{}-{}'.format(self.id, self.label, self.iteration)
@property
def augmentations(self):
job_augs = set([])
if self.spec:
for aug in self.spec.augmentations:
job_augs.add(aug)
return list(job_augs)

@ -157,6 +157,7 @@ class Alias(object):
raise ConfigError(msg.format(param, self.name, ext.name))
# pylint: disable=bad-mcs-classmethod-argument
class PluginMeta(type):
"""
This basically adds some magic to plugins to make implementing new plugins,
@ -246,7 +247,7 @@ class Plugin(with_metaclass(PluginMeta, object)):
@classmethod
def get_default_config(cls):
return {p.name: p.default for p in cls.parameters}
return {p.name: p.default for p in cls.parameters if not p.deprecated}
@property
def dependencies_directory(self):
@ -367,7 +368,7 @@ class Plugin(with_metaclass(PluginMeta, object)):
self._modules.append(module)
def __str__(self):
return self.name
return str(self.name)
def __repr__(self):
params = []
@ -384,6 +385,16 @@ class TargetedPlugin(Plugin):
"""
suppoted_targets = []
parameters = [
Parameter('cleanup_assets', kind=bool,
global_alias='cleanup_assets',
aliases=['clean_up'],
default=True,
description="""
If ``True``, assets that are deployed or created by the
plugin will be removed again from the device.
"""),
]
@classmethod
def check_compatible(cls, target):

@ -35,6 +35,7 @@ class __LoaderWrapper(object):
def reset(self):
# These imports cannot be done at top level, because of
# sys.modules manipulation below
# pylint: disable=import-outside-toplevel
from wa.framework.plugin import PluginLoader
from wa.framework.configuration.core import settings
self._loader = PluginLoader(settings.plugin_packages,

@ -16,15 +16,14 @@ import logging
import os
import re
from devlib.utils.android import ApkInfo
from wa.framework import pluginloader
from wa.framework.plugin import Plugin
from wa.framework.exception import ResourceError
from wa.framework.configuration import settings
from wa.utils import log
from wa.utils.android import get_cacheable_apk_info
from wa.utils.misc import get_object_name
from wa.utils.types import enum, list_or_string, prioritylist
from wa.utils.types import enum, list_or_string, prioritylist, version_tuple
SourcePriority = enum(['package', 'remote', 'lan', 'local',
@ -142,10 +141,12 @@ class ApkFile(Resource):
def __init__(self, owner, variant=None, version=None,
package=None, uiauto=False, exact_abi=False,
supported_abi=None):
supported_abi=None, min_version=None, max_version=None):
super(ApkFile, self).__init__(owner)
self.variant = variant
self.version = version
self.max_version = max_version
self.min_version = min_version
self.package = package
self.uiauto = uiauto
self.exact_abi = exact_abi
@ -158,21 +159,25 @@ class ApkFile(Resource):
def match(self, path):
name_matches = True
version_matches = True
version_range_matches = True
package_matches = True
abi_matches = True
uiauto_matches = uiauto_test_matches(path, self.uiauto)
if self.version is not None:
if self.version:
version_matches = apk_version_matches(path, self.version)
if self.variant is not None:
if self.max_version or self.min_version:
version_range_matches = apk_version_matches_range(path, self.min_version,
self.max_version)
if self.variant:
name_matches = file_name_matches(path, self.variant)
if self.package is not None:
if self.package:
package_matches = package_name_matches(path, self.package)
if self.supported_abi is not None:
if self.supported_abi:
abi_matches = apk_abi_matches(path, self.supported_abi,
self.exact_abi)
return name_matches and version_matches and \
uiauto_matches and package_matches and \
abi_matches
version_range_matches and uiauto_matches \
and package_matches and abi_matches
def __str__(self):
text = '<{}\'s apk'.format(self.owner)
@ -273,15 +278,40 @@ class ResourceResolver(object):
def apk_version_matches(path, version):
info = ApkInfo(path)
if info.version_name == version or info.version_code == version:
return True
return loose_version_matching(version, info.version_name)
version = list_or_string(version)
info = get_cacheable_apk_info(path)
for v in version:
if v in (info.version_name, info.version_code):
return True
if loose_version_matching(v, info.version_name):
return True
return False
def apk_version_matches_range(path, min_version=None, max_version=None):
info = get_cacheable_apk_info(path)
return range_version_matching(info.version_name, min_version, max_version)
def range_version_matching(apk_version, min_version=None, max_version=None):
if not apk_version:
return False
apk_version = version_tuple(apk_version)
if max_version:
max_version = version_tuple(max_version)
if apk_version > max_version:
return False
if min_version:
min_version = version_tuple(min_version)
if apk_version < min_version:
return False
return True
def loose_version_matching(config_version, apk_version):
config_version = config_version.split('.')
apk_version = apk_version.split('.')
config_version = version_tuple(config_version)
apk_version = version_tuple(apk_version)
if len(apk_version) < len(config_version):
return False # More specific version requested than available
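Illustrative calls against the helpers above, assuming (per the import at the top of this file) that version_tuple normalises a dotted version string into a comparable tuple; single-digit components keep the comparisons unambiguous whatever the exact element type:

# Expected behaviour of the matchers above (usage sketch, not a test suite).
print(range_version_matching('5.2.1', min_version='5.0', max_version='6.0'))  # True
print(range_version_matching('4.9', min_version='5.0'))                       # False
print(loose_version_matching('5.2', '5.2.1'))  # True: prefix of the APK version
print(loose_version_matching('5.2.1', '5.2'))  # False: more specific than available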
@ -302,18 +332,18 @@ def file_name_matches(path, pattern):
def uiauto_test_matches(path, uiauto):
info = ApkInfo(path)
info = get_cacheable_apk_info(path)
return uiauto == ('com.arm.wa.uiauto' in info.package)
def package_name_matches(path, package):
info = ApkInfo(path)
info = get_cacheable_apk_info(path)
return info.package == package
def apk_abi_matches(path, supported_abi, exact_abi=False):
supported_abi = list_or_string(supported_abi)
info = ApkInfo(path)
info = get_cacheable_apk_info(path)
# If no native code present, suitable for all devices.
if not info.native_code:
return True

@ -102,13 +102,7 @@ class RunState(Podable):
self.timestamp = datetime.utcnow()
def add_job(self, job):
job_state = JobState(job.id, job.label, job.iteration, job.status)
self.jobs[(job_state.id, job_state.iteration)] = job_state
def update_job(self, job):
state = self.jobs[(job.id, job.iteration)]
state.status = job.status
state.timestamp = datetime.utcnow()
self.jobs[(job.state.id, job.state.iteration)] = job.state
def get_status_counts(self):
counter = Counter()
@ -163,7 +157,7 @@ class JobState(Podable):
pod['label'] = self.label
pod['iteration'] = self.iteration
pod['status'] = self.status.to_pod()
pod['retries'] = 0
pod['retries'] = self.retries
pod['timestamp'] = self.timestamp
return pod

@ -15,7 +15,7 @@
"""
This module wraps louie signalling mechanism. It relies on modified version of loiue
This module wraps the louie signalling mechanism. It relies on a modified version of louie
that has prioritization added to handler invocation.
"""
@ -23,8 +23,9 @@ import sys
import logging
from contextlib import contextmanager
from louie import dispatcher, saferef # pylint: disable=wrong-import-order
from louie.dispatcher import _remove_receiver
import wrapt
from louie import dispatcher # pylint: disable=wrong-import-order
from wa.utils.types import prioritylist, enum
@ -242,8 +243,8 @@ def connect(handler, signal, sender=dispatcher.Any, priority=0):
receivers = signals[signal]
else:
receivers = signals[signal] = _prioritylist_wrapper()
receivers.add(handler, priority)
dispatcher.connect(handler, signal, sender)
receivers.add(saferef.safe_ref(handler, on_delete=_remove_receiver), priority)
def disconnect(handler, signal, sender=dispatcher.Any):
@ -268,7 +269,7 @@ def send(signal, sender=dispatcher.Anonymous, *args, **kwargs):
"""
Sends a signal, causing connected handlers to be invoked.
Paramters:
Parameters:
:signal: Signal to be sent. This must be an instance of :class:`wa.core.signal.Signal`
or its subclasses.

@ -21,9 +21,11 @@ import tempfile
import threading
import time
from wa.framework.plugin import Parameter
from wa.framework.exception import WorkerThreadError
from wa.framework.plugin import Parameter
from wa.utils.android import LogcatParser
from wa.utils.misc import touch
import wa.framework.signal as signal
class LinuxAssistant(object):
@ -33,6 +35,9 @@ class LinuxAssistant(object):
def __init__(self, target):
self.target = target
def initialize(self):
pass
def start(self):
pass
@ -42,6 +47,9 @@ class LinuxAssistant(object):
def stop(self):
pass
def finalize(self):
pass
class AndroidAssistant(object):
@ -66,40 +74,111 @@ class AndroidAssistant(object):
                  temporary location on the host. Setting the value of the poll
period enables this behavior.
"""),
Parameter('stay_on_mode', kind=int,
constraint=lambda x: 0 <= x <= 7,
description="""
Specify whether the screen should stay on while the device is
charging:
0: never stay on
1: with AC charger
2: with USB charger
4: with wireless charger
Values can be OR-ed together to produce combinations, for
instance ``7`` will cause the screen to stay on when charging
under any method.
"""),
]
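The OR-ing described above is plain bit arithmetic; a quick sketch:

# Bit flags behind stay_on_mode (values as listed in the description above).
AC, USB, WIRELESS = 1, 2, 4
print(AC | USB | WIRELESS)          # 7: stay on under any charging method
print(bool((AC | USB) & WIRELESS))  # False: 3 does not include the wireless bit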
def __init__(self, target, logcat_poll_period=None, disable_selinux=True):
def __init__(self, target, logcat_poll_period=None, disable_selinux=True, stay_on_mode=None):
self.target = target
self.logcat_poll_period = logcat_poll_period
self.disable_selinux = disable_selinux
self.stay_on_mode = stay_on_mode
self.orig_stay_on_mode = self.target.get_stay_on_mode() if stay_on_mode is not None else None
self.logcat_poller = None
self.logger = logging.getLogger('logcat')
self._logcat_marker_msg = None
self._logcat_marker_tag = None
signal.connect(self._before_workload, signal.BEFORE_WORKLOAD_EXECUTION)
if self.logcat_poll_period:
signal.connect(self._after_workload, signal.AFTER_WORKLOAD_EXECUTION)
def initialize(self):
if self.target.is_rooted and self.disable_selinux:
self.do_disable_selinux()
if self.stay_on_mode is not None:
self.target.set_stay_on_mode(self.stay_on_mode)
def start(self):
if self.logcat_poll_period:
self.logcat_poller = LogcatPoller(self.target, self.logcat_poll_period)
self.logcat_poller.start()
else:
if not self._logcat_marker_msg:
self._logcat_marker_msg = 'WA logcat marker for wrap detection'
self._logcat_marker_tag = 'WAlog'
def stop(self):
if self.logcat_poller:
self.logcat_poller.stop()
def finalize(self):
if self.stay_on_mode is not None:
self.target.set_stay_on_mode(self.orig_stay_on_mode)
def extract_results(self, context):
logcat_file = os.path.join(context.output_directory, 'logcat.log')
self.dump_logcat(logcat_file)
context.add_artifact('logcat', logcat_file, kind='log')
self.clear_logcat()
if not self._check_logcat_nowrap(logcat_file):
self.logger.warning('The main logcat buffer wrapped and lost data;'
' results that rely on this buffer may be'
' inaccurate or incomplete.'
)
def dump_logcat(self, outfile):
if self.logcat_poller:
self.logcat_poller.write_log(outfile)
else:
self.target.dump_logcat(outfile)
self.target.dump_logcat(outfile, logcat_format='threadtime')
def clear_logcat(self):
if self.logcat_poller:
self.logcat_poller.clear_buffer()
else:
self.target.clear_logcat()
def _before_workload(self, _):
if self.logcat_poller:
self.logcat_poller.start_logcat_wrap_detect()
else:
self.insert_logcat_marker()
def _after_workload(self, _):
self.logcat_poller.stop_logcat_wrap_detect()
def _check_logcat_nowrap(self, outfile):
if self.logcat_poller:
return self.logcat_poller.check_logcat_nowrap(outfile)
else:
parser = LogcatParser()
for event in parser.parse(outfile):
if (event.tag == self._logcat_marker_tag
and event.message == self._logcat_marker_msg):
return True
return False
def insert_logcat_marker(self):
self.logger.debug('Inserting logcat marker')
self.target.execute(
'log -t "{}" "{}"'.format(
self._logcat_marker_tag, self._logcat_marker_msg
)
)
def do_disable_selinux(self):
# SELinux was added in Android 4.3 (API level 18). Trying to
@ -119,15 +198,21 @@ class LogcatPoller(threading.Thread):
self.period = period
self.timeout = timeout
self.stop_signal = threading.Event()
self.lock = threading.Lock()
self.lock = threading.RLock()
self.buffer_file = tempfile.mktemp()
self.last_poll = 0
self.daemon = True
self.exc = None
self._logcat_marker_tag = 'WALog'
self._logcat_marker_msg = 'WA logcat marker for wrap detection:{}'
self._marker_count = 0
self._start_marker = None
self._end_marker = None
def run(self):
self.logger.debug('Starting polling')
try:
self.insert_logcat_marker()
while True:
if self.stop_signal.is_set():
break
@ -135,6 +220,7 @@ class LogcatPoller(threading.Thread):
current_time = time.time()
if (current_time - self.last_poll) >= self.period:
self.poll()
self.insert_logcat_marker()
time.sleep(0.5)
except Exception: # pylint: disable=W0703
self.exc = WorkerThreadError(self.name, sys.exc_info())
@ -170,9 +256,49 @@ class LogcatPoller(threading.Thread):
def poll(self):
self.last_poll = time.time()
self.target.dump_logcat(self.buffer_file, append=True, timeout=self.timeout)
self.target.dump_logcat(self.buffer_file, append=True, timeout=self.timeout, logcat_format='threadtime')
self.target.clear_logcat()
def insert_logcat_marker(self):
self.logger.debug('Inserting logcat marker')
with self.lock:
self.target.execute(
'log -t "{}" "{}"'.format(
self._logcat_marker_tag,
self._logcat_marker_msg.format(self._marker_count)
)
)
self._marker_count += 1
def check_logcat_nowrap(self, outfile):
parser = LogcatParser()
counter = self._start_marker
for event in parser.parse(outfile):
message = self._logcat_marker_msg.split(':')[0]
if not (event.tag == self._logcat_marker_tag
and event.message.split(':')[0] == message):
continue
number = int(event.message.split(':')[1])
if number > counter:
return False
elif number == counter:
counter += 1
if counter == self._end_marker:
return True
return False
def start_logcat_wrap_detect(self):
with self.lock:
self._start_marker = self._marker_count
self.insert_logcat_marker()
def stop_logcat_wrap_detect(self):
with self.lock:
self._end_marker = self._marker_count
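To summarise the scheme above: numbered markers are injected into logcat around each poll, and on replay a marker number that jumps ahead of the expected counter means intervening markers (and the log lines around them) were lost to buffer wrap. A compact, self-contained sketch of the counting rule:

# Sketch of the marker-counter rule used by check_logcat_nowrap() above.
def nowrap(marker_numbers, start, end):
    counter = start
    for number in marker_numbers:
        if number > counter:
            return False   # a marker is missing: the buffer wrapped
        elif number == counter:
            counter += 1
        if counter == end:
            return True    # every marker up to the end marker was seen
    return False

print(nowrap([0, 1, 2], start=0, end=3))  # True
print(nowrap([0, 2], start=0, end=3))     # False: marker 1 was lost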
class ChromeOsAssistant(LinuxAssistant):

@ -14,14 +14,13 @@
#
import inspect
from collections import OrderedDict
from copy import copy
from devlib import (LinuxTarget, AndroidTarget, LocalLinuxTarget,
ChromeOsTarget, Platform, Juno, TC2, Gem5SimulationPlatform,
AdbConnection, SshConnection, LocalConnection,
Gem5Connection)
TelnetConnection, Gem5Connection)
from devlib.target import DEFAULT_SHELL_PROMPT
from devlib.utils.ssh import DEFAULT_SSH_SUDO_COMMAND
from wa.framework import pluginloader
from wa.framework.configuration.core import get_config_point_map
@ -69,11 +68,14 @@ def instantiate_target(tdesc, params, connect=None, extra_platform_params=None):
for name, value in params.items():
if name in target_params:
tp[name] = value
if not target_params[name].deprecated:
tp[name] = value
elif name in platform_params:
pp[name] = value
if not platform_params[name].deprecated:
pp[name] = value
elif name in conn_params:
cp[name] = value
if not conn_params[name].deprecated:
cp[name] = value
elif name in assistant_params:
pass
else:
@ -129,7 +131,8 @@ class TargetDescription(object):
config = {}
for pattr in param_attrs:
for p in getattr(self, pattr):
config[p.name] = p.default
if not p.deprecated:
config[p.name] = p.default
return config
def _set(self, attr, vals):
@ -262,7 +265,6 @@ VEXPRESS_PLATFORM_PARAMS = [
``dtr``: toggle the DTR line on the serial connection
``reboottxt``: create ``reboot.txt`` in the root of the VEMSD mount.
'''),
]
@ -300,6 +302,37 @@ CONNECTION_PARAMS = {
description="""
ADB server to connect to.
"""),
Parameter(
'poll_transfers', kind=bool,
default=True,
description="""
File transfers will be polled for activity. Inactive
file transfers are cancelled.
"""),
Parameter(
'start_transfer_poll_delay', kind=int,
default=30,
description="""
How long to wait (s) for a transfer to complete
before polling transfer activity. Requires ``poll_transfers``
to be set.
"""),
Parameter(
'total_transfer_timeout', kind=int,
default=3600,
description="""
The total time to elapse before a transfer is cancelled, regardless
of its activity. Requires ``poll_transfers`` to be set.
"""),
Parameter(
'transfer_poll_period', kind=int,
default=30,
description="""
The period at which transfer activity is sampled. Requires
                ``poll_transfers`` to be set. If this is set too low, the
                destination size may appear unchanged over one or more sample
                periods, causing the transfer to be cancelled spuriously.
"""),
],
SshConnection: [
Parameter(
@ -316,6 +349,8 @@ CONNECTION_PARAMS = {
'password', kind=str,
description="""
Password to use.
                (When connecting to a passwordless machine, set this to an
                empty string to prevent attempting SSH key authentication.)
"""),
Parameter(
'keyfile', kind=str,
@ -324,14 +359,101 @@ CONNECTION_PARAMS = {
"""),
Parameter(
'port', kind=int,
default=22,
description="""
                The port on which the SSH server is listening on the target.
"""),
Parameter(
'telnet', kind=bool, default=False,
'strict_host_check', kind=bool, default=False,
description="""
If set to ``True``, a Telnet connection, rather than
SSH will be used.
Specify whether devices should be connected to if
                their host key does not match the system's known host keys. """),
Parameter(
'sudo_cmd', kind=str,
default=DEFAULT_SSH_SUDO_COMMAND,
description="""
Sudo command to use. Must have ``{}`` specified
                somewhere in the string to indicate where the command
to be run via sudo is to go.
"""),
Parameter(
'use_scp', kind=bool,
default=False,
description="""
Allow using SCP as method of file transfer instead
of the default SFTP.
"""),
Parameter(
'poll_transfers', kind=bool,
default=True,
description="""
File transfers will be polled for activity. Inactive
file transfers are cancelled.
"""),
Parameter(
'start_transfer_poll_delay', kind=int,
default=30,
description="""
How long to wait (s) for a transfer to complete
before polling transfer activity. Requires ``poll_transfers``
to be set.
"""),
Parameter(
'total_transfer_timeout', kind=int,
default=3600,
description="""
The total time to elapse before a transfer is cancelled, regardless
of its activity. Requires ``poll_transfers`` to be set.
"""),
Parameter(
'transfer_poll_period', kind=int,
default=30,
description="""
The period at which transfer activity is sampled. Requires
                ``poll_transfers`` to be set. If this is set too low, the
                destination size may appear unchanged over one or more sample
                periods, causing the transfer to be cancelled spuriously.
"""),
# Deprecated Parameters
Parameter(
'telnet', kind=str,
description="""
                If set to ``True``, a Telnet connection, rather than SSH, will be used.
""",
deprecated=True),
Parameter(
'password_prompt', kind=str,
description="""
                Password prompt to expect.
""",
deprecated=True),
Parameter(
'original_prompt', kind=str,
description="""
Original shell prompt to expect.
""",
deprecated=True),
],
TelnetConnection: [
Parameter(
'host', kind=str, mandatory=True,
description="""
Host name or IP address of the target.
"""),
Parameter(
'username', kind=str, mandatory=True,
description="""
                User name to connect with.
"""),
Parameter(
'password', kind=str,
description="""
Password to use.
"""),
Parameter(
'port', kind=int,
description="""
                The port on which the Telnet server is listening on the target.
"""),
Parameter(
'password_prompt', kind=str,
@ -411,16 +533,16 @@ CONNECTION_PARAMS['ChromeOsConnection'] = \
CONNECTION_PARAMS[AdbConnection] + CONNECTION_PARAMS[SshConnection]
# name --> ((target_class, conn_class), params_list, defaults)
# name --> ((target_class, conn_class, unsupported_platforms), params_list, defaults)
TARGETS = {
'linux': ((LinuxTarget, SshConnection), COMMON_TARGET_PARAMS, None),
'android': ((AndroidTarget, AdbConnection), COMMON_TARGET_PARAMS +
'linux': ((LinuxTarget, SshConnection, []), COMMON_TARGET_PARAMS, None),
'android': ((AndroidTarget, AdbConnection, []), COMMON_TARGET_PARAMS +
[Parameter('package_data_directory', kind=str, default='/data/data',
description='''
Directory containing Android data
'''),
], None),
'chromeos': ((ChromeOsTarget, 'ChromeOsConnection'), COMMON_TARGET_PARAMS +
'chromeos': ((ChromeOsTarget, 'ChromeOsConnection', []), COMMON_TARGET_PARAMS +
[Parameter('package_data_directory', kind=str, default='/data/data',
description='''
Directory containing Android data
@ -441,7 +563,8 @@ TARGETS = {
the need for privilege elevation.
'''),
], None),
'local': ((LocalLinuxTarget, LocalConnection), COMMON_TARGET_PARAMS, None),
'local': ((LocalLinuxTarget, LocalConnection, [Juno, Gem5SimulationPlatform, TC2]),
COMMON_TARGET_PARAMS, None),
}
# name --> assistant
@ -452,31 +575,87 @@ ASSISTANTS = {
'chromeos': ChromeOsAssistant
}
# name --> ((platform_class, conn_class), params_list, defaults, target_defaults)
# Platform specific parameter overrides.
JUNO_PLATFORM_OVERRIDES = [
Parameter('baudrate', kind=int, default=115200,
description='''
Baud rate for the serial connection.
'''),
Parameter('vemsd_mount', kind=str, default='/media/JUNO',
description='''
VExpress MicroSD card mount location. This is a MicroSD card in
the VExpress device that is mounted on the host via USB. The card
contains configuration files for the platform and firmware and
kernel images to be flashed.
'''),
Parameter('bootloader', kind=str, default='u-boot',
allowed_values=['uefi', 'uefi-shell', 'u-boot', 'bootmon'],
description='''
Selects the bootloader mechanism used by the board. Depending on
              firmware version, a number of possible boot mechanisms may be used.
Please see ``devlib`` documentation for descriptions.
'''),
Parameter('hard_reset_method', kind=str, default='dtr',
allowed_values=['dtr', 'reboottxt'],
description='''
              There are a couple of ways to reset a VersatileExpress board if the
software running on the board becomes unresponsive. Both require
configuration to be enabled (please see ``devlib`` documentation).
``dtr``: toggle the DTR line on the serial connection
``reboottxt``: create ``reboot.txt`` in the root of the VEMSD mount.
'''),
]
TC2_PLATFORM_OVERRIDES = [
Parameter('baudrate', kind=int, default=38400,
description='''
Baud rate for the serial connection.
'''),
Parameter('vemsd_mount', kind=str, default='/media/VEMSD',
description='''
VExpress MicroSD card mount location. This is a MicroSD card in
the VExpress device that is mounted on the host via USB. The card
contains configuration files for the platform and firmware and
kernel images to be flashed.
'''),
Parameter('bootloader', kind=str, default='bootmon',
allowed_values=['uefi', 'uefi-shell', 'u-boot', 'bootmon'],
description='''
Selects the bootloader mechanism used by the board. Depending on
              firmware version, a number of possible boot mechanisms may be used.
Please see ``devlib`` documentation for descriptions.
'''),
Parameter('hard_reset_method', kind=str, default='reboottxt',
allowed_values=['dtr', 'reboottxt'],
description='''
              There are a couple of ways to reset a VersatileExpress board if the
software running on the board becomes unresponsive. Both require
configuration to be enabled (please see ``devlib`` documentation).
``dtr``: toggle the DTR line on the serial connection
``reboottxt``: create ``reboot.txt`` in the root of the VEMSD mount.
'''),
]
# name --> ((platform_class, conn_class, conn_overrides), params_list, defaults, target_overrides)
# Note: normally, connection is defined by the Target name, but
# platforms may choose to override it
# Note: the target_defaults allows you to override common target_params for a
# Note: the target_overrides allows you to override common target_params for a
# particular platform. Parameters you can override are in COMMON_TARGET_PARAMS
# Example of overriding one of the target parameters: Replace last None with:
# {'shell_prompt': CUSTOM__SHELL_PROMPT}
# Example of overriding one of the target parameters: Replace last `None` with
# a list of `Parameter` objects to be used instead.
PLATFORMS = {
'generic': ((Platform, None), COMMON_PLATFORM_PARAMS, None, None),
'juno': ((Juno, None), COMMON_PLATFORM_PARAMS + VEXPRESS_PLATFORM_PARAMS,
{
'vemsd_mount': '/media/JUNO',
'baudrate': 115200,
'bootloader': 'u-boot',
'hard_reset_method': 'dtr',
},
None),
'tc2': ((TC2, None), COMMON_PLATFORM_PARAMS + VEXPRESS_PLATFORM_PARAMS,
{
'vemsd_mount': '/media/VEMSD',
'baudrate': 38400,
'bootloader': 'bootmon',
'hard_reset_method': 'reboottxt',
}, None),
'gem5': ((Gem5SimulationPlatform, Gem5Connection), GEM5_PLATFORM_PARAMS, None, None),
'generic': ((Platform, None, None), COMMON_PLATFORM_PARAMS, None, None),
'juno': ((Juno, None, [
Parameter('host', kind=str, mandatory=False,
description="Host name or IP address of the target."),
]
), COMMON_PLATFORM_PARAMS + VEXPRESS_PLATFORM_PARAMS, JUNO_PLATFORM_OVERRIDES, None),
'tc2': ((TC2, None, None), COMMON_PLATFORM_PARAMS + VEXPRESS_PLATFORM_PARAMS,
TC2_PLATFORM_OVERRIDES, None),
'gem5': ((Gem5SimulationPlatform, Gem5Connection, None), GEM5_PLATFORM_PARAMS, None, None),
}
@ -496,16 +675,17 @@ class DefaultTargetDescriptor(TargetDescriptor):
# pylint: disable=attribute-defined-outside-init,too-many-locals
result = []
for target_name, target_tuple in TARGETS.items():
(target, conn), target_params = self._get_item(target_tuple)
(target, conn, unsupported_platforms), target_params = self._get_item(target_tuple)
assistant = ASSISTANTS[target_name]
conn_params = CONNECTION_PARAMS[conn]
for platform_name, platform_tuple in PLATFORMS.items():
platform_target_defaults = platform_tuple[-1]
platform_tuple = platform_tuple[0:-1]
(platform, plat_conn), platform_params = self._get_item(platform_tuple)
(platform, plat_conn, conn_defaults), platform_params = self._get_item(platform_tuple)
if platform in unsupported_platforms:
continue
# Add target defaults specified in the Platform tuple
target_params = self._apply_param_defaults(target_params,
platform_target_defaults)
target_params = self._override_params(target_params, platform_target_defaults)
name = '{}_{}'.format(platform_name, target_name)
td = TargetDescription(name, self)
td.target = target
@ -517,31 +697,31 @@ class DefaultTargetDescriptor(TargetDescriptor):
if plat_conn:
td.conn = plat_conn
td.conn_params = CONNECTION_PARAMS[plat_conn]
td.conn_params = self._override_params(CONNECTION_PARAMS[plat_conn],
conn_defaults)
else:
td.conn = conn
td.conn_params = conn_params
td.conn_params = self._override_params(conn_params, conn_defaults)
result.append(td)
return result
def _apply_param_defaults(self, params, defaults): # pylint: disable=no-self-use
'''Adds parameters in the defaults dict to params list.
Return updated params as a list (idempotent function).'''
if not defaults:
def _override_params(self, params, overrides): # pylint: disable=no-self-use
        '''Return a new list of parameters, replacing any parameter with the
        corresponding parameter in ``overrides``.'''
if not overrides:
return params
param_map = OrderedDict((p.name, copy(p)) for p in params)
for name, value in defaults.items():
if name not in param_map:
raise ValueError('Unexpected default "{}"'.format(name))
param_map[name].default = value
# Convert the OrderedDict to a list to return the same type
param_map = {p.name: p for p in params}
for override in overrides:
if override.name in param_map:
param_map[override.name] = override
        # Return the list of overridden parameters
return list(param_map.values())
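A usage sketch of the merge above, with minimal stand-ins (only the name attribute matters to the merge):

# Stand-in Parameter objects; demonstrates the override-by-name merge above.
class P:
    def __init__(self, name, default=None):
        self.name, self.default = name, default

params = [P('baudrate', 115200), P('vemsd_mount', '/media/JUNO')]
overrides = [P('baudrate', 38400)]

param_map = {p.name: p for p in params}
for override in overrides:
    if override.name in param_map:
        param_map[override.name] = override
print([(p.name, p.default) for p in param_map.values()])
# [('baudrate', 38400), ('vemsd_mount', '/media/JUNO')]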
def _get_item(self, item_tuple):
cls, params, defaults = item_tuple
updated_params = self._apply_param_defaults(params, defaults)
return cls, updated_params
cls_tuple, params, defaults = item_tuple
updated_params = self._override_params(params, defaults)
return cls_tuple, updated_params
_adhoc_target_descriptions = []
@ -584,7 +764,7 @@ def _get_target_defaults(target):
def add_description_for_target(target, description=None, **kwargs):
(base_name, ((_, base_conn), base_params, _)) = _get_target_defaults(target)
(base_name, ((_, base_conn, _), base_params, _)) = _get_target_defaults(target)
if 'target_params' not in kwargs:
kwargs['target_params'] = base_params
@ -592,7 +772,7 @@ def add_description_for_target(target, description=None, **kwargs):
if 'platform' not in kwargs:
kwargs['platform'] = Platform
if 'platform_params' not in kwargs:
for (plat, conn), params, _, _ in PLATFORMS.values():
for (plat, conn, _), params, _, _ in PLATFORMS.values():
if plat == kwargs['platform']:
kwargs['platform_params'] = params
if conn is not None and kwargs['conn'] is None:

@ -23,6 +23,7 @@ from devlib.utils.android import AndroidProperties
from wa.framework.configuration.core import settings
from wa.framework.exception import ConfigError
from wa.utils.serializer import read_pod, write_pod, Podable
from wa.utils.misc import atomic_write_path
def cpuinfo_from_pod(pod):
@ -53,9 +54,9 @@ def kernel_version_from_pod(pod):
def kernel_config_from_pod(pod):
config = KernelConfig('')
config._config = pod['kernel_config']
config.typed_config._config = pod['kernel_config']
lines = []
for key, value in config._config.items():
for key, value in config.items():
if value == 'n':
lines.append('# {} is not set'.format(key))
else:
@ -221,6 +222,7 @@ class CpuInfo(Podable):
def get_target_info(target):
info = TargetInfo()
info.target = target.__class__.__name__
info.modules = target.modules
info.os = target.os
info.os_version = target.os_version
info.system_id = target.system_id
@ -285,11 +287,13 @@ def read_target_info_cache():
def write_target_info_cache(cache):
if not os.path.exists(settings.cache_directory):
os.makedirs(settings.cache_directory)
write_pod(cache, settings.target_info_cache_file)
with atomic_write_path(settings.target_info_cache_file) as at_path:
write_pod(cache, at_path)
def get_target_info_from_cache(system_id):
cache = read_target_info_cache()
def get_target_info_from_cache(system_id, cache=None):
if cache is None:
cache = read_target_info_cache()
pod = cache.get(system_id, None)
if not pod:
@ -303,8 +307,9 @@ def get_target_info_from_cache(system_id):
return TargetInfo.from_pod(pod)
def cache_target_info(target_info, overwrite=False):
cache = read_target_info_cache()
def cache_target_info(target_info, overwrite=False, cache=None):
if cache is None:
cache = read_target_info_cache()
if target_info.system_id in cache and not overwrite:
raise ValueError('TargetInfo for {} is already in cache.'.format(target_info.system_id))
cache[target_info.system_id] = target_info.to_pod()
@ -313,12 +318,13 @@ def cache_target_info(target_info, overwrite=False):
class TargetInfo(Podable):
_pod_serialization_version = 2
_pod_serialization_version = 5
@staticmethod
def from_pod(pod):
instance = super(TargetInfo, TargetInfo).from_pod(pod)
instance.target = pod['target']
instance.modules = pod['modules']
instance.abi = pod['abi']
instance.cpus = [CpuInfo.from_pod(c) for c in pod['cpus']]
instance.os = pod['os']
@ -343,6 +349,7 @@ class TargetInfo(Podable):
def __init__(self):
super(TargetInfo, self).__init__()
self.target = None
self.modules = []
self.cpus = []
self.os = None
self.os_version = None
@ -362,6 +369,7 @@ class TargetInfo(Podable):
def to_pod(self):
pod = super(TargetInfo, self).to_pod()
pod['target'] = self.target
pod['modules'] = self.modules
pod['abi'] = self.abi
pod['cpus'] = [c.to_pod() for c in self.cpus]
pod['os'] = self.os
@ -401,3 +409,20 @@ class TargetInfo(Podable):
pod['page_size_kb'] = pod.get('page_size_kb')
pod['_pod_version'] = pod.get('format_version', 0)
return pod
@staticmethod
def _pod_upgrade_v3(pod):
config = {}
for key, value in pod['kernel_config'].items():
config[key.upper()] = value
pod['kernel_config'] = config
return pod
@staticmethod
def _pod_upgrade_v4(pod):
return TargetInfo._pod_upgrade_v3(pod)
@staticmethod
def _pod_upgrade_v5(pod):
pod['modules'] = pod.get('modules') or []
return pod
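The upgrade methods compose: a pod stored at an older serialization version passes through each intermediate _pod_upgrade_vN in turn. A sketch of that dispatch, assuming (as WA's Podable base class behaves) sequential application:

# Sketch of sequential pod upgrades; assumes each _pod_upgrade_vN is applied
# in order from the stored version up to the class's current version.
def upgrade_pod(pod, cls):
    version = pod.get('_pod_serialization_version', 0)
    while version < cls._pod_serialization_version:
        version += 1
        pod = getattr(cls, '_pod_upgrade_v{}'.format(version))(pod)
    pod['_pod_serialization_version'] = version
    return pod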

@ -24,8 +24,10 @@ from wa.framework.plugin import Parameter
from wa.framework.target.descriptor import (get_target_description,
instantiate_target,
instantiate_assistant)
from wa.framework.target.info import get_target_info, get_target_info_from_cache, cache_target_info
from wa.framework.target.info import (get_target_info, get_target_info_from_cache,
cache_target_info, read_target_info_cache)
from wa.framework.target.runtime_parameter_manager import RuntimeParameterManager
from wa.utils.types import module_name_set
class TargetManager(object):
@ -55,6 +57,7 @@ class TargetManager(object):
def initialize(self):
self._init_target()
self.assistant.initialize()
        # If the target supports hotplugging, online all cpus before performing
        # discovery, and restore the original configuration once complete.
@ -75,6 +78,8 @@ class TargetManager(object):
def finalize(self):
if not self.target:
return
if self.assistant:
self.assistant.finalize()
if self.disconnect or isinstance(self.target.platform, Gem5SimulationPlatform):
self.logger.info('Disconnecting from the device')
with signal.wrap('TARGET_DISCONNECT'):
@ -91,10 +96,20 @@ class TargetManager(object):
@memoized
def get_target_info(self):
info = get_target_info_from_cache(self.target.system_id)
cache = read_target_info_cache()
info = get_target_info_from_cache(self.target.system_id, cache=cache)
if info is None:
info = get_target_info(self.target)
cache_target_info(info)
cache_target_info(info, cache=cache)
else:
            # If the module configuration has changed from when the target info
            # was previously cached, additional info may be available, so the
            # cache should be re-generated.
if module_name_set(info.modules) != module_name_set(self.target.modules):
info = get_target_info(self.target)
cache_target_info(info, overwrite=True, cache=cache)
return info
def reboot(self, context, hard=False):

@ -694,7 +694,7 @@ class CpufreqRuntimeConfig(RuntimeConfig):
else:
common_freqs = common_freqs.intersection(self.supported_cpu_freqs.get(cpu) or set())
all_freqs = all_freqs.union(self.supported_cpu_freqs.get(cpu) or set())
common_gov = common_gov.intersection(self.supported_cpu_governors.get(cpu))
common_gov = common_gov.intersection(self.supported_cpu_governors.get(cpu) or set())
return all_freqs, common_freqs, common_gov
@ -732,7 +732,7 @@ class IdleStateValue(object):
'''Checks passed state and converts to its ID'''
value = caseless_string(value)
for s_id, s_name, s_desc in self.values:
if value == s_id or value == s_name or value == s_desc:
if value in (s_id, s_name, s_desc):
return s_id
msg = 'Invalid IdleState: "{}"; Must be in {}'
raise ValueError(msg.format(value, self.values))

@ -1,11 +1,11 @@
apply plugin: 'com.android.library'
android {
compileSdkVersion 25
buildToolsVersion '25.0.3'
compileSdkVersion 28
buildToolsVersion '28.0.3'
defaultConfig {
minSdkVersion 18
targetSdkVersion 25
targetSdkVersion 28
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
}

@ -45,7 +45,7 @@ public class BaseUiAutomation {
public enum FindByCriteria { BY_ID, BY_TEXT, BY_DESC };
public enum Direction { UP, DOWN, LEFT, RIGHT, NULL };
public enum ScreenOrientation { RIGHT, NATURAL, LEFT };
public enum ScreenOrientation { RIGHT, NATURAL, LEFT, PORTRAIT, LANDSCAPE };
public enum PinchType { IN, OUT, NULL };
// Time in milliseconds
@ -176,6 +176,8 @@ public class BaseUiAutomation {
}
public void setScreenOrientation(ScreenOrientation orientation) throws Exception {
int width = mDevice.getDisplayWidth();
int height = mDevice.getDisplayHeight();
switch (orientation) {
case RIGHT:
mDevice.setOrientationRight();
@ -186,6 +188,30 @@ public class BaseUiAutomation {
case LEFT:
mDevice.setOrientationLeft();
break;
case LANDSCAPE:
if (mDevice.isNaturalOrientation()){
if (height > width){
mDevice.setOrientationRight();
}
}
else {
if (height > width){
mDevice.setOrientationNatural();
}
}
break;
case PORTRAIT:
if (mDevice.isNaturalOrientation()){
if (height < width){
mDevice.setOrientationRight();
}
}
else {
if (height < width){
mDevice.setOrientationNatural();
}
}
break;
default:
throw new Exception("No orientation specified");
}
@ -547,9 +573,29 @@ public class BaseUiAutomation {
}
}
// If an app is not designed to run on the latest version of android
// (currently Q), an additional screen can pop up asking to confirm permissions.
public void dismissAndroidPermissionPopup() throws Exception {
UiObject permissionAccess =
mDevice.findObject(new UiSelector().textMatches(
".*Choose what to allow .* to access"));
UiObject continueButton =
mDevice.findObject(new UiSelector().resourceId("com.android.permissioncontroller:id/continue_button")
.textContains("Continue"));
if (permissionAccess.exists() && continueButton.exists()) {
continueButton.click();
}
}
// If an app is not designed to run on the latest version of android
// (currently Q), dismiss the warning popup if present.
public void dismissAndroidVersionPopup() throws Exception {
// Ensure we have dismissed any permission screens before looking for the version popup
dismissAndroidPermissionPopup();
UiObject warningText =
mDevice.findObject(new UiSelector().textContains(
"This app was built for an older version of Android"));
@ -562,6 +608,29 @@ public class BaseUiAutomation {
}
// If Chrome is a fresh install then these popups may be presented
// dismiss them if visible.
public void dismissChromePopup() throws Exception {
UiObject accept =
mDevice.findObject(new UiSelector().resourceId("com.android.chrome:id/terms_accept")
.className("android.widget.Button"));
if (accept.waitForExists(3000)){
accept.click();
UiObject negative =
mDevice.findObject(new UiSelector().resourceId("com.android.chrome:id/negative_button")
.className("android.widget.Button"));
if (negative.waitForExists(10000)) {
negative.click();
}
}
UiObject lite =
mDevice.findObject(new UiSelector().resourceId("com.android.chrome:id/button_secondary")
.className("android.widget.Button"));
if (lite.exists()){
lite.click();
}
}
// Override getParams function to decode a url encoded parameter bundle before
// passing it to workloads.
public Bundle getParams() {

Binary file not shown.

@ -19,15 +19,23 @@ from collections import namedtuple
from subprocess import Popen, PIPE
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision'])
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev'])
version = VersionTuple(3, 1, 1)
version = VersionTuple(3, 3, 0, '')
required_devlib_version = VersionTuple(1, 3, 0, '')
def format_version(v):
version_string = '{}.{}.{}'.format(
v.major, v.minor, v.revision)
if v.dev:
version_string += '.{}'.format(v.dev)
return version_string
def get_wa_version():
version_string = '{}.{}.{}'.format(
version.major, version.minor, version.revision)
return version_string
return format_version(version)
def get_wa_version_with_commit():

@ -1,4 +1,4 @@
# Copyright 2014-2018 ARM Limited
# Copyright 2014-2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -14,17 +14,25 @@
#
import logging
import os
import threading
import time
from devlib.utils.android import ApkInfo
try:
from shlex import quote
except ImportError:
from pipes import quote
from wa.utils.android import get_cacheable_apk_info
from wa.framework.plugin import TargetedPlugin, Parameter
from wa.framework.resource import (ApkFile, ReventFile,
File, loose_version_matching)
File, loose_version_matching,
range_version_matching)
from wa.framework.exception import WorkloadError, ConfigError
from wa.utils.types import ParameterDict
from wa.utils.types import ParameterDict, list_or_string, version_tuple
from wa.utils.revent import ReventRecorder
from wa.utils.exec_control import once_per_instance
from wa.utils.misc import atomic_write_path
class Workload(TargetedPlugin):
@ -37,14 +45,12 @@ class Workload(TargetedPlugin):
kind = 'workload'
parameters = [
Parameter('cleanup_assets', kind=bool,
global_alias='cleanup_assets',
aliases=['clean_up'],
Parameter('uninstall', kind=bool,
default=True,
description="""
If ``True``, if assets are deployed as part of the workload they
will be removed again from the device as part of finalize.
""")
If ``True``, executables that are installed to the device
as part of the workload will be uninstalled again.
"""),
]
# Set this to True to mark that this workload poses a risk of exposing
@ -73,7 +79,7 @@ class Workload(TargetedPlugin):
supported_platforms = getattr(self, 'supported_platforms', [])
if supported_platforms and self.target.os not in supported_platforms:
msg = 'Supported platforms for "{}" are "{}", attemping to run on "{}"'
msg = 'Supported platforms for "{}" are "{}", attempting to run on "{}"'
raise WorkloadError(msg.format(self.name, ' '.join(self.supported_platforms),
self.target.os))
@ -118,13 +124,11 @@ class Workload(TargetedPlugin):
Execute the workload. This is the method that performs the actual
"work" of the workload.
"""
pass
def extract_results(self, context):
"""
Extract results on the target
"""
pass
def update_output(self, context):
"""
@ -132,11 +136,9 @@ class Workload(TargetedPlugin):
metrics and artifacts for this workload iteration.
"""
pass
def teardown(self, context):
""" Perform any final clean up for the Workload. """
pass
@once_per_instance
def finalize(self, context):
@ -174,6 +176,8 @@ class ApkWorkload(Workload):
# Times are in seconds
loading_time = 10
package_names = []
supported_versions = []
activity = None
view = None
clear_data_on_reset = True
@ -198,6 +202,16 @@ class ApkWorkload(Workload):
description="""
The version of the package to be used.
"""),
Parameter('max_version', kind=str,
default=None,
description="""
The maximum version of the package to be used.
"""),
Parameter('min_version', kind=str,
default=None,
description="""
The minimum version of the package to be used.
"""),
Parameter('variant', kind=str,
default=None,
description="""
@ -217,6 +231,7 @@ class ApkWorkload(Workload):
"""),
Parameter('uninstall', kind=bool,
default=False,
override=True,
description="""
If ``True``, the workload's APK will be uninstalled as part of teardown.
"""),
@ -235,6 +250,12 @@ class ApkWorkload(Workload):
will fall back to the version on the target if available. If
``False`` then the version on the target is preferred instead.
"""),
Parameter('view', kind=str, default=None, merge=True,
description="""
Manually override the 'View' of the workload for use with
instruments such as the ``fps`` instrument. If not specified,
a workload dependent 'View' will be automatically generated.
"""),
]
@property
@ -249,22 +270,40 @@ class ApkWorkload(Workload):
raise ConfigError('Target does not appear to support Android')
super(ApkWorkload, self).__init__(target, **kwargs)
if self.activity is not None and '.' not in self.activity:
# If we're receiving just the activity name, it's taken relative to
# the package namespace:
self.activity = '.' + self.activity
self.apk = PackageHandler(self,
package_name=self.package_name,
variant=self.variant,
strict=self.strict,
version=self.version,
version=self.version or self.supported_versions,
force_install=self.force_install,
install_timeout=self.install_timeout,
uninstall=self.uninstall,
exact_abi=self.exact_abi,
prefer_host_package=self.prefer_host_package,
clear_data_on_reset=self.clear_data_on_reset)
clear_data_on_reset=self.clear_data_on_reset,
activity=self.activity,
min_version=self.min_version,
max_version=self.max_version)
def validate(self):
if self.min_version and self.max_version:
if version_tuple(self.min_version) > version_tuple(self.max_version):
msg = 'Cannot specify min version ({}) greater than max version ({})'
raise ConfigError(msg.format(self.min_version, self.max_version))
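# Illustrative sketch (not from the source) of why the check above compares
# version tuples rather than raw strings; _version_tuple is a hypothetical
# stand-in for wa.utils.types.version_tuple:
#
#   def _version_tuple(v):
#       return tuple(int(p) for p in v.split('.') if p.isdigit())
#
#   _version_tuple('4.10.0') > _version_tuple('4.9.1')   # True (numeric)
#   '4.10.0' > '4.9.1'                                   # False (lexicographic)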
@once_per_instance
def initialize(self, context):
super(ApkWorkload, self).initialize(context)
self.apk.initialize(context)
# pylint: disable=access-member-before-definition, attribute-defined-outside-init
if self.version is None:
self.version = self.apk.apk_info.version_name
if self.view is None:
self.view = 'SurfaceView - {}/{}'.format(self.apk.package,
self.apk.activity)
@ -282,7 +321,6 @@ class ApkWorkload(Workload):
Perform the setup necessary to rerun the workload. Only called if
``requires_rerun`` is set.
"""
pass
def teardown(self, context):
super(ApkWorkload, self).teardown(context)
@ -327,7 +365,8 @@ class ApkUIWorkload(ApkWorkload):
@once_per_instance
def finalize(self, context):
super(ApkUIWorkload, self).finalize(context)
self.gui.remove()
if self.cleanup_assets:
self.gui.remove()
class ApkUiautoWorkload(ApkUIWorkload):
@ -365,7 +404,6 @@ class ApkReventWorkload(ApkUIWorkload):
def __init__(self, target, **kwargs):
super(ApkReventWorkload, self).__init__(target, **kwargs)
self.apk = PackageHandler(self)
self.gui = ReventGUI(self, target,
self.setup_timeout,
self.run_timeout,
@ -407,7 +445,8 @@ class UIWorkload(Workload):
@once_per_instance
def finalize(self, context):
super(UIWorkload, self).finalize(context)
self.gui.remove()
if self.cleanup_assets:
self.gui.remove()
class UiautoWorkload(UIWorkload):
@ -479,7 +518,7 @@ class UiAutomatorGUI(object):
def init_resources(self, resolver):
self.uiauto_file = resolver.get(ApkFile(self.owner, uiauto=True))
if not self.uiauto_package:
uiauto_info = ApkInfo(self.uiauto_file)
uiauto_info = get_cacheable_apk_info(self.uiauto_file)
self.uiauto_package = uiauto_info.package
def init_commands(self):
@ -603,12 +642,12 @@ class ReventGUI(object):
if self.revent_teardown_file:
self.revent_recorder.replay(self.on_target_teardown_revent,
timeout=self.teardown_timeout)
def remove(self):
self.target.remove(self.on_target_setup_revent)
self.target.remove(self.on_target_run_revent)
self.target.remove(self.on_target_extract_results_revent)
self.target.remove(self.on_target_teardown_revent)
def remove(self):
self.revent_recorder.remove()
def _check_revent_files(self):
@ -637,18 +676,24 @@ class PackageHandler(object):
@property
def activity(self):
if self._activity:
return self._activity
if self.apk_info is None:
return None
return self.apk_info.activity
# pylint: disable=too-many-locals
def __init__(self, owner, install_timeout=300, version=None, variant=None,
package_name=None, strict=False, force_install=False, uninstall=False,
exact_abi=False, prefer_host_package=True, clear_data_on_reset=True):
exact_abi=False, prefer_host_package=True, clear_data_on_reset=True,
activity=None, min_version=None, max_version=None):
self.logger = logging.getLogger('apk')
self.owner = owner
self.target = self.owner.target
self.install_timeout = install_timeout
self.version = version
self.min_version = min_version
self.max_version = max_version
self.variant = variant
self.package_name = package_name
self.strict = strict
@ -657,6 +702,7 @@ class PackageHandler(object):
self.exact_abi = exact_abi
self.prefer_host_package = prefer_host_package
self.clear_data_on_reset = clear_data_on_reset
self._activity = activity
self.supported_abi = self.target.supported_abi
self.apk_file = None
self.apk_info = None
@ -669,6 +715,7 @@ class PackageHandler(object):
def setup(self, context):
context.update_metadata('app_version', self.apk_info.version_name)
context.update_metadata('app_name', self.apk_info.package)
self.initialize_package(context)
self.start_activity()
self.target.execute('am kill-all') # kill all *background* activities
@ -690,7 +737,7 @@ class PackageHandler(object):
self.resolve_package_from_host(context)
if self.apk_file:
self.apk_info = ApkInfo(self.apk_file)
self.apk_info = get_cacheable_apk_info(self.apk_file)
else:
if self.error_msg:
raise WorkloadError(self.error_msg)
@ -714,7 +761,9 @@ class PackageHandler(object):
version=self.version,
package=self.package_name,
exact_abi=self.exact_abi,
supported_abi=self.supported_abi),
supported_abi=self.supported_abi,
min_version=self.min_version,
max_version=self.max_version),
strict=self.strict)
else:
available_packages = []
@ -724,47 +773,57 @@ class PackageHandler(object):
version=self.version,
package=package,
exact_abi=self.exact_abi,
supported_abi=self.supported_abi),
supported_abi=self.supported_abi,
min_version=self.min_version,
max_version=self.max_version),
strict=self.strict)
if apk_file:
available_packages.append(apk_file)
if len(available_packages) == 1:
self.apk_file = available_packages[0]
elif len(available_packages) > 1:
msg = 'Multiple matching packages found for "{}" on host: {}'
self.error_msg = msg.format(self.owner, available_packages)
self.error_msg = self._get_package_error_msg('host')
def resolve_package_from_target(self): # pylint: disable=too-many-branches
self.logger.debug('Resolving package on target')
found_package = None
if self.package_name:
if not self.target.package_is_installed(self.package_name):
return
else:
installed_versions = [self.package_name]
else:
installed_versions = []
for package in self.owner.package_names:
if self.target.package_is_installed(package):
installed_versions.append(package)
if self.version:
matching_packages = []
for package in installed_versions:
package_version = self.target.get_package_version(package)
if loose_version_matching(self.version, package_version):
if self.version or self.min_version or self.max_version:
matching_packages = []
for package in installed_versions:
package_version = self.target.get_package_version(package)
if self.version:
for v in list_or_string(self.version):
if loose_version_matching(v, package_version):
matching_packages.append(package)
else:
if range_version_matching(package_version, self.min_version,
self.max_version):
matching_packages.append(package)
if len(matching_packages) == 1:
self.package_name = matching_packages[0]
elif len(matching_packages) > 1:
msg = 'Multiple matches for version "{}" found on device.'
self.error_msg = msg.format(self.version)
else:
if len(installed_versions) == 1:
self.package_name = installed_versions[0]
elif len(installed_versions) > 1:
self.error_msg = 'Package version not set and multiple versions found on device.'
if self.package_name:
if len(matching_packages) == 1:
found_package = matching_packages[0]
elif len(matching_packages) > 1:
self.error_msg = self._get_package_error_msg('device')
else:
if len(installed_versions) == 1:
found_package = installed_versions[0]
elif len(installed_versions) > 1:
self.error_msg = 'Package version not set and multiple versions found on device.'
if found_package:
self.logger.debug('Found matching package on target; Pulling to host.')
self.apk_file = self.pull_apk(self.package_name)
self.apk_file = self.pull_apk(found_package)
self.package_name = found_package
def initialize_package(self, context):
installed_version = self.target.get_package_version(self.apk_info.package)
@ -794,11 +853,11 @@ class PackageHandler(object):
self.apk_version = host_version
def start_activity(self):
if not self.apk_info.activity:
if not self.activity:
cmd = 'am start -W {}'.format(self.apk_info.package)
else:
cmd = 'am start -W -n {}/{}'.format(self.apk_info.package,
self.apk_info.activity)
self.activity)
output = self.target.execute(cmd)
if 'Error:' in output:
# this will dismiss any error dialogs
@ -833,12 +892,93 @@ class PackageHandler(object):
message = 'Cannot retrieve "{}" as not installed on Target'
raise WorkloadError(message.format(package))
package_info = self.target.get_package_info(package)
self.target.pull(package_info.apk_path, self.owner.dependencies_directory,
timeout=self.install_timeout)
apk_name = self.target.path.basename(package_info.apk_path)
return os.path.join(self.owner.dependencies_directory, apk_name)
apk_name = self._get_package_name(package_info.apk_path)
host_path = os.path.join(self.owner.dependencies_directory, apk_name)
with atomic_write_path(host_path) as at_path:
self.target.pull(package_info.apk_path, at_path,
timeout=self.install_timeout)
return host_path
def teardown(self):
self.target.execute('am force-stop {}'.format(self.apk_info.package))
if self.uninstall:
self.target.uninstall_package(self.apk_info.package)
def _get_package_name(self, apk_path):
return self.target.path.basename(apk_path)
def _get_package_error_msg(self, location):
if self.version:
msg = 'Multiple matches for "{version}" found on {location}.'
elif self.min_version and self.max_version:
msg = 'Multiple matches between versions "{min_version}" and "{max_version}" found on {location}.'
elif self.max_version:
msg = 'Multiple matches less than or equal to "{max_version}" found on {location}.'
elif self.min_version:
msg = 'Multiple matches greater than or equal to "{min_version}" found on {location}.'
else:
msg = ''
return msg.format(version=self.version, min_version=self.min_version,
max_version=self.max_version, location=location)
class TestPackageHandler(PackageHandler):
"""Class wrapping an APK used through ``am instrument``.
"""
def __init__(self, owner, instrument_args=None, raw_output=False,
instrument_wait=True, no_hidden_api_checks=False,
*args, **kwargs):
if instrument_args is None:
instrument_args = {}
super(TestPackageHandler, self).__init__(owner, *args, **kwargs)
self.raw = raw_output
self.args = instrument_args
self.wait = instrument_wait
self.no_checks = no_hidden_api_checks
self.cmd = ''
self.instrument_thread = None
self._instrument_output = None
def setup(self, context):
self.initialize_package(context)
words = ['am', 'instrument']
if self.raw:
words.append('-r')
if self.wait:
words.append('-w')
if self.no_checks:
words.append('--no-hidden-api-checks')
for k, v in self.args.items():
words.extend(['-e', str(k), str(v)])
words.append(str(self.apk_info.package))
if self.apk_info.activity:
words[-1] += '/{}'.format(self.apk_info.activity)
self.cmd = ' '.join(quote(x) for x in words)
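# For a hypothetical test APK, the line above assembles a command such as:
#   am instrument -w -e iterations 5 com.example.test/android.support.test.runner.AndroidJUnitRunner
# (package, runner and argument names here are illustrative only).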
self.instrument_thread = threading.Thread(target=self._start_instrument)
def start_activity(self):
self.instrument_thread.start()
def wait_instrument_over(self):
self.instrument_thread.join()
if 'Error:' in self._instrument_output:
cmd = 'am force-stop {}'.format(self.apk_info.package)
self.target.execute(cmd)
raise WorkloadError(self._instrument_output)
def _start_instrument(self):
self._instrument_output = self.target.execute(self.cmd)
self.logger.debug(self._instrument_output)
def _get_package_name(self, apk_path):
return 'test_{}'.format(self.target.path.basename(apk_path))
@property
def instrument_output(self):
if self.instrument_thread.is_alive():
self.instrument_thread.join() # writes self._instrument_output
return self._instrument_output

@ -20,6 +20,7 @@ import time
from wa import Instrument, Parameter
from wa.framework.exception import ConfigError, InstrumentError
from wa.framework.instrument import extremely_slow
from wa.utils.types import identifier
class DelayInstrument(Instrument):
@ -32,7 +33,7 @@ class DelayInstrument(Instrument):
The delay may be specified as either a fixed period or a temperature
threshold that must be reached.
Optionally, if an active cooling solution is available on the device tqgitq
Optionally, if an active cooling solution is available on the device to
speed up temperature drop between runs, it may be controlled using this
instrument.
@ -200,16 +201,16 @@ class DelayInstrument(Instrument):
reading = self.target.read_int(self.temperature_file)
def validate(self):
if (self.temperature_between_specs is not None and
self.fixed_between_specs is not None):
if (self.temperature_between_specs is not None
and self.fixed_between_specs is not None):
raise ConfigError('Both fixed delay and thermal threshold specified for specs.')
if (self.temperature_between_jobs is not None and
self.fixed_between_jobs is not None):
if (self.temperature_between_jobs is not None
and self.fixed_between_jobs is not None):
raise ConfigError('Both fixed delay and thermal threshold specified for jobs.')
if (self.temperature_before_start is not None and
self.fixed_before_start is not None):
if (self.temperature_before_start is not None
and self.fixed_before_start is not None):
raise ConfigError('Both fixed delay and thermal threshold specified before start.')
if not any([self.temperature_between_specs, self.fixed_between_specs,
@ -222,7 +223,7 @@ class DelayInstrument(Instrument):
for module in self.active_cooling_modules:
if self.target.has(module):
if not cooling_module:
cooling_module = getattr(self.target, module)
cooling_module = getattr(self.target, identifier(module))
else:
msg = 'Multiple cooling modules found "{}" "{}".'
raise InstrumentError(msg.format(cooling_module.name, module))

@ -144,7 +144,13 @@ class DAQBackend(EnergyInstrumentBackend):
connector on the DAQ (varies between DAQ models). The default
assumes DAQ 6363 and similar with AI channels on connectors
0-7 and 16-23.
""")
"""),
Parameter('keep_raw', kind=bool, default=False,
description="""
If set to ``True``, this will prevent the raw files obtained
from the device before processing from being deleted
(this is mainly used for debugging).
"""),
]
instrument = DaqInstrument
@ -189,6 +195,12 @@ class EnergyProbeBackend(EnergyInstrumentBackend):
description="""
Path to /dev entry for the energy probe (it should be /dev/ttyACMx)
"""),
Parameter('keep_raw', kind=bool, default=False,
description="""
If set to ``True``, this will prevent the raw files obtained
from the device before processing from being deleted
(this is mainly used for debugging).
"""),
]
instrument = EnergyProbeInstrument
@ -224,6 +236,12 @@ class ArmEnergyProbeBackend(EnergyInstrumentBackend):
description="""
Path to config file of the AEP
"""),
Parameter('keep_raw', kind=bool, default=False,
description="""
If set to ``True``, this will prevent the raw files obtained
from the device before processing from being deleted
(this is mainly used for debugging).
"""),
]
instrument = ArmEnergyProbeInstrument
@ -282,11 +300,17 @@ class AcmeCapeBackend(EnergyInstrumentBackend):
description="""
Size of the capture buffer (in KB).
"""),
Parameter('keep_raw', kind=bool, default=False,
description="""
If set to ``True``, this will prevent the raw files obtained
from the device before processing from being deleted
(this is mainly used for debugging).
"""),
]
# pylint: disable=arguments-differ
def get_instruments(self, target, metadir,
iio_capture, host, iio_devices, buffer_size):
iio_capture, host, iio_devices, buffer_size, keep_raw):
#
# Devlib's ACME instrument uses iio-capture under the hood, which can
@ -307,7 +331,7 @@ class AcmeCapeBackend(EnergyInstrumentBackend):
for iio_device in iio_devices:
ret[iio_device] = AcmeCapeInstrument(
target, iio_capture=iio_capture, host=host,
iio_device=iio_device, buffer_size=buffer_size)
iio_device=iio_device, buffer_size=buffer_size, keep_raw=keep_raw)
return ret
@ -510,3 +534,7 @@ class EnergyMeasurement(Instrument):
units = metrics[0].units
value = sum(m.value for m in metrics)
context.add_metric(name, value, units)
def teardown(self, context):
for instrument in self.instruments.values():
instrument.teardown()

@ -164,7 +164,7 @@ class FpsInstrument(Instrument):
os.remove(entry)
if not frame_count.value:
context.add_event('Could not frind frames data in gfxinfo output')
context.add_event('Could not find frames data in gfxinfo output')
context.set_status('PARTIAL')
self.check_for_crash(context, fps.value, frame_count.value,

@ -32,7 +32,6 @@ import tarfile
from subprocess import CalledProcessError
from devlib.exception import TargetError
from devlib.utils.android import ApkInfo
from wa import Instrument, Parameter, very_fast
from wa.framework.exception import ConfigError
@ -42,6 +41,7 @@ from wa.utils.misc import as_relative
from wa.utils.misc import ensure_file_directory_exists as _f
from wa.utils.misc import ensure_directory_exists as _d
from wa.utils.types import list_of_strings
from wa.utils.android import get_cacheable_apk_info
logger = logging.getLogger(__name__)
@ -169,13 +169,19 @@ class SysfsExtractor(Instrument):
for paths in self.device_and_host_paths:
after_dir = paths[self.AFTER_PATH]
dev_dir = paths[self.DEVICE_PATH].strip('*') # remove potential trailing '*'
if (not os.listdir(after_dir) and
self.target.file_exists(dev_dir) and
self.target.list_directory(dev_dir)):
if (not os.listdir(after_dir)
and self.target.file_exists(dev_dir)
and self.target.list_directory(dev_dir)):
self.logger.error('sysfs files were not pulled from the device.')
self.device_and_host_paths.remove(paths) # Path is removed to skip diffing it
for _, before_dir, after_dir, diff_dir in self.device_and_host_paths:
for dev_dir, before_dir, after_dir, diff_dir in self.device_and_host_paths:
diff_sysfs_dirs(before_dir, after_dir, diff_dir)
context.add_artifact('{} [before]'.format(dev_dir), before_dir,
kind='data', classifiers={'stage': 'before'})
context.add_artifact('{} [after]'.format(dev_dir), after_dir,
kind='data', classifiers={'stage': 'after'})
context.add_artifact('{} [diff]'.format(dev_dir), diff_dir,
kind='data', classifiers={'stage': 'diff'})
def teardown(self, context):
self._one_time_setup_done = []
@ -238,7 +244,7 @@ class ApkVersion(Instrument):
def setup(self, context):
if hasattr(context.workload, 'apk_file'):
self.apk_info = ApkInfo(context.workload.apk_file)
self.apk_info = get_cacheable_apk_info(context.workload.apk_file)
else:
self.apk_info = None
@ -276,9 +282,15 @@ class InterruptStatsInstrument(Instrument):
wfh.write(self.target.execute('cat /proc/interrupts'))
def update_output(self, context):
context.add_artifact('interrupts [before]', self.before_file, kind='data',
classifiers={'stage': 'before'})
# If workload execution failed, the after_file may not have been created.
if os.path.isfile(self.after_file):
diff_interrupt_files(self.before_file, self.after_file, _f(self.diff_file))
context.add_artifact('interrupts [after]', self.after_file, kind='data',
classifiers={'stage': 'after'})
context.add_artifact('interrupts [diff]', self.diff_file, kind='data',
classifiers={'stage': 'diff'})
class DynamicFrequencyInstrument(SysfsExtractor):

@ -15,13 +15,14 @@
# pylint: disable=unused-argument
import csv
import os
import re
from devlib.trace.perf import PerfCollector
from devlib.collector.perf import PerfCollector
from wa import Instrument, Parameter
from wa.utils.types import list_or_string, list_of_strs
from wa.utils.types import list_or_string, list_of_strs, numeric
PERF_COUNT_REGEX = re.compile(r'^(CPU\d+)?\s*(\d+)\s*(.*?)\s*(\[\s*\d+\.\d+%\s*\])?\s*$')
@ -31,29 +32,40 @@ class PerfInstrument(Instrument):
name = 'perf'
description = """
Perf is a Linux profiling tool with performance counters.
Simpleperf is an Android profiling tool with performance counters.
It is highly recommended to use perf_type = simpleperf when using this instrument
on android devices, since it recognises android symbols in record mode and is much more stable
when reporting record .data files. For more information, see the simpleperf documentation at:
https://android.googlesource.com/platform/system/extras/+/master/simpleperf/doc/README.md
Performance counters are CPU hardware registers that count hardware events
such as instructions executed, cache-misses suffered, or branches
mispredicted. They form a basis for profiling applications to trace dynamic
control flow and identify hotspots.
pref accepts options and events. If no option is given the default '-a' is
used. For events, the default events are migrations and cs. They both can
be specified in the config file.
perf accepts options and events. If no option is given, the default '-a' is
used. The default events for perf are migrations and cs; the default
events for simpleperf are raw-cpu-cycles, raw-l1-dcache, raw-l1-dcache-refill and raw-instructions-retired.
Both can be specified in the config file.
Events must be provided as a list that contains them and they will look like
this ::
perf_events = ['migrations', 'cs']
(for perf_type = perf) perf_events = ['migrations', 'cs']
(for perf_type = simpleperf) perf_events = ['raw-cpu-cycles', 'raw-l1-dcache']
Events can be obtained by typing the following in the command line on the
device ::
perf list
simpleperf list
Options, on the other hand, can be provided as a single string, as follows ::
perf_options = '-a -i'
perf_options = '--app com.adobe.reader'
Options can be obtained by running the following in the command line ::
@ -61,21 +73,32 @@ class PerfInstrument(Instrument):
"""
parameters = [
Parameter('events', kind=list_of_strs, default=['migrations', 'cs'],
global_alias='perf_events',
constraint=(lambda x: x, 'must not be empty.'),
Parameter('perf_type', kind=str, allowed_values=['perf', 'simpleperf'], default='perf',
global_alias='perf_type', description="""Specifies which type of perf binaries
to install. Use simpleperf for collecting perf data on android systems."""),
Parameter('command', kind=str, default='stat', allowed_values=['stat', 'record'],
global_alias='perf_command', description="""Specifies which perf command to use. In record mode
the report command will also be executed and the results pulled from the target along with the raw data
file."""),
Parameter('events', kind=list_of_strs, global_alias='perf_events',
description="""Specifies the events to be counted."""),
Parameter('optionstring', kind=list_or_string, default='-a',
global_alias='perf_options',
description="""Specifies options to be used for the perf command. This
may be a list of option strings, in which case, multiple instances of perf
will be kicked off -- one for each option string. This may be used to e.g.
collected different events from different big.LITTLE clusters.
collect different events from different big.LITTLE clusters. In order to
profile a particular application process on android with simpleperf, use
the --app option, e.g. --app com.adobe.reader.
"""),
Parameter('report_option_string', kind=str, global_alias='perf_report_options', default=None,
description="""Specifies options to be used to gather report when record command
is used. It's highly recommended to use perf_type simpleperf when running on
android devices as reporting options are unstable with perf"""),
Parameter('labels', kind=list_of_strs, default=None,
global_alias='perf_labels',
description="""Provides labels for pref output. If specified, the number of
labels must match the number of ``optionstring``\ s.
description="""Provides labels for perf/simpleperf output for each optionstring.
If specified, the number of labels must match the number of ``optionstring``\ s.
"""),
Parameter('force_install', kind=bool, default=False,
description="""
@ -86,15 +109,21 @@ class PerfInstrument(Instrument):
def __init__(self, target, **kwargs):
super(PerfInstrument, self).__init__(target, **kwargs)
self.collector = None
self.outdir = None
def initialize(self, context):
self.collector = PerfCollector(self.target,
self.perf_type,
self.command,
self.events,
self.optionstring,
self.report_option_string,
self.labels,
self.force_install)
def setup(self, context):
self.outdir = os.path.join(context.output_directory, self.perf_type)
self.collector.set_output(self.outdir)
self.collector.reset()
def start(self, context):
@ -105,12 +134,32 @@ class PerfInstrument(Instrument):
def update_output(self, context):
self.logger.info('Extracting reports from target...')
outdir = os.path.join(context.output_directory, 'perf')
self.collector.get_trace(outdir)
self.collector.get_data()
for host_file in os.listdir(outdir):
if self.perf_type == 'perf':
self._process_perf_output(context)
else:
self._process_simpleperf_output(context)
def teardown(self, context):
self.collector.reset()
def _process_perf_output(self, context):
if self.command == 'stat':
self._process_perf_stat_output(context)
elif self.command == 'record':
self._process_perf_record_output(context)
def _process_simpleperf_output(self, context):
if self.command == 'stat':
self._process_simpleperf_stat_output(context)
elif self.command == 'record':
self._process_simpleperf_record_output(context)
def _process_perf_stat_output(self, context):
for host_file in os.listdir(self.outdir):
label = host_file.split('.out')[0]
host_file_path = os.path.join(outdir, host_file)
host_file_path = os.path.join(self.outdir, host_file)
context.add_artifact(label, host_file_path, 'raw')
with open(host_file_path) as fh:
in_results_section = False
@ -118,21 +167,150 @@ class PerfInstrument(Instrument):
if 'Performance counter stats' in line:
in_results_section = True
next(fh) # skip the following blank line
if in_results_section:
if not line.strip(): # blank line
in_results_section = False
break
else:
line = line.split('#')[0] # comment
match = PERF_COUNT_REGEX.search(line)
if match:
classifiers = {}
cpu = match.group(1)
if cpu is not None:
classifiers['cpu'] = int(cpu.replace('CPU', ''))
count = int(match.group(2))
metric = '{}_{}'.format(label, match.group(3))
context.add_metric(metric, count, classifiers=classifiers)
if not in_results_section:
continue
if not line.strip(): # blank line
in_results_section = False
break
else:
self._add_perf_stat_metric(line, label, context)
def teardown(self, context):
self.collector.reset()
@staticmethod
def _add_perf_stat_metric(line, label, context):
line = line.split('#')[0] # comment
match = PERF_COUNT_REGEX.search(line)
if not match:
return
classifiers = {}
cpu = match.group(1)
if cpu is not None:
classifiers['cpu'] = int(cpu.replace('CPU', ''))
count = int(match.group(2))
metric = '{}_{}'.format(label, match.group(3))
context.add_metric(metric, count, classifiers=classifiers)
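# Illustrative sketch (not from the source): a made-up per-CPU line from
# `perf stat` output and what PERF_COUNT_REGEX extracts from it:
#
#   'CPU0    1894    migrations'
#       -> group(1)='CPU0', group(2)='1894', group(3)='migrations'
#
# yielding the metric '<label>_migrations' with count 1894 and a
# {'cpu': 0} classifier.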
def _process_perf_record_output(self, context):
for host_file in os.listdir(self.outdir):
label, ext = os.path.splitext(host_file)
context.add_artifact(label, os.path.join(self.outdir, host_file), 'raw')
column_headers = []
column_header_indeces = []
event_type = ''
if ext == '.rpt':
with open(os.path.join(self.outdir, host_file)) as fh:
for line in fh:
words = line.split()
if not words:
continue
event_type = self._get_report_event_type(words, event_type)
column_headers = self._get_report_column_headers(column_headers, words, 'perf')
for column_header in column_headers:
column_header_indeces.append(line.find(column_header))
self._add_report_metric(column_headers,
column_header_indeces,
line,
words,
context,
event_type,
label)
@staticmethod
def _get_report_event_type(words, event_type):
if words[0] != '#':
return event_type
if len(words) == 6 and words[4] == 'event':
event_type = words[5]
event_type = event_type.strip("'")
return event_type
def _process_simpleperf_stat_output(self, context):
labels = []
for host_file in os.listdir(self.outdir):
labels.append(host_file.split('.out')[0])
for opts, label in zip(self.optionstring, labels):
stat_file = os.path.join(self.outdir, '{}{}'.format(label, '.out'))
if '--csv' in opts:
self._process_simpleperf_stat_from_csv(stat_file, context, label)
else:
self._process_simpleperf_stat_from_raw(stat_file, context, label)
@staticmethod
def _process_simpleperf_stat_from_csv(stat_file, context, label):
with open(stat_file) as csv_file:
readCSV = csv.reader(csv_file, delimiter=',')
line_num = 0
for row in readCSV:
if line_num > 0 and 'Total test time' not in row:
classifiers = {'scaled from(%)': row[len(row) - 2].replace('(', '').replace(')', '').replace('%', '')}
context.add_metric('{}_{}'.format(label, row[1]), row[0], 'count', classifiers=classifiers)
line_num += 1
@staticmethod
def _process_simpleperf_stat_from_raw(stat_file, context, label):
with open(stat_file) as fh:
for line in fh:
if '#' in line:
tmp_line = line.split('#')[0]
tmp_line = tmp_line.strip()
count, metric = tmp_line.split(' ')[0], tmp_line.split(' ')[2]
count = int(count.replace(',', ''))
scaled_percentage = line.split('(')[1].strip().replace(')', '').replace('%', '')
scaled_percentage = int(scaled_percentage)
metric = '{}_{}'.format(label, metric)
context.add_metric(metric, count, 'count', classifiers={'scaled from(%)': scaled_percentage})
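# Sketch (made-up numbers, assuming simpleperf's raw stat column spacing)
# of the line format parsed above:
#
#   '1,934,138  cache-misses   # 0.1 M/sec  (82%)'
#       -> count=1934138, metric='<label>_cache-misses', scaled from(%)=82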
def _process_simpleperf_record_output(self, context):
for host_file in os.listdir(self.outdir):
label, ext = os.path.splitext(host_file)
context.add_artifact(label, os.path.join(self.outdir, host_file), 'raw')
if ext != '.rpt':
continue
column_headers = []
column_header_indeces = []
event_type = ''
with open(os.path.join(self.outdir, host_file)) as fh:
for line in fh:
words = line.split()
if not words:
continue
if words[0] == 'Event:':
event_type = words[1]
column_headers = self._get_report_column_headers(column_headers,
words,
'simpleperf')
for column_header in column_headers:
column_header_indeces.append(line.find(column_header))
self._add_report_metric(column_headers,
column_header_indeces,
line,
words,
context,
event_type,
label)
@staticmethod
def _get_report_column_headers(column_headers, words, perf_type):
if 'Overhead' not in words:
return column_headers
if perf_type == 'perf':
words.remove('#')
column_headers = words
# Concatenate the Shared Objects header
if 'Shared' in column_headers:
shared_index = column_headers.index('Shared')
column_headers[shared_index:shared_index + 2] = ['{} {}'.format(column_headers[shared_index],
column_headers[shared_index + 1])]
return column_headers
@staticmethod
def _add_report_metric(column_headers, column_header_indeces, line, words, context, event_type, label):
if '%' not in words[0]:
return
classifiers = {}
for i in range(1, len(column_headers)):
classifiers[column_headers[i]] = line[column_header_indeces[i]:column_header_indeces[i + 1]].strip()
context.add_metric('{}_{}_Overhead'.format(label, event_type),
numeric(words[0].strip('%')),
'percent',
classifiers=classifiers)

Binary file not shown.

Binary file not shown.

@ -196,7 +196,7 @@ int main(int argc, char ** argv) {
strip(buf);
printf(",%s", buf);
buf[0] = '\0'; // "Empty" buffer
memset(buf, 0, sizeof(buf)); // "Empty" buffer
}
printf("\n");
usleep(interval);

@ -0,0 +1,94 @@
# Copyright 2020 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import time
from datetime import datetime, timedelta
import pandas as pd
from wa import Instrument, Parameter, File, InstrumentError
class ProcStatCollector(Instrument):
name = 'proc_stat'
description = '''
Collect CPU load information from /proc/stat.
'''
parameters = [
Parameter('period', int, default=5,
constraint=lambda x: x > 0,
description='''
Time (in seconds) between collections.
'''),
]
def initialize(self, context): # pylint: disable=unused-argument
self.host_script = context.get_resource(File(self, 'gather-load.sh'))
self.target_script = self.target.install(self.host_script)
self.target_output = self.target.get_workpath('proc-stat-raw.csv')
self.stop_file = self.target.get_workpath('proc-stat-stop.signal')
def setup(self, context): # pylint: disable=unused-argument
self.command = '{} sh {} {} {} {} {}'.format(
self.target.busybox,
self.target_script,
self.target.busybox,
self.target_output,
self.period,
self.stop_file,
)
self.target.remove(self.target_output)
self.target.remove(self.stop_file)
def start(self, context): # pylint: disable=unused-argument
self.target.kick_off(self.command)
def stop(self, context): # pylint: disable=unused-argument
self.target.execute('{} touch {}'.format(self.target.busybox, self.stop_file))
def update_output(self, context):
self.logger.debug('Waiting for collector script to terminate...')
self._wait_for_script()
self.logger.debug('Collector script terminated; pulling output.')
host_output = os.path.join(context.output_directory, 'proc-stat-raw.csv')
self.target.pull(self.target_output, host_output)
context.add_artifact('proc-stat-raw', host_output, kind='raw')
df = pd.read_csv(host_output)
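# Each /proc/stat column is a cumulative jiffy counter, so utilisation is
# computed from per-sample deltas: busy = total - idle. With made-up deltas
# of user=60, system=20, idle=100, utilisation for that period is
# (180 - 100) / 180 * 100, i.e. roughly 44.4%.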
no_ts = df[df.columns[1:]]
deltas = (no_ts - no_ts.shift())
total = deltas.sum(axis=1)
util = (total - deltas.idle) / total * 100
out_df = pd.concat([df.timestamp, util], axis=1).dropna()
out_df.columns = ['timestamp', 'cpu_util']
util_file = os.path.join(context.output_directory, 'proc-stat.csv')
out_df.to_csv(util_file, index=False)
context.add_artifact('proc-stat', util_file, kind='data')
def finalize(self, context): # pylint: disable=unused-argument
if self.cleanup_assets and getattr(self, 'target_output', None):
self.target.remove(self.target_output)
self.target.remove(self.target_script)
def _wait_for_script(self):
start_time = datetime.utcnow()
timeout = timedelta(seconds=300)
while self.target.file_exists(self.stop_file):
delta = datetime.utcnow() - start_time
if delta > timeout:
raise InstrumentError('Timed out waiting for /proc/stat collector to terminate.')

@ -0,0 +1,23 @@
#!/bin/sh
BUSYBOX=$1
OUTFILE=$2
PERIOD=$3
STOP_SIGNAL_FILE=$4
if [ "$#" != "4" ]; then
echo "USAGE: gather-load.sh BUSYBOX OUTFILE PERIOD STOP_SIGNAL_FILE"
exit 1
fi
echo "timestamp,user,nice,system,idle,iowait,irq,softirq,steal,guest,guest_nice" > $OUTFILE
while true; do
echo -n $(${BUSYBOX} date -Iseconds) >> $OUTFILE
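# NOTE: /proc/stat pads "cpu" with two spaces, so field 2 of the cut below
# is empty and the sed pipeline emits a leading comma, which joins the
# timestamp to the first counter without an explicit separator.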
${BUSYBOX} cat /proc/stat | ${BUSYBOX} head -n 1 | \
${BUSYBOX} cut -d ' ' -f 2- | ${BUSYBOX} sed 's/ /,/g' >> $OUTFILE
if [ -f $STOP_SIGNAL_FILE ]; then
rm $STOP_SIGNAL_FILE
break
else
sleep $PERIOD
fi
done

@ -15,7 +15,7 @@
import os
from devlib.trace.screencapture import ScreenCaptureCollector
from devlib.collector.screencapture import ScreenCaptureCollector
from wa import Instrument, Parameter
@ -47,8 +47,9 @@ class ScreenCaptureInstrument(Instrument):
output_path = os.path.join(context.output_directory, "screen-capture")
os.mkdir(output_path)
self.collector = ScreenCaptureCollector(self.target,
output_path,
self.period)
self.collector.set_output(output_path)
self.collector.reset()
def start(self, context): # pylint: disable=unused-argument
self.collector.start()

@ -47,35 +47,36 @@ class SerialMon(Instrument):
def __init__(self, target, **kwargs):
super(SerialMon, self).__init__(target, **kwargs)
self._collector = SerialTraceCollector(target, self.serial_port, self.baudrate)
self._collector.reset()
def start_logging(self, context):
def start_logging(self, context, filename="serial.log"):
outpath = os.path.join(context.output_directory, filename)
self._collector.set_output(outpath)
self._collector.reset()
self.logger.debug("Acquiring serial port ({})".format(self.serial_port))
if self._collector.collecting:
self.stop_logging(context)
self._collector.start()
def stop_logging(self, context, filename="serial.log", identifier="job"):
def stop_logging(self, context, identifier="job"):
self.logger.debug("Releasing serial port ({})".format(self.serial_port))
if self._collector.collecting:
self._collector.stop()
outpath = os.path.join(context.output_directory, filename)
self._collector.get_trace(outpath)
context.add_artifact("{}_serial_log".format(identifier),
outpath, kind="log")
data = self._collector.get_data()
for l in data: # noqa: E741
context.add_artifact("{}_serial_log".format(identifier),
l.path, kind="log")
def on_run_start(self, context):
self.start_logging(context)
self.start_logging(context, "preamble_serial.log")
def before_job_queue_execution(self, context):
self.stop_logging(context, "preamble_serial.log", "preamble")
self.stop_logging(context, "preamble")
def after_job_queue_execution(self, context):
self.start_logging(context)
self.start_logging(context, "postamble_serial.log")
def on_run_end(self, context):
self.stop_logging(context, "postamble_serial.log", "postamble")
self.stop_logging(context, "postamble")
def on_job_start(self, context):
self.start_logging(context)

@ -203,7 +203,8 @@ class TraceCmdInstrument(Instrument):
def update_output(self, context): # NOQA pylint: disable=R0912
self.logger.info('Extracting trace from target...')
outfile = os.path.join(context.output_directory, 'trace.dat')
self.collector.get_trace(outfile)
self.collector.set_output(outfile)
self.collector.get_data()
context.add_artifact('trace-cmd-bin', outfile, 'data')
if self.report:
textfile = os.path.join(context.output_directory, 'trace.txt')

@ -94,6 +94,12 @@ class CpuStatesProcessor(OutputProcessor):
if not trace_file:
self.logger.warning('Text trace does not appear to have been generated; skipping this iteration.')
return
if 'cpufreq' not in target_info.modules:
msg = '"cpufreq" module not detected on target, cpu frequency information may be missing.'
self.logger.warning(msg)
if 'cpuidle' not in target_info.modules:
msg = '"cpuidle" module not detected on target, cpu idle information may be missing.'
self.logger.debug(msg)
self.logger.info('Generating power state reports from trace...')
reports = report_power_stats( # pylint: disable=unbalanced-tuple-unpacking
@ -128,8 +134,8 @@ class CpuStatesProcessor(OutputProcessor):
parallel_rows.append([job_id, workload, iteration] + record)
for state in sorted(powerstate_report.state_stats):
stats = powerstate_report.state_stats[state]
powerstate_rows.append([job_id, workload, iteration, state] +
['{:.3f}'.format(s if s is not None else 0)
powerstate_rows.append([job_id, workload, iteration, state]
+ ['{:.3f}'.format(s if s is not None else 0)
for s in stats])
outpath = output.get_path('parallel-stats.csv')

@ -90,8 +90,8 @@ class CsvReportProcessor(OutputProcessor):
outfile = output.get_path('results.csv')
with csvwriter(outfile) as writer:
writer.writerow(['id', 'workload', 'iteration', 'metric', ] +
extra_columns + ['value', 'units'])
writer.writerow(['id', 'workload', 'iteration', 'metric', ]
+ extra_columns + ['value', 'units'])
for o in outputs:
if o.kind == 'job':
@ -106,8 +106,8 @@ class CsvReportProcessor(OutputProcessor):
'Output of kind "{}" unrecognised by csvproc'.format(o.kind))
for metric in o.result.metrics:
row = (header + [metric.name] +
[str(metric.classifiers.get(c, ''))
for c in extra_columns] +
[str(metric.value), metric.units or ''])
row = (header + [metric.name]
+ [str(metric.classifiers.get(c, ''))
for c in extra_columns]
+ [str(metric.value), metric.units or ''])
writer.writerow(row)

@ -16,6 +16,7 @@
import os
import uuid
import collections
import tarfile
try:
import psycopg2
@ -24,6 +25,7 @@ try:
except ImportError as e:
psycopg2 = None
import_error_msg = e.args[0] if e.args else str(e)
from devlib.target import KernelVersion, KernelConfig
from wa import OutputProcessor, Parameter, OutputProcessorError
@ -88,11 +90,11 @@ class PostgresqlResultProcessor(OutputProcessor):
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
"update_run": "UPDATE Runs SET event_summary=%s, status=%s, timestamp=%s, end_time=%s, duration=%s, state=%s WHERE oid=%s;",
"create_job": "INSERT INTO Jobs (oid, run_oid, status, retry, label, job_id, iterations, workload_name, metadata, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s);",
"create_target": "INSERT INTO Targets (oid, run_oid, target, cpus, os, os_version, hostid, hostname, abi, is_rooted, kernel_version, kernel_release, kernel_sha1, kernel_config, sched_features, page_size_kb, screen_resolution, prop, android_id, _pod_version, _pod_serialization_version) "
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_event": "INSERT INTO Events (oid, run_oid, job_oid, timestamp, message, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s",
"create_artifact": "INSERT INTO Artifacts (oid, run_oid, job_oid, name, large_object_uuid, description, kind, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_metric": "INSERT INTO Metrics (oid, run_oid, job_oid, name, value, units, lower_is_better, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s , %s, %s, %s)",
"create_target": "INSERT INTO Targets (oid, run_oid, target, modules, cpus, os, os_version, hostid, hostname, abi, is_rooted, kernel_version, kernel_release, kernel_sha1, kernel_config, sched_features, page_size_kb, system_id, screen_resolution, prop, android_id, _pod_version, _pod_serialization_version) "
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_event": "INSERT INTO Events (oid, run_oid, job_oid, timestamp, message, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s)",
"create_artifact": "INSERT INTO Artifacts (oid, run_oid, job_oid, name, large_object_uuid, description, kind, is_dir, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_metric": "INSERT INTO Metrics (oid, run_oid, job_oid, name, value, units, lower_is_better, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_augmentation": "INSERT INTO Augmentations (oid, run_oid, name) VALUES (%s, %s, %s)",
"create_classifier": "INSERT INTO Classifiers (oid, artifact_oid, metric_oid, job_oid, run_oid, key, value) VALUES (%s, %s, %s, %s, %s, %s, %s)",
"create_parameter": "INSERT INTO Parameters (oid, run_oid, job_oid, augmentation_oid, resource_getter_oid, name, value, value_type, type) "
@ -122,12 +124,10 @@ class PostgresqlResultProcessor(OutputProcessor):
if not psycopg2:
raise ImportError(
'The psycopg2 module is required for the ' +
'Postgresql Output Processor: {}'.format(import_error_msg))
'The psycopg2 module is required for the '
+ 'Postgresql Output Processor: {}'.format(import_error_msg))
# N.B. Typecasters are for postgres->python and adapters the opposite
self.connect_to_database()
self.cursor = self.conn.cursor()
self.verify_schema_versions()
# Register the adapters and typecasters for enum types
self.cursor.execute("SELECT NULL::status_enum")
@ -190,6 +190,7 @@ class PostgresqlResultProcessor(OutputProcessor):
self.target_uuid,
self.run_uuid,
target_pod['target'],
target_pod['modules'],
target_pod['cpus'],
target_pod['os'],
target_pod['os_version'],
@ -205,12 +206,13 @@ class PostgresqlResultProcessor(OutputProcessor):
target_info.kernel_config,
target_pod['sched_features'],
target_pod['page_size_kb'],
target_pod['system_id'],
# Android Specific
target_pod.get('screen_resolution'),
list(target_pod.get('screen_resolution', [])),
target_pod.get('prop'),
target_pod.get('android_id'),
target_pod.get('pod_version'),
target_pod.get('pod_serialization_version'),
target_pod.get('_pod_version'),
target_pod.get('_pod_serialization_version'),
)
)
@ -221,6 +223,8 @@ class PostgresqlResultProcessor(OutputProcessor):
''' Run once for each job to upload information that is
updated on a job by job basis.
'''
# Ensure we're still connected to the database.
self.connect_to_database()
job_uuid = uuid.uuid4()
# Create a new job
self.cursor.execute(
@ -302,8 +306,11 @@ class PostgresqlResultProcessor(OutputProcessor):
''' A final export of the RunOutput that updates existing parameters
and uploads ones which are only generated after jobs have run.
'''
if not self.cursor: # Database did not connect correctly.
if self.cursor is None: # Output processor did not initialise correctly.
return
# Ensure we're still connected to the database.
self.connect_to_database()
# Update the job statuses following completion of the run
for job in run_output.jobs:
job_id = job.id
@ -508,8 +515,10 @@ class PostgresqlResultProcessor(OutputProcessor):
self.conn = connect(dsn=dsn)
except Psycopg2Error as e:
raise OutputProcessorError(
"Database error, if the database doesn't exist, " +
"please use 'wa create database' to create the database: {}".format(e))
"Database error, if the database doesn't exist, "
+ "please use 'wa create database' to create the database: {}".format(e))
self.cursor = self.conn.cursor()
self.verify_schema_versions()
def execute_sql_line_by_line(self, sql):
cursor = self.conn.cursor()
@ -532,7 +541,7 @@ class PostgresqlResultProcessor(OutputProcessor):
'with the create command'
raise OutputProcessorError(msg.format(db_schema_version, local_schema_version))
def _sql_write_lobject(self, source, lobject):
def _sql_write_file_lobject(self, source, lobject):
with open(source) as lobj_file:
lobj_data = lobj_file.read()
if len(lobj_data) > 50000000: # Notify if LO inserts larger than 50MB
@ -540,10 +549,18 @@ class PostgresqlResultProcessor(OutputProcessor):
lobject.write(lobj_data)
self.conn.commit()
def _sql_write_dir_lobject(self, source, lobject):
with tarfile.open(fileobj=lobject, mode='w|gz') as lobj_dir:
lobj_dir.add(source, arcname='.')
self.conn.commit()
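# Directory artifacts are streamed into the large object as a gzipped
# tarball of the directory's contents, so a single lobject row can back
# either a flat file or a whole directory tree.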
def _sql_update_artifact(self, artifact, output_object):
self.logger.debug('Updating artifact: {}'.format(artifact))
lobj = self.conn.lobject(oid=self.artifacts_already_added[artifact], mode='w')
self._sql_write_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
if artifact.is_dir:
self._sql_write_dir_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
else:
self._sql_write_file_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
def _sql_create_artifact(self, artifact, output_object, record_in_added=False, job_uuid=None):
self.logger.debug('Uploading artifact: {}'.format(artifact))
@ -551,8 +568,10 @@ class PostgresqlResultProcessor(OutputProcessor):
lobj = self.conn.lobject()
loid = lobj.oid
large_object_uuid = uuid.uuid4()
self._sql_write_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
if artifact.is_dir:
self._sql_write_dir_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
else:
self._sql_write_file_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
self.cursor.execute(
self.sql_command['create_large_object'],
@ -571,6 +590,7 @@ class PostgresqlResultProcessor(OutputProcessor):
large_object_uuid,
artifact.description,
str(artifact.kind),
artifact.is_dir,
artifact._pod_version, # pylint: disable=protected-access
artifact._pod_serialization_version, # pylint: disable=protected-access
)

@ -14,16 +14,25 @@
#
import logging
import os
from datetime import datetime
from devlib.utils.android import ApkInfo as _ApkInfo
from wa.framework.configuration import settings
from wa.utils.serializer import read_pod, write_pod, Podable
from wa.utils.types import enum
from wa.utils.misc import atomic_write_path
LogcatLogLevel = enum(['verbose', 'debug', 'info', 'warn', 'error', 'assert'], start=2)
log_level_map = ''.join(n[0].upper() for n in LogcatLogLevel.names)
logger = logging.getLogger('logcat')
logcat_logger = logging.getLogger('logcat')
apk_info_cache_logger = logging.getLogger('apk_info_cache')
apk_info_cache = None
class LogcatEvent(object):
@ -51,7 +60,7 @@ class LogcatEvent(object):
class LogcatParser(object):
def parse(self, filepath):
with open(filepath) as fh:
with open(filepath, errors='replace') as fh:
for line in fh:
event = self.parse_line(line)
if event:
@ -74,7 +83,116 @@ class LogcatParser(object):
tag = (parts.pop(0) if parts else '').strip()
except Exception as e: # pylint: disable=broad-except
message = 'Invalid metadata for line:\n\t{}\n\tgot: "{}"'
logger.warning(message.format(line, e))
logcat_logger.warning(message.format(line, e))
return None
return LogcatEvent(timestamp, pid, tid, level, tag, message)
# pylint: disable=protected-access,attribute-defined-outside-init
class ApkInfo(_ApkInfo, Podable):
'''Implement ApkInfo as a Podable class.'''
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = ApkInfo()
instance.path = pod['path']
instance.package = pod['package']
instance.activity = pod['activity']
instance.label = pod['label']
instance.version_name = pod['version_name']
instance.version_code = pod['version_code']
instance.native_code = pod['native_code']
instance.permissions = pod['permissions']
instance._apk_path = pod['_apk_path']
instance._activities = pod['_activities']
instance._methods = pod['_methods']
return instance
def __init__(self, path=None):
super().__init__(path)
self._pod_version = self._pod_serialization_version
def to_pod(self):
pod = super().to_pod()
pod['path'] = self.path
pod['package'] = self.package
pod['activity'] = self.activity
pod['label'] = self.label
pod['version_name'] = self.version_name
pod['version_code'] = self.version_code
pod['native_code'] = self.native_code
pod['permissions'] = self.permissions
pod['_apk_path'] = self._apk_path
pod['_activities'] = self.activities # Force extraction
pod['_methods'] = self.methods # Force extraction
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
class ApkInfoCache:
@staticmethod
def _check_env():
if not os.path.exists(settings.cache_directory):
os.makedirs(settings.cache_directory)
def __init__(self, path=settings.apk_info_cache_file):
self._check_env()
self.path = path
self.last_modified = None
self.cache = {}
self._update_cache()
def store(self, apk_info, apk_id, overwrite=True):
self._update_cache()
if apk_id in self.cache and not overwrite:
raise ValueError('ApkInfo for {} is already in cache.'.format(apk_info.path))
self.cache[apk_id] = apk_info.to_pod()
with atomic_write_path(self.path) as at_path:
write_pod(self.cache, at_path)
self.last_modified = os.stat(self.path)
def get_info(self, key):
self._update_cache()
pod = self.cache.get(key)
info = ApkInfo.from_pod(pod) if pod else None
return info
def _update_cache(self):
if not os.path.exists(self.path):
return
if self.last_modified != os.stat(self.path):
apk_info_cache_logger.debug('Updating cache {}'.format(self.path))
self.cache = read_pod(self.path)
self.last_modified = os.stat(self.path)
def get_cacheable_apk_info(path):
# pylint: disable=global-statement
global apk_info_cache
if not path:
return
stat = os.stat(path)
modified = stat.st_mtime
apk_id = '{}-{}'.format(path, modified)
info = apk_info_cache.get_info(apk_id)
if info:
msg = 'Using ApkInfo ({}) from cache'.format(info.package)
else:
info = ApkInfo(path)
apk_info_cache.store(info, apk_id, overwrite=True)
msg = 'Storing ApkInfo ({}) in cache'.format(info.package)
apk_info_cache_logger.debug(msg)
return info
apk_info_cache = ApkInfoCache()

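get_cacheable_apk_info keys each cache entry on the apk's path plus its mtime, so a rebuilt or replaced apk produces a new key and the stale entry is simply never consulted again. The keying scheme in isolation:

import os

def apk_cache_key(path):
    # A changed modification time yields a different key, which acts as
    # implicit invalidation for the old entry.
    return '{}-{}'.format(path, os.stat(path).st_mtime)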
@@ -151,7 +151,7 @@ class PowerStateProcessor(object):
def __init__(self, cpus, wait_for_marker=True, no_idle=None):
if no_idle is None:
no_idle = True if cpus[0].cpuidle else False
no_idle = not (cpus[0].cpuidle and cpus[0].cpuidle.states)
self.power_state = SystemPowerState(len(cpus), no_idle=no_idle)
self.requested_states = {} # cpu_id -> requested state
self.wait_for_marker = wait_for_marker
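The corrected predicate disables idle-state accounting only when a CPU lacks cpuidle support or reports an empty state list, rather than keying off the mere presence of the cpuidle attribute. A small truth-table check (the SimpleNamespace stand-ins are illustrative, not WA's cpu objects):

from types import SimpleNamespace as NS

def no_idle_for(cpu):
    # Skip idle tracking when cpuidle is absent or exposes no states.
    return not (cpu.cpuidle and cpu.cpuidle.states)

assert no_idle_for(NS(cpuidle=None))
assert no_idle_for(NS(cpuidle=NS(states=[])))
assert not no_idle_for(NS(cpuidle=NS(states=['WFI', 'cluster-sleep'])))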
@@ -370,6 +370,8 @@ class PowerStateTimeline(object):
row.append('Running (unknown kHz)')
elif idle_state is None:
row.append('unknown')
elif not self.idle_state_names[cpu_idx]:
row.append('idle[{}]'.format(idle_state))
else:
row.append(self.idle_state_names[cpu_idx][idle_state])
else: # frequency is not None
@@ -403,7 +405,7 @@ class ParallelStats(object):
for i, clust in enumerate(clusters):
self.clusters[str(i)] = set(clust)
self.clusters['all'] = set([cpu.id for cpu in cpus])
self.clusters['all'] = {cpu.id for cpu in cpus}
self.first_timestamp = None
self.last_timestamp = None
@@ -499,7 +501,7 @@ class PowerStateStats(object):
state = 'Running (unknown kHz)'
elif freq:
state = '{}-{:07}kHz'.format(self.idle_state_names[cpu][idle], freq)
elif idle is not None:
elif idle is not None and self.idle_state_names[cpu]:
state = self.idle_state_names[cpu][idle]
else:
state = 'unknown'
@@ -560,12 +562,12 @@ class CpuUtilizationTimeline(object):
headers = ['ts'] + ['{} CPU{}'.format(cpu.name, cpu.id) for cpu in cpus]
self.writer.writerow(headers)
self._max_freq_list = [cpu.cpufreq.available_frequencies[-1] for cpu in cpus]
self._max_freq_list = [cpu.cpufreq.available_frequencies[-1] for cpu in cpus if cpu.cpufreq.available_frequencies]
def update(self, timestamp, core_states): # NOQA
row = [timestamp]
for core, [_, frequency] in enumerate(core_states):
if frequency is not None:
if frequency is not None and core < len(self._max_freq_list):
frequency /= float(self._max_freq_list[core])
row.append(frequency)
else:

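CpuUtilizationTimeline normalises each sampled frequency by the core's maximum available frequency, producing a 0..1 utilisation figure per core, with None where the frequency or a maximum is unknown. The normalisation in isolation (function name and example values are made up):

def utilisation_row(timestamp, frequencies, max_freqs):
    # frequencies: sampled kHz per core (None when unknown);
    # max_freqs: each core's highest available frequency in kHz.
    row = [timestamp]
    for core, freq in enumerate(frequencies):
        if freq is not None and core < len(max_freqs):
            row.append(freq / float(max_freqs[core]))
        else:
            row.append(None)
    return row

print(utilisation_row(0.5, [500000, None], [2000000, 1800000]))
# -> [0.5, 0.25, None]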
@@ -95,8 +95,8 @@ def diff_sysfs_dirs(before, after, result): # pylint: disable=R0914
logger.debug('Token length mismatch in {} on line {}'.format(bfile, i))
dfh.write('xxx ' + bline)
continue
if ((len([c for c in bchunks if c.strip()]) == len([c for c in achunks if c.strip()]) == 2) and
(bchunks[0] == achunks[0])):
if ((len([c for c in bchunks if c.strip()]) == len([c for c in achunks if c.strip()]) == 2)
and (bchunks[0] == achunks[0])):
# if there are only two columns and the first column is the
# same, assume it's a "header" column and do not diff it.
dchunks = [bchunks[0]] + [diff_tokens(b, a) for b, a in zip(bchunks[1:], achunks[1:])]

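The reflowed condition implements a simple heuristic: when both lines split into exactly two non-blank tokens and the first token matches, that token is treated as a header column and only the value column is diffed. Demonstrated with a stand-in for diff_tokens:

def diff_tokens(before, after):
    # Stand-in for the real helper: pass equal tokens through, mark changes.
    return before if before == after else '{}->{}'.format(before, after)

bchunks = 'cache_size 512'.split()
achunks = 'cache_size 1024'.split()
if (len([c for c in bchunks if c.strip()]) == len([c for c in achunks if c.strip()]) == 2
        and bchunks[0] == achunks[0]):
    dchunks = [bchunks[0]] + [diff_tokens(b, a) for b, a in zip(bchunks[1:], achunks[1:])]
    print(' '.join(dchunks))   # cache_size 512->1024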