mirror of https://github.com/ARM-software/workload-automation.git synced 2025-04-13 06:10:50 +01:00

545 Commits

Author SHA1 Message Date
Sebastian Goscik
2d14c82f92 Added option to re-open files to poller.
Sometimes a sysfs/debugfs file will only generate a value on open; subsequent seek/read will not yield any new values. This patch adds the option to reopen all files on each read.
2025-03-10 17:39:14 -05:00
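A minimal sketch of the reopen-on-read idea described above (the file paths and function shape are illustrative, not the poller's actual code):

    import time

    FILES = ['/sys/kernel/debug/example/value']  # hypothetical paths

    def poll(n_samples, period_s, reopen=False):
        handles = [open(p) for p in FILES]
        for _ in range(n_samples):
            if reopen:
                # Some sysfs/debugfs entries only produce a value on open;
                # seeking and re-reading an old handle yields nothing new.
                for h in handles:
                    h.close()
                handles = [open(p) for p in FILES]
            else:
                for h in handles:
                    h.seek(0)
            print([h.read().strip() for h in handles])
            time.sleep(period_s)
        for h in handles:
            h.close()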
Metin Kaya
8598d1ba3c speedometer: Add version 3.0 support
Port v3.0 of Speedometer from the WebKit [1] repo and update tarballs.

A "version" field can be added to the workload agenda file to specify
which version of Speedometer should be used.

The v3.0 tarball is around 12 MB when compressed with gzip. Thus, in
order to reduce the total size of the repo, compress the Speedometer
archives in LZMA format.

1. https://github.com/WebKit/WebKit/tree/main/Websites/browserbench.org/Speedometer3.0

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-03-10 17:35:32 -05:00
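For reference, selecting the new version from an agenda might look like this (the "version" field is the one named in this commit; the surrounding agenda structure is a sketch):

    workloads:
      - name: speedometer
        workload_parameters:
          version: "3.0"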
Metin Kaya
523fb3f659 speedometer: Introduce trivial cleanups
- Remove unused imports
- Handle the case that @candidate_files may be undefined
- Customize the log message regarding Speedometer timeout

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-03-10 17:35:32 -05:00
Elif Topuz
0732fa9cf0 workloads/geekbench: Add support for Geekbench command-line build
Add Geekbench command-line build workload for Android targets.
Geekbench APKs only allow the user to run the tests all together. Using
the command-line build, a single test or multiple tests can be
specified.

Signed-off-by: Elif Topuz <elif.topuz@arm.com>
2025-03-05 17:24:46 -06:00
Metin Kaya
b03f28d1d5 instruments/trace_cmd: Add tracing mode support to TraceCmdInstrument()
Implement tracing mode support (mainly for write-to-disk mode) in
TraceCmdInstrument, enabling efficient collection of large trace
datasets without encountering memory limitations.

This feature is particularly useful for scenarios requiring extensive
trace data.

Additional changes:
- Replace hardcoded strings with corresponding string constants for
  improved maintainability.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-03-01 16:57:49 -06:00
Marc Bonnici
f125fd340d version: Bump required devlib version
Bump the required devlib version, which exports the new trace-cmd
functionality.
2025-03-01 16:52:19 -06:00
Marc Bonnici
75cfb56b38 Remove dependency on distutils
Align with devlib [1] and remove dependencies on distutils.

[1] https://github.com/ARM-software/devlib/pull/631/
2025-03-01 16:40:54 -06:00
Marc Bonnici
b734e90de1 ci: Bump python versions and pin ubuntu version
Update CI to run with later Python versions, aligning with the latest
available versions provided by GitHub Actions.

Pin to Ubuntu 22.04 as this is the latest version that supports all of
the Python versions used.
2025-03-01 16:40:54 -06:00
Metin Kaya
5670e571e1 workloads/speedometer: Fix SyntaxWarning exceptions in regex pattern
The regex pattern for extracting the speedometer score causes these
warnings due to unescaped \d and \/ sequences:

wa/workloads/speedometer/__init__.py:109: SyntaxWarning: invalid escape
sequence '\d'
  '(?:text|content-desc)="(?P<value>\d+.\d+)"[^>]*'
wa/workloads/speedometer/__init__.py:110: SyntaxWarning: invalid escape
sequence '\/'
  '(?(Z)|resource-id="result-number")[^>]*\/>'

Fix the problem by defining the regex pattern as a raw string literal
so that backslashes are properly escaped.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-02-17 16:41:58 -06:00
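For reference, the underlying Python behaviour: in a plain string literal an unrecognized escape such as '\d' triggers a SyntaxWarning, while a raw string passes the backslash through to the regex engine unchanged. A small illustration reusing the pattern fragment quoted in the commit message:

    import re

    # r'' keeps the backslashes intact for the regex engine instead of
    # treating '\d' as an (invalid) string escape.
    pattern = re.compile(r'(?:text|content-desc)="(?P<value>\d+\.\d+)"')
    print(pattern.search('text="123.45"').group('value'))  # 123.45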
dependabot[bot]
45f09a66be build(deps): bump cryptography from 42.0.4 to 43.0.1
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.4 to 43.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/42.0.4...43.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 17:57:09 -05:00
dependabot[bot]
9638a084f9 build(deps): bump certifi from 2023.7.22 to 2024.7.4
Bumps [certifi](https://github.com/certifi/python-certifi) from 2023.7.22 to 2024.7.4.
- [Commits](https://github.com/certifi/python-certifi/compare/2023.07.22...2024.07.04)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 17:57:00 -05:00
dependabot[bot]
4da8b0691f build(deps): bump urllib3 from 1.26.18 to 1.26.19
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.18 to 1.26.19.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/1.26.19/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.18...1.26.19)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 17:56:50 -05:00
Metin Kaya
412a785068 target/descriptor: Support adb_port parameter
The devlib AdbConnection class supports customizing the ADB port
number. Enable that feature on the WA side.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-07-11 18:55:56 -05:00
dependabot[bot]
6fc5340f2f build(deps): bump requests
---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 17:09:04 -05:00
dependabot[bot]
da667b58ac build(deps): bump idna from 3.4 to 3.7
Bumps [idna](https://github.com/kjd/idna) from 3.4 to 3.7.
- [Release notes](https://github.com/kjd/idna/releases)
- [Changelog](https://github.com/kjd/idna/blob/master/HISTORY.rst)
- [Commits](https://github.com/kjd/idna/compare/v3.4...v3.7)

---
updated-dependencies:
- dependency-name: idna
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 17:08:15 -05:00
Elif Topuz
4e9d402c24 workloads/speedometer: Dismiss notification pop-up
When Speedometer is run in a Chrome package on Android 14, a pop-up
window appears on the screen. The Chrome preferences file is modified
to dismiss the window.
2024-05-30 12:56:53 -05:00
Ola Olsson
e0bf7668b8 Adding support for not validating PMU counters 2024-04-02 14:32:30 -05:00
Fabian Gruber
4839ab354f configuration: Allow including multiple files into one mapping.
The key for 'include#' can now be either a scalar or a list.
A scalar triggers the same behaviour as before.
If the value is a list, it must be a list of scalars (filepaths).
The paths will be loaded and merged in order, and finally the resulting
dict is included in the current scope.
2024-03-28 20:06:30 -05:00
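A sketch of the two accepted forms in a config file (the 'include#' key is from this commit; the file names are illustrative):

    # Scalar: same behaviour as before.
    include#: common.yaml

    # List: the files are loaded and merged in order, and the resulting
    # dict is included into the current scope.
    include#: [base.yaml, device-overrides.yaml]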
Elif Topuz
b6ecc18763 workloads/speedometer: Edit regex search to get the score 2024-03-20 12:17:30 +00:00
Marc Bonnici
7315041e90 fw/uiauto: Fix uiauto builds and update apks
The version of gradle being used was out of date, update to a later
version to fix building of uiauto apks.
2024-03-20 12:17:10 +00:00
dependabot[bot]
adbb647fa7 build(deps): bump cryptography from 41.0.6 to 42.0.4
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.6 to 42.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.6...42.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-20 12:16:59 +00:00
Metin Kaya
366f59ebf7 utils/misc: Clean duplicated code
- ``load_struct_from_yaml()`` has been moved to devlib [1].
- ``LoadSyntaxError()`` is already implemented in devlib.
- Remove ``load_struct_from_file()`` and ``RAND_MOD_NAME_LEN`` since
  they are not used at all.

[1] https://github.com/ARM-software/devlib/commit/591825834028

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-02-23 12:49:03 -08:00
dependabot[bot]
0eb17bf8f0 build(deps): bump paramiko from 3.1.0 to 3.4.0
Bumps [paramiko](https://github.com/paramiko/paramiko) from 3.1.0 to 3.4.0.
- [Commits](https://github.com/paramiko/paramiko/compare/3.1.0...3.4.0)

---
updated-dependencies:
- dependency-name: paramiko
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-09 12:24:16 -08:00
Douglas Raillard
f166ac742e utils/misc: Fix linters violation
A mix of pylint and PEP8 violations that GitHub action enforces.
2024-01-09 12:20:26 -08:00
Douglas Raillard
6fe4bce68d Remove Python 2 support
Python 2 is long dead and devlib does not support it anymore, so cleanup
old Python 2-only code.
2024-01-09 12:20:26 -08:00
Douglas Raillard
28b78a93f1 utils/misc: Replace deprecated __import__ by importlib
Use importlib.import_module instead of __import__ as per Python doc
recommendation.

This will also fix the case where the class is in a
package's submodule (since __import__ returns the top-level package),
e.g. "foo.bar.Class".
2024-01-09 12:20:26 -08:00
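For reference, the difference this fixes is standard Python behaviour, not WA-specific code:

    import importlib

    # __import__('logging.handlers') returns the top-level package
    # 'logging', so resolving a class in a submodule required walking
    # attributes manually. import_module returns the submodule itself:
    mod = importlib.import_module('logging.handlers')
    print(getattr(mod, 'RotatingFileHandler'))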
Douglas Raillard
77ebefba08 wa: Remove dependency on "imp" module
Python 3.12 removed the "imp" module, so use importlib instead.
2024-01-09 12:20:26 -08:00
Metin Kaya
41f7984243 fw/rt_config: Add unlock_screen config option in runtime_parameters
Introduce an 'unlock_screen' option in order to help automate Android
tests by unlocking the device screen automatically. Naturally, this
works only if no passcode is set.

The 'unlock_screen' option implicitly requires turning on the screen;
in other words, it will override the value of the 'screen_on' option.

'diagonal', 'vertical' and 'horizontal' are the valid values for the
'unlock_screen' option as of now.

Note that this patch depends on
https://github.com/ARM-software/devlib/pull/659 in devlib repo.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-09 07:42:53 -08:00
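A sketch of what this might look like in an agenda (the option name and its valid values are from this commit; the surrounding structure is illustrative):

    config:
      runtime_parameters:
        # One of 'diagonal', 'vertical' or 'horizontal'; implies that
        # the screen will be turned on, overriding 'screen_on'.
        unlock_screen: vertical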
Metin Kaya
23fcb2c120 framework/plugin: Fix typo at suppoted_targets
Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2023-12-20 08:26:13 -08:00
dependabot[bot]
e38b51b242 build(deps): bump cryptography from 41.0.4 to 41.0.6
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.4 to 41.0.6.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.4...41.0.6)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-15 13:10:50 -08:00
dependabot[bot]
ea08a4f9e6 build(deps): bump urllib3 from 1.26.17 to 1.26.18
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.17...1.26.18)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-15 13:10:40 -08:00
Elif Topuz
5b56210d5f UIBenchJankTests: Modify to support Android 14
The "--user <USER_ID>" option (current user: 0) is added to the
activity manager (am) command because of an "Invalid userId" error.
Tested with other benchmarks (geekbench) as well.
2023-12-04 16:23:19 -08:00
dependabot[bot]
0179202c90 build(deps): bump urllib3 from 1.26.15 to 1.26.17
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.15 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.15...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 16:31:02 -07:00
dependabot[bot]
617306fdda build(deps): bump cryptography from 41.0.3 to 41.0.4
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.3 to 41.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.3...41.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 16:30:49 -07:00
Kajetan Puchalski
8d4fe9556b instruments: Add Perfetto instrument
Add an instrument that uses devlib's PerfettoCollector to collect a
Perfetto trace during the execution of a workload.
The instrument takes a path to a Perfetto config file which specifies
how Perfetto should be configured for the tracing.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-09-07 17:29:27 -05:00
Marc Bonnici
775b24f7a3 docs: fix RTD build
- Bump python version used when building documentation.
- `setuptools` is a deprecated installation method on RTD so switch to `pip`.
- Add additional dependency on devlib master branch as RTD env is not
  respecting dependency_links during installation.
- Bump copyright year.
2023-08-21 16:53:53 -05:00
Kajetan Puchalski
13f9c64513 target/descriptor: Expose adb_as_root for AdbConnection
Expose devlib's AdbConnection `adb_as_root` parameter in the target
descriptor.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-21 16:41:23 -05:00
Kajetan Puchalski
6cd1c60715 workloads/speedometer: Add Bromite to package names
Bromite is a fork of Chromium that's easily available for Android. Apart
from small changes it works the same as Chromium and works with this
speedometer workload. Add it to the 'package_names' list to allow using
it as an option.

https://www.bromite.org/

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-18 17:36:17 -05:00
Douglas Raillard
05eab42f27 workloads/rt-app: Update rt-app binaries
Update rt-app binaries to the latest version of the "lisa" branch in
douglas-raillard-arm GitHub fork. This tracks the upstream master branch
with a number of critical patches required notably to work with uclamp.
2023-08-15 17:50:43 -05:00
Kajetan Puchalski
b113a8b351 instruments/trace_cmd: Allow setting top_buffer_size
Allow optionally setting the top level ftrace buffer size separately
from the devlib buffer size. The parameter will be passed to the devlib
FtraceCollector and take effect there.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-15 17:50:26 -05:00
Douglas Raillard
d67d9bd2a4 fw/plugin: Try to load plugin paths as Python module name
WA_PLUGIN_PATHS currently contains a list of filesystem paths to scan
for plugins. This is appropriate for end-user plugins, but this is
problematic for plugins distributed by a 3rd party, such as a plugin
installed from PyPI.

In those cases, the path to the sources is unknown and typically
depends on the specific Python version, local setup etc. What is
constant is the Python name of the package, e.g. "lisa.wa.plugins".

Extend the input allowed in WA_PLUGIN_PATHS by trying to load entries as
a Python package name if:
    * There is no filesystem path with that name
    * The entry is a "relative path" (from an fs point of view)
2023-08-14 19:23:56 -05:00
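A minimal sketch of that fallback logic (illustrative only; scan_path_for_plugins is a hypothetical stand-in for the existing path scanning):

    import os
    import importlib

    def load_plugin_entry(entry):
        # Existing filesystem paths keep the old behaviour.
        if os.path.exists(entry):
            return scan_path_for_plugins(entry)  # hypothetical helper
        # An entry that is a "relative path" with no matching file is
        # tried as a Python package name, e.g. "lisa.wa.plugins".
        if not os.path.isabs(entry):
            return importlib.import_module(entry)
        raise ValueError('Cannot resolve plugin entry: {}'.format(entry))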
Kajetan Puchalski
11374aae3f workloads/drarm: Set view for FPS instrument
Set the view parameter so that the FPS instrument can collect frame data
from the workload.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-14 19:15:05 -05:00
Douglas Raillard
839242d636 target/descriptor: Add max_async generic target parameter 2023-08-09 17:35:51 -05:00
dependabot[bot]
b9b02f83fc build(deps): bump cryptography from 41.0.0 to 41.0.3
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.0 to 41.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.0...41.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-09 16:45:37 -05:00
dependabot[bot]
6aa1caad94 build(deps): bump certifi from 2022.12.7 to 2023.7.22
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
- [Commits](https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-09 16:45:23 -05:00
Kajetan Puchalski
bf72a576e6 uibenchjanktests: Rework to allow listing subtests
Rework the uibenchjanktests workload to allow specifying a list of
subtests. The activity will be re-launched for each provided subtest. If
none are specified, all available tests will be run in alphabetical order.

The workload output will now include metrics with their respective test
names as classifiers.

Add a 'full' parameter to revert back to the old default 'full run'
behaviour with restarts between subtests.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-06-30 12:29:21 -05:00
Kajetan Puchalski
951eec991c drarm: Add DrArm workload
Add a workload for the Dr Arm demo app. Includes functionality for
automatically pulling the ADPF FPS report file from the target if one
was generated by the app.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-06-08 12:38:13 -05:00
dependabot[bot]
0b64b51259 build(deps): bump cryptography from 40.0.2 to 41.0.0
Bumps [cryptography](https://github.com/pyca/cryptography) from 40.0.2 to 41.0.0.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/40.0.2...41.0.0)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-02 16:53:29 -05:00
dependabot[bot]
f4ebca39a1 build(deps): bump requests from 2.29.0 to 2.31.0
Bumps [requests](https://github.com/psf/requests) from 2.29.0 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.29.0...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-30 17:38:26 -05:00
Kajetan Puchalski
88b085c11b perf: Fix instrument for Android 13
The simpleperf included with Android 13 no longer shows the percentage
when no counter multiplexing took place. This causes the perf instrument
to crash when processing the output. This fix checks whether the
percentage exists before trying to extract it.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-05-30 17:38:09 -05:00
Marc Bonnici
36a909dda2 fw/ApkWorkload: Allow workloads to provide apk arguments
In some workloads it is beneficial to be able to provide arguments
when launching the required APK. Add a new `arpk_arguments` property
to allow a workload to provide a dict of parameter names and values
that should be used when launching the APK.

The python types of the parameters are used to determine the data
type provided to the APK. Currently supported types are string, bool,
int and float.
2023-04-29 17:54:41 -05:00
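A sketch of how a workload might use this (the `apk_arguments` property is the one this commit adds; the workload class and values are illustrative):

    from wa import ApkWorkload

    class MyApkWorkload(ApkWorkload):  # hypothetical workload

        name = 'my_apk_workload'
        package_names = ['com.example.app']

        @property
        def apk_arguments(self):
            # The Python type of each value determines the type passed
            # to the APK; string, bool, int and float are supported.
            return {
                'dry_run': False,   # bool
                'iterations': 10,   # int
                'scale': 1.5,       # float
                'mode': 'default',  # string
            }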
Marc Bonnici
3228a3187c Mitigate CVE-2007-4559
Prevent potential directory path traversal attacks (see
https://www.trellix.com/en-us/about/newsroom/stories/research/tarfile-exploiting-the-world.html)
2023-04-29 17:35:54 -05:00
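The standard mitigation for this class of issue is to verify that every archive member resolves inside the destination before extracting; a generic sketch (not necessarily WA's exact code):

    import os
    import tarfile

    def safe_extract(archive, dest):
        with tarfile.open(archive) as tar:
            root = os.path.realpath(dest)
            for member in tar.getmembers():
                target = os.path.realpath(os.path.join(dest, member.name))
                # Reject members such as '../../etc/passwd' that would
                # land outside the destination directory.
                if target != root and not target.startswith(root + os.sep):
                    raise RuntimeError('Path traversal attempt in archive')
            tar.extractall(dest)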
Marc Bonnici
5e0c59babb version: Bump minor version number
Bump the minor version to prepare for dropping Python < 3.7 support.
2023-04-29 17:29:43 -05:00
Marc Bonnici
dc2fc99e98 fw/version: Bump release versions 2023-04-29 17:29:43 -05:00
Marc Bonnici
46ff6e1f62 Dockerfile: Bump release version 2023-04-29 17:29:43 -05:00
Marc Bonnici
8b3f58e726 doc/changes: Update the changelog for v3.3.1 2023-04-29 17:29:43 -05:00
Marc Bonnici
fe7a88e43e requirements.txt: Update to latest tested versions 2023-04-29 17:29:43 -05:00
Marc Bonnici
61bb162350 workloads/gmail: Update workload to latest apk version
The Google services required by the old APK appear to no
longer be available. Update to support a newer version.
2023-04-29 17:29:43 -05:00
Marc Bonnici
d1e960e9b0 workloads/googleplaybooks: Fix ui match
The resource id of the book list cannot always be discovered, so
search for the only scrollable list instead.
2023-04-29 17:29:43 -05:00
Marc Bonnici
29a5a7fd43 fw/config: Fix RunConfiguration descriptions
Fix quotations in the RunConfiguration description.
2023-04-29 17:29:43 -05:00
Marc Bonnici
37346fe1b1 instruments/trace_cmd: Handle setup failure
In the case that the trace_cmd collector fails to
initialise, do not attempt to use the collector in
subsequent methods.
2023-04-29 17:29:43 -05:00
Kajetan Puchalski
40a118c8cd geekbench: Add support for Geekbench 6
Add support for Geekbench 6 as a workload on Android.
This commit adds 6.*.* as a valid version for the Geekbench workload and
updates the UIAuto apk accordingly.

It also refactors the update_result function: since the one originally
used for GB4 can now be used for versions 4, 5 and 6, it makes more
sense to treat it as a 'generic' update_result function. The
functionality should stay the same.

Backwards compatibility with GB2 & GB3 should be maintained.
2023-03-06 19:28:40 -06:00
Marc Bonnici
c4535320fa docs: Update plugin How To Guides
Fix the example instrument code and add an additional note indicating
where new plugins should be stored so that they are detected by WA's
default configuration.
2023-01-13 10:14:15 +00:00
dependabot[bot]
08b87291f8 build(deps): bump certifi from 2020.12.5 to 2022.12.7
Bumps [certifi](https://github.com/certifi/python-certifi) from 2020.12.5 to 2022.12.7.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2020.12.05...2022.12.07)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-08 09:26:41 +00:00
Steven Schaus
a3eacb877c Load RuntimeConfig from plugins
Implements support for dynamically loading additional RuntimeConfig
and associated RuntimeParameter that are defined in a plugin.

Currently, the various RuntimeConfigs are hard-coded in a list within
WA. This patch extends RuntimeParameterManager to use PluginLoader to
load RuntimeConfig classes and append them to the hard-coded list.

The implementation, as written, does not allow loading a RuntimeConfig
from a plugin if it has the same name as one of the hard-coded
RuntimeConfigs. This is meant to prevent conflicts and unexpected
behavior.
2022-12-08 09:26:29 +00:00
Kajetan Puchalski
48152224a8 schbench: Add support for schbench
Add support for running schbench as a workload.
Includes the arm64 binary for use on Android devices.
2022-08-23 12:54:57 +01:00
Marc Bonnici
095d6bc100 docs/.readthedocs: Bump the python version
Bump the python version used to build the documentation
to prevent installation errors in the build environment.
2022-08-18 18:46:59 +01:00
Marc Bonnici
8b94ed972d ci: fix the version of pylint to a known version
Later versions of pylint include additional suggestions
covering features from the latest versions of Python. Until
we can update our code base to make use of the new features,
pin pylint to a known version.
2022-08-18 17:19:43 +01:00
Marc Bonnici
276f146c1e pylint: Update for newer versions of pylint
In later versions of pylint the 'version' attribute has been moved,
therefore check both options while attempting to detect the installed
version number.
2022-08-18 17:19:43 +01:00
Marc Bonnici
3b9fcd8001 ci: Bump the python version used for running tests
The latest devlib master branch requires Python >=3.7 therefore
bump the python version used to run the CI tests.
2022-08-18 17:19:43 +01:00
dependabot[bot]
88fb1de62b build(deps): bump paramiko from 2.7.2 to 2.10.1
Bumps [paramiko](https://github.com/paramiko/paramiko) from 2.7.2 to 2.10.1.
- [Release notes](https://github.com/paramiko/paramiko/releases)
- [Changelog](https://github.com/paramiko/paramiko/blob/main/NEWS)
- [Commits](https://github.com/paramiko/paramiko/compare/2.7.2...2.10.1)

---
updated-dependencies:
- dependency-name: paramiko
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-08 11:10:52 +01:00
dependabot[bot]
7dc337b7d0 build(deps): bump numpy from 1.19.4 to 1.22.0
Bumps [numpy](https://github.com/numpy/numpy) from 1.19.4 to 1.22.0.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst)
- [Commits](https://github.com/numpy/numpy/compare/v1.19.4...v1.22.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-08 11:10:41 +01:00
Kajetan Puchalski
b0f9072830 perf: Fix processing simpleperf stats
Currently, when processing the output of 'simpleperf stat', wa does not
skip the header and tries to process part of it as a number, leading
to type errors. This change skips the header (line starting with '#').

Furthermore, some events (e.g. cpu-clock or task-clock) include "(ms)"
in their count value and are floats instead of integers. Because of
this, when either of those is included, processing metrics fails due to
assuming every metric is an integer. Then another error happens when the
code tries to split the line on '(' assuming that there's only one set
of those around the percentage.

This change removes "(ms)" from the line
before it's processed and properly determines whether 'count' is an
integer or a float before attempting to convert it.
2022-08-08 11:04:13 +01:00
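A sketch of the resulting per-line handling (illustrative, not the instrument's exact code):

    def parse_count(line):
        # Skip blank lines and the header (lines starting with '#').
        if not line.strip() or line.startswith('#'):
            return None
        # cpu-clock/task-clock counts carry a '(ms)' suffix; drop it
        # before parsing.
        token = line.replace('(ms)', '').split()[0].replace(',', '')
        # Counts may be floats (clock events) or integers.
        return float(token) if '.' in token else int(token)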
Qais Yousef
b109acac05 geekbench: Add/fix support for Geekbench5
The non-corporate version of geekbench5 didn't work, although the code
had almost everything needed; only a number of tiny tweaks were
required:

1. Add '5' to the supported versions in __init__.py
2. Fix the name of the android package in __init__.py and
   UiAutomation.java
3. Improve handling of minorVersion to fix a potential exception when
   the minorVersion number is not specified in the yaml file. Launching
   geekbench5 works fine when it's the only version installed, but with
   multiple versions installed, specifying '5' as the version string in
   the yaml agenda threw an out-of-bounds exception because '5.X' is
   assumed as input. There is no reason I'm aware of to force support
   for a specific version of geekbench5, so keep it relaxed until we
   know for sure it breaks with a specific version.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:52 +00:00
Qais Yousef
9c7bae3440 gfxbench: Update uiauto APK
To support the new corporate version change.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Qais Yousef
7b5ffafbda gfxbench: Add a non corporate version
Which works like the corporate version except for a difference in
package name and a set of Fixed Time Tests that don't exist in the free
version.

Only v4.X and v5.X are supported, as that's what's available.

Note there's a clash with glbenchmark package name. glbenchmark is an
ancient version provided by the same developers but was superseded by
gfxbench.  The version checks in both workloads should ensure we get the
right one in the unlikely case both are installed.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Qais Yousef
be02ad649c pcmark: Update uiauto APK
To include the new fix for the long delays when starting the run.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Qais Yousef
5a121983fc pcmark: Check for description instead of text in installbenchmark
The test was hanging for a long time waiting for RUN text. Checking for
description first prevents that.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Marc Bonnici
69795628ed doc: Fix typo in requirements.txt 2021-10-28 10:59:45 +01:00
Marc Bonnici
7a332dfd5b docs: Update readthedocs package versions
The latest sphinx and docutils versions currently used on readthedocs
are no longer compatible. Explicitly list the package versions that
should be used when building the documentation.
2021-10-28 10:56:05 +01:00
Peter Collingbourne
4bad433670 Add support for Antutu 9.1.6.
The code for reading the results is almost the same as for Antutu 8,
but it needs to be adjusted to account for a slightly different set
of benchmarks.

At least on my device, Antutu 9 takes over 10 minutes to run, so increase
the timeout to 20 minutes.
2021-10-04 09:12:08 +01:00
Peter Collingbourne
0b558e408c Upgrade Gradle to 7.2 and Android Gradle plugin to 4.2.
The older versions of the plugin caused problems building with newer
NDK versions due to a lack of MIPS support.

This also required upgrading to a version of Gradle that knows about
the Google Maven repository.
2021-09-29 09:46:51 +01:00
setrofim
c023b9859c doc: update coding guide with APK rebuild info
Update the Contributing/Code section with instructions to rebuild the
UI automation APK if the corresponding source has been updated.
2021-09-28 09:07:49 +01:00
Vladislav Ivanishin
284cc60b00 docs: Fix typo
as *with* all WA configuration, the more specific settings will take
precedence over the less specific ones
2021-08-25 11:02:12 +01:00
Kajetan Puchalski
06b508107b workloads/pcmark: Add PCMark 3.0 support
Add support for PCMark for Android version 3.
Use a 'version' Parameter to maintain support for v2.
2021-07-19 11:06:11 +01:00
dependabot[bot]
cb1107df8f build(deps): bump urllib3 from 1.26.4 to 1.26.5
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.4 to 1.26.5.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.4...1.26.5)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-02 18:30:33 +01:00
Javi Merino
789e150b0a perf: report a config error if stat is combined with report options
When running perf/simpleperf stat, the report_option_string,
report_sample_options and run_report_sample configuration parameters
don't make sense.  Instead of quietly ignoring them, raise a
ConfigError so that the user can fix the agenda.
2021-04-30 13:35:08 +01:00
Javi Merino
43cb80d854 perf: support report-sample
devlib learnt to use report-sample in
ARM-software/devlib@fe2fe3ae04 ("collector/perf: run simpleperf
report-sample in the target if requested").  Adapt the perf instrument
to use the new parameters of the PerfCollector.
2021-04-30 13:35:08 +01:00
Marc Bonnici
31d306c23a docs: Fix typo in variable name 2021-04-19 16:19:49 +01:00
Benjamin Mordaunt
591c85edec Remove pkg-resources
See https://github.com/pypa/pip/issues/4022
2021-04-09 18:44:35 +01:00
Stephen Kyle
72298ff9ac speedometer: address pylint complaints 2021-04-08 14:34:44 +01:00
Stephen Kyle
f08770884a speedometer: fix @once methods which declare attributes
We need to make those attributes class-attributes, to make sure they are still
defined in subsequent jobs. We still access them through 'self', however.
2021-04-08 14:34:44 +01:00
Stephen Kyle
a5e5920aca speedometer: ensure adb reverse works across reboots
Before this patch, if you used reboot_policy: "each_job", the adb reverse
connection would be lost.
2021-04-08 14:34:44 +01:00
Marc Bonnici
5558d43ddd version: Bump required devlib version 2021-04-07 18:26:38 +01:00
Marc Bonnici
c8ea525a00 fw/target/info: Utilise target properties
Use hostname and hostid target properties instead
of existing calls.
2021-04-07 18:26:38 +01:00
dependabot[bot]
c4c0230958 build(deps): bump urllib3 from 1.26.3 to 1.26.4
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.3 to 1.26.4.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.3...1.26.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-04-07 10:56:00 +01:00
Javi Merino
b65a371b9d docs/installation: Fix package name in pypi
WA is called wlauto in pypi. "pip install wa" installs workflow
automation, a very different project.
2021-04-07 09:15:29 +01:00
dependabot[bot]
7f0a6da86b build(deps): bump pyyaml from 5.3.1 to 5.4
Bumps [pyyaml](https://github.com/yaml/pyyaml) from 5.3.1 to 5.4.
- [Release notes](https://github.com/yaml/pyyaml/releases)
- [Changelog](https://github.com/yaml/pyyaml/blob/master/CHANGES)
- [Commits](https://github.com/yaml/pyyaml/compare/5.3.1...5.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-26 16:43:24 +00:00
dependabot[bot]
75a70ad181 build(deps): bump urllib3 from 1.26.2 to 1.26.3
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.2 to 1.26.3.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.2...1.26.3)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-22 18:26:11 +00:00
dependabot[bot]
84b5ea8a56 build(deps): bump cryptography from 3.3.1 to 3.3.2
Bumps [cryptography](https://github.com/pyca/cryptography) from 3.3.1 to 3.3.2.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/3.3.1...3.3.2)

Signed-off-by: dependabot[bot] <support@github.com>
2021-02-23 09:41:23 +00:00
Marc Bonnici
4b54e17020 fw/job: Fix workload cache check
Don't assume the first job iteration is already in the workload cache.
This may not always be the case, for example with the random execution
order a later iteration can be processed first.
Instead check to see if the job id is present or not.
2021-02-01 18:00:34 +00:00
Marc Bonnici
da4d10d4e7 fw/rt_config: Avoid querying online cpus if hotplug disabled
If the device does not have the hotplug module installed, avoid
unnecessarily querying the device to check the number of cpus, which
can cause issues with some devices.
2021-01-29 18:19:48 +00:00
Marc Bonnici
8882feed84 Dockerfile: Set a default TZ and non-interactive install
Set DEBIAN_FRONTEND to prevent waiting on user input and provide
a default timezone before package installation.
2021-01-25 09:52:30 +00:00
Marc Bonnici
7f82480a26 Dockerfile: Update base image to 20.04
19.10 is now EOL so update to use 20.04 LTS as a base instead.
2021-01-25 09:52:30 +00:00
Marc Bonnici
e4be2b73ef fw/version: Prevent installation failure on systems without git
On systems that do not have git installed, WA will currently fail
to install with a FileNotFound exception. If git is not present then
we will not have a commit hash, so just ignore this error.
2021-01-12 17:53:46 +00:00
Javi Merino
22750b15c7 perf: correctly parse csv when using "--csv --interval-only-values"
With the perf instrument configured as:

    perf:
      perf_type: simpleperf
      command: stat
      optionstring: '-a --interval-only-values --csv'

WA fails to parse simpleperf's output:

    INFO             Extracting reports from target...
    ERROR            Error in instrument perf
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/instrument.py", line 272, in __call__
    ERROR                self.callback(context)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 142, in update_output
    ERROR                self._process_simpleperf_output(context)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 155, in _process_simpleperf_output
    ERROR                self._process_simpleperf_stat_output(context)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 233, in _process_simpleperf_stat_output
    ERROR                self._process_simpleperf_stat_from_csv(stat_file, context, label)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 245, in _process_simpleperf_stat_from_csv
    ERROR                context.add_metric('{}_{}'.format(label, row[1]), row[0], 'count', classifiers=classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/execution.py", line 222, in add_metric
    ERROR                self.output.add_metric(name, value, units, lower_is_better, classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/output.py", line 142, in add_metric
    ERROR                self.result.add_metric(name, value, units, lower_is_better, classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/output.py", line 390, in add_metric
    ERROR                metric = Metric(name, value, units, lower_is_better, classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/output.py", line 653, in __init__
    ERROR                self.value = numeric(value)
    ERROR              File "/work/workload_automation/devlib/devlib/utils/types.py", line 88, in numeric
    ERROR                raise ValueError('Not numeric: {}'.format(value))
    ERROR
    ERROR            ValueError(Not numeric: Performance counter statistics)

With the above options, the csv that simpleperf produces looks like
this:

    Performance counter statistics,
    123456789,raw-l1-dtlb,,(60%),
    42424242,raw-l1-itlb,,(60%),
    Total test time,1.001079,seconds,
    Performance counter statistics,
    123456789,raw-l1-dtlb,,(60%),
    42424242,raw-l1-itlb,,(60%),
    Total test time,2.001178,seconds,
    Performance counter statistics,
    [...]

That is, with "--interval-only-values", the "Performance counter
statistics," header is repeated every interval. WA currently expects
it only in the first line. Modify the condition so that it is ignored
every time we find it in the file and not just the first time.
2021-01-11 17:48:27 +00:00
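A sketch of the fixed parsing loop, skipping the repeated header wherever it occurs (illustrative, based on the sample CSV above):

    import csv

    def read_interval_csv(path):
        with open(path) as f:
            for row in csv.reader(f):
                # With '--interval-only-values' the header repeats at
                # every interval, so skip it each time it appears.
                if not row or row[0] == 'Performance counter statistics':
                    continue
                if row[0] == 'Total test time':
                    continue  # end-of-interval marker
                count, event = row[0], row[1]
                yield event, count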
Marc Bonnici
e3703f0e1e utils/doc: Fix display of Falsey default parameters
Explicitly check for `is None` to determine whether a default value is
not present or is just a Falsey value.
2021-01-11 15:32:34 +00:00
Marc Bonnici
4ddd610149 Migrate to Github Actions
Now that Travis has stopped providing a free tier, migrate the initial
tests over to GitHub Actions.
2020-12-16 09:51:09 +00:00
Marc Bonnici
c5e3a421b1 fw/version: Dev version bump 2020-12-11 16:43:12 +00:00
Marc Bonnici
0e2a150170 fw/version: Bump release versions 2020-12-11 16:31:13 +00:00
Marc Bonnici
69378b0873 Dockerfile: Bump release version 2020-12-11 16:31:13 +00:00
Marc Bonnici
c543c49423 requirements.txt: Update to latest tested versions 2020-12-11 16:31:13 +00:00
Marc Bonnici
dd07d2ec43 workloads/speedometer: Fix markdown formatting in docstring 2020-12-11 16:26:49 +00:00
Marc Bonnici
94590e88ee docs/changes: Update changelog for v3.3 2020-12-11 16:26:49 +00:00
Marc Bonnici
c2725ffaa2 fw/uiauto: Update to handle additional permissions screen
On the latest version of Android (currently Q), for applications that
are designed to run on older versions of Android, an additional
screen asking to confirm the required permissions can pop up.
Enable confirming the granted permissions.
2020-12-10 15:57:30 +00:00
Marc Bonnici
751bbb19fe docs: Update Signal Dispatch diagram
Update the signal dispatch diagram to include the new optional reboot stage.
2020-12-09 07:48:20 +00:00
Marc Bonnici
ae1bc2c031 fw/config: Add additional run_completed reboot policy
Add an additional `run_completed` reboot policy for when a run
has finished.
This complements the `initial` reboot policy and aims to leave
the device in a fresh state after WA has finished executing.
2020-12-09 07:48:20 +00:00
Marc Bonnici
91b791665a workloads/googleplaybooks: Update to handle updated IDs
Resource ids and classes have been modified, so update the
workload to handle these cases.
Additionally, on some devices regex matches appear to fail,
so work around this by matching separately.
2020-11-29 19:42:30 +00:00
Marc Bonnici
62c4f3837c workloads/googleslides: Update to accommodate newer versions
Add support for newer version of the apk.
Also add support for differing screen sizes: on larger devices
the direction of the swipe to change slide differs, so perform both
horizontal and vertical swipes to satisfy both layouts.
2020-11-29 19:42:30 +00:00
Marc Bonnici
3c5bece01e workloads/aitutu: Improve reliability of results extraction
Wait for the device to become idle before attempting to extract
the test scores.
2020-11-29 19:42:30 +00:00
Marc Bonnici
cb51ef4d47 workloads/googlephotos: Update to handle new popup
Bump the minor version of the known working APK and handle a new
"missing out" popup.
2020-11-29 19:42:30 +00:00
Marc Bonnici
8e56a4c831 utils/doc: Fix output for lambda function
The "name" can be in the format "<class>.<lambda>" so
update to allow correct functioning with the updated format.
2020-11-13 16:27:39 +00:00
Marc Bonnici
76032c1d05 workloads/rt_app: Remove timeout in file transfer
Remove the explicit timeout when pushing to the device.
Allow the polling mechanism to monitor the transfer if required.
2020-11-13 15:42:00 +00:00
Marc Bonnici
4c20fe814a workloads/exoplayer: Remove timeout in file transfer
Remove the explicit timeout when pushing a media file to the device.
Allow the polling mechanism to monitor the transfer.
2020-11-13 15:42:00 +00:00
Marc Bonnici
92e253d838 workloads/aitutu: Handle additional popup on launch
Allow agreeing to an updated Terms agreement on launch
2020-11-13 11:31:50 +00:00
Marc Bonnici
18439e3b31 workloads/youtube: Update Youtube workload
The previous known working version of the youtube apk appears
to have stopped working. Update to support the new format.
2020-11-12 15:03:01 +00:00
Marc Bonnici
5cfe452a35 fw/version: Bump dev version.
We are relying on a newly available variable in devlib
so bump the version to remain in sync.
2020-11-09 17:56:16 +00:00
Marc Bonnici
f1aff6b5a8 fw/descriptor: Update sudo_cmd default
The WA default `sudo_cmd` is out of date compared to devlib.
Update the parameter to use the value directly from devlib
to prevent these from being out of sync in the future.
2020-11-09 17:56:16 +00:00
Marc Bonnici
5dd3abe564 fw/execution: Fix Typos 2020-11-04 18:27:34 +00:00
Marc Bonnici
e3ab798f6e wl/speedometer: Ensure test package is installed.
Check that the package specified for the test is installed on the
device.
2020-11-04 18:27:34 +00:00
Marc Bonnici
ed925938dc fw/version: Development version bump
Bump the development version to synchronise the parameters for transfer
polling.
2020-11-03 10:02:08 +00:00
Jonathan Paynter
ed4eb8af5d target/descriptor: Add connection config for polls
Adds parameters needed for WA to support file transfer polling.

``poll_transfers`` of type ``bool``, default ``True`` sets whether
transfers should be polled

``transfer_wait_no_poll`` controls the initial time in seconds that the
poller should wait for the transfer to complete before polling its
progress.
2020-11-03 10:01:52 +00:00
setrofim
a1bdb7de45 config/core: merge params from different files
Ensure that runtime and workload parameters specified across multiple
config files and the config section of the agenda are merged rather than
overwritten.
2020-10-30 12:05:30 +00:00
Marc Bonnici
fbe9460995 pylint: Remove unnecessary pass statements 2020-10-30 11:49:54 +00:00
Marc Bonnici
aa4df95a69 pep8: Ignore line break before binary operator
PEP8 has switched its guidance [1] for where a line break should occur
in relation to a binary operator, so don't raise this warning for
new code and update the code base to follow the new style.

[1] https://www.python.org/dev/peps/pep-0008/#should-a-line-break-before-or-after-a-binary-operator
2020-10-30 11:49:54 +00:00
Marc Bonnici
fbb84eca72 Pylint Fixes
Update our version of pylint to use the latest version and update the
codebase to comply with the majority of the updates.

For now disable the additional checks for `super-with-arguments`,
`useless-object-inheritance`, `raise-missing-from`, `no-else-raise`,
`no-else-break`, `no-else-continue` to be consistent with the existing
codebase.
2020-10-30 11:49:54 +00:00
Marc Bonnici
fbd6f4e90c fw/tm: Only finalize the assistant if instantiated 2020-10-30 11:47:56 +00:00
Marc Bonnici
1c08360263 fw/runtime_config: Fix case where no available gov for cpu
Ensure there is a default iterable value in the case there is no
governor entry for a particular cpu.
2020-10-09 08:29:20 +01:00
Vincent Donnefort
ff220dfb44 pcmark: do not clear on reset
The PCMark Work2.0 data-set is cleared and downloaded before each run.
This operation is time-consuming and pollutes the benchmark
instrumentation. Disabling clear_data_on_reset for the PCMark workload
bypasses this per-run download.
2020-09-16 18:58:24 +01:00
Stephen Kyle
7489b487e1 workloads/speedometer: offline version of speedometer
This version replaces the previous uiauto version of Speedometer with a new
version.

* Supports both chrome and chromium again; this is selected with the
  chrome_package parameter.
* No longer needs internet access.
* Version 1.0 of Speedometer is no longer supported.
* Requires root:
  - sometimes uiautomator dump doesn't capture the score if not run as root
  - need to modify the browser's XML preferences file to bypass T&C acceptance
    screen
2020-09-16 18:58:01 +01:00
Jonathan Paynter
ba5a65aad7 target/runtime_config: Add support for stay-on
Adds runtime config support for the android setting
``stay_on_while_plugged_in``.
2020-09-10 15:53:03 +01:00
Jonathan Paynter
7bea3a69bb tests: Add tests to check job finalized
Adds tests to check that jobs are finalized after being skipped or
failed, and that jobs are correctly labelled as skipped.
2020-09-10 15:52:20 +01:00
Jonathan Paynter
971289698b core,execution: Add run skipping on job failure
Add a global configuration parameter ``bail_on_job_failure`` that
allows all remaining jobs in a run to be skipped should a job fail its
initial execution and its retries. This is by default disabled.
2020-09-10 15:52:20 +01:00
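A sketch of enabling this in a config file (the parameter name and default are from this commit):

    config:
      bail_on_job_failure: true  # default: false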
Marc Bonnici
66e220d444 docs/installation: Rephrase dependency information
Reduce emphasis on the Android SDK requirements to prevent confusion when
running with Linux only targets.
2020-09-03 11:40:24 +01:00
Marc Bonnici
ae8a7bdfb5 docs/installation: Update WA's tested platform
Update WA's tested platform to a later version.
2020-09-03 11:40:24 +01:00
Marc Bonnici
b0355194bc docs/installation: Add note about python3 commands 2020-09-03 11:40:24 +01:00
Marc Bonnici
7817308bf7 docs/user_information: Fix formatting and typos 2020-09-03 11:40:24 +01:00
douglas-raillard-arm
ab9e29bdae framework/output: Speedup discover_wa_outputs()
Avoid recursing into subdirectories of folders containing __meta, since
they are not of interest and recursing can take a very long time if
there are a lot of files, for example if there is a sysfs dump.
2020-08-05 13:17:15 +01:00
Jonathan Paynter
9edb6b20f0 postgres_schemas: Add rules for cascading deletes
Add cascading deletes to foreign keys as well as a rule to delete large
objects when artifacts are deleted.

Deleting a run entry should delete all dependent data of that run.
2020-07-17 13:52:50 +01:00
Marc Bonnici
879a491691 README: Update with more specific supported python version. 2020-07-16 12:20:02 +01:00
Jonathan Paynter
7086fa6b48 target: Force consistent logcat format
On some devices the default logcat format was inconsistent with what was
expected. This change explicitly sets the logcat format to be as
expected.
2020-07-16 11:38:06 +01:00
Jonathan Paynter
716e59daf5 framework/target: Add logcat buffer wrap detection
As WA currently supports either a single logcat dump after each job,
or fixed rate polling of logcat, it is possible that the fixed size
logcat buffer wraps around and overwrites data between each dump or
poll. This data may be used by output processors that should be
notified of the loss.

This change allows the detection of buffer wrapping by inserting a
known log entry into the buffer, although it cannot say how much data
was lost, and only applies to the "main" logcat buffer.

If buffer wrap is detected, a warning is logged by WA.
2020-07-16 11:38:06 +01:00
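A minimal sketch of the marker technique (the sentinel text and commands are illustrative, not WA's actual implementation):

    MARKER = 'WA-LOGCAT-MARKER'  # hypothetical sentinel

    def insert_marker(target):
        # Write a known entry into the main logcat buffer.
        target.execute('log -t WA {}'.format(MARKER))

    def buffer_wrapped(logcat_dump):
        # If the sentinel is no longer present in the next dump, the
        # fixed-size buffer has wrapped and entries were lost (though
        # not how many).
        return MARKER not in logcat_dump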
Marc Bonnici
08fcc7d30f fw/getters: Fix typo 2020-07-15 15:04:31 +01:00
Marc Bonnici
684121e2e7 fw: Replace usage of file locking with atomic writes
To prevent long timeouts occurring due to file locking on
both reads and writes, replace locking with atomic writes.

While this may result in cache entries being overwritten,
the time spent in duplicated retrievals will likely
be outweighed by the avoidance of stalls waiting to
acquire the file lock.
2020-07-15 15:04:31 +01:00
Marc Bonnici
0c1229df8c utils/misc: Implement atomic writes
To simulate atomic writes, use a context manager to write to
a temporary file location and then rename over the original
file.
This is performed using the `safe_move` method which performs
this operation and handles cases where the source and destination
are on separate file systems.
2020-07-15 15:04:31 +01:00
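A minimal sketch of the idea behind such a helper (illustrative; the real `safe_move` additionally handles source and destination on different file systems):

    import os
    import tempfile
    from contextlib import contextmanager

    @contextmanager
    def atomic_write(path):
        # Write to a temporary file in the same directory, then rename
        # over the original; rename is atomic on POSIX file systems.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
        try:
            with os.fdopen(fd, 'w') as f:
                yield f
            os.replace(tmp, path)
        except Exception:
            os.unlink(tmp)
            raise

Used as:

    with atomic_write('cache/entry') as f:
        f.write(data)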
Marc Bonnici
615cbbc94d fw/target_info: Prevent multiple parses of the target_info_cache
Instead of parsing the target_info_cache multiple times, allow it
to be read once and passed as a parameter to the corresponding
methods.
2020-07-15 15:04:31 +01:00
Marc Bonnici
1425a6f6c9 Implement caching of ApkInfo
Allow caching of ApkInfo to prevent the requirement of re-parsing
of APK files.
2020-07-15 15:04:31 +01:00
Marc Bonnici
4557da2f80 utils/android: Implement a Podable wrapper of ApkInfo
Add a Podable wrapper to ApkInfo.
2020-07-15 15:04:31 +01:00
Jonathan Paynter
7cf5fbd8af framework, tests: Correct signal disconnection
While the Louie system operated on weakrefs for the callback
functions, the priority list wrapper did not. This difference led to
weakrefs to callback functions being compared to strong references in
list element operations within Louie's disconnect method, so that
handler methods were not disconnected from signals.

Converting the receiver to a weakref then allowed Louie to operate as
normal, which may include deleting and re-appending the handler method
to the receivers list. As ``append`` is a dummy method that allows the
priority list implementation, the handler method is then never added
back to the list of connected functions, so we must ``add`` it after
``connect`` is called.

Also included is a testcase to confirm the proper disconnection of
signals.
2020-07-14 17:31:38 +01:00
Jonathan Paynter
3f5a31de96 commands: Add report command
report provides a summary of a run and an optional list of all
jobs in the run, with any events that might have occurred during each
job and their current status.

report allows an output directory to be specified or will attempt to
discover possible output directories within the current directory.
2020-07-14 17:31:38 +01:00
Jonathan Paynter
7c6ebfb49c framework: Have Job hold its own JobState
JobState, previously handled by RunState, is now held in the
Job.

Changes and accesses to a Job's status access the Job's
JobState directly, so that there is only one place now that each Job's
state data is tracked.

This also means there is no use for update_job in RunState.
2020-07-14 17:31:38 +01:00
Jonathan Paynter
8640f4f69a framework: Add serializing Job status setter
When setting the job status through ExecutionContext, this change
should be accompanied by an update to the state file, so that the state
file accurately reflects execution state.

As Jobs should not be aware of the output, this method is added to
ExecutionContext, and couples setting job state with writing to the
state file.
2020-07-14 17:31:38 +01:00
Jonathan Paynter
460965363f framework: Fix serialized job retries set to 0
JobState serializations did not reflect the current state of
execution, as the 'retries' field was set to 0 instead of
JobState.retries.
2020-07-14 17:31:38 +01:00
Jonathan Paynter
d4057367d8 tests: Add run state testbench
Need to test:
- whether the state files properly track the state of wa
runs
- the state of the jobs and whether they are correctly
updated

State file consistency tests implemented for scenarios:
	- job passes on first try
	- job requires a retry
	- job fails all retries

Job object state test implemented for:
	- Job status should reset on job retry (from FAILED or PARTIAL
	to PENDING)
2020-07-14 17:31:38 +01:00
Marc Bonnici
ef6cffd85a travis: Limit the maximum version of isort
Later versions of isort are not compatible with the version of
pylint we use, so ensure we use a compatible version in the Travis tests.
2020-07-09 14:45:28 +01:00
Chris Redpath
37f4d33015 WA/Jankbench: Update Pandas function to remove deprecated .ix access
Pandas removed .ix as a way to index; .loc is the replacement
in most cases. Jankbench as a workload fails on a clean install due to
this call.

Replacing this works for me on a native install of Lisa with Ubuntu 20.04

Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2020-07-03 12:13:40 +01:00
Marc Bonnici
8c7320a1be workloads/gfxbench: Switch to the home screen before run
Alter the element to check popups have closed to be present
on any main screen and ensure we switch to the homescreen of the
app before performing setup steps.
2020-06-30 16:51:11 +01:00
setrofim
6d72a242ce doc: fix callback priority table
Correctly specify the decorator name that should be used to give an
instrument method appropriate priority when it's being invoked via
signal dispatch.

Previously, the table was listing the priority enum names in the column
that was labeled "decorator". Now both are shown and the distinction has
been made clearer.
2020-06-30 15:04:20 +01:00
Marc Bonnici
0c2613c608 fw/execution: Fix missing parameter 2020-06-29 16:22:13 +01:00
Marc Bonnici
b8301640f7 docs/dev_ref: Fix incorrect attribute name 2020-06-25 13:01:28 +01:00
Marc Bonnici
c473cfa8fe docs/user_information: Fix references to Python 2.7
Remove references to Python 2.7 and update example paths
to Python3.
2020-06-25 13:01:28 +01:00
Marc Bonnici
1f0da5facf Dockerfile: Update to store environment variables in separate file
Instead of storing the environment variables in `.bashrc` store them
in `/home/wa/.wa_environment`. This allows for sourcing of this file
in non interactive environments.
2020-06-19 11:25:34 +01:00
Marc Bonnici
39121caf66 workloads/gfxbench: Fix using the correct scrollable element.
On smaller devices there can be multiple scrollable elements; ensure
we scroll the correct one to identify tests.
2020-06-19 11:24:46 +01:00
Marc Bonnici
83da20ce9f output_processor/postgres: Fix events sql command 2020-06-15 15:30:56 +01:00
Marc Bonnici
f664a00bdc config/core: Fix handling of deprecated parameters
Provide a warning to the user when attempting to set a deprecated
parameter instead of during validation, and only raise the warning
if a value has been explicitly provided.
2020-06-12 09:24:51 +01:00
Marc Bonnici
443358f513 workloads/gfxbench: Rework score detection
Rework how the result matching is performed. Some tests from
gfxbench provide more than one score per test, and
some provide their output in a different format to others.
Update to perform more flexible matching, as well
as dealing with entries that do not fit on a single results screen.
2020-06-10 11:10:26 +01:00
Marc Bonnici
586d95a4f0 Dockerfile: Add note about mounting volumes with selinux 2020-06-01 12:26:25 +01:00
Marc Bonnici
58f3ea35ec workloads/gfxbench: Fix incorrect parameter name 2020-05-26 20:55:58 +01:00
Marc Bonnici
7fe334b467 workloads/gfxbench: Fix incorrect parameter name 2020-05-26 20:38:58 +01:00
Marc Bonnici
3967071a5e workloads/gfxbench: Fix incorrect parameter name 2020-05-26 20:13:53 +01:00
Marc Bonnici
cd6f4541ca workloads/gfxbench: Move results extraction to the extraction stage 2020-05-21 12:39:25 +01:00
Marc Bonnici
7e6eb089ab workloads/geekbench: Update result screen matching criteria
Update the element that is searched for as on some devices this can
match before all the tests are complete.
2020-05-21 12:39:25 +01:00
Marc Bonnici
491dcd5b5b Dockerfile: Update with support for additional instruments
Ensure support is present in the Docker image for instruments that
require trace-cmd, monsoon or iio-capture.
2020-05-21 12:39:25 +01:00
Marc Bonnici
7a085e586a workloads/gfxbench: Allow configuration of tests to be run
Allow the user to customise which tests are to be run on the device.
2020-05-21 12:39:25 +01:00
Marc Bonnici
0f47002e4e fw/getters: Use the assets_repository as the default for the filer 2020-05-21 12:39:25 +01:00
Marc Bonnici
6ff5abdffe fw/config: Remove whitespace 2020-05-21 12:39:25 +01:00
Marc Bonnici
82d09612cb fw/config: Add default to `assets_repository' 2020-05-21 12:39:25 +01:00
Marc Bonnici
ecbfe32b9d docs: Update python2 style print statements 2020-05-21 12:39:25 +01:00
Marc Bonnici
2d32d81acb utils/file_lock: Create lock files in system temp directory
Use the original file path to create a lock file in the system temp
directory. This prevents issues where we are attempting to lock a file
where wa does not have permission to create new files.
2020-05-19 17:55:40 +01:00
Marc Bonnici
b9d593e578 fw/version: Development version bump
Bump dev version to synchronise interface for SSHConnection with devlib.
2020-05-13 16:43:03 +01:00
Marc Bonnici
1f8be77331 Disable pep8 errors 2020-05-13 16:43:03 +01:00
Marc Bonnici
66f0edec5b descriptor/SSHConnection: Expose use_scp parameter
Allow specifying to use scp for file transfer rather than sftp as
this is not supported by all targets.
2020-05-13 16:43:03 +01:00
Marc Bonnici
e2489ea3a0 descriptor/ssh: Add note to password parameter for passwordless target
For a passwordless target the `password` parameter needs to be set to an
empty string to prevent attempting ssh key authentication.
2020-05-13 16:43:03 +01:00
Rob Freeman
16be8a70f5 Fix pcmark setup
* PCMark sometimes auto-installs without the need for clicking the button;
in such cases the workload throws a UiObjectNotFound exception.
* Added logic to check for installation button existence.
* Increased install wait time to 5 mins.
2020-04-28 09:57:41 +01:00
Rob Freeman
dce07e5095 Update gfxbench to log correct scores.
* Updated regex to reflect correct test name.

* Enabling/disabling tests on setup was missing tessellation on some devices
  so the toggle order was changed.

* Sometimes whilst collecting scores the workload grabs the wrong score.
  Updated to check the name of the test before grabbing the score.

* Tested on mate20, xperia, s9-exynos, s10-exynos, pixel-4
2020-04-27 13:54:48 +01:00
Marc Bonnici
711bff6a60 docs/plugins: Fix formatting and typos 2020-04-27 13:53:29 +01:00
Marc Bonnici
2a8454db6a docs/how_tos: Update example workload creation
Clarify the workload creation example.
2020-04-27 13:53:29 +01:00
Marc Bonnici
9b19f33186 target/descriptor: Fix overwriting variable
Ensure we don't overwrite `conn_params` in the inner for loop.
2020-04-17 13:03:27 +01:00
Marc Bonnici
53faf159e8 target/descriptor: Cosmetic fixes
Fix typo and choose more descriptive variable name.
2020-04-17 13:03:27 +01:00
Marc Bonnici
84a9526dd3 target/descriptor: Fix handling of custom Targets 2020-04-17 13:03:27 +01:00
Marc Bonnici
a3cf2e5650 descriptor: Fix overriding of parameters
Make sure we only override parameters that are present in the current
config. This allows for connection parameters to be supplied for a
platform but only overridden if required for the connection.
2020-04-16 09:44:17 +01:00
Marc Bonnici
607cff4c54 framework: Lock files which could be read/written to concurrently
Add file locking to files that could be read and written to concurrently
by separate wa processes causing race conditions.
2020-04-09 09:14:39 +01:00
Marc Bonnici
d56f0fbe20 utils/misc: Add file locking context manager
Enable automatic locking and unlocking of a provided file path. Used to
prevent synchronisation issues between multiple wa processes.
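A minimal usage sketch, assuming the context manager is exposed as `lock_file`
in `wa.utils.misc` (the exact name is inferred from this description):

    from wa.utils.misc import lock_file  # name assumed

    # Serialise access to state shared between concurrent wa processes:
    with lock_file('/path/to/shared/state'):
        with open('/path/to/shared/state') as fh:
            data = fh.read()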
2020-04-09 09:14:39 +01:00
Marc Bonnici
0f9c20dc69 target/descriptor: Add support for connection parameter overriding.
Allow for overriding connection parameters on a per platform basis, and
make the `host` parameter for `Juno` optional as this can be auto
detected via the serial connection.
2020-04-09 09:10:11 +01:00
Marc Bonnici
310bad3966 target/descriptor: Rework how parameter defaults are overridden.
Instead of supplying only the parameter name and value to be set as a
default, allow for replacing the entire parameter object as this allows
more control over what needs overriding for a particular platform.
2020-04-09 09:10:11 +01:00
Marc Bonnici
a8abf24db0 fw/descriptor: Add unsupported_platforms for a particular target
Allow for specifying a list of `Platforms` that a particular target does
not support, e.g. 'local_juno'
2020-04-09 09:10:11 +01:00
Marc Bonnici
dad0a28b5e logcat_parsing: Replace errors when decoding logcat output
Some devices print non-standard characters to logcat. If an error
occurs when parsing the output, replace the offending character instead
of raising an error.
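In effect this amounts to decoding with Python's built-in 'replace' error
handler; a sketch (`raw_output` is a placeholder for the bytes read from logcat):

    # Undecodable bytes become U+FFFD instead of raising UnicodeDecodeError:
    text = raw_output.decode('utf-8', errors='replace')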
2020-04-07 14:15:24 +01:00
Marc Bonnici
2cd4bf7e31 Add initial issue templates. 2020-04-02 10:18:47 +01:00
Rob Freeman
5049e3663b Force speedometer to use chrome and change to ApkUiAutoWorkload
* Workload was failing when chrome was not set as the default browser so
  altered to use chrome every time.

* Changed workload to an ApkUiAutoWorkload since chrome is now a
  dependency.

* Refactored opening speedometer to new method.

* Added wait time for scores to show up when test finished.
2020-03-31 10:49:25 +01:00
Rob Freeman
c9ddee761a Update framework to wait for object before dismissing chrome popup
* Added a wait-for-exists check for the Google terms accept button.

* Reduced wait time for device sync negative button to reduce workload run
  time.
2020-03-31 10:49:25 +01:00
scojac01
3be00b296d Androbench: Handle storage permissions prompt.
Updating the workload to handle the storage permission prompts that appear on certain devices.
2020-03-25 18:21:43 +00:00
scojac01
9a931f42ee Handle the common chrome browser popup messages.
The Chrome browser presents a number of popups when run for
the first time. This update handles those popup messages.
2020-03-25 16:35:04 +00:00
Marc Bonnici
06ba8409c1 target/descriptor: Make strict_host_check default to False
The majority of users will not benefit from the additional
check, so make this parameter default to `False` instead.
2020-03-12 11:21:07 +00:00
Marc Bonnici
2da9370920 target/descriptor: Ensure we set a default SSH port. 2020-03-06 19:16:47 +00:00
Marc Bonnici
ef9b4c8919 fw/version: Dev version bump
Bump the dev version of WA and required devlib version to ensure
that both repos stay in sync to accommodate the SSH interface
change.
2020-03-06 17:34:30 +00:00
Marc Bonnici
31f4c0fd5f fw/descriptor: Add parameter list for Telnet connections.
`TelnetConnection` no longer uses the same parameter list as
`SSHConnection` so create its own parameter list.
2020-03-06 17:34:30 +00:00
Marc Bonnici
62ca7c0c36 fw/SSHConnection: Deprecate parameters for Paramiko implementation
Deprecate parameters for the new implementation of the SSHConnection
based on Paramiko.
2020-03-06 17:34:30 +00:00
Marc Bonnici
d0f099700a fw/ConfigurationPoints: Add support for deprecated parameters
Allow specifying that a ConfigurationPoint is deprecated. This means that any
supplied configuration will not be used however execution will continue
with a warning displayed to the user.
2020-03-06 17:34:30 +00:00
Sergei Trofimov
5f00a94121 utils/types: fix toggle_set creation
Correctly handle the presence of both an element and its toggle in the
input, and handle them based on order, e.g.

toggle_set(['x', 'y', '~x']) --> {'y', '~x'}
toggle_set(['~x', 'y', 'x']) --> {'y', 'x'}
2020-02-19 17:02:58 +00:00
Sergei Trofimov
0f2de5f951 util/exec_control: add once_per_attribute_value
Add a decorator to run a method once for all instances that share the
value of the specified attribute.
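A usage sketch; the module path and argument form are assumptions based on
the commit title and description:

    from wa.utils.exec_control import once_per_attribute_value

    class DeviceSetup(object):  # hypothetical class for illustration
        def __init__(self, hostname):
            self.hostname = hostname

        @once_per_attribute_value('hostname')
        def initialize(self):
            # Runs once per distinct hostname, however many DeviceSetup
            # instances share that hostname.
            print('initializing {}'.format(self.hostname))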
2020-02-07 16:49:48 +00:00
Sergei Trofimov
51ffd60c06 instruments: add proc_stat
Add an instrument that monitors CPU load using data from /proc/stat
2020-02-07 14:11:31 +00:00
Marc Bonnici
0a4164349b
Merge pull request #1065 from setrofim/doc-fix
doc: fix Instrument documentation
2020-02-04 13:28:57 +00:00
Sergei Trofimov
fe50d75858 fw/instrument: derive Instrument from TargetedPlugin
Change Instrument to derive from TargetedPlugin rather than Plugin,
which it should have been all along.
2020-02-04 13:28:48 +00:00
Sergei Trofimov
b93a8cbbd6 doc: fix Instrument documentation
Remove reference to `extract_results`, which does not exist for
Instruments (this only exists for Workloads).
2020-02-04 12:49:14 +00:00
Sergei Trofimov
79dec810f3 fw/plugin: move cleanup_assets to TargetedPlugin
Move cleanup_assets from Workload up into TargetedPlugin. This way,
Instruments may also utilize it if they deploy assets.

More generally, it makes sense for it to be inside TargetedPlugin, as
any plugin that interacts with the target may conceivably need to clean
up.
2020-02-04 10:43:26 +00:00
Marc Bonnici
44cead2f76 fw/workload: Prefix TestPackages with "test_"
Previously, when pulling an apk from the target to the host, the default
package name was used for both regular apks and test apks. This could
result in one overwriting the other. To prevent this ensure
`TestPackages` have "test_" prefixed to their filename.
2020-02-03 14:55:39 +00:00
Marc Bonnici
c6d23ab01f workloads/exoplayer: Support Android 10
Android 10 appears to use a new format in logcat when displaying the
PlayerActivity. Update the regex to support both formats.
2020-01-27 15:00:33 +00:00
Marc Bonnici
6f9856cf2e pcmark: Update popup dismissal to be case insensitive.
Some devices use different capitalisation, so ignore case when matching.
2020-01-21 15:46:14 +00:00
Sergei Trofimov
0f9331dafe fw/job: copy classifiers from the spec
Now that classifiers may be added to the job during execution, its
classifiers dict should be unique to each job rather than just returning
them from spec (which may be shared between multiple jobs).
2020-01-17 17:07:52 +00:00
Sergei Trofimov
659e60414f fw/exec: Add add_classifier() method
Add add_classifier() method to context. Allow plugins to add classifiers
to the current job, or the run as a whole. This will ensure that the new
classifiers are propagated to all relevant current and future artifacts
and metrics.
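Illustrative plugin code under this API (the classifier key and value here
are invented):

    # From within a plugin hook that receives the execution context:
    context.add_classifier('thermal_state', 'throttled')
    # Metrics and artifacts added to the current job (or run) from this
    # point on will carry the new classifier.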
2020-01-17 16:38:58 +00:00
Sergei Trofimov
796f62d924 commands/process: partial results + write info
- Correct handling of skipped jobs -- the output directory would not
  have been generated, so do not try to write it.
- Do not attempt to process runs that are in progress, unless forced,
  and do not try to process jobs that have not completed yet.
- Write the run info as well as the result, allowing output processors
  to modify it (e.g. adjusting run names).
2020-01-17 10:59:56 +00:00
Marc Bonnici
f60032a59d fw/target/manager: Update to use module_name_set
Allow for comparing which modules are installed on a Target when
additional module configuration is present.
2020-01-16 15:55:29 +00:00
Marc Bonnici
977ce4995d utils/types: Add module_name_set type
The list of modules retrieved from a `Target` may include configuration
as a dictionary. This helper function will produce a set of only the
module names allowing for comparison.
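A sketch of the described behaviour, assuming the type is exposed in
`wa.utils.types`:

    from wa.utils.types import module_name_set

    # Entries may be plain names or name-to-configuration mappings:
    modules = ['cpufreq', {'cpuidle': {'some_option': 1}}]
    module_name_set(modules)  # -> {'cpufreq', 'cpuidle'}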
2020-01-16 15:55:29 +00:00
scojac01
a66251dd60 Antutu: Updating to work with major version 8.
The update to Antutu major version 8 has changed a lot of element names.
There have also been changes to the tests run in three of the four categories.

This commit handles those updates while also retaining backwards compatibility
with major version 7.
2020-01-15 12:59:43 +00:00
Marc Bonnici
d3adfa1af9 fw/getters: Pylint fix 2020-01-14 13:24:51 +00:00
Marc Bonnici
39a294ddbe utils/types: Update version_tuple to allow splitting on "-"
Some APKs use "-" characters to separate their version and identifier, so
treat it as a separator value.
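Illustrative behaviour (the exact output shape is an assumption based on this
and the adjacent version_tuple commit, which makes the components strings):

    from wa.utils.types import version_tuple

    # Both '.' and '-' act as separators:
    version_tuple('4.4.2-arm64')  # -> ('4', '4', '2', 'arm64')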
2020-01-14 13:24:51 +00:00
Marc Bonnici
164095e664 utils/types: Update version_tuple to use strings
The versionName field of an APK may contain non-numerical
characters, so update the type to be a string.
2020-01-14 13:24:51 +00:00
Sergei Trofimov
24a4a032db fw/getters: update Executable resolution
Use Executable.match() rather than just checking the path inside
get_from_location(); this allows for alternative matching semantics
(e.g. globbing) inside derived implementations.
2020-01-10 13:56:11 +00:00
Sergei Trofimov
05857ec2bc utils/cpustates: update idle state naming
If idle state names for a cpu could not be discovered, use "idle[N]"
where N is the state number, instead of just marking them all as
"unknown".
2020-01-10 13:32:40 +00:00
Sergei Trofimov
fd8a7e442c utils/trace_cmd: update for Python 3
re._pattern_type became re.Pattern in Python 3.
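A compatibility sketch of the kind of change this implies:

    import re

    # re._pattern_type was removed in Python 3; newer versions expose re.Pattern.
    try:
        PatternType = re.Pattern
    except AttributeError:  # older interpreters
        PatternType = re._pattern_type

    assert isinstance(re.compile(r'\d+'), PatternType)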
2020-01-10 13:31:30 +00:00
Marc Bonnici
dfb4737e51 Development version bump 2019-12-20 16:25:01 +00:00
Marc Bonnici
06518ad40a Version bump for release 2019-12-20 16:07:10 +00:00
Marc Bonnici
009fd831b8 docs/changelog: Update changelog for version 3.2 release. 2019-12-20 16:07:10 +00:00
Marc Bonnici
88284750e7 Dockerfile: Update to reference new release of WA and devlib 2019-12-20 16:07:10 +00:00
Marc Bonnici
8b337768a3 Dockerfile: Update ubuntu base to 19.10 2019-12-20 16:07:10 +00:00
Marc Bonnici
38aa9d12bd fw/entrypoint: Fix devlib version check
The absence of a value in the "dev" version field indicates a release
version; ensure this is taken into account when comparing version numbers.
2019-12-20 16:07:10 +00:00
Marc Bonnici
769c883a3a requirements: Update to latest known working package versions 2019-12-20 16:07:10 +00:00
Marc Bonnici
90db655959 instrument/perf: Fix incorrect argument 2019-12-20 16:07:10 +00:00
Marc Bonnici
817d98ed72 wa/instruments: Refactor collectors to use Collector Interface
Update the WA instruments which rely on the refactored devlib collectors
to reflect the new API.
2019-12-20 15:17:01 +00:00
scojac01
d67668621c Geekbench: Adding 4.4.2 as a supported version
There have been no UI changes to the application so simply adding the
new supported version to the list of accepted versions.
2019-12-19 14:18:10 +00:00
Marc Bonnici
1531ddcdef workloads/speedometer: Only close tabs on supported devices
Some devices don't have the option to close all tabs so don't error if
this element cannot be found.
2019-12-18 10:07:11 +00:00
Marc Bonnici
322f9be2d3 workloads/googleplaybooks: Fix book selector
Do not try to use a parent element of the book entry; search for the entry
directly.
2019-12-18 10:07:11 +00:00
Marc Bonnici
494424c8ea utils/types: Fix ParameterDict update method.
When updating a ParameterDict with another ParameterDict the unencoded
values were being merged. Ensure consistent behaviour by implicitly
iterating via `__iter__`, which will cause ParameterDict values to be
decoded before being re-encoded as expected.
2019-12-18 10:07:11 +00:00
scojac01
ee54a68b65 Updating the framework to include the app package data
As part of our continuous integration system it has become
clear that gathering the app package data as well as the
version name can prove useful.

Adding this functionality to mainline as it could prove
useful to other developers.
2019-12-03 17:48:59 +00:00
Sergei Trofimov
cc1cc6f77f tests: fix pytest warnings
Fix warnings reported when running unit tests via pytest.

- Rename TestDevice to MockDevice, so that it is not interpreted as a
  test case.
- Fix collections abstract base class imports.
- Add an ini file to ignore the same warnings from the "past" package.
2019-12-03 14:03:18 +00:00
scojac01
da0ceab027 PCMark: Updating to handle new Android 10 permission warnings
Android 10 has introduced a new permission warning and a
separate warning about the APK being built for an older version
of the Android OS. Added two checks to accept these
permissions and continue with the workload and the given APK
rather than attempting to update.
2019-11-28 16:02:20 +00:00
scojac01
683eec2377 PCMark: Editing the screen orientation
Some devices have proved to have a natural orientation
that does not lend itself well to this workload. Therefore
I have edited the orientation lock to portrait instead of
natural.
2019-11-22 16:29:59 +00:00
scojac01
07e47de807 Geekbench: Updating supported versions
Adding support for Geekbench 4.3.4 and 4.4.0.

Adding support for Geekbench Corporate 5.0.1 and 5.0.3.

There are no changes required to the functional workload.
2019-11-22 12:07:50 +00:00
Marc Bonnici
5906bca6b3 instruments/acme_cape: Fix missing parameter to get_instruments
The signature of `get_instruments` was missing the `keep_raw` parameter
so fix this and use it as part of the subsequent common invocation.
2019-11-18 16:09:09 +00:00
Marc Bonnici
9556c3a004 docs: Fix typos 2019-11-05 08:35:57 +00:00
Marc Bonnici
1f4bae92bf docs/device_setup: Explicitly mention load_default_modules
This is becoming a commonly used parameter in the `device_config` so
explicitly list its functionality.
2019-11-05 08:35:57 +00:00
Marc Bonnici
dcbc00addd docs/faq: Add workaround for module initialisation failure 2019-11-05 08:35:57 +00:00
Marc Bonnici
4ee75be7ab docs/userguide: Update reference to outdated output_processor 2019-11-05 08:35:57 +00:00
Robert Freeman
796dfb1de6 Update googlephotos workload to work against most recent version
* Updated googlephotos workload to work against version 4.28.0
2019-10-29 16:17:20 +00:00
Robert Freeman
f3e7b14b28 Update googlemaps workload to work against most recent version
* Updated google maps workload to work against apk version 10.19.1
2019-10-29 13:34:08 +00:00
Marc Bonnici
e9839d52c4 output_processor/postgres: Fix out of range for hostid
Change the field type of `hostid` as part of `TargetInfo` from `Int` to
Bigint to prevent some ids from exceeding the maximum value.
2019-10-23 15:45:56 +01:00
Marc Bonnici
7ebbb05934 target/info: Fix missing return statement
Add missing return statement when upgrading a `TargetInfo` POD to v5.
2019-10-23 15:45:56 +01:00
Robert Freeman
13166f66d1 Update gmail workload to work against most recent version
* Updated gmail to work against 2019.05.26.252424914.release.
2019-10-17 14:17:56 +01:00
Marc Bonnici
ab5d12be72 output_processors/cpustates: Improve handling of missing cpuinfo data
Improve checking of whether cpu idle state information is available for
processing.
Add debug message to inform user if the cpuidle module is not detected
on the target.
2019-10-15 15:17:02 +01:00
Marc Bonnici
298bc3a7f3 output_processors/cpustates: Deal with cpufreq data being unavailable
If the `cpufreq` module is not detected as present then warn the user
and process the remaining data instead of crashing.
2019-10-15 15:17:02 +01:00
Marc Bonnici
09d6f4dea1 utils/cpustates: Fix inverted no_idle check
If there is no information about idle states then `no_idle` should be
set to `True` instead of `False`.
2019-10-15 15:17:02 +01:00
Marc Bonnici
d7c95fa844 instrument/energy_measurement: Fix typo and formatting 2019-10-03 12:48:24 +01:00
Marc Bonnici
0efd20cf59 uiauto: Update all applications to target SDK version 28
On devices running android 9 with google play services, PlayProtect
blocks the installation of our automation apks due to targeting a lower
SDK version. Update all apk builds to target SDK version 28 (Android 9)
however do not change the minimum version to maintain backwards
compatibility.
2019-10-03 12:18:48 +01:00
Marc Bonnici
e41aa3c967 instruments/energy_measurement: Add a keep_raw parameter
Add a `keep_raw` parameter to control whether raw files should be
deleted during teardown.
2019-10-03 11:38:29 +01:00
Marc Bonnici
3bef4fc92d instrument/energy_measurement: Invoke teardown method for backends
Forward the teardown method invocation to the instrument backend.
2019-10-03 08:39:53 +01:00
Scott Jackson
0166180f30 PCMark: Fixing click co-ordinates
The workload is clicking the run button in the centre
of the element but this is no longer starting the
run operation.

Refactoring the code to click in the top left of the
object seems to rectify the issue.
2019-09-24 16:01:37 +01:00
Robert Freeman
a9f3ee9752 Update adobe reader workload to work with most recent version
Updated adobe reader workload to work with version 19.7.1.10709
2019-09-24 12:57:26 +01:00
Scott Jackson
35ce87068c Antutu: Updating to handle the new shortcut popup
Added logic to dismiss the new popup message which asks
to add a shortcut to the android homescreen.
2019-09-19 14:07:50 +01:00
Robert Freeman
6beac11ee2 Add simpleperf type to perf instrument
* Added simpleperf type to perf instrument as it's more stable
  on Android devices.
* Added record command to instrument
* Added output processing for simpleperf
2019-09-18 12:55:59 +01:00
Sergei Trofimov
2f231b5ce5 fw/target: detect module variations in TargetInfo
- Add modules entry to TargetInfo
- When retrieving TargetInfo from cache, make sure info modules match
  those for the current target, otherwise mark info as stale and
  re-generate.
2019-09-12 15:27:23 +01:00
Marc Bonnici
75878e2f27 uiauto/build_scripts: Update to use python3
Ensure we are invoking python3 when attempting to import `wa` and
update the printing syntax to be compatible.
2019-09-06 11:02:13 +01:00
Marc Bonnici
023cb88ab1 templates/setup: Update package setup template to specify python3 2019-09-06 11:02:13 +01:00
Marc Bonnici
d27443deb5 workloads/rt_app: Update to use python3
Update the workload generation script to be python3 compatible and
invoke with python3.
2019-09-06 11:02:13 +01:00
Scott Jackson
1a15f5c761 Geekbench: Adding support for Version 5 of Geekbench Corporate 2019-09-06 10:34:41 +01:00
Marc Bonnici
d3af4e7515 setup.py: Update pandas version restrictions
Pandas versions 0.25+ require Python 3.5.3 as a minimum, so ensure that
an older version of pandas is installed for older versions of Python.
2019-08-30 14:03:03 +01:00
Marc Bonnici
73b0b0d709 readthedocs: Add ReadtheDocs configuration
Provide configuration file for building the documentation. We need to
specify to use python3 explicitly.
2019-08-28 15:03:38 +01:00
Marc Bonnici
bb18a1a51c travis: Remove tests for python2.7 2019-08-28 11:46:26 +01:00
Marc Bonnici
062be6d544 output_processors/postgresql: Don't reconnect if not initialised
Update check to clarify that we should not attempt to reconnect to
the database if the initialisation of the output processor failed.
2019-08-28 11:33:46 +01:00
Marc Bonnici
c1e095be51 output_processors/postgresql: Ensure still connected to the database
Before exporting output, ensure that we are still connected to the
database. The connection may have been dropped, so reconnect if necessary;
this is more of an issue with longer-running jobs.
2019-08-28 11:33:46 +01:00
Marc Bonnici
eeebd010b9 output_processors/postgresql: Group database connection operations
Refactors connection operations into the `connect_to_database`
method.
2019-08-28 11:33:46 +01:00
Marc Bonnici
e387e3d9b7 Update to remove Python 2 as a supported version. 2019-07-19 17:07:46 +01:00
Marc Bonnici
6042fa374a fw/version: Version Bump 2019-07-19 17:07:46 +01:00
Marc Bonnici
050329a5ee fw/version: Update version for WA and required devlib 2019-07-19 16:37:00 +01:00
Marc Bonnici
d9e7aa9af0 Dockerfile: Update to use the latest versions of WA and devlib 2019-07-19 16:37:00 +01:00
Marc Bonnici
125cd3bb41 docs/changes: Changelog for v3.1.4 2019-07-19 16:37:00 +01:00
Marc Bonnici
75ea78ea4f docs/faq: Fix formatting 2019-07-19 16:37:00 +01:00
Marc Bonnici
12bb21045e instruments/SysfsExtractor: Add extracted directories as artifacts
Add the directories that have been extracted by the `SysfsExtractor` and
derived instruments as artifacts.
2019-07-19 16:36:11 +01:00
Marc Bonnici
4bb1f4988f fw/DatabaseOutput: Only attempt to extract config if available
Do not try to parse `kernel_config` if no data is present.
2019-07-19 16:36:11 +01:00
Marc Bonnici
0ff6b4842a fw/DatabaseOutput: Fix the retrieval of job level artifacts 2019-07-19 16:36:11 +01:00
Marc Bonnici
98b787e326 fw/DatabaseOutput: Enabled retrieving of directory artifacts
To provide the same user experience as accessing a directory
artifact from a standard `wa_output`, when attempting to retrieve the
path of the artifact extract the stored tar file to a temporary
location on the host and return that path.
2019-07-19 16:36:11 +01:00
Marc Bonnici
e915436661 commands/postgres: Upgrade the database schema to v1.3
Upgrade the database schema to reflect the addition of directory
artifacts and the missing TargetInfo property.
2019-07-19 16:36:11 +01:00
Marc Bonnici
68e1806c07 output_processors/postgresql: Add support for new directory Artifacts
Reflecting the addition of being able to store directories as Artifacts,
enable uploading of a directory as a compressed tar file rather than
storing the file directly.
2019-07-19 16:36:11 +01:00
Marc Bonnici
f19ebb79ee output_processors/postgresql: Add missing system_id field
When storing the `TargetInfo` the `system_id` attribute was omitted.
2019-07-19 16:36:11 +01:00
Marc Bonnici
c950f5ec8f utils/postgres: Fix formatting 2019-07-19 16:36:11 +01:00
Marc Bonnici
6aaa28781b fw/Artifact: Allows adding directories as artifacts
Adds an `is_dir` property to an `Artifact` to indicate that the
artifact represents a directory rather than an individual file.
2019-07-19 16:36:11 +01:00
Marc Bonnici
d87025ad3a output_processors/postgres: Fix empty iterable
In the case of an empty iterable an empty string would be returned;
however, this was not a valid value, so ensure that the brackets are
always inserted into the output.
2019-07-19 16:36:11 +01:00
Marc Bonnici
ac5819da8e output_processors/postgres: Fix incorrect dict keys 2019-07-19 16:36:11 +01:00
Marc Bonnici
31e08a6477 instruments/interrupts: Add interrupt files as artifacts
Ensure that the interrupt files pulled and diffed from the device are
added as artifacts.
2019-07-19 16:36:11 +01:00
scott
47769cf28d Add a workload for Motionmark tests 2019-07-19 14:31:04 +01:00
Marc Bonnici
d8601880ac setup.py: Add README as package description
Add the project README to be displayed as the project description on
PyPI.
2019-07-18 15:17:24 +01:00
Marc Bonnici
0efc9b9ccd setup.py: Clean up extra dependencies
- Remove unused dependencies left over from WA2.
- Allow installing of the `daqpower` module as an optional dependency.
2019-07-18 15:17:24 +01:00
Marc Bonnici
501d3048a5 requirements: Add initial version
Adds a "requirements.txt" to the project. This will not be used during a
standard installation however will be used to indicate which are known
working packages in cases of conflicts.

Update README and documentation to reflect this.
2019-07-18 15:17:24 +01:00
Marc Bonnici
c4daccd800 README: Update installation instruction to match documentation.
When installing from github we recommend installing with setup.py as
installing with pip does not always resolve dependencies correctly.
2019-07-18 15:17:24 +01:00
Marc Bonnici
db944629f3 setup.py: Update classifiers 2019-07-18 15:17:24 +01:00
Marc Bonnici
564738a2ad workloads/monoperf: Fix typos 2019-07-18 15:17:24 +01:00
Marc Bonnici
c092128e94 workloads: Sets requires_network attribute for workloads
Both speedometer and aitutu require internet to function; however, this
attribute was missing from the workloads.
2019-07-18 15:17:24 +01:00
Marc Bonnici
463840d2b7 docs/faq: Add question about non UTF-8 environments. 2019-07-12 13:32:28 +01:00
Marc Bonnici
43633ab362 extras/Dockerfile: Ensure we are using utf-8 in our docker container
For compatibility we want to be using utf-8 by default when we interact
with files within WA so ensure that our environment is configured
accordingly.
2019-07-12 13:32:28 +01:00
Marc Bonnici
a6f0ab31e4 fw/entrypoint: Add check for system default encoding
Check what the default encoding for the system is set to. If this is not
configured to use 'UTF-8', log a warning to the user as this is known
to cause issues when attempting to parse non-ASCII files during operation.
2019-07-12 13:32:28 +01:00
Marc Bonnici
72fd5b5139 setup.py: Set maximum package version for python2.7 support
In the latest versions of pandas and numpy Python 2.7 support has been
dropped, therefore restrict the maximum version of these packages.
2019-07-08 13:46:35 +01:00
Marc Bonnici
766bb4da1a fw/uiauto: Allow specifying landscape and portrait orientation
Previously the `setScreenOrientation` function only accepted relative
orientations; this caused issues when attempting to use it across tablets
and phones with different natural orientations. Now take into account the
current orientation and screen resolution to allow specifying portrait vs
landscape across different types of devices.
2019-07-04 13:18:48 +01:00
Marc Bonnici
a5f0521353 utils/types: Fix typos 2019-06-28 17:56:13 +01:00
Marc Bonnici
3435c36b98 fw/workload: Improve version matching and error propagation
Ensure that the appropriate error message is returned to the user to
outline what caused the version matching to fail.
Additionally, fix the case where, if specifying a package name directly,
the version matching result would be ignored.
2019-06-28 17:56:13 +01:00
Marc Bonnici
bd252a6471 fw/workload: Introduce max / min versions for apks
Allow specifying a maximum and minimum version of an APK to be used for
a workload.
2019-06-28 17:56:13 +01:00
Marc Bonnici
f46851a3b4 utils/types: Add version_tuple
Allow for `version_tuple` to be used more generically to enable
natural comparison of versions encoded as strings.
2019-06-28 17:56:13 +01:00
Marc Bonnici
8910234448 fw/workload: Don't override the package manager for ApkRevent workloads
`ApkRevent` workloads should be able to use the same Apk selection
criteria as `ApkWorkloads` therefore rely on the superclass to
instantiate the `PackageHandler`.
2019-06-28 17:56:13 +01:00
Marc Bonnici
1108c5701e workloads: Update to better utilize cleanup_assets and uninstall
Update the workload classes to attempt to standardize the use of the
`cleanup_assets` parameter and the newly added `uninstall` parameter.
2019-06-28 17:54:04 +01:00
Marc Bonnici
f5d1a9e94a fw/workload: Add the uninstall parameter to the base workload class
In addition to being able to specify whether the APK should be
uninstalled as part of an `ApkWorkload`'s teardown, add the `uninstall`
parameter to the base `workload` class in order to specify whether any
binaries installed for a workload should be uninstalled again.
2019-06-28 17:54:04 +01:00
Marc Bonnici
959106d61b fw/workload: Update description of cleanup_assets parameter
Improve the description of the parameter as it may be used in
other places aside from the teardown method.
2019-06-28 17:54:04 +01:00
Pierre-Clement Tosi
0aea3abcaf workloads: Add support for UIBench Jank Tests
Add a workload that launches UIBenchJankTests. This differs from the
UIBench application as it adds automation and instrumentation to that
APK. This therefore requires a different implementation than classical
ApkWorkloads as 2 APKs are required (UIBench and UIBenchJankTests) and
the main APK is invoked through `am instrument` (as opposed to `am
start`).
2019-06-28 09:27:56 +01:00
Pierre-Clement Tosi
24ccc024f8 framework.workload: am instrument APK manager
Add support for Android applications that are invoked through `am
instrument` (as opposed to `am start`) _i.e._ that have been
instrumented. See AOSP `/platform_testing/tests/` for examples of such
applications.
2019-06-28 09:27:56 +01:00
Marc Bonnici
42ab811032 workloads/lmbench: Fix missing run method declaration 2019-06-19 11:28:28 +01:00
Marc Bonnici
832ed797e1 fw/config/execution: Raise error if no jobs are available for running
If no jobs have been generated that are available for running then WA
will crash when trying to access the job queue. Add an explicit check to
ensure that a sensible error is raised in this case, for example if
attempting to run a specific job ID that is not found.
2019-06-06 15:17:42 +01:00
Marc Bonnici
31b44e447e setup.py: Add missing dependency for building documentation
In later versions of sphinx the rtd theme needs installing explicitly
as it is no longer included in the main package.
2019-06-04 14:53:59 +01:00
Marc Bonnici
179b2e2264 Dockerfile: Update to install all available extras for WA and devlib
Install all extras of WA and devlib to be able to use all available
features within the docker container.
2019-06-04 14:53:59 +01:00
Marc Bonnici
22437359b6 setup.py: Change short hand to install all extras to all
In our documentation we detail being able to install the `all` extra
as a shorthand for installing all the available extra packages that WA
may require; however, this was actually implemented as `everything`.
2019-06-04 14:53:59 +01:00
Marc Bonnici
2347c8c007 setup.py: Add postgres dependency in extras list 2019-06-04 14:53:59 +01:00
Pierre-Clément Tosi
52a0a79012 build_plugin_docs: Pylint fix
Fix various pylint warnings.
2019-06-04 14:53:20 +01:00
Pierre-Clément Tosi
60693e1b65 doc: Fix build_instrument_method_map script
Fix a wrong call to a function in the script execution path.
2019-06-04 14:53:20 +01:00
Pierre-Clément Tosi
8ddf16dfea doc: Patch for doc generation under Py3
Patch scripts with methods that are supported under Py2.7 and Py3.
2019-06-04 14:53:20 +01:00
Marc Bonnici
9aec4850c2 workloads/uibench: Pylint Fix 2019-05-28 09:33:15 +01:00
scott
bdaa26d772 Geekbench: Updating supported versions to include 4.3.2 2019-05-24 17:47:49 +01:00
Pierre-Clement Tosi
d7aedae69c workloads/uibench: Initial commit
Add support for Android's UIBench suite of tests as a WA workload.
2019-05-24 17:47:35 +01:00
Pierre-Clément Tosi
45af8c69b8 ApkWorkload: Support implicit activity path
If the activity field of an instance of ApkWorkload does not the '.'
character, it is assumed that it is in the Java namespace of the
application. This is similar to how activities can be referred to with
relative paths:
    com.domain.app/.activity
instead of
    com.domain.app/com.domain.app.activity
2019-05-24 17:47:35 +01:00
scott
e398083f6e PCMark: Removing hard coded index to make the workload more robust 2019-05-22 11:07:43 +01:00
Marc Bonnici
4ce41407e9 tests/test_agenda_parser: Ensure anchors can be used as part of agenda
Ensure that yaml anchors and aliases can be used within a WA agenda.
2019-05-17 20:04:33 +01:00
Marc Bonnici
aa0564e8f3 tests/test_agenda_parser: Use custom yaml loader for test cases
Instead of using the default yaml loader, make sure to use our customised
loader. Also move the loading stage into the test cases themselves, as this
should be part of the test case, to ensure that loading functions correctly
for each individual case.
2019-05-17 20:04:33 +01:00
Marc Bonnici
83f826d6fe utils/serializer: Re-fix support for YAML anchors
Include missing `flatten_mapping` call in our implementation of
`construct_mapping`. This is performed by a subclass in the default
implementation which was missing in our previous fix.
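The essence of the fix, sketched against the PyYAML API (the loader class
here is a made-up stand-in for WA's own):

    import yaml
    from collections import OrderedDict

    class _OrderedLoader(yaml.SafeLoader):  # hypothetical name
        pass

    def _construct_mapping(loader, node):
        loader.flatten_mapping(node)  # resolve merge keys so anchors/aliases work
        return OrderedDict(loader.construct_pairs(node))

    _OrderedLoader.add_constructor(
        yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _construct_mapping)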
2019-05-17 20:04:33 +01:00
scott
1599b59770 workloads: add aitutu
Add a workload to execute the Aitutu benchmark.
2019-05-17 13:26:36 +01:00
Marc Bonnici
8cd9862e32 workloads/geekbench: Clean up version handling
The workload could attempt to use the version attribute before it was
discovered in order to assess the workload activity, causing an error;
however, the whole process can be simplified using newer discovery features.
2019-05-17 09:15:23 +01:00
Marc Bonnici
b4ea2798dd tests/test_agenda_parser: Remove attribute assignment
Do not try to assign the name attribute of the yaml loaded agenda as
this is not used.
2019-05-15 19:48:39 +01:00
Marc Bonnici
76e6f14212 utils/serializer: pylint fixes 2019-05-15 19:48:39 +01:00
Marc Bonnici
ce59318e66 utils/serializer: Fix using incorrect loader and imports
- Ensure that the new loader is used when opening files to ensure that our
custom constructors are used.
- Fix missing imports
2019-05-15 19:48:39 +01:00
Sergei Trofimov
5652057adb utils/serializer: fix support for YAML anchors.
Change the way maps get processed by YAML constructor to support YAML
features, such as anchors, while still maintaining dict ordering.
2019-05-15 09:59:14 +01:00
Sergei Trofimov
e9f5577237 utils/serializer: fix error reporting for YAML
When attempting to access the message of an exception, check not only that
e.args is populated, but also that e.args[0] actually contains
something, before defaulting to str(e).
2019-05-15 09:57:52 +01:00
Marc Bonnici
ec3d928b3b docs: Fix incorrect environment variable name 2019-04-26 08:05:51 +01:00
Marc Bonnici
ee8bab365b docs/revent: Clarify the naming conventions for revent recordings
As per https://github.com/ARM-software/workload-automation/issues/968
the current documentation detailing the naming scheme for a revent
recording is unclear. Reword the descriptions focusing on the typical
use case rather than basing them on a customized target class.
2019-04-26 08:05:51 +01:00
Marc Bonnici
e3406bdb74 instruments/delay: Convert module name to identifier
- Ensure cooling module name is converted to identifier when resolving
- Fix typo
2019-04-26 08:04:45 +01:00
Marc Bonnici
55d983ecaf workloads/vellamo: Fix initialization order
Ensure that uiauto parameters are set before calling the super method.
2019-04-26 08:04:45 +01:00
Marc Bonnici
f8908e8194 Dockerfile: Update to newer base and Python version
- Update the base ubuntu image to 18.10 and switch to using Python3 for
installing WA.
- Fix typo in documentation.
2019-04-18 10:48:00 +01:00
Marc Bonnici
dd44d6fa16 docs/api/workload: Update documentation for activity attribute 2019-04-18 10:44:50 +01:00
Marc Bonnici
753786a45c fw/workload: Add activity attribute to APK workloads
Allow specifying an `activity` attribute for an APK based workload which
will override the automatically detected activity from the resolved APK.
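A sketch of a workload using the attribute (package and activity values are
invented):

    from wa import ApkWorkload

    class Example(ApkWorkload):  # hypothetical workload for illustration
        name = 'example'
        package_names = ['com.example.benchmark']
        # Overrides the activity auto-detected from the resolved APK:
        activity = 'com.example.benchmark.MainActivity'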
2019-04-18 10:44:50 +01:00
Marc Bonnici
8647ceafd8 workloads/meabo: Fix incorrect add_metric call 2019-04-03 11:33:27 +01:00
Marc Bonnici
2c2118ad23 fw/resource: Fix attempting to match against empty values
Update the checking of attributes to allow for empty structures, as they
can be set to empty lists etc., and therefore should not be checking
whether they are explicitly `None`.
2019-04-02 07:54:05 +01:00
Marc Bonnici
0ec8427d05 fw/output: Implement retrieving "augmentations" for JobDatabaseOutputs
Enable retrieving augmentations on a per-job basis when using a Postgres
database backend.
2019-03-18 15:26:19 +00:00
Marc Bonnici
cf5c3a2723 fw/output: Add missing "augmentation" attribute to JobOutput
Add attribute to `JobOutput` to allow easy listing of enabled augmentations
for individual jobs rather than just the overall run level.
2019-03-18 15:26:19 +00:00
Marc Bonnici
8ddc1c1eba fw/version: Bump to development version 2019-03-04 15:50:13 +00:00
Marc Bonnici
b5db4afc05 fw/version: Version Bump
Bump to the next revision release.
2019-03-04 15:50:13 +00:00
Marc Bonnici
f977c3dfc8 setup.py: Update PyYaml Dependency 2019-03-04 15:50:13 +00:00
Marc Bonnici
769aae3047 utils/serializer: Explicitly state yaml loader
In newer versions of PyYAML we need to manually specify the `Loader` to
be used as per https://msg.pyyaml.org/load.
`FullLoader` is now the default loader which attempts to avoid arbitrary
code execution; however, if we are running an older version where this is
not available, fall back to the original Loader.
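The fallback reduces to something like:

    import yaml

    # FullLoader exists only in newer PyYAML releases.
    _loader = getattr(yaml, 'FullLoader', yaml.Loader)

    def load(fh):
        return yaml.load(fh, Loader=_loader)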
2019-03-04 15:50:13 +00:00
Marc Bonnici
a1ba3c6f69 utils/misc: Update load structure to use WA's yaml wrapper 2019-03-04 15:50:13 +00:00
Marc Bonnici
536fc7eb92 workloads/stress_ng: Update to use WA's yaml wrapper 2019-03-04 15:50:13 +00:00
Marc Bonnici
de36dacb82 fw/version: Bump to development versions 2019-03-04 10:37:39 +00:00
Marc Bonnici
637bf57cbc fw/version: Bump revison versions
Bump the revision version for WA and the required version for
devlib.
2019-03-04 10:37:39 +00:00
Marc Bonnici
60ffd27bba extras/Dockerfile: Update to use the latest release version 2019-03-04 10:37:39 +00:00
Marc Bonnici
984a74a6ca doc/changes: Update changelog for v3.1.2 2019-03-04 10:37:39 +00:00
Marc Bonnici
5b8dc1779c setup.py: Limit the maximum version of PyYAML
Specify the latest stable release of PyYAML should be installed rather
than the latest pre-release.
2019-03-04 10:37:39 +00:00
Marc Bonnici
ba0cd7f842 fw/target/info: Bump target info version
Due to mismatches in WA and devlib versions, the previous upgrade method
could have been triggered before it was needed and would not be called a
second time. Now that we can be sure that WA and devlib are updated together,
bump the version number again to ensure the upgrade method is called a
second time and the POD is upgraded correctly.
2019-03-04 10:37:39 +00:00
Marc Bonnici
adb3ffa6aa fw/version: Introduce required version for devlib
To ensure that a compatible version of devlib is installed on the system
keep track of the version of devlib that is required by WA and provide a
more useful error message if this is not satisfied.
2019-03-04 10:37:39 +00:00
Marc Bonnici
bedd3bf062 docs/faq: Add entry about missing kernel config errors
Although WA supports automatic updating during parsing of a serialized
`kernel_config` from devlib, if the installed versions of WA and devlib
have become out of sync such that WA has already "updated" the old
implementation, it will not attempt to update it again when devlib is later
updated to use the new implementation, and therefore will not trigger the existing
checks that are in place.
2019-02-21 11:57:32 +00:00
Marc Bonnici
03e463ad4a docs/installation: Add warning about using pip to install from github 2019-02-20 16:30:53 +00:00
Marc Bonnici
2ce8d6fc95 output_processors/postgresql: Add missing default
In the case of no screen resolution being present ensure that a default
is used instead of `None`.
2019-02-14 10:51:38 +00:00
Marc Bonnici
1415f61e36 workloads/chrome: Fix for tablet devices
Some tablet devices use an alternate tab switching method due to the
larger screen space. Add support for adding new tabs via the menu
instead of via the tab switcher.
2019-02-08 14:32:58 +00:00
Marc Bonnici
6ab1ae74a6 wa/apk_workloads: Update to not specify a default apk version.
No longer specify a default version to allow any available apks to be
detected and then choose the appropriate automation based on the
detected version.
Refactor to support the new supported_versions attribute and, since APK
resolution needs to have happened before setting the uiauto parameters,
move assignments to ``initialize``.
2019-02-08 13:56:55 +00:00
Marc Bonnici
a1cecc0002 fw/workload: Add "support_versions" attribute to workloads
Allow for specifying a list of supported APK versions for a workload. If
a specific version is no specified then attempt to a resolve any valid
version for the workload.
2019-02-08 13:56:55 +00:00
Marc Bonnici
0cba3c68dc fw/resource: Support matching APKs on multiple versions.
In the case where a range of APK versions is valid, allow for the matching
process to accommodate a list of versions instead of a single value.
2019-02-08 13:56:55 +00:00
Marc Bonnici
f267fc9277 fw/workload: Use apk version for workload if not set.
If a workload's `version` attribute is not set, and an APK file is
found, use this as the version number. This allows for workloads to not
specify a default version via parameters and for an available APK to be
automatically chosen.
2019-02-08 13:56:55 +00:00
Sergei Trofimov
462a5b651a fw/output: add label property to Metric
Add a "label" property to Metric that combines its name with its
classifiers into a single string.
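Illustrative use (the constructor signature and the exact rendering of the
label are assumptions):

    m = Metric('score', 42, units='points', classifiers={'iteration': 1})
    m.label  # e.g. 'score/iteration=1' -- the name plus classifiers in one string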
2019-02-05 10:27:06 +00:00
pablololo12
7cd7b73f58 Fixed an error emptying the reading buffer of the poller
Fixed indentation.
2019-02-04 09:46:13 +00:00
Marc Bonnici
4a9a2ad105 fw/target/info: Fix for KernelConfig refactor
The Devlib KernelConfig object was refactored in commit
f65130b7c7
therefore update the way KernelConfig objects are deserialized to reflect the new
implementation and provide a conversion for PODs.
2019-01-31 09:44:30 +00:00
Marc Bonnici
9f88459f56 fw/workload: Fix Typo 2019-01-30 15:46:54 +00:00
Marc Bonnici
a2087ea467 workloads/manual: Fix incorrect attribute used to access target 2019-01-30 15:46:54 +00:00
Marc Bonnici
31a5a95803 output_processors/postgresql: Ensure screen resolution is a list
Ensure that the screen resolution is converted to a list to prevent
casting errors.
2019-01-30 15:46:54 +00:00
Marc Bonnici
3f202205a5 doc/faq: Add answer on how to fall back to surfaceflinger 2019-01-28 12:45:10 +00:00
Marc Bonnici
ce7720b26d instruments/fps: Fix Typo 2019-01-28 12:45:10 +00:00
Marc Bonnici
766b96e2ad fw/workload: Add a 'View' parameter to ApkWorkloads
Allow for easy configuration of a view for a particular workload, as this
can vary depending on the device; the view can then be used by certain
instruments, for example `fps`.
2019-01-11 10:12:42 +00:00
Marc Bonnici
3c9de98a4b setup: Update devlib requirements to development versions. 2019-01-11 10:12:26 +00:00
Marc Bonnici
5263cfd6f8 fw/version: Add development tag to version
Add a development tag to the version format instead of using the
revision field.
2019-01-11 10:12:26 +00:00
Marc Bonnici
e312efc113 fw/version: Version bump for minor fixes 2019-01-10 13:21:16 +00:00
Marc Bonnici
0ea9e2fb63 setup: Update devlib dependency to the release version 2019-01-10 13:21:16 +00:00
Marc Bonnici
78090bd94e doc/changes: Add change log for v3.1.1 2019-01-10 13:21:16 +00:00
Marc Bonnici
ef45b6b615 MANIFEST: Fix including all of the wa subdirectory
Ensure that all subfolders are included in the MANIFEST otherwise when
packaging WA there can be missing files.
2019-01-10 13:21:16 +00:00
Marc Bonnici
22c237ebe9 extras/Docker: Update to use latest release version.
Update the dockerfile to use the latest released versions of WA and Devlib.
2019-01-10 13:21:16 +00:00
Sergei Trofimov
ed95755af5 fw/output: better classifiers format for metrics
Use a dict-like string representation for classifiers, rather than the
default OrderedDict one, which is a lot more verbose and difficult to
read.
2019-01-10 13:03:29 +00:00
syl-nktaylor
4c6636eb72 tools/revent: update binaries to latest version
- cross-compiled revent binaries to match latest version (with recording timestamp fix f64aaf6 on 12 Oct 2018)
toolchains used:
gcc-linaro-7.3.1-2018.05-x86_64_arm-linux-gnueabi
gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu

- fixes error in utils/revent.py when reading timestamps from recordings made with previous wa revent binaries
2019-01-07 13:31:07 +00:00
Marc Bonnici
60fe412548 wa/version: Update to development version
Update WA and devlib versions to development tags.
2019-01-04 11:29:10 +00:00
Marc Bonnici
e187e7efd6 fw/version: Version Bump to v3.1.0 2018-12-21 14:31:07 +00:00
Marc Bonnici
d9e16bfebd setup.py: Update devlib release versions
Update WA to use the new release version of devlib on PyPI instead of
the github repo.
2018-12-21 14:31:06 +00:00
Marc Bonnici
d21258e24d docs/changes: Update changelog for V3.1 2018-12-21 14:26:55 +00:00
Marc Bonnici
8770888685 workloads/googleslides: Misc Fixes:
- Move the slide editing test into the main runWorkload instead of
setup.
- On some devices the folder picker has changed layout so add support for
navigating.
- Add support for differently capitalized splash buttons.
- Add workaround for adding a new slide if clicking the button doesn't work
the first time.
2018-12-21 14:26:55 +00:00
Marc Bonnici
755417f139 workloads/speedometer: Misc Fixes
- Fix formatting
- Skip teardown automation if elements are not present on some devices
instead of failing the workload.
- Give extra time for start button to appear as some devices can be slow
to load.
2018-12-21 14:26:55 +00:00
Marc Bonnici
ba4004db5f workloads/googlemaps: Fix for alternative layouts.
Add additional check for text based directions button as id can be
missing on some devices and allow for skipping the view steps stage for
large screen devices which do not require this step.
2018-12-21 14:26:55 +00:00
Marc Bonnici
87ac9c6ab3 workloads/androbench: Fix extracting benchmark results
On some devices the entire results page fits on one screen and does not
present a scrollable element, therefore only attempt to scroll if
available.
2018-12-21 14:26:55 +00:00
Marc Bonnici
ea5ea90bb6 docs/dev_info/processing_output: Fix formatting 2018-12-21 14:26:55 +00:00
Marc Bonnici
b93beb3f1f commands/show: Revert quoting method switch
In commit bb282eb19c48b5770186d136e8a40c0573ef59b9 devlib's
`escape_double_quotes` method was retired in favour of the `pipes.quote`
method; however, this does not format correctly for this purpose, therefore
revert to the original escaping method.
2018-12-21 14:05:14 +00:00
Marc Bonnici
ca0d2eaaf5 setup.py: Add missing classifier for Python3 2018-12-14 07:44:44 +00:00
Marc Bonnici
06961d6adb docs/how_tos: Fix incorrect spacing 2018-12-14 07:44:44 +00:00
Marc Bonnici
7d8cd85951 doc/rt_params: Fix incorrect documentation of parameter names 2018-12-14 07:44:44 +00:00
Marc Bonnici
6b03653227 fw/rt_config: Update tunables parameter to match other formats
Update RT param `governor_tunables` to `gov_tunables` to match the style
of the other parameters, e.g. `big_gov_tunables`.
2018-12-14 07:44:44 +00:00
Marc Bonnici
a9e254742a fw/rt_param_manager: Add support for aliased parameters
Additionally check for aliases when matching runtime parameters to their
corresponding cfg points.
2018-12-14 07:44:44 +00:00
Marc Bonnici
f2d6f351cb output_processors/postgres: Fix incorrect parameter
When verifying the database schema, the connection, rather than a cursor,
should be passed.
2018-12-07 10:51:18 +00:00
Marc Bonnici
916f7cbb17 docs: Update documentation about database output API and create command 2018-12-07 09:55:17 +00:00
Marc Bonnici
72046f5f0b fw/output: Convert Status enums to/from POD during (de)serialization
Previously the `Status` Enum was converted to a string as part of
serialization; now use the Enum's `to_pod` method and make the
corresponding changes for de-serialization.
2018-12-07 09:55:17 +00:00
Marc Bonnici
4f67cda89f utils/types: When creating an enum also try to deserialize from POD
Allows for recreating an Enum from its full string representation
rather than just its name.
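A sketch of the behaviour this enables (the enum and value names are
illustrative, not confirmed API):

    # Previously only the bare name would deserialize:
    Status.from_pod('OK')
    # Now the full string representation also resolves to the same level:
    Status.from_pod('Status.OK')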
2018-12-07 09:55:17 +00:00
Marc Bonnici
0113940c85 fw/execution: Fix status being assigned as strings 2018-12-07 09:55:17 +00:00
Marc Bonnici
0fb8d261fa fw/output: Add check for schema versions 2018-12-07 09:55:17 +00:00
Marc Bonnici
0426a966da utils/postgres: Relocate functions to retrieve schema information
Move the functions to retrieve schema information to general utilities to
be used in other classes.
2018-12-07 09:55:17 +00:00
Marc Bonnici
eabe15750c commands/create: Allow for upgrading database schema
Provide a method of upgrading existing postgres databases to a new
schema version.
2018-12-07 09:55:17 +00:00
Marc Bonnici
250bf61c4b postgres: Update schema to v1.2
Update the postgres database schema:
    - Rename "resourcegetters" schema to "resource_getters" for
      consistency
    - Rename "retreies" colum to "retry" to better relflect it purpose
    - Store additional information including:
        - POD serialization data
        - Missing target information
        - JSON formatted runstate
2018-12-07 09:55:17 +00:00
Marc Bonnici
64f7c2431e utils/postgres: Rename postgres_convert to house more general methods
Rename the postgres_convert file to allow for placing more general postgres
utility functions.
2018-12-07 09:55:17 +00:00
Marc Bonnici
0fee3debea fw/output: Implement the Output API for using a database backend
Allow for the creation of a RunDatabaseOutput to allow utilizing the WA
output API with run data stored in a postgres database.
2018-12-07 09:55:17 +00:00
Marc Bonnici
423882a8e6 output_processors/postgres: Update target info to use POD representation
Instead of taking values directly when storing target information use
the POD representation to allow for restoring the state.
2018-12-07 09:55:17 +00:00
Marc Bonnici
86287831b3 utils/serializer: Update exception method to support Python3 2018-12-07 09:55:17 +00:00
Marc Bonnici
e81aaf3421 framework/output: Split out common Output functionality
In preparation for the creation of a RunDatabaseOutput, split out
functionality that can be shared.
2018-12-07 09:55:17 +00:00
Marc Bonnici
2d7dc61686 output_processors/postgresql: Serialize parameters in json
To make it easier to deserialize the data again, ensure that the data is
converted to JSON rather than using the built-in string representation.
2018-12-07 09:55:17 +00:00
Marc Bonnici
88a4677434 utils/serializer: Fix attempting to deserialize a single value. 2018-12-07 08:46:12 +00:00
Marc Bonnici
dcf0418379 fw/config/execution: Implement CombinedConfig as Podable
Ensure that the various Configuration structures now have serialization
versions.
2018-12-07 08:46:12 +00:00
Marc Bonnici
1723ac8132 fw/output: Implement Output structures as Podable
Ensure that the various Output structures now have serialization
versions.
2018-12-07 08:46:12 +00:00
Marc Bonnici
1462f26b2e fw/run: Implement Run Structures as Podable
Ensure that Run structures now have serialization versions.
Also fix serialization/de-serialization of the `Status` type, as previously
this was formatted as a string instead of as a POD.
2018-12-07 08:46:12 +00:00
Marc Bonnici
8ee924b896 fw/config/core: Implement Configuration structures as Podable
Ensure that the various Configuration structures now have serialization versions.
2018-12-07 08:46:12 +00:00
Marc Bonnici
92cf132cf2 fw/target/info: Implement TargetInfo structures as Podable
Ensure that the various data structures used to store target information
now have serialization versions.
2018-12-07 08:46:12 +00:00
Marc Bonnici
4ff7e4aab0 utils/serializer: Add Podable Mix-in class
Add a new mix-in class for classes that are serialized to PODs; the aim
of this class is to provide a way to ensure that both the original data
version and the current serialization version are known. When attempting
to de-serialize a POD, the serialization version will be compared to the
latest version in WA; if they do not match, the appropriate method will be
called to upgrade the POD to a known structure state, populating any
missing fields with a sensible default or converting the existing data to
the new format.
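A minimal sketch of how such a mix-in might be used; the attribute and
method naming convention here is an assumption based on this description,
not confirmed API:

    class RunState(Podable):  # hypothetical subclass
        _pod_serialization_version = 2

        @staticmethod
        def _pod_upgrade_v2(pod):
            # Populate a field that did not exist in version 1 PODs.
            pod.setdefault('new_field', None)
            return pod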
2018-12-07 08:46:12 +00:00
Marc Bonnici
e0ffd84239 fw/output: Ensure that Event message is converted to a string
Explicitly convert the passed message into a string, as this is expected
when generating an event summary; otherwise splitting can fail.
2018-12-04 15:15:47 +00:00
Marc Bonnici
d3d5ca9154 workloads/glbench: Port workload to WA3 2018-11-23 17:24:41 +00:00
Marc Bonnici
88f708abf5 target/descriptor: Update default sudo command format
Due to changes introduced in devlib https://github.com/ARM-software/devlib/pull/339
the command placeholder should no longer be in quotes, so remove them from
the default value.
2018-11-21 15:07:25 +00:00
Marc Bonnici
bb282eb19c wa: Remove reference to devlibs escaping methods
As part of https://github.com/ARM-software/devlib/pull/339 the escaping
methods are being removed in favour of using `quote` from `pipes`, so
make the corresponding changes here.
2018-11-21 15:07:25 +00:00
Marc Bonnici
285bc2cd0b workloads/gfxbench: Move test selection into setup phase
Previously the configuration of the tests was performed in the run stage
instead of the setup.
2018-11-20 10:10:19 +00:00
Marc Bonnici
0d9dbe8845 workloads/gfxbench: Fix clicking on select tests
The X coordinate was miscalculated when attempting to load the test
selection menu.
2018-11-20 10:10:19 +00:00
Marc Bonnici
c89ea9875e workloads/gfxbench: Fix parameter description 2018-11-20 10:10:19 +00:00
Marc Bonnici
e4283729c1 workloads/gfxbenchmark: Fix score matching
On some devices the score string obtained can contain extra characters.
Only use the numerical values from the score when converting; if none are
found, set the result to 'NaN'.
2018-11-20 10:10:19 +00:00
Marc Bonnici
a2eb6e96e2 commands/process: Fix initialization of ProcessContext ordering
Ensure that the ProcessContext is initialized before attempting to
initialize any of the output processors.
2018-11-19 10:17:53 +00:00
scott
3bd8f033d5 workloads: Updating geekbench to support v4.3.1
v4.3.1 has made a minor change to the run cpu benchmark element.
Refactoring to support both the new and previous elements.
2018-11-15 15:56:09 +00:00
Marc Bonnici
ea1d4e9071 workloads/gfxbench: Do not clear package data on launch
Clearing the application data each time the workload is run forces
the required assets to be re-installed each time. As the
workload is not affected by persistent state, do not perform the
clearing.
2018-11-15 07:54:43 +00:00
Marc Bonnici
cc0cfaafe3 fw/workload: Add attribute to control if package data should be cleared.
Allow specifying that the package data should not be cleared
before starting the workload.
2018-11-15 07:54:43 +00:00
Marc Bonnici
1b4fc68542 workloads/gfxbench: Fix formatting 2018-11-13 13:06:54 +00:00
Marc Bonnici
e40517ab95 workloads/gfxbench: Fix not detecting missing asset popup
Add check for a differently worded popup informing that assets are
missing.
2018-11-13 13:06:54 +00:00
Sergei Trofimov
ce94638436 fw/target: record page size as part of TargetInfo
Record target.page_size_kb as part of target info.
2018-11-02 12:11:00 +00:00
Sergei Trofimov
d1fba957b3 fw/target: add versioning to TargetInfo
Add a format_version class attribute to TargetInfo to track format
changes. This is checked when deserializing from a POD to catch format
changes between cached and freshly-obtained TargetInfo instances.
2018-11-02 12:11:00 +00:00
Marc Bonnici
17bb0083e5 doc/installation: Update installation instructions
Update the instructions for installing WA from git not to use pip, as
this method does not process dependency_links correctly and results in an
incompatible version of devlib being installed.
2018-10-25 10:32:28 +01:00
Marc Bonnici
c4ad7467e0 doc: Fix formatting and typo 2018-10-25 10:32:28 +01:00
Marc Bonnici
2f75261567 doc: Add WA icon to documentation 2018-10-25 10:32:28 +01:00
Marc Bonnici
281eb6adf9 output_processors/postgresql: Refactor and fix uploading duplication
Previously, run-level artifacts would be added with a particular job_id,
and updated artifacts would be stored as new objects each time. Refactor
to remove unnecessary instance variables, only provide a job_id when
required, and add an update capability for large objects to ensure this
does not happen.
2018-10-24 10:42:28 +01:00
Marc Bonnici
576df80379 output_processors/postgres: Move logging message
Print the debug message warning about writing a large object to the
database before writing the object.
2018-10-24 10:42:28 +01:00
Marc Bonnici
f2f210c37f utils/postgres_convert: PEP8 fix
Remove unused local variable.
2018-10-24 10:34:44 +01:00
Marc Bonnici
6b04cbffd2 workloads: Fix whitespace 2018-10-24 10:34:44 +01:00
Marc Bonnici
dead312ff7 workloads/uiauto: Update workloads to dismiss android version warning
Update workloads that use uiautomator and can display a warning about
using an old version of the app to dismiss the popup if present.
2018-10-24 10:34:44 +01:00
Marc Bonnici
7632ee8288 fw/uiauto: Add method to baseclass to dismiss android version popup
In Android Q a popup will be displayed warning if the application has
not been designed for the latest version of Android. This has so far been
dealt with on a per-workload basis; however, as this is a common popup,
add a method to the base class to dismiss it if present.
2018-10-24 10:34:44 +01:00
Lisa Nguyen
8f6b1a7fae workloads/vellamo: Close warning popup message to run on Android Q
While attempting to run vellamo on Android Q, a popup warning with
the message, "This app was built for an older version of Android and may not
work properly. Try checking for updates, or contact the developer." would
appear, causing the workload to halt.

Close the popup warning before dismissing EULA and executing the remaining
steps to run vellamo.

Tested with vellamo apk version 3.2.4.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2018-10-23 10:17:45 +01:00
syltaylor
f64aaf64a0 tools/revent: recording timestamp fix
- Force-cast start/end timestamps to uint64_t to correct a recording format issue on 32-bit devices (i.e. a 4-byte timespec tv_sec written into an 8-byte memory slot)
2018-10-15 09:48:02 +01:00
Marc Bonnici
7dce0fb208 workloads/jankbench: Ensure logcat monitor thread is terminated
Previously the LogcatRunMonitor left the logcat process running in the
background, causing issues with concurrent accesses. Now ensure the thread
terminates correctly.
2018-10-12 13:41:21 +01:00
Marc Bonnici
375a36c155 utils/log: Ensure to convert all arguments to strings
Ensure that all arguments provided for an Exception are converted to
strings before attempting to join them for debug information.
2018-10-09 15:26:53 +01:00
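The gist of the fix, roughly (a sketch rather than the exact WA code):

    def format_exception_args(exc):
        # exc.args may contain ints, bytes, etc.; str.join() would raise
        # a TypeError on non-string items, so convert each one first.
        return '\n'.join(str(arg) for arg in exc.args)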
Marc Bonnici
7c3054b54b commands/run: Update run output with final run config
The RunInfo object in the run output is initially created before the
config has been fully parsed, so attributes for the project and
run name are never updated; once the config has been finalized, make sure
to update the relevant information.
2018-10-09 15:26:53 +01:00
Sergei Trofimov
98727bce30 utils/revent: recording parser fixes
- change magic string literal to a b'' string so that the comparison
  works in python 3
- expand timestamp tuples (struct.unpack always returns a tuple) before
  attempting to cast to float.
2018-10-08 17:46:35 +01:00
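Both fixes boil down to idioms like the following (a simplified sketch; the real revent format is more involved):

    import struct

    MAGIC = b'REVENT'  # a bytes literal, so the comparison works on Python 3

    def read_start_time(fh):
        if fh.read(6) != MAGIC:
            raise ValueError('not a revent recording')
        # struct.unpack always returns a tuple, even for a single field,
        # so expand it before casting to float.
        (start_sec,) = struct.unpack('<Q', fh.read(8))
        return float(start_sec)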
Marc Bonnici
93ffe0434c workloads/meabo: Support python 3
Ensure output is encoded correctly if running with python 3
2018-10-05 10:10:43 +01:00
Marc Bonnici
75f3080c9b workloads: Use uninstall method instead of uninstall_executable
For workloads that support Linux targets, do not use
`uninstall_executable` as this is not available; instead use `uninstall`, as
other targets should be able to determine the appropriate uninstallation
method.
2018-10-05 10:10:43 +01:00
Marc Bonnici
75c0e40bb0 workloads/androbench: Fix extracting results with small resolutions
Previously the workload assumed that all the scores were visible on a
single screen; however, for devices with smaller displays the results need
to be scrolled.
2018-10-03 14:33:09 +01:00
Qais Yousef
e73b299fbe pcmark: update uiautomation to fix Android-Q breakage
A new popup appears when running pcmark on android Q that complains
about the app being built for an older version of android.

Since this popup will be temporary, the fix has to make sure not to
break in the future when this popup disappears or when the test is run
on a compatible version of Android.

To achieve this, we attempt to dismiss the popup and if we timeout we
silently carry on with the test assuming no popup will appear.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2018-10-01 16:12:05 +01:00
scott
9a4a90e0a6 GFXBench: New workload
Creating a new workload to execute the following tests on GFXBench.

* Car Chase
* Car Chase Offscreen
* Manhattan 3.1
* 1080p Manhattan 3.1 Offscreen
* 1440p Manhattan 3.1 Offscreen
* Tessellation
* Tessellation Offscreen
2018-09-25 17:27:58 +01:00
scott
8ba602cf83 Googlephotos: Updating to work with latest version
Updating the googlephotos workload to work with app version 4.0.0.212659618
2018-09-25 10:50:03 +01:00
Marc Bonnici
891ef60f4d configuration: Add support for section groups
Now allows specifying a `group` value for each section, which will
cross-product the sections within that group with the sections in each
other group. Additionally, classifiers will automatically be added to
each job spec with the relevant group information.
2018-09-24 10:17:26 +01:00
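For example, an agenda fragment along these lines (section ids and group names are hypothetical) would produce one job per (frequency, governor) pairing:

    sections:
      - id: freq-low
        group: frequency
      - id: freq-high
        group: frequency
      - id: gov-perf
        group: governor
      - id: gov-save
        group: governor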
Marc Bonnici
6632223ac5 output_processors: Move variable initialization to __init__
In the case of a failure in the initialization of one output processor,
the remaining `initialize` methods may not get called, causing variables
to not be initialized correctly.
2018-09-21 15:06:30 +01:00
Marc Bonnici
5dcac0c8ef output_processors/postgres: Do not process output if not connected
Only try to process the run output if the processor is connected to a
database.
2018-09-21 15:06:30 +01:00
Marc Bonnici
9a9a2c0742 commands/create: Add version check for Postgres Server
The 'jsonB' datatype was only added in v9.4, so ensure that the Postgres
server is running this version or later, and inform the user if this is
not the case.
2018-09-21 15:06:30 +01:00
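A sketch of such a check using psycopg2 (assumed here; the actual implementation may differ):

    import psycopg2

    conn = psycopg2.connect(dbname='postgres', user='postgres')
    # conn.server_version is an integer, e.g. 90400 for v9.4.0.
    if conn.server_version < 90400:
        raise RuntimeError('Postgres server is too old: '
                           'the jsonb type requires v9.4 or later')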
Marc Bonnici
57aa5ca588 fw/version: Add development tag to version number 2018-09-21 15:06:30 +01:00
Marc Bonnici
fb42d11a83 setup: Update devlib dependency version number
Ensure that a sufficiently up-to-date version of devlib is installed, and
enable installing directly from GitHub to satisfy the requirements.
2018-09-21 15:06:30 +01:00
Marc Bonnici
fce506eb02 instruments/misc: Fix typo 2018-09-21 15:06:30 +01:00
Marc Bonnici
a1213cf84e commands/create: Use class name rather than user supplied name
Use the actual name of the plugin instead of the user-supplied value
for consistency, and ensure that duplicate entries cannot be specified.
2018-09-21 15:06:30 +01:00
Marc Bonnici
a7d0b6fdbd target/descriptor: Do not convert the module list to strings
Change the type of the `modules` to `list` so that additional
configuration can be supplied to individual modules as a dict of values.
2018-09-21 15:06:30 +01:00
Marc Bonnici
f5fed26758 doc/device_setup: Fix typo 2018-09-21 15:06:30 +01:00
Marc Bonnici
7d01258bce fw/target/manager: Do not finalize target if not instantiated
In the case of an error occurring during target initialization, do not
attempt to check for disconnection upon finalizing.
2018-09-21 15:06:30 +01:00
scott
ccaca3d6d8 Speedometer: Extending teardown function
Some devices throw errors if too many browser tabs are open. Add a
method to close the tabs in the teardown function.
2018-09-20 10:26:34 +01:00
Waleed El-Geresy
6d654157b2 Add Postgres Output Processor
Add an output processor which uploads the results found in the
wa_output folder to a Postgres database, whose schema is defined by the
WA Create Database command.
2018-09-12 10:13:34 +01:00
Waleed El-Geresy
bb255de9ad Add WA Create Database Command
Add a command to create a PostgreSQL database with supplied parameters
which draws its structure from the supplied schema (Version 1.1). This
database is of a format intended to be used with the forthcoming WA
Postgres output processor.
2018-09-12 10:13:34 +01:00
Marc Bonnici
ca03f21f46 workloads/jankbench: Update to clear logcat using devlib
Leftover code from WA2 meant that logcat was cleared on the device by
the workload directly instead of via devlib; this caused issues if logcat
was still being cleared from other areas of the code.
2018-09-10 13:30:59 +01:00
Marc Bonnici
59e29de285 workloads/jankbench: Replace errors during decoding
When running jankbench, invalid bytes can be read from the device, causing
decoding in the monitor to fail; now replace any invalid sequences.
2018-09-10 13:30:59 +01:00
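The underlying idiom is Python's 'replace' error handler:

    def safe_decode(raw):
        # Invalid UTF-8 sequences become U+FFFD instead of raising
        # UnicodeDecodeError.
        return raw.decode('utf-8', errors='replace')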
scott
0440c41266 Googleslides: Updating the workload to support the new range of Huawei devices 2018-09-06 18:01:24 +01:00
Marc Bonnici
b20b9f9cad instruments/perf: Port the perf instrument to WA3 2018-09-06 08:39:09 +01:00
Marc Bonnici
8cd79f2ac4 fw/instrument: Fix compatibility with Python2
The Python 2 inspect module does not contain the `getfullargspec` method,
so call the appropriate method depending on the Python version.
2018-09-05 15:44:48 +01:00
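The compatibility shim amounts to something like this:

    import inspect
    import sys

    def example(a, b, c=None):
        pass

    if sys.version_info[0] >= 3:
        argspec = inspect.getfullargspec(example)
    else:  # getfullargspec does not exist on Python 2
        argspec = inspect.getargspec(example)
    print(argspec.args)  # ['a', 'b', 'c'] on either version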
Waleed El-Geresy
6239f6ab2f Update documentation
Update documentation with the new API for Output Processors, which includes
the context in the initialize and finalize methods.
2018-09-05 14:40:42 +01:00
Waleed El-Geresy
718f2c1c90 Expose context in OP initialize and finalize
Expose the context to the initialize and finalize functions for Output
Processors. This was found to be necessary for the upcoming PostgreSQL
Output Processor.
2018-09-05 14:40:42 +01:00
scott
4c4fd2a267 Gmail: Minor change to allow workload to run correctly on Huawei devices 2018-09-05 14:01:01 +01:00
Marc Bonnici
6366a2c264 framework/version: Specify default encoding when parsing commit id 2018-08-22 14:41:12 +01:00
Marc Bonnici
42b3f4cf9f commands/create: Add special case for EnergyInstrumentBackends
Previously, when using the create command for adding
EnergyInstrumentBackends, they were treated like any other plugin and
generated incorrect configuration. Now automatically add the
`energy_measurement` instrument and populate its configuration with the
relevant defaults for the specified Backend.
2018-08-14 13:41:39 +01:00
Marc Bonnici
1eaffb6744 commands/create: Only add instruments/output processors once
Ensure that instruments and output processors are only added to the
generated agenda once.
2018-08-14 13:41:39 +01:00
Marc Bonnici
4a9b24a9a8 instruments/energy_measurements: Improve instrument description
Add note to users that all configuration for the backends should be
added through this instrument rather than directly.
2018-08-14 13:41:39 +01:00
Marc Bonnici
5afc96dc4d workloads/pcmark: Fix reading results in python3
Ensure that the results file is decoded when using python3.
2018-08-14 13:39:04 +01:00
Marc Bonnici
d858435c3d utils/version: Fix check to only decode bytes
When using Python3 the returned value of the commit is a byte string and
therefore needs to be decoded.
2018-07-27 10:11:32 +01:00
Marc Bonnici
caf805e851 setup.py: Update minimum version of devlib to be v1.0.0 2018-07-27 10:11:32 +01:00
Marc Bonnici
973be6326a Travis: Update to exercise wa commands
Add tests for additional wa commands.
2018-07-26 12:07:17 +01:00
Marc Bonnici
778bc46217 commands/process: Add dummy method to ProcessContext
In commit d2ece we now track augmentations which are used during a
run in the run_config via the context when installing augmentations.
Update the Process command and its ProcessContext with a dummy method
to reflect this change.
2018-07-26 12:07:17 +01:00
Marc Bonnici
fc226fbb6e fw/execution: Ensure that identifiers are used when retrieving plugins.
Make sure that when retrieving plugin information from the plugin
cache the name is converted to an identifier first.
2018-07-24 11:34:19 +01:00
Marc Bonnici
d007b283df instruments/trace_cmd: Fix reporting on target
If reporting on the target, the extracted trace data file was not
defined; now locate the file correctly.
2018-07-24 11:34:00 +01:00
Sergei Trofimov
8464c32808 tests: add unit test for includes 2018-07-23 16:47:10 +01:00
Sergei Trofimov
e4a856ad03 fw/config: preserve included config files
Save included config files, along with the explicitly-specified config
that included them, under run output's __meta/raw_config/.
2018-07-23 16:47:10 +01:00
Sergei Trofimov
7d833ec112 fw/config: add includes
Add the ability to include other YAML files inside agendas and config
files using "include#:" entries.
2018-07-23 16:47:10 +01:00
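For example, a config file could pull shared settings out into a separate file (the file name here is hypothetical):

    config:
      include#: common-config.yaml
    workloads:
      - dhrystone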
Marc Bonnici
b729f7c9e4 fw/parsers: Ensure plug-in names are converted to an identifier
Ensure that a plug-in's config entry is converted to an identifier before being
stored in the PluginCache so that the relevant configuration can be
retrieved appropriately. For example, this allows both 'trace-cmd' and
'trace_cmd' to be used as config entries to provide configuration for the
'trace-cmd' plugin.
2018-07-19 17:15:26 +01:00
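A minimal sketch of the normalization (WA's real identifier conversion handles more cases):

    def identifier(name):
        # Map separators to underscores so that 'trace-cmd' and
        # 'trace_cmd' land on the same plugin cache key.
        return name.replace('-', '_')

    assert identifier('trace-cmd') == identifier('trace_cmd')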
Marc Bonnici
fbfd81caeb commands/revent: Fix missing target initialization
In commit 8da911 the initialization of the target was split into a
separate method of the TargetManager. Ensure that we call the relevant
method after creating the manager.
2018-07-19 12:13:14 +01:00
Marc Bonnici
0e69a9808d commands/record: Fix argument validation
When ensuring that at least one stage for a workload recording was
present, a check for whether recording for a workload had been specified
was missing.
2018-07-19 12:13:14 +01:00
Marc Bonnici
4478bd4574 travis: Force version 1.9.2 of pylint to be used
Force version 1.9.2 of pylint to be used rather than 2.0.0. Currently
there appears to be issues raising errors that are explicitly ignored.
2018-07-19 08:13:55 +01:00
Sergei Trofimov
0e84cf6d64 dev_scripts/pylint: fix default path
Fix the default scan path used if one has not been specified.
2018-07-18 11:20:48 +01:00
Sergei Trofimov
6d9ec3138c In lint we trust
Quieten pylint with regard to import order.
2018-07-18 11:20:48 +01:00
Sergei Trofimov
e8f545861d fw: cache target info
Cache target info after pulling it from the device. Attempt to retrieve
from cache before querying target.
2018-07-13 15:53:01 +01:00
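The caching pattern, in outline (paths and helpers are illustrative, not WA's actual code):

    import json
    import os

    def query_target(target):
        # Stand-in for the real (slow) device interrogation.
        return {'hostname': target, 'page_size_kb': 4}

    def get_target_info(target, cache_path):
        # Serve from the cache when possible; otherwise query the
        # device and populate the cache for next time.
        if os.path.exists(cache_path):
            with open(cache_path) as fh:
                return json.load(fh)
        info = query_target(target)
        with open(cache_path, 'w') as fh:
            json.dump(info, fh)
        return info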
Sergei Trofimov
770d2b2f0e fw: add cache subdir under $WA_USER_DIRECTORY
Add a sub-directory for caching stuff.
2018-07-13 15:53:01 +01:00
Marc Bonnici
8a2c660fdd Travis: Improve testing organisation
Split the tests into their own jobs so it is easier to see which test is
failing; also only run pep8 and pylint tests with Python 3.6, as it is
unnecessary to run them with both Python versions.
2018-07-13 13:32:41 +01:00
Sergei Trofimov
dacb350992 fw/target: add system_id to TargetInfo
Add the target's system_id to TargetInfo. This ID is intended to be unique
to the combination of hardware and software on the target.
2018-07-13 13:28:50 +01:00
Marc Bonnici
039758948e workloads/androbench: Update uiauto apk with fix 2018-07-11 17:32:08 +01:00
Sergei Trofimov
ce93823967 fw/execution: write config after installing augs
Add Context.write_config() to write the combined config into run output
__meta. Use it after instruments and result processors get installed to
make sure their configuration gets serialized in the output.
2018-07-11 13:28:04 +01:00
Sergei Trofimov
7755363efd fw/config: add get_config() to ConfigManager
Add a method to allow obtaining the combined config after config
manager has been finalized.
2018-07-11 13:28:04 +01:00
Sergei Trofimov
bcea1bd0af fw/config: add resource getter to run config
Track resource getter configuration as part of the run config.
2018-07-11 13:28:04 +01:00
Sergei Trofimov
a062a39f78 fw/config: add installed aug configs to run config
Track configuration used for installed augmentations inside RunConfig.
2018-07-11 13:28:04 +01:00
Sergei Trofimov
b1a01f777f fw/execution: rename things for clarity
- Rename "instrument_name" to "instrument" inside do_execute(), as
  ConfigManger.get_instrument() returns a list of Instrument objects,
  not names.
- To avoid name clash, rename the imported instrument module to
  "instrumentation".
2018-07-11 13:28:04 +01:00
Sergei Trofimov
96dd100b70 utils/toggle_set: fix merge behavior
- Change how "source" and "dest" are handled inside merge() to be more
  sane and less confusing, ensuring that disabling toggles are merged
  correctly.
- Do not drop disabling values during merge, to ensure that merging
  is a transitive operation.
- Add unit tests for the above fixes.
2018-07-11 10:48:00 +01:00
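Roughly the behaviour the fix guarantees (a sketch, not WA's actual ToggleSet code): a '~'-prefixed entry disables a value, and disabling entries survive the merge so that merging remains transitive:

    def merge_toggles(source, dest):
        # Entries from 'source' win; both the enabling and disabling
        # forms of an item are cleared from 'dest' before the source
        # entry (which may itself be disabling) is kept.
        result = set(dest)
        for item in source:
            plain = item.lstrip('~')
            result.discard(plain)
            result.discard('~' + plain)
            result.add(item)
        return result

    assert merge_toggles({'~trace-cmd'}, {'trace-cmd', 'csv'}) == {'~trace-cmd', 'csv'}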
Marc Bonnici
e485b9ed39 utils/version: do not decode bytes
Check that the resulting output inside get_commit() is a str before
attempting to decode it when running on Python 3.
2018-07-11 10:39:38 +01:00
Marc Bonnici
86dcfbf595 workloads/androbench: Fix Formatting 2018-07-10 15:57:18 +01:00
Marc Bonnici
1b58390ff5 workloads/androbench: Fix for devices running Android 8.1
On some devices running Android 8.1 the start benchmark button was
failing to be clicked; this is a workaround to click on the coordinates
of the button instead of the UiObject itself.
2018-07-10 15:57:18 +01:00
Sergei Trofimov
ae4fdb9e77 dev_scripts: pylint: check both Python Versions
Check both "python" and "python3" for the pylint package, as it is
possible that pylint will be installed via Python 3 on Python 2 systems.
2018-07-10 12:56:51 +01:00
Marc Bonnici
5714c8e6a1 wa: Additional pylint fixes 2018-07-10 12:56:51 +01:00
Marc Bonnici
791d9496a7 wa: Pylint Fixes for Travis
Pylint has trouble using imports from the distutils module in
virtualenvs so we need to explicitly ignore these imports.
2018-07-10 12:56:51 +01:00
Marc Bonnici
e8b0d42758 wa: PEP8 Fixes 2018-07-10 12:56:51 +01:00
Marc Bonnici
97cf0ac059 travis: Enable pylint and pep8 checkers 2018-07-10 12:56:51 +01:00
Marc Bonnici
f3dc94b95e travis: Remove tests for Python 3.4
Testing with Python3.4 takes significantly longer than with 2.7 or 3.6
due to having to compile some dependencies from source each time.
2018-07-10 12:56:51 +01:00
Marc Bonnici
ad87a40e06 Travis: Run the idle workload as part of the tests. 2018-07-10 12:56:51 +01:00
Sergei Trofimov
fd1dd789bf fw/output: update internal state on write_config()
Update the internal _combined_config object with the one that
has been written to ensure that the serialized and run time states are
the same.
2018-07-09 16:00:07 +01:00
Sergei Trofimov
c410d2e1a1 I lint, therefore I am
Implement fixes for the most recent pylint version.
2018-07-09 15:59:40 +01:00
Sergei Trofimov
0e0d4e0ff0 dev_scripts: port pylint plugins to Python 3 2018-07-09 15:59:40 +01:00
339 changed files with 15261 additions and 1837 deletions

16
.github/ISSUE_TEMPLATE/bug_report.md

@ -0,0 +1,16 @@
---
name: Bug report
about: Create a report to help resolve an issue.
title: ''
labels: bug
assignees: ''
---
**Describe the issue**
A clear and concise description of what the bug is.
**Run Log**
Please attach your `run.log` detailing the issue.
**Other comments (optional)**


@ -0,0 +1,17 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context about the feature request here.


@ -0,0 +1,10 @@
---
name: 'Question / Support'
about: Ask a question or request support
title: ''
labels: question
assignees: ''
---
**

11
.github/ISSUE_TEMPLATE/question.md

@ -0,0 +1,11 @@
---
name: Question
about: Ask a question
title: ''
labels: question
assignees: ''
---
**Describe your query**
What would you like to know / what are you trying to achieve?

92
.github/workflows/main.yml

@ -0,0 +1,92 @@
name: WA Test Suite
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
types: [opened, synchronize, reopened, ready_for_review]
schedule:
- cron: 0 2 * * *
# Allows running this workflow manually from the Actions tab
workflow_dispatch:
jobs:
Run-Linters-and-Tests:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.8.18
uses: actions/setup-python@v2
with:
python-version: 3.8.18
- name: git-bash
uses: pkg-src/github-action-git-bash@v1.1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
cd /tmp && git clone https://github.com/ARM-software/devlib.git && cd devlib && pip install .
cd $GITHUB_WORKSPACE && pip install .[test]
python -m pip install pylint==2.6.2 pep8 flake8 mock nose
- name: Run pylint
run: |
cd $GITHUB_WORKSPACE && ./dev_scripts/pylint wa/
- name: Run PEP8
run: |
cd $GITHUB_WORKSPACE && ./dev_scripts/pep8 wa
- name: Run nose tests
run: |
nosetests
Execute-Test-Workload-and-Process:
runs-on: ubuntu-22.04
strategy:
matrix:
python-version: [3.7.17, 3.8.18, 3.9.21, 3.10.16, 3.13.2]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: git-bash
uses: pkg-src/github-action-git-bash@v1.1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
cd /tmp && git clone https://github.com/ARM-software/devlib.git && cd devlib && pip install .
cd $GITHUB_WORKSPACE && pip install .
- name: Run test workload
run: |
cd /tmp && wa run $GITHUB_WORKSPACE/tests/ci/idle_agenda.yaml -v -d idle_workload
- name: Test Process Command
run: |
cd /tmp && wa process -f -p csv idle_workload
Test-WA-Commands:
runs-on: ubuntu-22.04
strategy:
matrix:
python-version: [3.7.17, 3.8.18, 3.9.21, 3.10.16, 3.13.2]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: git-bash
uses: pkg-src/github-action-git-bash@v1.1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
cd /tmp && git clone https://github.com/ARM-software/devlib.git && cd devlib && pip install .
cd $GITHUB_WORKSPACE && pip install .
- name: Test Show Command
run: |
wa show dhrystone && wa show generic_android && wa show trace-cmd && wa show csv
- name: Test List Command
run: |
wa list all
- name: Test Create Command
run: |
wa create agenda dhrystone generic_android csv trace_cmd && wa create package test && wa create workload test

28
.readthedocs.yml

@ -0,0 +1,28 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
sphinx:
builder: html
configuration: doc/source/conf.py
# Build the docs in additional formats such as PDF and ePub
formats: all
# Configure the build environment
build:
os: ubuntu-22.04
tools:
python: "3.11"
# Ensure doc dependencies are installed before building
python:
install:
- requirements: doc/requirements.txt
- method: pip
path: .


@ -1,31 +0,0 @@
# Copyright 2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
language: python
python:
- "2.7"
- "3.4"
- "3.6"
install:
- pip install nose
- pip install nose2
script:
- git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && cd /tmp/devlib && python setup.py install
- cd $TRAVIS_BUILD_DIR && python setup.py install
- nose2 -s $TRAVIS_BUILD_DIR/tests


@ -1,2 +1,3 @@
recursive-include scripts *
recursive-include doc *
recursive-include wa *


@ -18,7 +18,7 @@ workloads, instruments or output processing.
Requirements
============
- Python 2.7 or Python 3
- Python 3.5+
- Linux (should work on other Unixes, but untested)
- Latest Android SDK (ANDROID_HOME must be set) for Android devices, or
- SSH for Linux devices
@ -30,7 +30,11 @@ Installation
To install::
git clone git@github.com:ARM-software/workload-automation.git workload-automation
sudo -H pip install ./workload-automation
sudo -H python setup [install|develop]
Note: A `requirements.txt` is included; however, this is designed to be used
as a reference for known working versions rather than as part of a standard
installation.
Please refer to the `installation section <http://workload-automation.readthedocs.io/en/latest/user_information.html#install>`_
in the documentation for more details.


@ -6,7 +6,7 @@ DEFAULT_DIRS=(
EXCLUDE=wa/tests,wa/framework/target/descriptor.py
EXCLUDE_COMMA=
IGNORE=E501,E265,E266,W391,E401,E402,E731,W504,W605,F401
IGNORE=E501,E265,E266,W391,E401,E402,E731,W503,W605,F401
if ! hash flake8 2>/dev/null; then
echo "flake8 not found in PATH"


@ -1,6 +1,4 @@
#!/bin/bash
set -e
DEFAULT_DIRS=(
wa
)
@ -34,7 +32,18 @@ compare_versions() {
return 0
}
pylint_version=$(python3 -c 'from pylint.__pkginfo__ import version; print(version)')
pylint_version=$(python -c 'from pylint.__pkginfo__ import version; print(version)' 2>/dev/null)
if [ "x$pylint_version" == "x" ]; then
pylint_version=$(python3 -c 'from pylint.__pkginfo__ import version; print(version)' 2>/dev/null)
fi
if [ "x$pylint_version" == "x" ]; then
pylint_version=$(python3 -c 'from pylint import version; print(version)' 2>/dev/null)
fi
if [ "x$pylint_version" == "x" ]; then
echo "ERROR: no pylint verison found; is it installed?"
exit 1
fi
compare_versions $pylint_version "1.9.2"
result=$?
if [ "$result" == "2" ]; then
@ -42,12 +51,13 @@ if [ "$result" == "2" ]; then
exit 1
fi
set -e
THIS_DIR="`dirname \"$0\"`"
CWD=$PWD
pushd $THIS_DIR > /dev/null
if [[ "$target" == "" ]]; then
for dir in "${DEFAULT_DIRS[@]}"; do
PYTHONPATH=. pylint --rcfile ../extras/pylintrc --load-plugins pylint_plugins $THIS_DIR/../$dir
PYTHONPATH=. pylint --rcfile ../extras/pylintrc --load-plugins pylint_plugins ../$dir
done
else
PYTHONPATH=. pylint --rcfile ../extras/pylintrc --load-plugins pylint_plugins $CWD/$target


@ -1,3 +1,5 @@
import sys
from astroid import MANAGER
from astroid import scoped_nodes
@ -23,18 +25,19 @@ def transform(mod):
if not text.strip():
return
text = text.split('\n')
text = text.split(b'\n')
# NOTE: doing it this way because the "correct" approach below does not
# work. We can get away with this, because in well-formated WA files,
# the initial line is the copyright header's blank line.
if 'pylint:' in text[0]:
if b'pylint:' in text[0]:
msg = 'pylint directive found on the first line of {}; please move to below copyright header'
raise RuntimeError(msg.format(mod.name))
if text[0].strip() and text[0][0] != '#':
char = chr(text[0][0])
if text[0].strip() and char != '#':
msg = 'first line of {} is not a comment; is the copyright header missing?'
raise RuntimeError(msg.format(mod.name))
text[0] = '# pylint: disable={}'.format(','.join(errors))
mod.file_bytes = '\n'.join(text)
text[0] = '# pylint: disable={}'.format(','.join(errors)).encode('utf-8')
mod.file_bytes = b'\n'.join(text)
# This is what *should* happen, but doesn't work.
# text.insert(0, '# pylint: disable=attribute-defined-outside-init')


@ -1,5 +1,5 @@
#!/usr/bin/env python
# Copyright 2015-2015 ARM Limited
# Copyright 2015-2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -26,10 +26,11 @@ OUTPUT_TEMPLATE_FILE = os.path.join(os.path.dirname(__file__), 'source', 'instr
def generate_instrument_method_map(outfile):
signal_table = format_simple_table([(k, v) for k, v in SIGNAL_MAP.iteritems()],
signal_table = format_simple_table([(k, v) for k, v in SIGNAL_MAP.items()],
headers=['method name', 'signal'], align='<<')
priority_table = format_simple_table(zip(CallbackPriority.names, CallbackPriority.values),
headers=['decorator', 'priority'], align='<>')
decorator_names = map(lambda x: x.replace('high', 'fast').replace('low', 'slow'), CallbackPriority.names)
priority_table = format_simple_table(zip(decorator_names, CallbackPriority.names, CallbackPriority.values),
headers=['decorator', 'CallbackPriority name', 'CallbackPriority value'], align='<>')
with open(OUTPUT_TEMPLATE_FILE) as fh:
template = string.Template(fh.read())
with open(outfile, 'w') as wfh:
@ -37,4 +38,4 @@ def generate_instrument_method_map(outfile):
if __name__ == '__main__':
generate_instrumentation_method_map(sys.argv[1])
generate_instrument_method_map(sys.argv[1])


@ -1,5 +1,5 @@
#!/usr/bin/env python
# Copyright 2014-2015 ARM Limited
# Copyright 2014-2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -25,7 +25,12 @@ from wa.utils.doc import (strip_inlined_text, get_rst_from_plugin,
get_params_rst, underline, line_break)
from wa.utils.misc import capitalize
GENERATE_FOR_PACKAGES = ['wa.workloads', 'wa.instruments', 'wa.output_processors']
GENERATE_FOR_PACKAGES = [
'wa.workloads',
'wa.instruments',
'wa.output_processors',
]
def insert_contents_table(title='', depth=1):
"""
@ -41,6 +46,7 @@ def insert_contents_table(title='', depth=1):
def generate_plugin_documentation(source_dir, outdir, ignore_paths):
# pylint: disable=unused-argument
pluginloader.clear()
pluginloader.update(packages=GENERATE_FOR_PACKAGES)
if not os.path.exists(outdir):
@ -57,7 +63,7 @@ def generate_plugin_documentation(source_dir, outdir, ignore_paths):
exts = pluginloader.list_plugins(ext_type)
sorted_exts = iter(sorted(exts, key=lambda x: x.name))
try:
wfh.write(get_rst_from_plugin(sorted_exts.next()))
wfh.write(get_rst_from_plugin(next(sorted_exts)))
except StopIteration:
return
for ext in sorted_exts:
@ -73,9 +79,11 @@ def generate_target_documentation(outdir):
'juno_linux',
'juno_android']
intro = '\nThis is a list of commonly used targets and their device '\
'parameters, to see a complete for a complete reference please use the '\
'WA :ref:`list command <list-command>`.\n\n\n'
intro = (
'\nThis is a list of commonly used targets and their device '
'parameters; for a complete reference please use the'
' WA :ref:`list command <list-command>`.\n\n\n'
)
pluginloader.clear()
pluginloader.update(packages=['wa.framework.target.descriptor'])
@ -112,7 +120,8 @@ def generate_config_documentation(config, outdir):
if not os.path.exists(outdir):
os.mkdir(outdir)
outfile = os.path.join(outdir, '{}.rst'.format('_'.join(config.name.split())))
config_name = '_'.join(config.name.split())
outfile = os.path.join(outdir, '{}.rst'.format(config_name))
with open(outfile, 'w') as wfh:
wfh.write(get_params_rst(config.config_points))


@ -1,4 +1,7 @@
nose
numpy
pandas
sphinx_rtd_theme>=0.3.1
sphinx_rtd_theme==1.0.0
sphinx==4.2
docutils<0.18
devlib @ git+https://github.com/ARM-software/devlib@master


@ -0,0 +1,78 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="231.99989"
height="128.625"
id="svg4921"
version="1.1"
inkscape:version="0.48.4 r9939"
sodipodi:docname="WA-logo-black.svg">
<defs
id="defs4923" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.70000001"
inkscape:cx="80.419359"
inkscape:cy="149.66406"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1676"
inkscape:window-height="1027"
inkscape:window-x="0"
inkscape:window-y="19"
inkscape:window-maximized="0"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0" />
<metadata
id="metadata4926">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-135.03125,-342.375)">
<path
style="fill:#ffffff;fill-opacity:1;stroke:none"
d="m 239,342.375 0,11.21875 c -5.57308,1.24469 -10.80508,3.40589 -15.5,6.34375 l -8.34375,-8.34375 -15.5625,15.5625 8.28125,8.28125 c -3.25948,5.08895 -5.62899,10.81069 -6.875,16.9375 l -11,0 0,22 11.46875,0 c 1.38373,5.61408 3.71348,10.8741 6.8125,15.5625 l -8.15625,8.1875 15.5625,15.53125 8.46875,-8.46875 c 4.526,2.73972 9.527,4.77468 14.84375,5.96875 l 0,11.21875 14.59375,0 c -4.57581,-6.7196 -7.25,-14.81979 -7.25,-23.5625 0,-5.85191 1.21031,-11.43988 3.375,-16.5 -10.88114,-0.15024 -19.65625,-9.02067 -19.65625,-19.9375 0,-10.66647 8.37245,-19.40354 18.90625,-19.9375 0.3398,-0.0172 0.68717,0 1.03125,0 10.5808,0 19.2466,8.24179 19.90625,18.65625 5.54962,-2.70912 11.78365,-4.25 18.375,-4.25 7.94803,0 15.06896,2.72769 21.71875,6.0625 l 0,-10.53125 -11.03125,0 c -1.13608,-5.58713 -3.20107,-10.85298 -6.03125,-15.59375 l 8.1875,-8.21875 -15.5625,-15.53125 -7.78125,7.78125 C 272.7607,357.45113 267.0827,354.99261 261,353.625 l 0,-11.25 z m 11,46 c -7.73198,0 -14,6.26802 -14,14 0,7.732 6.26802,14 14,14 1.05628,0 2.07311,-0.12204 3.0625,-0.34375 2.84163,-4.38574 6.48859,-8.19762 10.71875,-11.25 C 263.91776,403.99646 264,403.19884 264,402.375 c 0,-7.73198 -6.26801,-14 -14,-14 z m -87.46875,13.25 -11.78125,4.78125 2.4375,6 c -2.7134,1.87299 -5.02951,4.16091 -6.90625,6.75 L 140,416.5 l -4.96875,11.6875 6.21875,2.65625 c -0.64264,3.42961 -0.65982,6.98214 0,10.53125 l -5.875,2.40625 4.75,11.78125 6.15625,-2.5 c 1.95629,2.70525 4.32606,5.00539 7,6.84375 l -2.59375,6.15625 11.6875,4.9375 2.71875,-6.34375 c 3.01575,0.48636 6.11446,0.48088 9.21875,-0.0312 l 2.4375,6 11.78125,-4.75 -2.4375,-6.03125 c 2.70845,-1.87526 5.03044,-4.16169 6.90625,-6.75 l 6.21875,2.625 4.96875,-11.6875 -6.15625,-2.625 c 0.56936,-3.04746 0.64105,-6.22008 0.1875,-9.375 l 6.125,-2.46875 -4.75,-11.78125 -5.90625,2.40625 c -1.8179,-2.74443 -4.05238,-5.13791 -6.59375,-7.0625 L 189.6875,406.9688 178,402.0313 l -2.5,5.84375 c -3.41506,-0.712 -6.97941,-0.8039 -10.53125,-0.21875 z m 165.28125,7.125 -7.09375,19.125 -9.59375,23 -1.875,-42.0625 -14.1875,0 -18.1875,42.0625 -1.78125,-42.0625 -13.8125,0 2.5,57.875 17.28125,0 18.71875,-43.96875 1.9375,43.96875 16.90625,0 0.0312,-0.0625 2.71875,0 1.78125,-5.0625 7.90625,-22.90625 0.0312,0 1.59375,-4.65625 4.46875,-10.40625 7.46875,21.75 -11.125,0 -3.71875,10.75 18.625,0 3.625,10.53125 15,0 -21.4375,-57.875 z m -158,15.875 c 4.48547,0.0706 8.71186,2.76756 10.5,7.1875 2.38422,5.89328 -0.45047,12.61577 -6.34375,15 -5.89327,2.38421 -12.61578,-0.48172 -15,-6.375 -2.3097,-5.70909 0.29002,-12.18323 5.8125,-14.75 0.17811,-0.0828 0.34709,-0.14426 0.53125,-0.21875 1.47332,-0.59605 3.00484,-0.86727 4.5,-0.84375 z m -0.1875,3.40625 c -0.2136,5.4e-4 -0.44162,0.0134 -0.65625,0.0312 -0.79249,0.0658 -1.56779,0.24857 -2.34375,0.5625 -4.13846,1.67427 -6.14302,6.3928 -4.46875,10.53125 1.67428,4.13847 6.3928,6.14301 10.53125,4.46875 4.13847,-1.67428 6.11177,-6.3928 4.4375,-10.53125 -1.27532,-3.15234 -4.29605,-5.07059 -7.5,-5.0625 z"
id="rect4081-3-8"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cccccccccccccccccscscscsccccccccccsssccsscccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccssssccssssssss" />
<g
id="g3117"
transform="translate(-244.99999,-214.64287)">
<g
transform="translate(83.928571,134.28571)"
id="text4037-4-7"
style="font-size:79.3801651px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:DejaVu Sans;-inkscape-font-specification:DejaVu Sans Bold" />
<g
transform="translate(83.928571,134.28571)"
id="text4041-5-8"
style="font-size:79.3801651px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:DejaVu Sans;-inkscape-font-specification:DejaVu Sans" />
</g>
</g>
</svg>



@ -23,6 +23,23 @@ iterating over all WA output directories found.
:param path: must be the path to the top-level output directory (the one
containing ``__meta`` subdirectory and ``run.log``).
WA output stored in a Postgres database by the ``Postgres`` output processor
can be accessed via a :class:`RunDatabaseOutput` which can be initialized as follows:
.. class:: RunDatabaseOutput(password, host='localhost', user='postgres', port='5432', dbname='wa', run_uuid=None, list_runs=False)
The main interface into a Postgres database containing WA results.
:param password: The password used to authenticate with
:param host: The database host address. Defaults to ``'localhost'``
:param user: The user name used to authenticate with. Defaults to ``'postgres'``
:param port: The database connection port number. Defaults to ``'5432'``
:param dbname: The database name. Defaults to ``'wa'``
:param run_uuid: The ``run_uuid`` to identify the selected run
:param list_runs: Will connect to the database and will print out the available runs
with their corresponding run_uuids. Defaults to ``False``
Example
-------
@ -39,6 +56,32 @@ called ``wa_output`` in the current working directory we can initialize a
...: output_directory = 'wa_output'
...: run_output = RunOutput(output_directory)
Alternatively if the results have been stored in a Postgres database we can
initialize a ``RunDatabaseOutput`` as follows:
.. code-block:: python
In [1]: from wa import RunDatabaseOutput
...:
...: db_settings = {
...:     'host': 'localhost',
...:     'port': '5432',
...:     'dbname': 'wa',
...:     'user': 'postgres',
...:     'password': 'wa'
...: }
...:
...: RunDatabaseOutput(list_runs=True, **db_settings)
Available runs are:
========= ============ ============= =================== =================== ====================================
Run Name Project Project Stage Start Time End Time run_uuid
========= ============ ============= =================== =================== ====================================
Test Run my_project None 2018-11-29 14:53:08 2018-11-29 14:53:24 aa3077eb-241a-41d3-9610-245fd4e552a9
run_1 my_project None 2018-11-29 14:53:34 2018-11-29 14:53:37 4c2885c9-2f4a-49a1-bbc5-b010f8d6b12a
========= ============ ============= =================== =================== ====================================
In [2]: run_uuid = '4c2885c9-2f4a-49a1-bbc5-b010f8d6b12a'
...: run_output = RunDatabaseOutput(run_uuid=run_uuid, **db_settings)
From here we can retrieve various information about the run. For example if we
@ -65,7 +108,7 @@ parameters and the metrics recorded from the first job was we can do the followi
Out[5]: u'dhrystone'
# Print out all the runtime parameters and their values for this job
In [6]: for k, v in job_1.spec.runtime_parameters.iteritems():
In [6]: for k, v in job_1.spec.runtime_parameters.items():
...: print (k, v)
(u'airplane_mode': False)
(u'brightness': 100)
@ -73,7 +116,7 @@ parameters and the metrics recorded from the first job was we can do the followi
(u'big_frequency': 1700000)
(u'little_frequency': 1400000)
# Print out all the metrics avalible for this job
# Print out all the metrics available for this job
In [7]: job_1.metrics
Out[7]:
[<thread 0 score: 14423105 (+)>,
@ -92,6 +135,15 @@ parameters and the metrics recorded from the first job was we can do the followi
<total DMIPS: 52793 (+)>,
<total score: 92758402 (+)>]
# Load the run results csv file into pandas
In [7]: pd.read_csv(run_output.get_artifact_path('run_result_csv'))
Out[7]:
id workload iteration metric value units
0 450000-wk1 dhrystone 1 thread 0 score 1.442310e+07 NaN
1 450000-wk1 dhrystone 1 thread 0 DMIPS 8.209700e+04 NaN
2 450000-wk1 dhrystone 1 thread 1 score 1.442310e+07 NaN
3 450000-wk1 dhrystone 1 thread 1 DMIPS 8.720900e+04 NaN
...
We can also retrieve information about the target that the run was performed on
@ -214,7 +266,7 @@ methods
Return the :class:`Metric` associated with the run (not the individual jobs)
with the specified `name`.
:return: The :class`Metric` object for the metric with the specified name.
:return: The :class:`Metric` object for the metric with the specified name.
.. method:: RunOutput.get_job_spec(spec_id)
@ -232,6 +284,56 @@ methods
:return: A list of `str` labels of workloads that were part of this run.
.. method:: RunOutput.add_classifier(name, value, overwrite=False)
Add a classifier to the run as a whole. If a classifier with the specified
``name`` already exists, a ``ValueError`` will be raised, unless
``overwrite=True`` is specified.
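For example (classifier name and value here are illustrative):

.. code-block:: python

    run_output.add_classifier('build', 'r2p0')
    # Overwriting an existing classifier must be requested explicitly:
    run_output.add_classifier('build', 'r2p1', overwrite=True)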
:class:`RunDatabaseOutput`
---------------------------
:class:`RunDatabaseOutput` provides access to the output of a WA :term:`run`,
including metrics, artifacts, metadata, and configuration stored in a Postgres database.
The majority of attributes and methods are the same as :class:`RunOutput`; however, the
notable differences are:
``jobs``
A list of :class:`JobDatabaseOutput` objects for each job that was executed
during the run.
``basepath``
A representation of the current database and host information backing this object.
methods
~~~~~~~
.. method:: RunDatabaseOutput.get_artifact(name)
Return the :class:`Artifact` specified by ``name``. This will only look
at the run artifacts; this will not search the artifacts of the individual
jobs. The `path` attribute of the :class:`Artifact` will be set to the Database OID of the object.
:param name: The name of the artifact to retrieve.
:return: The :class:`Artifact` with that name
:raises HostError: If the artifact with the specified name does not exist.
.. method:: RunDatabaseOutput.get_artifact_path(name)
If the artifact is a file this method returns a `StringIO` object containing
the contents of the artifact specified by ``name``. If the artifact is a
directory, the method returns a path to a locally extracted version of the
directory, which is left to the user to remove after use. This will only look
at the run artifacts; this will not search the artifacts of the individual
jobs.
:param name: The name of the artifact whose path to retrieve.
:return: A `StringIO` object with the contents of the artifact
:raises HostError: If the artifact with the specified name does not exist.
:class:`JobOutput`
------------------
@ -307,16 +409,15 @@ artifacts, metadata, and configuration. It has the following attributes:
methods
~~~~~~~
.. method:: RunOutput.get_artifact(name)
.. method:: JobOutput.get_artifact(name)
Return the :class:`Artifact` specified by ``name`` associated with this job.
:param name: The name of the artifact who's path to retrieve.
:param name: The name of the artifact to retrieve.
:return: The :class:`Artifact` with that name
:raises HostError: If the artifact with the specified name does not exist.
.. method:: RunOutput.get_artifact_path(name)
.. method:: JobOutput.get_artifact_path(name)
Return the path to the file backing the artifact specified by ``name``,
associated with this job.
@ -325,13 +426,58 @@ methods
:return: The path to the artifact
:raises HostError: If the artifact with the specified name does not exist.
.. method:: RunOutput.get_metric(name)
.. method:: JobOutput.get_metric(name)
Return the :class:`Metric` associated with this job with the specified
`name`.
:return: The :class`Metric` object for the metric with the specified name.
:return: The :class:`Metric` object for the metric with the specified name.
.. method:: JobOutput.add_classifier(name, value, overwrite=False)
Add a classifier to the job. The classifier will be propagated to all
existing artifacts and metrics, as well as those added afterwards. If a
classifier with the specified ``name`` already exists, a ``ValueError`` will
be raised, unless `overwrite=True` is specified.
:class:`JobDatabaseOutput`
---------------------------
:class:`JobDatabaseOutput` provides access to the output of a single :term:`job`
executed during a WA :term:`run`, including metrics, artifacts, metadata, and
configuration stored in a Postgres database.
The majority of attributes and methods are the same as :class:`JobOutput`; however the
notable differences are:
``basepath``
A representation of the current database and host information backing this object.
methods
~~~~~~~
.. method:: JobDatabaseOutput.get_artifact(name)
Return the :class:`Artifact` specified by ``name`` associated with this job.
The `path` attribute of the :class:`Artifact` will be set to the Database
OID of the object.
:param name: The name of the artifact to retrieve.
:return: The :class:`Artifact` with that name
:raises HostError: If the artifact with the specified name does not exist.
.. method:: JobDatabaseOutput.get_artifact_path(name)
If the artifact is a file this method returns a `StringIO` object containing
the contents of the artifact specified by ``name`` associated with this job.
If the artifact is a directory, the method returns a path to a locally
extracted version of the directory, which is left to the user to remove after
use.
:param name: The name of the artifact whose path to retrieve.
:return: A `StringIO` object with the contents of the artifact
:raises HostError: If the artifact with the specified name does not exist.
:class:`Metric`
@ -371,6 +517,11 @@ A :class:`Metric` has the following attributes:
or they may have been added by the workload to help distinguish between
otherwise identical metrics.
``label``
This is a string constructed from the name and classifiers, to provide a
more unique identifier, e.g. for grouping values across iterations. The
format is in the form ``name/classifier1=value1/classifier2=value2/...``.
:class:`Artifact`
-----------------
@ -420,7 +571,7 @@ An :class:`Artifact` has the following attributes:
it is the opposite of ``export``, but in general may also be
discarded.
.. note:: whether a file is marked as ``log``/``data`` or ``raw``
.. note:: Whether a file is marked as ``log``/``data`` or ``raw``
depends on how important it is to preserve this file,
e.g. when archiving, vs how much space it takes up.
Unlike ``export`` artifacts which are (almost) always
@ -471,6 +622,12 @@ The available attributes of the class are as follows:
The name of the target class that was used to interact with the device
during the run, e.g. ``"AndroidTarget"``, ``"LinuxTarget"`` etc.
``modules``
A list of names of modules that have been loaded by the target. Modules
provide additional functionality, such as access to ``cpufreq``; which
modules are installed may affect how much of the ``TargetInfo`` has been
populated.
``cpus``
A list of :class:`CpuInfo` objects describing the capabilities of each CPU.


@ -178,6 +178,16 @@ methods.
locations) and device will be searched for an application with a matching
package name.
``supported_versions``
This attribute should be a list of apk versions that are suitable for this
workload; if a specific apk version is not specified, then any available
supported version may be chosen.
``activity``
This attribute can be optionally set to override the default activity that
will be extracted from the selected APK file which will be used when
launching the APK.
``view``
This is the "view" associated with the application. This is used by
instruments like ``fps`` to monitor the current framerate being generated by


@ -2,9 +2,427 @@
What's New in Workload Automation
=================================
-------------
*************
Version 3.3.1
*************
.. warning:: This is the last release supporting Python 3.5 and Python 3.6.
Subsequent releases will support Python 3.7+.
New Features:
==============
Commands:
---------
Instruments:
------------
- ``perf``: Add support for ``report-sample``.
Workloads:
----------------
- ``PCMark``: Add support for PCMark 3.0.
- ``Antutu``: Add support for 9.1.6.
- ``Geekbench``: Add support for Geekbench5.
- ``gfxbench``: Support the non corporate version.
Fixes/Improvements
==================
Framework:
----------
- Fix installation on systems without git installed.
- Avoid querying online cpus if hotplug is disabled.
Dockerfile:
-----------
- Update base image to Ubuntu 20.04.
Instruments:
------------
- ``perf``: Fix parsing csv when using interval-only-values.
- ``perf``: Improve error reporting of an invalid agenda.
Output Processors:
------------------
- ``postgres``: Fixed SQL command when creating a new event.
Workloads:
----------
- ``speedometer``: Fix adb reverse when rebooting a device.
- ``googleplaybook``: Support newer apk version.
- ``googlephotos``: Support newer apk version.
- ``gmail``: Support newer apk version.
Other:
------
- Upgrade Android Gradle to 7.2 and Gradle plugin to 4.2.
***********
Version 3.3
***********
New Features:
==============
Commands:
---------
- Add ``report`` command to provide a summary of a run.
Instruments:
------------
- Add ``proc_stat`` instrument to monitor CPU load using data from ``/proc/stat``.
Framework:
----------
- Add support for simulating atomic writes to prevent race conditions when running concurrent instances of WA.
- Add support for file transfers over SSH connections via SFTP, falling back to the SCP implementation.
- Support detection of logcat buffer overflow and present a warning if this occurs.
- Allow skipping all remaining jobs if a job has exhausted all of its retries.
- Add polling mechanism for file transfers rather than relying on timeouts.
- Add `run_completed` reboot policy to enable rebooting a target after a run has been completed.
Android Devices:
----------------
- Enable configuration of whether to keep the screen on while the device is plugged in.
Output Processors:
------------------
- Enable the use of cascading deletion in Postgres databases to clean up after deletion of a run entry.
Fixes/Improvements
==================
Framework:
----------
- Improvements to the ``process`` command to correctly handle skipped and in-process jobs.
- Add support for deprecated parameters allowing for a warning to be raised when providing
a parameter that will no longer have an effect.
- Switch implementation of SSH connections to use Paramiko for greater stability.
- By default use sftp for file transfers with SSH connections, allow falling back to scp
by setting ``use_scp``.
- Fix callbacks not being disconnected correctly when requested.
- ``ApkInfo`` objects are now cached to reduce re-parsing of APK files.
- Speed up discovery of wa output directories.
- Fix merge handling of parameters from multiple files.
Dockerfile:
-----------
- Install additional instruments for use in the docker environment.
- Fix environment variables not being defined in non interactive environments.
Instruments:
------------
- ``trace_cmd`` additional fixes for python 3 support.
Output Processors:
------------------
- ``postgres``: Fixed SQL command when creating a new event.
Workloads:
----------
- ``aitutu``: Improve reliability of results extraction.
- ``androbench``: Enable dismissing of additional popups on some devices.
- ``antutu``: Now supports major version 8 in addition to version 7.X.
- ``exoplayer``: Add support for Android 10.
- ``googlephotos``: Support newer apk version.
- ``gfxbench``: Allow user configuration of which tests should be run.
- ``gfxbench``: Improved score detection for a wider range of devices.
- ``gfxbench``: Moved results extraction out of run stage.
- ``jankbench``: Support newer versions of Pandas for processing.
- ``pcmark``: Add support for handling additional popups and installation flows.
- ``pcmark``: No longer clear and re-download test data before each execution.
- ``speedometer``: Enable the workload to run offline and drop the requirement for
UiAutomator. To support this, root access is now required to run the workload.
- ``youtube``: Update to support later versions of the apk.
Other:
------
- ``cpustates``: Improved name handling for unknown idle states.
***********
Version 3.2
***********
.. warning:: This release only supports Python 3.5+. Python 2 support has now
been dropped.
Fixes/Improvements
==================
Framework:
----------
- ``TargetInfo`` now tracks installed modules and will ensure the cache is
also updated on module change.
- Migrated the build scripts for uiauto based workloads to Python 3.
- Uiauto applications now target SDK version 28 to prevent PlayProtect
blocking the installation of the automation apks on some devices.
- The workload metadata now includes the apk package name if applicable.
Instruments:
------------
- ``energy_instruments`` will now have their ``teardown`` method called
correctly.
- ``energy_instruments``: Added a ``keep_raw`` parameter to control whether
raw files generated during execution should be deleted upon teardown.
- Update relevant instruments to make use of the new devlib collector
interface, for more information please see the
`devlib documentation <https://devlib.readthedocs.io/en/latest/collectors.html>`_.
Output Processors:
------------------
- ``postgres``: If initialisation fails then the output processor will no
longer attempt to reconnect at a later point during the run.
- ``postgres``: Will now ensure that the connection to the database is
re-established if it is dropped, e.g. due to a long-running workload.
- ``postgres``: Change the type of the ``hostid`` field to ``Bigint`` to
allow a larger range of ids.
- ``postgres``: Bump schema version to 1.5.
- ``perf``: Added support for the ``simpleperf`` profiling tool for android
devices.
- ``perf``: Added support for the perf ``record`` command.
- ``cpustates``: Improve handling of situations where cpufreq and/or cpuinfo
data is unavailable.
Workloads:
----------
- ``adobereader``: Now supports apk version 19.7.1.10709.
- ``antutu``: Supports dismissing of popup asking to create a shortcut on
the homescreen.
- ``gmail``: Now supports apk version 2019.05.26.252424914.
- ``googlemaps``: Now supports apk version 10.19.1.
- ``googlephotos``: Now supports apk version 4.28.0.
- ``geekbench``: Added support for versions 4.3.4, 4.4.0 and 4.4.2.
- ``geekbench-corporate``: Added support for versions 5.0.1 and 5.0.3.
- ``pcmark``: Now locks device orientation to portrait to increase
compatibility.
- ``pcmark``: Supports dismissing new Android 10 permission warnings.
Other:
------
- Improve documentation to help debugging module installation errors.
*************
Version 3.1.4
*************
.. warning:: This is the last release that supports Python 2. Subsequent versions
will support Python 3.5+ only.
New Features:
==============
Framework:
----------
- ``ApkWorkload``: Allow specifying a maximum and minimum version of an APK
instead of requiring a specific version.
- ``TestPackageHandler``: Added to support running android applications that
are invoked via ``am instrument``.
- Directories can now be added as ``Artifacts``.
Workloads:
----------
- ``aitutu``: Executes the Aitutu Image Speed/Accuracy and Object
Speed/Accuracy tests.
- ``uibench``: Run a configurable activity of the UIBench workload suite.
- ``uibenchjanktests``: Run an automated and instrumented version of the
UIBench JankTests.
- ``motionmark``: Run a browser graphical benchmark.
Other:
------
- Added ``requirements.txt`` as a reference for known working package versions.
Fixes/Improvements
==================
Framework:
----------
- ``JobOutput``: Added an ``augmentation`` attribute to allow listing of
enabled augmentations for individual jobs.
- Better error handling for misconfigured job selection.
- All ``Workload`` classes now have an ``uninstall`` parameter to control whether
any binaries installed to the target should be uninstalled again once the
run has completed.
- The ``cleanup_assets`` parameter is now more consistently utilized across
workloads.
- ``ApkWorkload``: Added an ``activity`` attribute to allow for overriding the
automatically detected activity from the APK.
- ``ApkWorkload``: Added support for providing an implicit activity path.
- Fixed retrieving job level artifacts from a database backend.
Output Processors:
------------------
- ``SysfsExtractor``: Ensure that the extracted directories are added as
``Artifacts``.
- ``InterruptStatsInstrument``: Ensure that the output files are added as
``Artifacts``.
- ``Postgres``: Fix missing ``system_id`` field from ``TargetInfo``.
- ``Postgres``: Support uploading directory ``Artifacts``.
- ``Postgres``: Bump the schema version to v1.3.
Workloads:
----------
- ``geekbench``: Improved apk version handling.
- ``geekbench``: Now supports apk version 4.3.2.
Other:
------
- ``Dockerfile``: Now installs all optional extras for use with WA.
- Fixed support for YAML anchors.
- Fixed building of documentation with Python 3.
- Changed shorthand of installing all of WA extras to ``all`` as per
the documentation.
- Upgraded the Dockerfile to use Ubuntu 18.10 and Python 3.
- Restricted maximum versions of ``numpy`` and ``pandas`` for Python 2.7.
*************
Version 3.1.3
*************
Fixes/Improvements
==================
Other:
------
- Security update for PyYAML to attempt prevention of arbitrary code execution
during parsing.
*************
Version 3.1.2
*************
Fixes/Improvements
==================
Framework:
----------
- Implement an explicit check for Devlib versions to ensure that versions
are kept in sync with each other.
- Added a ``View`` parameter to ApkWorkloads for use with certain instruments,
for example ``fps``.
- Added ``"supported_versions"`` attribute to workloads to allow specifying a
list of supported versions for a particular workload.
- Change default behaviour to run any available version of a workload if a
specific version is not specified.
Output Processors:
------------------
- ``Postgres``: Fix handling of ``screen_resolution`` during processing.
Other
-----
- Added additional information to the documentation.
- Added a fix for Devlib's ``KernelConfig`` refactor.
- Added a ``"label"`` property to ``Metrics``.
*************
Version 3.1.1
*************
Fixes/Improvements
==================
Other
-----
- Improve formatting when displaying metrics.
- Update revent binaries to include latest fixes.
- Update DockerImage to use new released version of WA and Devlib.
- Fix broken package on PyPI.
*************
Version 3.1.0
*************
New Features:
==============
Commands
---------
- ``create database``: Added :ref:`create subcommand <create-command>`
command in order to initialize a PostgreSQL database to allow for storing
WA output with the Postgres Output Processor.
Output Processors:
------------------
- ``Postgres``: Added output processor which can be used to populate a
Postgres database with the output generated from a WA run.
- ``logcat-regex``: Add new output processor to extract arbitrary "key"
"value" pairs from logcat.
Configuration:
--------------
- :ref:`Configuration Includes <config-include>`: Add support for including
other YAML files inside agendas and config files using ``"include#:"``
entries.
- :ref:`Section groups <section-groups>`: This allows for a ``group`` entry
to be specified for each section and will automatically cross product the
relevant sections with sections from other groups adding the relevant
classifiers.
Framework:
----------
- Added support for using the :ref:`OutputAPI <output_processing_api>` with a
Postgres Database backend. Used to retrieve and
:ref:`process <processing_output>` run data uploaded by the ``Postgres``
output processor.
Workloads:
----------
- ``gfxbench-corporate``: Execute a set of on and offscreen graphical benchmarks from
GFXBench including Car Chase and Manhattan.
- ``glbench``: Measures the graphics performance of Android devices by
testing the underlying OpenGL (ES) implementation.
Fixes/Improvements
==================
Framework:
----------
- Remove quotes from ``sudo_cmd`` parameter default value due to changes in
devlib.
- Various Python 3 related fixes.
- Ensure plugin names are converted to identifiers internally to act more
consistently when dealing with names containing ``-``'s etc.
- Now correctly updates RunInfo with project and run name information.
- Add versioning support for POD structures with the ability to
automatically update data structures / formats to new versions.
Commands:
---------
- Fix revent target initialization.
- Fix revent argument validation.
Workloads:
----------
- ``Speedometer``: Close open tabs upon workload completion.
- ``jankbench``: Ensure that the logcat monitor thread is terminated
correctly to prevent left over adb processes.
- UiAutomator workloads are now able to dismiss the Android warning that a
workload has not been designed for the latest version of Android.
Other:
------
- Report additional metadata about target, including: system_id,
page_size_kb.
- Uses cache directory to reduce target calls, e.g. will now use cached
version of TargetInfo if local copy is found.
- Update recommended :ref:`installation <github>` commands when installing from
github due to pip not following dependency links correctly.
- Fix incorrect parameter names in runtime parameter documentation.
--------------------------------------------------
*************
Version 3.0.0
-------------
*************
WA3 is a more or less from-scratch re-write of WA2. We have attempted to
maintain configuration-level compatibility wherever possible (so WA2 agendas
@ -29,7 +447,7 @@ believe to be no longer useful.
do the port yourselves :-) ).
New Features
~~~~~~~~~~~~
============
- Python 3 support. WA now runs on both Python 2 and Python 3.
@ -75,7 +493,7 @@ New Features
.. _devlib: https://github.com/ARM-software/devlib
Changes
~~~~~~~
=======
- Configuration files ``config.py`` are now specified in YAML format in
``config.yaml``. WA3 has support for automatic conversion of the default


@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright 2018 ARM Limited
# Copyright 2023 ARM Limited
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@ -68,7 +68,7 @@ master_doc = 'index'
# General information about the project.
project = u'wa'
copyright = u'2018, ARM Limited'
copyright = u'2023, ARM Limited'
author = u'ARM Limited'
# The version info for the project you're documenting, acts as replacement for
@ -135,7 +135,9 @@ html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
html_theme_options = {
'logo_only': True
}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
@ -149,7 +151,7 @@ html_theme = 'sphinx_rtd_theme'
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
html_logo = 'WA-logo-white.svg'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32


@ -343,7 +343,7 @@ see the
A list of additional :class:`Parameters` the output processor can take.
:initialize():
:initialize(context):
This method will only be called once during the workload run
therefore operations that only need to be performed initially should
@ -373,7 +373,7 @@ see the
existing data collected/generated for the run as a whole. E.g.
uploading them to a database etc.
:finalize():
:finalize(context):
This method is the complement to the initialize method and will also
only be called once.
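As a hedged sketch of the updated signatures, a skeleton processor might look as follows (the class and attribute values are illustrative; ``process_run_output`` is one of the processing methods described in this reference):

.. code-block:: python

    from wa import OutputProcessor

    class SkeletonProcessor(OutputProcessor):

        name = 'skeleton'  # illustrative plugin name

        def initialize(self, context):
            # Called exactly once, before any output is processed.
            self.count = 0

        def process_run_output(self, output, target_info):
            # Operate on the collected run output here.
            self.count += 1

        def finalize(self, context):
            # Complement of initialize(); also called exactly once.
            pass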

File diff suppressed because one or more lines are too long



@ -47,6 +47,10 @@ submitting a pull request:
- If significant additions have been made to the framework, unit
tests should be added to cover the new functionality.
- If modifications have been made to the UI Automation source of a workload, the
corresponding APK should be rebuilt and submitted as part of the same pull
request. This can be done via the ``build.sh`` script in the relevant
``uiauto`` subdirectory.
- If modifications have been made to documentation (this includes description
attributes for Parameters and Extensions), documentation should be built to
make sure no errors or warning during build process, and a visual inspection


@ -37,8 +37,8 @@ This section contains reference information common to plugins of all types.
The Context
~~~~~~~~~~~
.. note:: For clarification on the meaning of "workload specification" ("spec"), "job"
and "workload" and the distiction between them, please see the :ref:`glossary <glossary>`.
.. note:: For clarification on the meaning of "workload specification" "spec", "job"
and "workload" and the distinction between them, please see the :ref:`glossary <glossary>`.
The majority of methods in plugins accept a context argument. This is an
instance of :class:`wa.framework.execution.ExecutionContext`. It contains
@ -119,7 +119,7 @@ context.output_directory
This is the output directory for the current iteration. This will be an
iteration-specific subdirectory under the main results location. If
there is no current iteration (e.g. when processing overall run results)
this will point to the same location as ``root_output_directory``.
this will point to the same location as ``run_output_directory``.
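As a brief illustrative sketch (the method, artifact name and file are assumptions, not from the original text), a plugin might use this location like so:

.. code-block:: python

    import os

    def extract_results(self, context):
        # Write a file into the current iteration's output directory and
        # register it as an artifact.
        outfile = os.path.join(context.output_directory, 'example.txt')
        with open(outfile, 'w') as wfh:
            wfh.write('example output\n')
        context.add_artifact('example', outfile, kind='data')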
Additionally, the global ``wa.settings`` object exposes one other location:
@ -158,7 +158,7 @@ irrespective of the host's path notation. For example:
.. note:: Output processors, unlike workloads and instruments, do not have their
own target attribute as they are designed to be able to be run offline.
.. _plugin-parmeters:
.. _plugin-parameters:
Parameters
~~~~~~~~~~~


@ -5,10 +5,12 @@ Convention for Naming revent Files for Revent Workloads
-------------------------------------------------------------------------------
There is a convention for naming revent files which you should follow if you
want to record your own revent files. Each revent file must start with the
device name(case sensitive) then followed by a dot '.' then the stage name
then '.revent'. All your custom revent files should reside at
``'~/.workload_automation/dependencies/WORKLOAD NAME/'``. These are the current
want to record your own revent files. Each revent file must be called (case sensitive)
``<device name>.<stage>.revent``,
where ``<device name>`` is the name of your device (as defined by the model
name of your device which can be retrieved with
``adb shell getprop ro.product.model`` or by the ``name`` attribute of your
customized device class), and ``<stage>`` is one of the following currently
supported stages:
:setup: This stage is where the application is loaded (if present). It is
@ -26,10 +28,12 @@ Only the run stage is mandatory, the remaining stages will be replayed if a
recording is present otherwise no actions will be performed for that particular
stage.
For instance, to add a custom revent files for a device named "mydevice" and
a workload name "myworkload", you need to add the revent files to the directory
``/home/$WA_USER_HOME/dependencies/myworkload/revent_files`` creating it if
necessary. ::
All your custom revent files should reside at
``'$WA_USER_DIRECTORY/dependencies/WORKLOAD NAME/'``. So
typically to add a custom revent files for a device named "mydevice" and a
workload name "myworkload", you would need to add the revent files to the
directory ``~/.workload_automation/dependencies/myworkload/revent_files``
creating the directory structure if necessary. ::
mydevice.setup.revent
mydevice.run.revent
@ -332,6 +336,6 @@ recordings in scripts. Here is an example:
from wa.utils.revent import ReventRecording
with ReventRecording('/path/to/recording.revent') as recording:
print "Recording: {}".format(recording.filepath)
print "There are {} input events".format(recording.num_events)
print "Over a total of {} seconds".format(recording.duration)
print("Recording: {}".format(recording.filepath))
print("There are {} input events".format(recording.num_events))
print("Over a total of {} seconds".format(recording.duration))


@ -58,22 +58,28 @@ will automatically generate a workload in the your ``WA_CONFIG_DIR/plugins``. If
you wish to specify a custom location this can be provided with ``-p
<path>``
A typical invocation of the :ref:`create <create-command>` command would be in
the form::
wa create workload -k <workload_kind> <workload_name>
.. _adding-a-basic-workload-example:
Adding a Basic Workload
-----------------------
To add a basic workload you can simply use the command::
To add a ``basic`` workload template for our example workload we can simply use the
command::
wa create workload basic
wa create workload -k basic ziptest
This will generate a very basic workload with dummy methods for the workload
interface and it is left to the developer to add any required functionality to
the workload.
This will generate a very basic workload with dummy methods for each method in
the workload interface and it is left to the developer to add any required functionality.
Not all the methods are required to be implemented, this example shows how a
subset might be used to implement a simple workload that times how long it takes
to compress a file of a particular size on the device.
Not all the methods from the interface are required to be implemented; this
example shows how a subset might be used to implement a simple workload that
times how long it takes to compress a file of a particular size on the device.
.. note:: This is intended as an example of how to implement the Workload
@ -87,14 +93,15 @@ in this example we are implementing a very simple workload and do not
require any additional features so shall inherit directly from the base
:class:`Workload` class. We then need to provide a ``name`` for our workload
which is what will be used to identify your workload for example in an
agenda or via the show command.
agenda or via the show command; if you used the ``create`` command this will
already be populated for you.
.. code-block:: python
import os
from wa import Workload, Parameter
class ZipTestWorkload(Workload):
class ZipTest(Workload):
name = 'ziptest'
@ -113,7 +120,7 @@ separated by a new line.
'''
In order to allow for additional configuration of the workload from a user a
list of :ref:`parameters <plugin-parmeters>` can be supplied. These can be
list of :ref:`parameters <plugin-parameters>` can be supplied. These can be
configured in a variety of different ways. For example here we are ensuring that
the value of the parameter is an integer and larger than 0 using the ``kind``
and ``constraint`` options; also, if no value is provided we are providing a
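A sketch of such a declaration (the parameter name, default and description here are illustrative, as the original snippet is truncated by this diff):

.. code-block:: python

    parameters = [
        Parameter('file_size', kind=int, default=2000000,
                  constraint=lambda x: x > 0,
                  description='Size (in bytes) of the file to compress.'),
    ]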
@ -176,7 +183,7 @@ allow it to decide whether to keep the file or not.
# Pull the results file to the host
self.host_outfile = os.path.join(context.output_directory, 'timing_results')
self.target.pull(self.target_outfile, self.host_outfile)
context.add_artifact('ziptest-results', host_output_file, kind='raw')
context.add_artifact('ziptest-results', self.host_outfile, kind='raw')
In the ``update_output`` method we can generate any metrics that we wish
for our workload. In this case we are going to simply convert the times reported
@ -252,7 +259,7 @@ The full implementation of this workload would look something like:
# Pull the results file to the host
self.host_outfile = os.path.join(context.output_directory, 'timing_results')
self.target.pull(self.target_outfile, self.host_outfile)
context.add_artifact('ziptest-results', host_output_file, kind='raw')
context.add_artifact('ziptest-results', self.host_outfile, kind='raw')
def update_output(self, context):
super(ZipTestWorkload, self).update_output(context)
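The remainder of ``update_output`` is elided by this diff; a hedged sketch of what such a body might do (the regex, metric name and units are illustrative only):

.. code-block:: python

    import re

    def update_output(self, context):
        super(ZipTest, self).update_output(context)
        with open(self.host_outfile) as wfh:
            content = wfh.read()
        # Extract a number of seconds from the timing output; the exact
        # format depends on the target's `time` implementation.
        match = re.search(r'(\d+(?:\.\d+)?)', content)
        if match:
            context.add_metric('compression_time', float(match.group(1)),
                               units='seconds', lower_is_better=True)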
@ -485,9 +492,10 @@ Adding an Instrument
====================
This is an example of how we would create a instrument which will trace device
errors using a custom "trace" binary file. For more detailed information please see the
:ref:`Instrument Reference <instrument-reference>`. The first thing to do is to subclass
:class:`Instrument`, overwrite the variable name with what we want our instrument
to be called and locate our binary for our instrument.
:ref:`Instrument Reference <instrument-reference>`. The first thing to do is to create
a new file under ``$WA_USER_DIRECTORY/plugins/`` and subclass
:class:`Instrument`. Make sure to overwrite the variable name with what we want our instrument
to be called and then locate our binary for the instrument.
::
@ -495,8 +503,8 @@ to be called and locate our binary for our instrument.
name = 'trace-errors'
def __init__(self, target):
super(TraceErrorsInstrument, self).__init__(target)
def __init__(self, target, **kwargs):
super(TraceErrorsInstrument, self).__init__(target, **kwargs)
self.binary_name = 'trace'
self.binary_file = os.path.join(os.path.dirname(__file__), self.binary_name)
self.trace_on_target = None
@ -533,21 +541,20 @@ again decorated the method. ::
Once we have generated our result data we need to retrieve it from the device
for further processing or adding directly to WA's output for that job. For
example for trace data we will want to pull it to the device and add it as a
:ref:`artifact <artifact>` to WA's :ref:`context <context>` as shown below::
def extract_results(self, context):
# pull the trace file from the target
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
Once we have retrieved the data we can now do any further processing and add any
relevant :ref:`Metrics <metrics>` to the :ref:`context <context>`. For this we
will use the ``add_metric`` method to add the results to the final output
for that workload. The method can be passed 4 params, which are the metric
`key`, `value`, `unit` and `lower_is_better`. ::
:ref:`artifact <artifact>` to WA's :ref:`context <context>`. Once we have
retrieved the data, we can now do any further processing and add any relevant
:ref:`Metrics <metrics>` to the :ref:`context <context>`. For this we will use
the ``add_metric`` method to add the results to the final output for that
workload. The method can be passed 4 params, which are the metric ``key``,
``value``, ``unit`` and ``lower_is_better``. ::
def update_output(self, context):
# pull the trace file from the target
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.outfile = os.path.join(context.output_directory, 'trace.txt')
self.target.pull(self.result, self.outfile)
context.add_artifact('error_trace', self.outfile, kind='export')
# parse the file if needs to be parsed, or add result directly to
# context.
@ -567,12 +574,14 @@ At the very end of the run we would want to uninstall the binary we deployed ear
So the full example would look something like::
from wa import Instrument
class TraceErrorsInstrument(Instrument):
name = 'trace-errors'
def __init__(self, target):
super(TraceErrorsInstrument, self).__init__(target)
def __init__(self, target, **kwargs):
super(TraceErrorsInstrument, self).__init__(target, **kwargs)
self.binary_name = 'trace'
self.binary_file = os.path.join(os.path.dirname(__file__), self.binary_name)
self.trace_on_target = None
@ -588,12 +597,12 @@ So the full example would look something like::
def stop(self, context):
self.target.execute('{} stop'.format(self.trace_on_target))
def extract_results(self, context):
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
def update_output(self, context):
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.outfile = os.path.join(context.output_directory, 'trace.txt')
self.target.pull(self.result, self.outfile)
context.add_artifact('error_trace', self.outfile, kind='export')
metric = # ..
context.add_metric('number_of_errors', metric, lower_is_better=True)
@ -609,8 +618,9 @@ Adding an Output Processor
==========================
This is an example of how we would create an output processor which will format
the run metrics as a column-aligned table. The first thing to do is to subclass
:class:`OutputProcessor` and overwrite the variable name with what we want our
the run metrics as a column-aligned table. The first thing to do is to create
a new file under ``$WA_USER_DIRECTORY/plugins/`` and subclass
:class:`OutputProcessor`. Make sure to overwrite the variable name with what we want our
processor to be called and provide a short description.
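A minimal sketch of such a processor might look like this (``write_table`` is the helper used by the report example elsewhere in these docs; writing to ``sys.stdout`` is an assumption for brevity):

.. code-block:: python

    import sys

    from wa import OutputProcessor
    from wa.utils.misc import write_table

    class Table(OutputProcessor):

        name = 'table'
        description = 'Writes run metrics as a column-aligned table.'

        def process_run_output(self, output, target_info):
            # Collect one row per metric and print them aligned in columns.
            rows = [[m.name, str(m.value), m.units or '']
                    for m in output.metrics]
            write_table(rows, sys.stdout)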
Next we need to implement any relevant methods (please see


@ -26,7 +26,8 @@ CPU frequency fixed to max, and once with CPU frequency fixed to min.
Classifiers are used to indicate the configuration in the output.
First, create the :class:`RunOutput` object, which is the main interface for
interacting with WA outputs.
interacting with WA outputs, or alternatively a :class:`RunDatabaseOutput`
if you are storing your results in a Postgres database.
.. code-block:: python
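    # The original body of this block is elided by the diff; a minimal
    # illustrative sketch (import path per the output processing docs):
    from wa import RunOutput

    ro = RunOutput('./wa_output')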
@ -151,10 +152,6 @@ For the purposes of this report, they will be used to augment the metric's name.
scores[workload][name][freq] = metric
rows = []
for workload in sorted(scores.keys()):
wldata = scores[workload]
Once the metrics have been sorted, generate the report showing the delta
between the two configurations (indicated by the "frequency" classifier) and
highlight any unexpected deltas (based on the ``lower_is_better`` attribute of
@ -164,23 +161,27 @@ statically significant deltas.)
.. code-block:: python
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
rows = []
for workload in sorted(scores.keys()):
wldata = scores[workload]
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
write_table(rows, sys.stdout, align='<<>>><<',
@ -275,23 +276,23 @@ Below is the complete example code, and a report it generated for a sample run.
for workload in sorted(scores.keys()):
wldata = scores[workload]
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
write_table(rows, sys.stdout, align='<<>>><<',


@ -69,7 +69,72 @@ WA3 config file.
**Q:** My Juno board keeps resetting upon starting WA even if it hasn't crashed.
--------------------------------------------------------------------------------
Please ensure that you do not have any other terminals (e.g. ``screen``
**A:** Please ensure that you do not have any other terminals (e.g. ``screen``
sessions) connected to the board's UART. When WA attempts to open the connection
for its own use this can cause the board to reset if a connection is already
present.
**Q:** I'm using the FPS instrument but I do not get any/correct results for my workload
-----------------------------------------------------------------------------------------
**A:** If your device is running Android 6.0+ then the default utility for
collecting fps metrics will be ``gfxinfo``, however this does not seem to be able
to extract any meaningful information for some workloads. In this case please
try setting the ``force_surfaceflinger`` parameter for the ``fps`` augmentation
to ``True``. This will attempt to guess the "View" for the workload
automatically, however this is device specific and therefore may need
customizing. If this is required please open the application and execute
``dumpsys SurfaceFlinger --list`` on the device via adb. This will provide a
list of all views available for measuring.
As an example, when trying to find the view for the AngryBirds Rio workload you
may get something like:
.. code-block:: none
...
AppWindowToken{41dfe54 token=Token{77819a7 ActivityRecord{a151266 u0 com.rovio.angrybirdsrio/com.rovio.fusion.App t506}}}#0
a3d001c com.rovio.angrybirdsrio/com.rovio.fusion.App#0
Background for -SurfaceView - com.rovio.angrybirdsrio/com.rovio.fusion.App#0
SurfaceView - com.rovio.angrybirdsrio/com.rovio.fusion.App#0
com.rovio.angrybirdsrio/com.rovio.fusion.App#0
boostedAnimationLayer#0
mAboveAppWindowsContainers#0
...
From these ``"SurfaceView - com.rovio.angrybirdsrio/com.rovio.fusion.App#0"`` is
the mostly likely the View that needs to be set as the ``view`` workload
parameter and will be picked up be the ``fps`` augmentation.
**Q:** I am getting an error which looks similar to ``'CONFIG_SND_BT87X is not exposed in kernel config'...``
-------------------------------------------------------------------------------------------------------------
**A:** If you are receiving this under normal operation this can be caused by a
mismatch of your WA and devlib versions. Please update both to their latest
versions and delete your ``$USER_HOME/.workload_automation/cache/targets.json``
(or equivalent) file.
**Q:** I get an error which looks similar to ``UnicodeDecodeError('ascii' codec can't decode byte...``
------------------------------------------------------------------------------------------------------
**A:** If you receive this error or a similar warning about your environment,
please ensure that you configure your environment to use a locale which supports
UTF-8. Otherwise this can cause issues when attempting to parse files containing
non-ASCII characters.
**Q:** I get the error ``Module "X" failed to install on target``
------------------------------------------------------------------------------------------------------
**A:** By default a set of devlib modules will be automatically loaded onto the
target designed to add additional functionality. If the functionality provided
by the module is not required then the module can be safely disabled by setting
``load_default_modules`` to ``False`` in the ``device_config`` entry of the
:ref:`agenda <config-agenda-entry>` and then re-enabling any specific modules
that are still required. An example agenda snippet is shown below:
.. code-block:: none
config:
device: generic_android
device_config:
load_default_modules: False
modules: ['list', 'of', 'modules', 'to', 'enable']


@ -13,10 +13,11 @@ these signals are dispatched during execution please see the
$signal_names
The methods above may be decorated with one of the listed decorators to set the
priority of the Instrument method relative to other callbacks registered for the
signal (within the same priority level, callbacks are invoked in the order they
were registered). The table below shows the mapping of the decorator to the
corresponding priority:
priority (a value in the ``wa.framework.signal.CallbackPriority`` enum) of the
Instrument method relative to other callbacks registered for the signal (within
the same priority level, callbacks are invoked in the order they were
registered). The table below shows the mapping of the decorator to the
corresponding priority name and level:
$priority_prefixes
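For illustration, a decorated instrument method might look roughly like this (a sketch: the availability of a ``fast`` decorator as a top-level ``wa`` import is an assumption; consult the generated table above for the actual decorator names):

.. code-block:: python

    from wa import Instrument, fast  # 'fast' import path assumed

    class ExampleInstrument(Instrument):

        name = 'example'  # illustrative instrument

        @fast
        def start(self, context):
            # Runs just before the workload executes, at the priority
            # level corresponding to the decorator.
            pass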


@ -16,7 +16,7 @@ Configuration
Default configuration file change
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Instead of the standard ``config.py`` file located at
``$WA_USER_HOME/config.py`` WA now uses a ``confg.yaml`` file (at the same
``$WA_USER_DIRECTORY/config.py`` WA now uses a ``config.yaml`` file (at the same
location) which is written in the YAML format instead of python. Additionally
upon first invocation WA3 will automatically try and detect whether a WA2 config
file is present and convert it to use the new WA3 format. During this process


@ -489,6 +489,75 @@ Note that the ``config`` section still applies to every spec in the agenda. So
the precedence order is -- spec settings override section settings, which in
turn override global settings.
.. _section-groups:
Section Groups
---------------
Section groups are a way of grouping sections together and are used to produce a
cross product of each of the different groups. This can be useful when you want
to run a set of experiments with all the available combinations without having
to specify each combination manually.
For example if we want to investigate the differences between running the
maximum and minimum frequency with both the maximum and minimum number of cpus
online, we can create an agenda as follows:
.. code-block:: yaml
sections:
- id: min_freq
runtime_parameters:
freq: min
group: frequency
- id: max_freq
runtime_parameters:
freq: max
group: frequency
- id: min_cpus
runtime_parameters:
cpus: 1
group: cpus
- id: max_cpus
runtime_parameters:
cpus: 8
group: cpus
workloads:
- dhrystone
This will result in 4 jobs being generated, one for each of the possible combinations.
::
min_freq-min_cpus-wk1 (dhrystone)
min_freq-max_cpus-wk1 (dhrystone)
max_freq-min_cpus-wk1 (dhrystone)
max_freq-max_cpus-wk1 (dhrystone)
Each of the generated jobs will have :ref:`classifiers <classifiers>` for
each group and the associated id automatically added.
.. code-block:: python
# ...
print('Job ID: {}'.format(job.id))
print('Classifiers:')
for k, v in job.classifiers.items():
print(' {}: {}'.format(k, v))
Job ID: min_freq-min_cpus-wk1
Classifiers:
frequency: min_freq
cpus: min_cpus
.. _augmentations:
Augmentations
@ -621,7 +690,7 @@ Workload-specific augmentation
It is possible to enable or disable (but not configure) augmentations at
workload or section level, as well as in the global config, in which case, the
augmentations would only be enabled/disabled for that workload/section. If the
same augmentation is enabled at one level and disabled at another, as will all
same augmentation is enabled at one level and disabled at another, as with all
WA configuration, the more specific settings will take precedence over the less
specific ones (i.e. workloads override sections that, in turn, override global
config).


@ -17,6 +17,8 @@ further configuration will be required.
Android
-------
.. _android-general-device-setup:
General Device Setup
^^^^^^^^^^^^^^^^^^^^
@ -44,12 +46,15 @@ common parameters you might want to change are outlined below.
Android builds. If this is not the case for your device, you will need to
specify an alternative working directory (e.g. under ``/data/local``).
:load_default_modules: A number of "default" modules (e.g. for cpufreq
subsystem) are loaded automatically, unless explicitly disabled. If you
encounter an issue with one of the modules then this setting can be set to
``False`` and any specific modules that you require can be requested via the
``modules`` entry.
:modules: A list of additional modules to be installed for the target. Devlib
implements functionality for particular subsystems as modules. A number of
"default" modules (e.g. for cpufreq subsystem) are loaded automatically,
unless explicitly disabled. If additional modules need to be loaded, they
may be specified using this parameter.
implements functionality for particular subsystems as modules. If additional
modules need to be loaded, they may be specified using this parameter.
Please see the `devlib documentation <http://devlib.readthedocs.io/en/latest/modules.html>`_
for information on the available modules.
@ -76,13 +81,14 @@ A typical ``device_config`` inside ``config.yaml`` may look something like
# ...
or a more specific config could be be
or a more specific config could be:
.. code-block:: yaml
device_config:
device: 0123456789ABCDEF
working_directory: '/sdcard/wa-working'
load_default_modules: True
modules: ['hotplug', 'cpufreq']
core_names : ['a7', 'a7', 'a7', 'a15', 'a15']
# ...


@ -14,9 +14,9 @@ Using revent with workloads
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Some workloads (pretty much all games) rely on recorded revents for their
execution. ReventWorkloads will require between 1 and 4 revent files be be ran.
There is one mandatory recording ``run`` for performing the actual execution of
the workload and the remaining are optional. ``setup`` can be used to perform
execution. ReventWorkloads require between 1 and 4 revent files to be run.
There is one mandatory recording, ``run``, for performing the actual execution of
the workload and the remaining stages are optional. ``setup`` can be used to perform
the initial setup (navigating menus, selecting game modes, etc).
``extract_results`` can be used to perform any actions after the main stage of
the workload for example to navigate a results or summary screen of the app. And
@ -26,17 +26,21 @@ exiting the app.
Because revents are very device-specific\ [*]_, these files would need to
be recorded for each device.
The files must be called ``<device name>.(setup|run|extract_results|teardown).revent``
, where ``<device name>`` is the name of your device (as defined by the ``name``
attribute of your device's class). WA will look for these files in two
places: ``<install dir>/wa/workloads/<workload name>/revent_files``
and ``~/.workload_automation/dependencies/<workload name>``. The first
location is primarily intended for revent files that come with WA (and if
The files must be called ``<device name>.(setup|run|extract_results|teardown).revent``,
where ``<device name>`` is the name of your device (as defined by the model
name of your device which can be retrieved with
``adb shell getprop ro.product.model`` or by the ``name`` attribute of your
customized device class).
WA will look for these files in two places:
``<installdir>/wa/workloads/<workload name>/revent_files`` and
``$WA_USER_DIRECTORY/dependencies/<workload name>``. The
first location is primarily intended for revent files that come with WA (and if
you did a system-wide install, you'll need sudo to add files there), so it's
probably easier to use the second location for the files you record. Also,
if revent files for a workload exist in both locations, the files under
``~/.workload_automation/dependencies`` will be used in favour of those
installed with WA.
probably easier to use the second location for the files you record. Also, if
revent files for a workload exist in both locations, the files under
``$WA_USER_DIRECTORY/dependencies`` will be used in favour
of those installed with WA.
.. [*] It's not just about screen resolution -- the event codes may be different
even if devices use the same screen.


@ -12,8 +12,9 @@ Installation
.. module:: wa
This page describes the 3 methods of installing Workload Automation 3. The first
option is to use :ref:`pip` which
will install the latest release of WA, the latest development version from :ref:`github <github>` or via a :ref:`dockerfile`.
option is to use :ref:`pip` which will install the latest release of WA; the
other two are to use the latest development version from :ref:`github <github>`
or a :ref:`dockerfile`.
Prerequisites
@ -22,11 +23,11 @@ Prerequisites
Operating System
----------------
WA runs on a native Linux install. It was tested with Ubuntu 14.04,
but any recent Linux distribution should work. It should run on either
32-bit or 64-bit OS, provided the correct version of Android (see below)
was installed. Officially, **other environments are not supported**. WA
has been known to run on Linux Virtual machines and in Cygwin environments,
WA runs on a native Linux install. It has been tested on recent Ubuntu releases,
but other recent Linux distributions should work as well. It should run on
either 32-bit or 64-bit OS, provided the correct version of dependencies (see
below) are installed. Officially, **other environments are not supported**.
WA has been known to run on Linux Virtual machines and in Cygwin environments,
though additional configuration may be required in both cases (known issues
include making sure USB/serial connections are passed to the VM, and wrong
python/pip binaries being picked up in Cygwin). WA *should* work on other
@ -45,7 +46,8 @@ possible to get limited functionality with minimal porting effort).
Android SDK
-----------
You need to have the Android SDK with at least one platform installed.
To interact with Android devices you will need to have the Android SDK
with at least one platform installed.
To install it, download the ADT Bundle from here_. Extract it
and add ``<path_to_android_sdk>/sdk/platform-tools`` and ``<path_to_android_sdk>/sdk/tools``
to your ``PATH``. To test that you've installed it properly, run ``adb
@ -72,7 +74,11 @@ the install location of the SDK (i.e. ``<path_to_android_sdk>/sdk``).
Python
------
Workload Automation 3 currently supports both Python 2.7 and Python 3.
Workload Automation 3 currently supports Python 3.5+
.. note:: If your system's default python version is still Python 2, please
replace the commands listed here with their Python3 equivalent
(e.g. python3, pip3 etc.)
.. _pip:
@ -94,11 +100,11 @@ similar distributions, this may be done with APT::
sudo -H pip install --upgrade pip
sudo -H pip install --upgrade setuptools
If you do run into this issue after already installing some packages,
you can resolve it by running ::
sudo chmod -R a+r /usr/local/lib/python2.7/dist-packagessudo
find /usr/local/lib/python2.7/dist-packages -type d -exec chmod a+x {} \;
sudo chmod -R a+r /usr/local/lib/python3.X/dist-packages
sudo find /usr/local/lib/python3.X/dist-packages -type d -exec chmod a+x {} \;
(The paths above will work for Ubuntu; they may need to be adjusted
for other distros).
@ -171,9 +177,11 @@ install them upfront (e.g. if you're planning to use WA to an environment that
may not always have Internet access).
* nose
* PyDAQmx
* pymongo
* jinja2
* mock
* daqpower
* sphinx
* sphinx_rtd_theme
* psycopg2-binary
@ -184,20 +192,33 @@ Installing
Installing the latest released version from PyPI (Python Package Index)::
sudo -H pip install wa
sudo -H pip install wlauto
This will install WA along with its mandatory dependencies. If you would like to
install all optional dependencies at the same time, do the following instead::
sudo -H pip install wa[all]
sudo -H pip install wlauto[all]
Alternatively, you can also install the latest development version from GitHub
(you will need git installed for this to work)::
git clone git@github.com:ARM-software/workload-automation.git workload-automation
sudo -H pip install ./workload-automation
cd workload-automation
sudo -H python setup.py install
.. note:: Please note that if using pip to install from github this will most
likely result in an older and incompatible version of devlib being
installed alongside WA. If you wish to use pip please also manually
install the latest version of
`devlib <https://github.com/ARM-software/devlib>`_.
.. note:: Please note that while a ``requirements.txt`` is included, this is
designed to be a reference of known working packages rather than to be
used as part of a standard installation. The version restrictions in
place as part of ``setup.py`` should automatically ensure the correct
packages are installed, however if you encounter issues please try
updating/downgrading to the package versions listed within.
If the above succeeds, try ::
@ -221,7 +242,7 @@ image in a container.
The Dockerfile can be found in the "extras" directory or online at
`<https://github.com/ARM-software/workload-automation/blob/next/extras/Dockerfile>`_
which contains addional information about how to build and to use the file.
which contains additional information about how to build and to use the file.
(Optional) Post Installation


@ -20,7 +20,7 @@ Install
.. note:: This is a quick summary. For more detailed instructions, please see
the :ref:`installation` section.
Make sure you have Python 2.7 or Python 3 and a recent Android SDK with API
Make sure you have Python 3.5+ and a recent Android SDK with API
level 18 or above installed on your system. A complete install of the Android
SDK is required, as WA uses a number of its utilities, not just adb. For the
SDK, make sure that either ``ANDROID_HOME`` environment variable is set, or that
@ -125,7 +125,7 @@ There are multiple options for configuring your device depending on your
particular use case.
You can either add your configuration to the default configuration file
``config.yaml``, under the ``$WA_USER_HOME/`` directory or you can specify it in
``config.yaml``, under the ``$WA_USER_DIRECTORY/`` directory or you can specify it in
the ``config`` section of your agenda directly.
Alternatively if you are using multiple devices, you may want to create separate
@ -318,7 +318,7 @@ like this:
config:
augmentations:
- ~execution_time
- json
- targz
iterations: 2
workloads:
- memcpy
@ -332,7 +332,7 @@ This agenda:
- Specifies two workloads: memcpy and dhrystone.
- Specifies that dhrystone should run in one thread and execute five million loops.
- Specifies that each of the two workloads should be run twice.
- Enables json output processor, in addition to the output processors enabled in
- Enables the targz output processor, in addition to the output processors enabled in
the config.yaml.
- Disables execution_time instrument, if it is enabled in the config.yaml
@ -352,13 +352,13 @@ in-depth information please see the :ref:`Create Command <create-command>` docum
In order to populate the agenda with relevant information you can supply all of
the plugins you wish to use as arguments to the command, for example if we want
to create an agenda file for running ``dhystrone`` on a 'generic android' device and we
to create an agenda file for running ``dhrystone`` on a `generic_android` device and we
want to enable the ``execution_time`` and ``trace-cmd`` instruments and display the
metrics using the ``csv`` output processor. We would use the following command::
wa create agenda generic_android dhrystone execution_time trace-cmd csv -o my_agenda.yaml
This will produce a `my_agenda.yaml` file containing all the relevant
This will produce a ``my_agenda.yaml`` file containing all the relevant
configuration for the specified plugins along with their default values as shown
below:
@ -373,6 +373,7 @@ below:
device: generic_android
device_config:
adb_server: null
adb_port: null
big_core: null
core_clusters: null
core_names: null
@ -399,6 +400,7 @@ below:
no_install: false
report: true
report_on_target: false
mode: write-to-memory
csv:
extra_columns: null
use_all_classifiers: false
@ -483,14 +485,14 @@ that parses the contents of the output directory:
>>> ro = RunOutput('./wa_output')
>>> for job in ro.jobs:
... if job.status != 'OK':
... print 'Job "{}" did not complete successfully: {}'.format(job, job.status)
... print('Job "{}" did not complete successfully: {}'.format(job, job.status))
... continue
... print 'Job "{}":'.format(job)
... print('Job "{}":'.format(job))
... for metric in job.metrics:
... if metric.units:
... print '\t{}: {} {}'.format(metric.name, metric.value, metric.units)
... print('\t{}: {} {}'.format(metric.name, metric.value, metric.units))
... else:
... print '\t{}: {}'.format(metric.name, metric.value)
... print('\t{}: {}'.format(metric.name, metric.value))
...
Job "wk1-dhrystone-1":
thread 0 score: 20833333


@ -18,6 +18,3 @@ User Reference
-------------------
.. include:: user_information/user_reference/output_directory.rst


@ -30,7 +30,7 @@ An example agenda can be seen here:
device: generic_android
device_config:
device: R32C801B8XY # Th adb name of our device we want to run on
device: R32C801B8XY # The adb name of our device we want to run on
disable_selinux: true
load_default_modules: true
package_data_directory: /data/data
@ -45,6 +45,7 @@ An example agenda can be seen here:
no_install: false
report: true
report_on_target: false
mode: write-to-disk
csv: # Provide config for the csv augmentation
use_all_classifiers: true
@ -116,7 +117,9 @@ whole will behave. The most common options that that you may want to specify are
to connect to (e.g. ``host`` for an SSH connection or
``device`` to specify an ADB name) as well as configure other
options for the device for example the ``working_directory``
or the list of ``modules`` to be loaded onto the device.
or the list of ``modules`` to be loaded onto the device. (For
more information please see
:ref:`here <android-general-device-setup>`)
:execution_order: Defines the order in which the agenda spec will be executed.
:reboot_policy: Defines when during execution of a run a Device will be rebooted.
:max_retries: The maximum number of times failed jobs will be retried before giving up.
@ -124,7 +127,7 @@ whole will behave. The most common options that that you may want to specify are
For more information and a full list of these configuration options please see
:ref:`Run Configuration <run-configuration>` and
:ref:`"Meta Configuration" <meta-configuration>`.
:ref:`Meta Configuration <meta-configuration>`.
Plugins


@ -102,6 +102,91 @@ remove the high level configuration.
Dependent on specificity, configuration parameters from different sources will
have different inherent priorities. Within an agenda, the configuration in
"workload" entries wil be more specific than "sections" entries, which in turn
"workload" entries will be more specific than "sections" entries, which in turn
are more specific than parameters in the "config" entry.
.. _config-include:
Configuration Includes
----------------------
It is possible to include other files in your config files and agendas. This is
done by specifying ``include#`` (note the trailing hash) as a key in one of the
mappings, with the value being the path to the file to be included. The path
must be either absolute, or relative to the location of the file it is being
included from (*not* to the current working directory). The path may also
include ``~`` to indicate current user's home directory.
The include is performed by removing the ``include#`` entry and loading the contents
of the specified file into the mapping that contained it. In cases where the mapping
already contains the key to be loaded, values will be merged using the usual
merge method (for overwrites, values in the mapping take precedence over those
from the included files).
Below is an example of an agenda that includes other files. The assumption is
that all of those files are in one directory.
.. code-block:: yaml
# agenda.yaml
config:
augmentations: [trace-cmd]
include#: ./my-config.yaml
sections:
- include#: ./section1.yaml
- include#: ./section2.yaml
include#: ./workloads.yaml
.. code-block:: yaml
# my-config.yaml
augmentations: [cpufreq]
.. code-block:: yaml
# section1.yaml
runtime_parameters:
frequency: max
.. code-block:: yaml
# section2.yaml
runtime_parameters:
frequency: min
.. code-block:: yaml
# workloads.yaml
workloads:
- dhrystone
- memcpy
The above is equivalent to having a single file like this:
.. code-block:: yaml
# agenda.yaml
config:
augmentations: [cpufreq, trace-cmd]
sections:
- runtime_parameters:
frequency: max
- runtime_parameters:
frequency: min
workloads:
- dhrystone
- memcpy
Some additional details about the implementation and its limitations:
- The ``include#`` *must* be a key in a mapping, and the contents of the
included file *must* be a mapping as well; it is not possible to include a
list (e.g. in the examples above the ``workloads:`` part *must* be in the included
file).
- Being a key in a mapping, there can only be one ``include#`` entry per block.
- The included file *must* have a ``.yaml`` extension.
- Nested inclusions *are* allowed. I.e. included files may themselves include
files; in such cases the included paths must be relative to *that* file, and
not the "main" file.

View File

@ -40,7 +40,7 @@ Will display help for this subcommand that will look something like this:
AGENDA Agenda for this workload automation run. This defines
which workloads will be executed, how many times, with
which tunables, etc. See example agendas in
/usr/local/lib/python2.7/dist-packages/wa for an
/usr/local/lib/python3.X/dist-packages/wa for an
example of how this file should be structured.
optional arguments:
@ -238,6 +238,33 @@ Which will produce something like::
This will be populated with default values which can then be customised for the
particular use case.
Additionally the create command can be used to initialize (and update) a
Postgres database which can be used by the ``postgres`` output processor.
Most of the database connection parameters have default values, however they can
be overridden via command line arguments. When initializing the database WA will
also save the supplied parameters into the default user config file so that they
do not need to be specified each time the output processor is used.
As an example, if we had a database server running at 10.0.0.2 using the
standard port we could use the following command to initialize a database for
use with WA::
wa create database -a 10.0.0.2 -u my_username -p Pa55w0rd
This will log into the database server with the supplied credentials and create
a database (defaulting to 'wa') and will save the configuration to the
``~/.workload_automation/config.yaml`` file.
With updates to WA there may be changes to the database schema used. In this
case the create command can also be used with the ``-U`` flag to update the
database to use the new schema as follows::
wa create database -a 10.0.0.2 -u my_username -p Pa55w0rd -U
This will upgrade the database sequentially until the database schema is using
the latest version.
.. _process-command:
Process


@ -87,6 +87,7 @@ __failed
the failed attempts.
.. _job_execution_subd:
job execution output subdirectory
Each subdirectory will be named ``<job id>_<workload label>_<iteration
number>``, and will, at minimum, contain a ``result.json`` (see above).


@ -33,6 +33,7 @@ states.
iterations: 1
runtime_parameters:
screen_on: false
unlock_screen: 'vertical'
- name: benchmarkpi
iterations: 1
sections:
@ -98,7 +99,7 @@ CPUFreq
:governor: A ``string`` that can be used to specify the governor for all cores if there are common governors available.
:governor_tunable: A ``dict`` that can be used to specify governor
:gov_tunables: A ``dict`` that can be used to specify governor
tunables for all cores, unlike the other common parameters these are not
validated at the beginning of the run therefore incorrect values will cause
an error during runtime.
@ -113,7 +114,7 @@ CPUFreq
:<core_name>_governor: A ``string`` that can be used to specify the governor for cores of a particular type e.g. 'A72'.
:<core_name>_governor_tunable: A ``dict`` that can be used to specify governor
:<core_name>_gov_tunables: A ``dict`` that can be used to specify governor
tunables for cores of a particular type e.g. 'A72', these are not
validated at the beginning of the run therefore incorrect values will cause
an error during runtime.
@ -129,7 +130,7 @@ CPUFreq
:cpu<no>_governor: A ``string`` that can be used to specify the governor for a particular core e.g. 'cpu0'.
:cpu<no>_governor_tunable: A ``dict`` that can be used to specify governor
:cpu<no>_gov_tunables: A ``dict`` that can be used to specify governor
tunables for a particular core e.g. 'cpu0', these are not
validated at the beginning of the run therefore incorrect values will cause
an error during runtime.
@ -147,7 +148,7 @@ If big.LITTLE is detected for the device an additional set of parameters are ava
:big_governor: A ``string`` that can be used to specify the governor for the big cores.
:big_governor_tunable: A ``dict`` that can be used to specify governor
:big_gov_tunables: A ``dict`` that can be used to specify governor
tunables for the big cores, these are not
validated at the beginning of the run therefore incorrect values will cause
an error during runtime.
@ -162,7 +163,7 @@ If big.LITTLE is detected for the device an additional set of parameters are ava
:little_governor: A ``string`` that can be used to specify the governor for the little cores.
:little_governor_tunable: A ``dict`` that can be used to specify governor
:little_gov_tunables: A ``dict`` that can be used to specify governor
tunables for the little cores, these are not
validated at the beginning of the run therefore incorrect values will cause
an error during runtime.
@ -208,6 +209,13 @@ Android Specific Runtime Parameters
:screen_on: A ``boolean`` to specify whether the device's screen should be
turned on. Defaults to ``True``.
:unlock_screen: A ``string`` to specify how the device's screen should be
unlocked. Unlocking the screen is disabled by default. ``vertical``, ``diagonal``
and ``horizontal`` are the supported values (see :meth:`devlib.AndroidTarget.swipe_to_unlock`).
Note that unlocking succeeds only when no passcode is set. Since unlocking the
screen requires turning on the screen, this option overrides the value of the
``screen_on`` option.
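For reference, the underlying devlib calls look roughly like this (a sketch, assuming an already-configured ``AndroidTarget``; the device id is illustrative):

.. code-block:: python

    from devlib import AndroidTarget

    target = AndroidTarget(connection_settings={'device': '0123456789ABCDEF'})
    target.connect()
    # Unlocking requires the screen to be on first.
    target.ensure_screen_is_on()
    target.swipe_to_unlock(direction='vertical')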
.. _setting-sysfiles:
Setting Sysfiles


@ -6,7 +6,7 @@
#
# docker build -t wa .
#
# This will create an image called wadocker, which is preconfigured to
# This will create an image called wa, which is preconfigured to
# run WA and devlib. Please note that the build process automatically
# accepts the licenses for the Android SDK, so please be sure that you
# are willing to accept these prior to building and running the image
@ -17,6 +17,13 @@
#
# docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb --volume ${PWD}:/workspace --workdir /workspace wa
#
# If using selinux you may need to add the `z` option when mounting
# volumes e.g.:
# --volume ${PWD}:/workspace:z
# Warning: Please ensure you do not use this option when mounting
# system directories. For more information please see:
# https://docs.docker.com/storage/bind-mounts/#configure-the-selinux-label
#
# The above command starts the container in privileged mode, with
# access to USB devices. The current directory is mounted into the
# image, allowing you to work from there. Any files written to this
@ -32,27 +39,80 @@
#
# When you are finished, please run `exit` to leave the container.
#
# The relevant environment variables are stored in a separate
# file which is automatically sourced in an interactive shell.
# If running from a non-interactive environment this can
# be manually sourced with `source /home/wa/.wa_environment`
#
# NOTE: Please make sure that the ADB server is NOT running on the
# host. If in doubt, run `adb kill-server` before running the docker
# container.
#
# We want to make sure to base this on a recent ubuntu release
FROM ubuntu:17.10
FROM ubuntu:20.04
# Please update the references below to use different versions of
# devlib, WA or the Android SDK
ARG DEVLIB_REF=v1.0.0
ARG WA_REF=v3.0.0
ARG DEVLIB_REF=v1.3.4
ARG WA_REF=v3.3.1
ARG ANDROID_SDK_URL=https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
RUN apt-get update
RUN apt-get install -y python-pip git wget zip openjdk-8-jre-headless vim emacs nano curl sshpass ssh usbutils
RUN pip install pandas
# Set a default timezone to use
ENV TZ=Europe/London
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
apache2-utils \
bison \
cmake \
curl \
emacs \
flex \
git \
libcdk5-dev \
libiio-dev \
libxml2 \
libxml2-dev \
locales \
nano \
openjdk-8-jre-headless \
python3 \
python3-pip \
ssh \
sshpass \
sudo \
trace-cmd \
usbutils \
vim \
wget \
zip
# Clone, build, and install iio-capture
RUN git clone -v https://github.com/BayLibre/iio-capture.git /tmp/iio-capture && \
cd /tmp/iio-capture && \
make && \
make install
RUN pip3 install pandas
# Ensure we're using utf-8 as our default encoding
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# Let's get the two repos we need, and install them
RUN git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && cd /tmp/devlib && git checkout $DEVLIB_REF && python setup.py install
RUN git clone -v https://github.com/ARM-software/workload-automation.git /tmp/wa && cd /tmp/wa && git checkout $WA_REF && python setup.py install
RUN git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && \
cd /tmp/devlib && \
git checkout $DEVLIB_REF && \
python3 setup.py install && \
pip3 install .[full]
RUN git clone -v https://github.com/ARM-software/workload-automation.git /tmp/wa && \
cd /tmp/wa && \
git checkout $WA_REF && \
python3 setup.py install && \
pip3 install .[all]
# Clean-up
RUN rm -R /tmp/devlib /tmp/wa
@ -66,10 +126,19 @@ RUN mkdir -p /home/wa/.android
RUN mkdir -p /home/wa/AndroidSDK && cd /home/wa/AndroidSDK && wget $ANDROID_SDK_URL -O sdk.zip && unzip sdk.zip
RUN cd /home/wa/AndroidSDK/tools/bin && yes | ./sdkmanager --licenses && ./sdkmanager platform-tools && ./sdkmanager 'build-tools;27.0.3'
# Update the path
RUN echo 'export PATH=/home/wa/AndroidSDK/platform-tools:${PATH}' >> /home/wa/.bashrc
RUN echo 'export PATH=/home/wa/AndroidSDK/build-tools:${PATH}' >> /home/wa/.bashrc
RUN echo 'export ANDROID_HOME=/home/wa/AndroidSDK' >> /home/wa/.bashrc
# Download Monsoon
RUN mkdir -p /home/wa/monsoon
RUN curl https://android.googlesource.com/platform/cts/+/master/tools/utils/monsoon.py\?format\=TEXT | base64 --decode > /home/wa/monsoon/monsoon.py
RUN chmod +x /home/wa/monsoon/monsoon.py
# Update WA's required environment variables.
RUN echo 'export PATH=/home/wa/monsoon:${PATH}' >> /home/wa/.wa_environment
RUN echo 'export PATH=/home/wa/AndroidSDK/platform-tools:${PATH}' >> /home/wa/.wa_environment
RUN echo 'export PATH=/home/wa/AndroidSDK/build-tools:${PATH}' >> /home/wa/.wa_environment
RUN echo 'export ANDROID_HOME=/home/wa/AndroidSDK' >> /home/wa/.wa_environment
# Source WA environment variables in an interactive environment
RUN echo 'source /home/wa/.wa_environment' >> /home/wa/.bashrc
# Generate some ADB keys. These will change each time the image is built but will otherwise persist.
RUN /home/wa/AndroidSDK/platform-tools/adb keygen /home/wa/.android/adbkey

View File

@ -43,7 +43,7 @@ ignore=external
# https://bitbucket.org/logilab/pylint/issue/232/wrong-hanging-indentation-false-positive
# TODO: disabling no-value-for-parameter and logging-format-interpolation, as they appear to be broken
# in version 1.4.1 and return a lot of false positives; should be re-enabled once fixed.
disable=C0301,C0103,C0111,W0142,R0903,R0904,R0922,W0511,W0141,I0011,R0921,W1401,C0330,no-value-for-parameter,logging-format-interpolation,no-else-return,inconsistent-return-statements,keyword-arg-before-vararg,consider-using-enumerate,no-member
disable=C0301,C0103,C0111,W0142,R0903,R0904,R0922,W0511,W0141,I0011,R0921,W1401,C0330,no-value-for-parameter,logging-format-interpolation,no-else-return,inconsistent-return-statements,keyword-arg-before-vararg,consider-using-enumerate,no-member,super-with-arguments,useless-object-inheritance,raise-missing-from,no-else-raise,no-else-break,no-else-continue
[FORMAT]
max-module-lines=4000

3
pytest.ini Normal file
View File

@ -0,0 +1,3 @@
[pytest]
filterwarnings=
ignore::DeprecationWarning:past[.*]
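Outside of pytest, the same suppression can be expressed with the standard library; a rough equivalent of the filter above (the module regex is an assumption):

import warnings

# Ignore DeprecationWarnings raised from modules under the 'past' package.
warnings.filterwarnings('ignore', category=DeprecationWarning, module=r'past\..*')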

30
requirements.txt Normal file
View File

@ -0,0 +1,30 @@
bcrypt==4.0.1
certifi==2024.7.4
cffi==1.15.1
charset-normalizer==3.1.0
colorama==0.4.6
cryptography==43.0.1
devlib==1.3.4
future==0.18.3
idna==3.7
Louie-latest==1.3.1
lxml==4.9.2
nose==1.3.7
numpy==1.24.3
pandas==2.0.1
paramiko==3.4.0
pexpect==4.8.0
ptyprocess==0.7.0
pycparser==2.21
PyNaCl==1.5.0
pyserial==3.5
python-dateutil==2.8.2
pytz==2023.3
PyYAML==6.0
requests==2.32.0
scp==0.14.5
six==1.16.0
tzdata==2023.3
urllib3==1.26.19
wlauto==3.3.1
wrapt==1.15.0

40
setup.py Normal file → Executable file
View File

@ -29,9 +29,10 @@ except ImportError:
wa_dir = os.path.join(os.path.dirname(__file__), 'wa')
sys.path.insert(0, os.path.join(wa_dir, 'framework'))
from version import get_wa_version, get_wa_version_with_commit
from version import (get_wa_version, get_wa_version_with_commit,
format_version, required_devlib_version)
# happends if falling back to distutils
# happens if falling back to distutils
warnings.filterwarnings('ignore', "Unknown distribution option: 'install_requires'")
warnings.filterwarnings('ignore', "Unknown distribution option: 'extras_require'")
@ -41,7 +42,7 @@ except OSError:
pass
packages = []
data_files = {}
data_files = {'': [os.path.join(wa_dir, 'commands', 'postgres_schema.sql')]}
source_dir = os.path.dirname(__file__)
for root, dirs, files in os.walk(wa_dir):
rel_dir = os.path.relpath(root, source_dir)
@ -61,54 +62,62 @@ for root, dirs, files in os.walk(wa_dir):
scripts = [os.path.join('scripts', s) for s in os.listdir('scripts')]
with open("README.rst", "r") as fh:
long_description = fh.read()
devlib_version = format_version(required_devlib_version)
params = dict(
name='wlauto',
description='A framework for automating workload execution and measurement collection on ARM devices.',
long_description=long_description,
version=get_wa_version_with_commit(),
packages=packages,
package_data=data_files,
include_package_data=True,
scripts=scripts,
url='https://github.com/ARM-software/workload-automation',
license='Apache v2',
maintainer='ARM Architecture & Technology Device Lab',
maintainer_email='workload-automation@arm.com',
python_requires='>= 3.7',
setup_requires=[
'numpy'
'numpy<=1.16.4; python_version<"3"',
'numpy; python_version>="3"',
],
install_requires=[
'python-dateutil', # converting between UTC and local time.
'pexpect>=3.3', # Send/receive to/from device
'pyserial', # Serial port interface
'colorama', # Printing with colors
'pyYAML', # YAML-formatted agenda parsing
'pyYAML>=5.1b3', # YAML-formatted agenda parsing
'requests', # Fetch assets over HTTP
'devlib>=0.0.4', # Interacting with devices
'devlib>={}'.format(devlib_version), # Interacting with devices
'louie-latest', # callbacks dispatch
'wrapt', # better decorators
'pandas>=0.23.0', # Data analysis and manipulation
'pandas>=0.23.0,<=0.24.2; python_version<"3.5.3"', # Data analysis and manipulation
'pandas>=0.23.0; python_version>="3.5.3"', # Data analysis and manipulation
'future', # Python 2-3 compatibility
],
dependency_links=['https://github.com/ARM-software/devlib/tarball/master#egg=devlib-0.0.4'],
dependency_links=['https://github.com/ARM-software/devlib/tarball/master#egg=devlib-{}'.format(devlib_version)],
extras_require={
'other': ['jinja2'],
'test': ['nose', 'mock'],
'mongodb': ['pymongo'],
'notify': ['notify2'],
'doc': ['sphinx'],
'doc': ['sphinx', 'sphinx_rtd_theme'],
'postgres': ['psycopg2-binary'],
'daq': ['daqpower'],
},
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
'Development Status :: 4 - Beta',
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'License :: OSI Approved :: Apache Software License',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
],
)
all_extras = list(chain(iter(params['extras_require'].values())))
params['extras_require']['everything'] = all_extras
params['extras_require']['all'] = all_extras
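Independent of this setup.py, the intent of an 'all'/'everything' extra — a single list covering every optional dependency — can be sketched in isolation (the package names here are illustrative only):

from itertools import chain

extras = {'doc': ['sphinx', 'sphinx_rtd_theme'], 'test': ['nose', 'mock']}
# chain(*lists) flattens the per-extra lists into a single sequence.
all_extras = sorted(set(chain(*extras.values())))
print(all_extras)  # ['mock', 'nose', 'sphinx', 'sphinx_rtd_theme']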
class sdist(orig_sdist):
@ -122,7 +131,6 @@ class sdist(orig_sdist):
orig_sdist.initialize_options(self)
self.strip_commit = False
def run(self):
if self.strip_commit:
self.distribution.get_version = get_wa_version

23
tests/ci/idle_agenda.yaml Normal file
View File

@ -0,0 +1,23 @@
config:
iterations: 1
augmentations:
- ~~
- status
device: generic_local
device_config:
big_core: null
core_clusters: null
core_names: null
executables_directory: null
keep_password: true
load_default_modules: false
model: null
modules: null
password: null
shell_prompt: !<tag:wa:regex> '40:^.*(shell|root|juno)@?.*:[/~]\S* *[#$] '
unrooted: True
working_directory: null
workloads:
- name: idle
params:
duration: 1

View File

@ -17,7 +17,7 @@
from wa import Plugin
class TestDevice(Plugin):
class MockDevice(Plugin):
name = 'test-device'
kind = 'device'

View File

@ -0,0 +1,7 @@
config:
augmentations: [~execution_time]
include#: configs/test.yaml
sections:
- include#: sections/section1.yaml
- include#: sections/section2.yaml
include#: workloads.yaml

View File

@ -0,0 +1 @@
augmentations: [cpufreq, trace-cmd]

View File

@ -0,0 +1,2 @@
classifiers:
included: true

View File

@ -0,0 +1 @@
classifiers: {'section': 'one'}

View File

@ -0,0 +1,2 @@
classifiers: {'section': 'two'}
include#: ../section-include.yaml

View File

@ -0,0 +1,2 @@
augmentations: [execution_time]

View File

@ -0,0 +1,5 @@
workloads:
- dhrystone
- name: memcpy
classifiers:
memcpy: True

View File

@ -17,19 +17,26 @@
# pylint: disable=E0611
# pylint: disable=R0201
import os
import yaml
import sys
from collections import defaultdict
from unittest import TestCase
from nose.tools import assert_equal, assert_in, raises, assert_true
DATA_DIR = os.path.join(os.path.dirname(__file__), 'data')
os.environ['WA_USER_DIRECTORY'] = os.path.join(DATA_DIR, 'includes')
from wa.framework.configuration.execution import ConfigManager
from wa.framework.configuration.parsers import AgendaParser
from wa.framework.exception import ConfigError
from wa.utils.serializer import yaml
from wa.utils.types import reset_all_counters
YAML_TEST_FILE = os.path.join(os.path.dirname(__file__), 'data', 'test-agenda.yaml')
YAML_BAD_SYNTAX_FILE = os.path.join(os.path.dirname(__file__), 'data', 'bad-syntax-agenda.yaml')
YAML_TEST_FILE = os.path.join(DATA_DIR, 'test-agenda.yaml')
YAML_BAD_SYNTAX_FILE = os.path.join(DATA_DIR, 'bad-syntax-agenda.yaml')
INCLUDES_TEST_FILE = os.path.join(DATA_DIR, 'includes', 'agenda.yaml')
invalid_agenda_text = """
workloads:
@ -37,8 +44,6 @@ workloads:
workload_parameters:
test: 1
"""
invalid_agenda = yaml.load(invalid_agenda_text)
invalid_agenda.name = 'invalid1'
duplicate_agenda_text = """
global:
@ -51,14 +56,10 @@ workloads:
- id: "1"
workload_name: benchmarkpi
"""
duplicate_agenda = yaml.load(duplicate_agenda_text)
duplicate_agenda.name = 'invalid2'
short_agenda_text = """
workloads: [antutu, dhrystone, benchmarkpi]
"""
short_agenda = yaml.load(short_agenda_text)
short_agenda.name = 'short'
default_ids_agenda_text = """
workloads:
@ -71,8 +72,6 @@ workloads:
cpus: 1
- vellamo
"""
default_ids_agenda = yaml.load(default_ids_agenda_text)
default_ids_agenda.name = 'default_ids'
sectioned_agenda_text = """
sections:
@ -95,8 +94,6 @@ sections:
workloads:
- memcpy
"""
sectioned_agenda = yaml.load(sectioned_agenda_text)
sectioned_agenda.name = 'sectioned'
dup_sectioned_agenda_text = """
sections:
@ -109,8 +106,22 @@ sections:
workloads:
- memcpy
"""
dup_sectioned_agenda = yaml.load(dup_sectioned_agenda_text)
dup_sectioned_agenda.name = 'dup-sectioned'
yaml_anchors_agenda_text = """
workloads:
- name: dhrystone
params: &dhrystone_single_params
cleanup_assets: true
cpus: 0
delay: 3
duration: 0
mloops: 10
threads: 1
- name: dhrystone
params:
<<: *dhrystone_single_params
threads: 4
"""
class AgendaTest(TestCase):
@ -125,6 +136,8 @@ class AgendaTest(TestCase):
assert_equal(len(self.config.jobs_config.root_node.workload_entries), 4)
def test_duplicate_id(self):
duplicate_agenda = yaml.load(duplicate_agenda_text)
try:
self.parser.load(self.config, duplicate_agenda, 'test')
except ConfigError as e:
@ -133,6 +146,8 @@ class AgendaTest(TestCase):
raise Exception('ConfigError was not raised for an agenda with duplicate ids.')
def test_yaml_missing_field(self):
invalid_agenda = yaml.load(invalid_agenda_text)
try:
self.parser.load(self.config, invalid_agenda, 'test')
except ConfigError as e:
@ -141,20 +156,26 @@ class AgendaTest(TestCase):
raise Exception('ConfigError was not raised for an invalid agenda.')
def test_defaults(self):
short_agenda = yaml.load(short_agenda_text)
self.parser.load(self.config, short_agenda, 'test')
workload_entries = self.config.jobs_config.root_node.workload_entries
assert_equal(len(workload_entries), 3)
assert_equal(workload_entries[0].config['workload_name'], 'antutu')
assert_equal(workload_entries[0].id, 'wk1')
def test_default_id_assignment(self):
default_ids_agenda = yaml.load(default_ids_agenda_text)
self.parser.load(self.config, default_ids_agenda, 'test2')
workload_entries = self.config.jobs_config.root_node.workload_entries
assert_equal(workload_entries[0].id, 'wk2')
assert_equal(workload_entries[3].id, 'wk3')
def test_sections(self):
sectioned_agenda = yaml.load(sectioned_agenda_text)
self.parser.load(self.config, sectioned_agenda, 'test')
root_node_workload_entries = self.config.jobs_config.root_node.workload_entries
leaves = list(self.config.jobs_config.root_node.leaves())
section1_workload_entries = leaves[0].workload_entries
@ -164,10 +185,58 @@ class AgendaTest(TestCase):
assert_true(section1_workload_entries[0].config['workload_parameters']['markers_enabled'])
assert_equal(section2_workload_entries[0].config['workload_name'], 'antutu')
def test_yaml_anchors(self):
yaml_anchors_agenda = yaml.load(yaml_anchors_agenda_text)
self.parser.load(self.config, yaml_anchors_agenda, 'test')
workload_entries = self.config.jobs_config.root_node.workload_entries
assert_equal(len(workload_entries), 2)
assert_equal(workload_entries[0].config['workload_name'], 'dhrystone')
assert_equal(workload_entries[0].config['workload_parameters']['threads'], 1)
assert_equal(workload_entries[0].config['workload_parameters']['delay'], 3)
assert_equal(workload_entries[1].config['workload_name'], 'dhrystone')
assert_equal(workload_entries[1].config['workload_parameters']['threads'], 4)
assert_equal(workload_entries[1].config['workload_parameters']['delay'], 3)
@raises(ConfigError)
def test_dup_sections(self):
dup_sectioned_agenda = yaml.load(dup_sectioned_agenda_text)
self.parser.load(self.config, dup_sectioned_agenda, 'test')
@raises(ConfigError)
def test_bad_syntax(self):
self.parser.load_from_path(self.config, YAML_BAD_SYNTAX_FILE)
class FakeTargetManager:
def merge_runtime_parameters(self, params):
return params
def validate_runtime_parameters(self, params):
pass
class IncludesTest(TestCase):
def test_includes(self):
from pprint import pprint
parser = AgendaParser()
cm = ConfigManager()
tm = FakeTargetManager()
includes = parser.load_from_path(cm, INCLUDES_TEST_FILE)
include_set = set([os.path.basename(i) for i in includes])
assert_equal(include_set,
set(['test.yaml', 'section1.yaml', 'section2.yaml',
'section-include.yaml', 'workloads.yaml']))
job_classifiers = {j.id: j.classifiers
for j in cm.jobs_config.generate_job_specs(tm)}
assert_equal(job_classifiers,
{
's1-wk1': {'section': 'one'},
's2-wk1': {'section': 'two', 'included': True},
's1-wk2': {'section': 'one', 'memcpy': True},
's2-wk2': {'section': 'two', 'included': True, 'memcpy': True},
})
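As a rough illustration of what ``include#`` resolution involves — paths resolved relative to the including file and merged recursively — here is a standalone sketch. It handles only a single include per file, whereas WA's actual parser also supports multiple and section-level includes:

import os
import yaml

def load_with_includes(path, seen=None):
    seen = set() if seen is None else seen
    path = os.path.abspath(path)
    if path in seen:          # guard against include cycles
        return {}
    seen.add(path)
    with open(path) as fh:
        doc = yaml.safe_load(fh) or {}
    included = doc.pop('include#', None)
    if included is not None:
        base = load_with_includes(
            os.path.join(os.path.dirname(path), included), seen)
        base.update(doc)      # keys in the including file take precedence
        doc = base
    return doc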

View File

@ -16,6 +16,7 @@
import unittest
from nose.tools import assert_equal
from wa.framework.configuration.execution import ConfigManager
from wa.utils.misc import merge_config_values
@ -38,3 +39,21 @@ class TestConfigUtils(unittest.TestCase):
if v2 is not None:
assert_equal(type(result), type(v2))
class TestConfigParser(unittest.TestCase):
def test_param_merge(self):
config = ConfigManager()
config.load_config({'workload_params': {'one': 1, 'three': {'ex': 'x'}}, 'runtime_params': {'aye': 'a'}}, 'file_one')
config.load_config({'workload_params': {'two': 2, 'three': {'why': 'y'}}, 'runtime_params': {'bee': 'b'}}, 'file_two')
assert_equal(
config.jobs_config.job_spec_template['workload_parameters'],
{'one': 1, 'two': 2, 'three': {'why': 'y'}},
)
assert_equal(
config.jobs_config.job_spec_template['runtime_parameters'],
{'aye': 'a', 'bee': 'b'},
)
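The behaviour these assertions pin down — later config files win per top-level key, with nested values replaced wholesale rather than deep-merged — can be sketched as follows (this is not WA's actual merge implementation):

def shallow_merge(*configs):
    merged = {}
    for config in configs:
        merged.update(config)  # later sources win; nested dicts are replaced
    return merged

assert shallow_merge(
    {'one': 1, 'three': {'ex': 'x'}},
    {'two': 2, 'three': {'why': 'y'}},
) == {'one': 1, 'two': 2, 'three': {'why': 'y'}}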

View File

@ -21,9 +21,10 @@ from nose.tools import assert_equal, assert_raises
from wa.utils.exec_control import (init_environment, reset_environment,
activate_environment, once,
once_per_class, once_per_instance)
once_per_class, once_per_instance,
once_per_attribute_value)
class TestClass(object):
class MockClass(object):
called = 0
@ -32,7 +33,7 @@ class TestClass(object):
@once
def called_once(self):
TestClass.called += 1
MockClass.called += 1
@once
def initilize_once(self):
@ -50,7 +51,7 @@ class TestClass(object):
return '{}: Called={}'.format(self.__class__.__name__, self.called)
class SubClass(TestClass):
class SubClass(MockClass):
def __init__(self):
super(SubClass, self).__init__()
@ -110,7 +111,19 @@ class AnotherClass(object):
self.count += 1
class AnotherSubClass(TestClass):
class NamedClass:
count = 0
def __init__(self, name):
self.name = name
@once_per_attribute_value('name')
def initilize(self):
NamedClass.count += 1
class AnotherSubClass(MockClass):
def __init__(self):
super(AnotherSubClass, self).__init__()
@ -142,7 +155,7 @@ class EnvironmentManagementTest(TestCase):
def test_reset_current_environment(self):
activate_environment('CURRENT_ENVIRONMENT')
t1 = TestClass()
t1 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -152,7 +165,7 @@ class EnvironmentManagementTest(TestCase):
def test_switch_environment(self):
activate_environment('ENVIRONMENT1')
t1 = TestClass()
t1 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -166,7 +179,7 @@ class EnvironmentManagementTest(TestCase):
def test_reset_environment_name(self):
activate_environment('ENVIRONMENT')
t1 = TestClass()
t1 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -195,7 +208,7 @@ class OnlyOnceEnvironmentTest(TestCase):
reset_environment('TEST_ENVIRONMENT')
def test_single_instance(self):
t1 = TestClass()
t1 = MockClass()
ac = AnotherClass()
t1.initilize_once()
@ -209,8 +222,8 @@ class OnlyOnceEnvironmentTest(TestCase):
def test_mulitple_instances(self):
t1 = TestClass()
t2 = TestClass()
t1 = MockClass()
t2 = MockClass()
t1.initilize_once()
assert_equal(t1.count, 1)
@ -220,7 +233,7 @@ class OnlyOnceEnvironmentTest(TestCase):
def test_sub_classes(self):
t1 = TestClass()
t1 = MockClass()
sc = SubClass()
ss = SubSubClass()
asc = AnotherSubClass()
@ -250,7 +263,7 @@ class OncePerClassEnvironmentTest(TestCase):
reset_environment('TEST_ENVIRONMENT')
def test_single_instance(self):
t1 = TestClass()
t1 = MockClass()
ac = AnotherClass()
t1.initilize_once_per_class()
@ -264,8 +277,8 @@ class OncePerClassEnvironmentTest(TestCase):
def test_mulitple_instances(self):
t1 = TestClass()
t2 = TestClass()
t1 = MockClass()
t2 = MockClass()
t1.initilize_once_per_class()
assert_equal(t1.count, 1)
@ -275,7 +288,7 @@ class OncePerClassEnvironmentTest(TestCase):
def test_sub_classes(self):
t1 = TestClass()
t1 = MockClass()
sc1 = SubClass()
sc2 = SubClass()
ss1 = SubSubClass()
@ -308,7 +321,7 @@ class OncePerInstanceEnvironmentTest(TestCase):
reset_environment('TEST_ENVIRONMENT')
def test_single_instance(self):
t1 = TestClass()
t1 = MockClass()
ac = AnotherClass()
t1.initilize_once_per_instance()
@ -322,8 +335,8 @@ class OncePerInstanceEnvironmentTest(TestCase):
def test_mulitple_instances(self):
t1 = TestClass()
t2 = TestClass()
t1 = MockClass()
t2 = MockClass()
t1.initilize_once_per_instance()
assert_equal(t1.count, 1)
@ -333,7 +346,7 @@ class OncePerInstanceEnvironmentTest(TestCase):
def test_sub_classes(self):
t1 = TestClass()
t1 = MockClass()
sc = SubClass()
ss = SubSubClass()
asc = AnotherSubClass()
@ -352,3 +365,30 @@ class OncePerInstanceEnvironmentTest(TestCase):
asc.initilize_once_per_instance()
asc.initilize_once_per_instance()
assert_equal(asc.count, 2)
class OncePerAttributeValueTest(TestCase):
def setUp(self):
activate_environment('TEST_ENVIRONMENT')
def tearDown(self):
reset_environment('TEST_ENVIRONMENT')
def test_once_attribute_value(self):
classes = [
NamedClass('Rick'),
NamedClass('Morty'),
NamedClass('Rick'),
NamedClass('Morty'),
NamedClass('Morty'),
NamedClass('Summer'),
]
for c in classes:
c.initilize()
for c in classes:
c.initilize()
assert_equal(NamedClass.count, 3)
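For intuition, a once-per-attribute-value guard could be sketched along these lines (illustrative only; WA's exec_control version is additionally scoped to the active environment):

def once_per_attribute_value(attr):
    def decorator(method):
        seen = set()
        def wrapper(self, *args, **kwargs):
            key = getattr(self, attr)
            if key in seen:        # already run for this attribute value
                return None
            seen.add(key)
            return method(self, *args, **kwargs)
        return wrapper
    return decorator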

315
tests/test_execution.py Normal file
View File

@ -0,0 +1,315 @@
# Copyright 2020 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import tempfile
from unittest import TestCase
from mock.mock import Mock
from nose.tools import assert_equal
from datetime import datetime
from wa.framework.configuration import RunConfiguration
from wa.framework.configuration.core import JobSpec, Status
from wa.framework.execution import ExecutionContext, Runner
from wa.framework.job import Job
from wa.framework.output import RunOutput, init_run_output
from wa.framework.output_processor import ProcessorManager
import wa.framework.signal as signal
from wa.framework.run import JobState
from wa.framework.exception import ExecutionError
class MockConfigManager(Mock):
@property
def jobs(self):
return self._joblist
@property
def loaded_config_sources(self):
return []
@property
def plugin_cache(self):
return MockPluginCache()
def __init__(self, *args, **kwargs):
super(MockConfigManager, self).__init__(*args, **kwargs)
self._joblist = None
self.run_config = RunConfiguration()
def to_pod(self):
return {}
class MockPluginCache(Mock):
def list_plugins(self, kind=None):
return []
class MockProcessorManager(Mock):
def __init__(self, *args, **kwargs):
super(MockProcessorManager, self).__init__(*args, **kwargs)
def get_enabled(self):
return []
class JobState_force_retry(JobState):
@property
def status(self):
return self._status
@status.setter
def status(self, value):
if (self.retries != self.times_to_retry) and (value == Status.RUNNING):
self._status = Status.FAILED
if self.output:
self.output.status = Status.FAILED
else:
self._status = value
if self.output:
self.output.status = value
def __init__(self, to_retry, *args, **kwargs):
self.retries = 0
self._status = Status.NEW
self.times_to_retry = to_retry
self.output = None
super(JobState_force_retry, self).__init__(*args, **kwargs)
class Job_force_retry(Job):
'''This class imitates a job that retries as many times as specified by
``to_retry`` in its constructor'''
def __init__(self, to_retry, *args, **kwargs):
super(Job_force_retry, self).__init__(*args, **kwargs)
self.state = JobState_force_retry(to_retry, self.id, self.label, self.iteration, Status.NEW)
self.initialized = False
self.finalized = False
def initialize(self, context):
self.initialized = True
return super().initialize(context)
def finalize(self, context):
self.finalized = True
return super().finalize(context)
class TestRunState(TestCase):
def setUp(self):
self.path = tempfile.mkstemp()[1]
os.remove(self.path)
self.initialise_signals()
self.context = get_context(self.path)
self.job_spec = get_jobspec()
def tearDown(self):
signal.disconnect(self._verify_serialized_state, signal.RUN_INITIALIZED)
signal.disconnect(self._verify_serialized_state, signal.JOB_STARTED)
signal.disconnect(self._verify_serialized_state, signal.JOB_RESTARTED)
signal.disconnect(self._verify_serialized_state, signal.JOB_COMPLETED)
signal.disconnect(self._verify_serialized_state, signal.JOB_FAILED)
signal.disconnect(self._verify_serialized_state, signal.JOB_ABORTED)
signal.disconnect(self._verify_serialized_state, signal.RUN_FINALIZED)
def test_job_state_transitions_pass(self):
'''Tests state equality when the job passes first try'''
job = Job(self.job_spec, 1, self.context)
job.workload = Mock()
self.context.cm._joblist = [job]
self.context.run_state.add_job(job)
runner = Runner(self.context, MockProcessorManager())
runner.run()
def test_job_state_transitions_fail(self):
'''Tests state equality when job fails completely'''
job = Job_force_retry(3, self.job_spec, 1, self.context)
job.workload = Mock()
self.context.cm._joblist = [job]
self.context.run_state.add_job(job)
runner = Runner(self.context, MockProcessorManager())
runner.run()
def test_job_state_transitions_retry(self):
'''Tests state equality when job fails initially'''
job = Job_force_retry(1, self.job_spec, 1, self.context)
job.workload = Mock()
self.context.cm._joblist = [job]
self.context.run_state.add_job(job)
runner = Runner(self.context, MockProcessorManager())
runner.run()
def initialise_signals(self):
signal.connect(self._verify_serialized_state, signal.RUN_INITIALIZED)
signal.connect(self._verify_serialized_state, signal.JOB_STARTED)
signal.connect(self._verify_serialized_state, signal.JOB_RESTARTED)
signal.connect(self._verify_serialized_state, signal.JOB_COMPLETED)
signal.connect(self._verify_serialized_state, signal.JOB_FAILED)
signal.connect(self._verify_serialized_state, signal.JOB_ABORTED)
signal.connect(self._verify_serialized_state, signal.RUN_FINALIZED)
def _verify_serialized_state(self, _):
fs_state = RunOutput(self.path).state
ex_state = self.context.run_output.state
assert_equal(fs_state.status, ex_state.status)
fs_js_zip = zip(
[value for key, value in fs_state.jobs.items()],
[value for key, value in ex_state.jobs.items()]
)
for fs_jobstate, ex_jobstate in fs_js_zip:
assert_equal(fs_jobstate.iteration, ex_jobstate.iteration)
assert_equal(fs_jobstate.retries, ex_jobstate.retries)
assert_equal(fs_jobstate.status, ex_jobstate.status)
class TestJobState(TestCase):
def test_job_retry_status(self):
job_spec = get_jobspec()
context = get_context()
job = Job_force_retry(2, job_spec, 1, context)
job.workload = Mock()
context.cm._joblist = [job]
context.run_state.add_job(job)
verifier = lambda _: assert_equal(job.status, Status.PENDING)
signal.connect(verifier, signal.JOB_RESTARTED)
runner = Runner(context, MockProcessorManager())
runner.run()
signal.disconnect(verifier, signal.JOB_RESTARTED)
def test_skipped_job_state(self):
# Test that, if the first job fails and the bail parameter is set,
# the remaining jobs have status: SKIPPED
job_spec = get_jobspec()
context = get_context()
context.cm.run_config.bail_on_job_failure = True
job1 = Job_force_retry(3, job_spec, 1, context)
job2 = Job(job_spec, 1, context)
job1.workload = Mock()
job2.workload = Mock()
context.cm._joblist = [job1, job2]
context.run_state.add_job(job1)
context.run_state.add_job(job2)
runner = Runner(context, MockProcessorManager())
try:
runner.run()
except ExecutionError:
assert_equal(job2.status, Status.SKIPPED)
else:
assert False, "ExecutionError not raised"
def test_normal_job_finalized(self):
# Test that a job is initialized then finalized normally
job_spec = get_jobspec()
context = get_context()
job = Job_force_retry(0, job_spec, 1, context)
job.workload = Mock()
context.cm._joblist = [job]
context.run_state.add_job(job)
runner = Runner(context, MockProcessorManager())
runner.run()
assert_equal(job.initialized, True)
assert_equal(job.finalized, True)
def test_skipped_job_finalized(self):
# Test that a skipped job has been finalized
job_spec = get_jobspec()
context = get_context()
context.cm.run_config.bail_on_job_failure = True
job1 = Job_force_retry(3, job_spec, 1, context)
job2 = Job_force_retry(0, job_spec, 1, context)
job1.workload = Mock()
job2.workload = Mock()
context.cm._joblist = [job1, job2]
context.run_state.add_job(job1)
context.run_state.add_job(job2)
runner = Runner(context, MockProcessorManager())
try:
runner.run()
except ExecutionError:
assert_equal(job2.finalized, True)
else:
assert False, "ExecutionError not raised"
def test_failed_job_finalized(self):
# Test that a failed job is finalized while the bail parameter is set
job_spec = get_jobspec()
context = get_context()
context.cm.run_config.bail_on_job_failure = True
job1 = Job_force_retry(3, job_spec, 1, context)
job1.workload = Mock()
context.cm._joblist = [job1]
context.run_state.add_job(job1)
runner = Runner(context, MockProcessorManager())
try:
runner.run()
except ExecutionError:
assert_equal(job1.finalized, True)
else:
assert False, "ExecutionError not raised"
def get_context(path=None):
if not path:
path = tempfile.mkstemp()[1]
os.remove(path)
config = MockConfigManager()
output = init_run_output(path, config)
return ExecutionContext(config, Mock(), output)
def get_jobspec():
job_spec = JobSpec()
job_spec.augmentations = {}
job_spec.finalize()
return job_spec

View File

@ -30,6 +30,27 @@ class Callable(object):
return self.val
class TestSignalDisconnect(unittest.TestCase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.callback_ctr = 0
def setUp(self):
signal.connect(self._call_me_once, 'first')
signal.connect(self._call_me_once, 'second')
def test_handler_disconnected(self):
signal.send('first')
signal.send('second')
def _call_me_once(self):
assert_equal(self.callback_ctr, 0)
self.callback_ctr += 1
signal.disconnect(self._call_me_once, 'first')
signal.disconnect(self._call_me_once, 'second')
class TestPriorityDispatcher(unittest.TestCase):
def setUp(self):
@ -61,12 +82,16 @@ class TestPriorityDispatcher(unittest.TestCase):
def test_wrap_propagate(self):
d = {'before': False, 'after': False, 'success': False}
def before():
d['before'] = True
def after():
d['after'] = True
def success():
d['success'] = True
signal.connect(before, signal.BEFORE_WORKLOAD_SETUP)
signal.connect(after, signal.AFTER_WORKLOAD_SETUP)
signal.connect(success, signal.SUCCESSFUL_WORKLOAD_SETUP)
@ -76,7 +101,7 @@ class TestPriorityDispatcher(unittest.TestCase):
with signal.wrap('WORKLOAD_SETUP'):
raise RuntimeError()
except RuntimeError:
caught=True
caught = True
assert_true(d['before'])
assert_true(d['after'])

View File

@ -21,7 +21,7 @@ from nose.tools import raises, assert_equal, assert_not_equal, assert_in, assert
from nose.tools import assert_true, assert_false, assert_raises, assert_is, assert_list_equal
from wa.utils.types import (list_or_integer, list_or_bool, caseless_string,
arguments, prioritylist, enum, level)
arguments, prioritylist, enum, level, toggle_set)
@ -149,3 +149,51 @@ class TestEnumLevel(TestCase):
s = e.one.to_pod()
l = e.from_pod(s)
assert_equal(l, e.one)
class TestToggleSet(TestCase):
def test_equality(self):
ts1 = toggle_set(['one', 'two',])
ts2 = toggle_set(['one', 'two', '~three'])
assert_not_equal(ts1, ts2)
assert_equal(ts1.values(), ts2.values())
assert_equal(ts2, toggle_set(['two', '~three', 'one']))
def test_merge(self):
ts1 = toggle_set(['one', 'two', 'three', '~four', '~five'])
ts2 = toggle_set(['two', '~three', 'four', '~five'])
ts3 = ts1.merge_with(ts2)
assert_equal(ts1, toggle_set(['one', 'two', 'three', '~four', '~five']))
assert_equal(ts2, toggle_set(['two', '~three', 'four', '~five']))
assert_equal(ts3, toggle_set(['one', 'two', '~three', 'four', '~five']))
assert_equal(ts3.values(), set(['one', 'two','four']))
ts4 = ts1.merge_into(ts2)
assert_equal(ts1, toggle_set(['one', 'two', 'three', '~four', '~five']))
assert_equal(ts2, toggle_set(['two', '~three', 'four', '~five']))
assert_equal(ts4, toggle_set(['one', 'two', 'three', '~four', '~five']))
assert_equal(ts4.values(), set(['one', 'two', 'three']))
def test_drop_all_previous(self):
ts1 = toggle_set(['one', 'two', 'three'])
ts2 = toggle_set(['four', '~~', 'five'])
ts3 = toggle_set(['six', 'seven', '~three'])
ts4 = ts1.merge_with(ts2).merge_with(ts3)
assert_equal(ts4, toggle_set(['four', 'five', 'six', 'seven', '~three', '~~']))
ts5 = ts2.merge_into(ts3).merge_into(ts1)
assert_equal(ts5, toggle_set(['four', 'five', '~~']))
ts6 = ts2.merge_into(ts3).merge_with(ts1)
assert_equal(ts6, toggle_set(['one', 'two', 'three', 'four', 'five', '~~']))
def test_order_on_create(self):
ts1 = toggle_set(['one', 'two', 'three', '~one'])
assert_equal(ts1, toggle_set(['~one', 'two', 'three']))
ts1 = toggle_set(['~one', 'two', 'three', 'one'])
assert_equal(ts1, toggle_set(['one', 'two', 'three']))
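For readers unfamiliar with ``toggle_set``, the tests encode its semantics: a leading ``~`` disables a value, ``~~`` discards everything merged before it, and ``values()`` yields only the enabled entries. The enabled-values rule alone can be sketched like this (not the real implementation):

def enabled_values(entries):
    disabled = {e[1:] for e in entries if e.startswith('~')}
    return {e for e in entries
            if not e.startswith('~') and e not in disabled}

assert enabled_values(['one', 'two', '~three', 'four']) == {'one', 'two', 'four'}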

View File

@ -17,7 +17,7 @@ from wa.framework import pluginloader, signal
from wa.framework.command import Command, ComplexCommand, SubCommand
from wa.framework.configuration import settings
from wa.framework.configuration.core import Status
from wa.framework.exception import (CommandError, ConfigError, HostError, InstrumentError,
from wa.framework.exception import (CommandError, ConfigError, HostError, InstrumentError, # pylint: disable=redefined-builtin
JobError, NotFoundError, OutputProcessorError,
PluginLoaderError, ResourceError, TargetError,
TargetNotRespondingError, TimeoutError, ToolError,
@ -33,7 +33,7 @@ from wa.framework.target.descriptor import (TargetDescriptor, TargetDescription,
create_target_description, add_description_for_target)
from wa.framework.workload import (Workload, ApkWorkload, ApkUiautoWorkload,
ApkReventWorkload, UIWorkload, UiautoWorkload,
ReventWorkload)
PackageHandler, ReventWorkload, TestPackageHandler)
from wa.framework.version import get_wa_version, get_wa_version_with_commit

Binary file not shown.

Binary file not shown.

View File

@ -13,28 +13,331 @@
# limitations under the License.
#
import os
import sys
import stat
import shutil
import string
import re
import uuid
import getpass
from collections import OrderedDict
from distutils.dir_util import copy_tree
from devlib.utils.types import identifier
try:
import psycopg2
from psycopg2 import connect, OperationalError, extras
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
except ImportError as e:
psycopg2 = None
import_error_msg = e.args[0] if e.args else str(e)
from wa import ComplexCommand, SubCommand, pluginloader, settings
from wa.framework.target.descriptor import list_target_descriptions
from wa.framework.exception import ConfigError, CommandError
from wa.instruments.energy_measurement import EnergyInstrumentBackend
from wa.utils.misc import (ensure_directory_exists as _d, capitalize,
ensure_file_directory_exists as _f)
from wa.utils.postgres import get_schema, POSTGRES_SCHEMA_DIR
from wa.utils.serializer import yaml
from devlib.utils.types import identifier
if sys.version_info >= (3, 8):
def copy_tree(src, dst):
from shutil import copy, copytree # pylint: disable=import-outside-toplevel
copytree(
src,
dst,
# dirs_exist_ok=True only exists in Python >= 3.8
dirs_exist_ok=True,
# Align with devlib and only copy the content without metadata
copy_function=copy
)
else:
def copy_tree(src, dst):
# pylint: disable=import-outside-toplevel, redefined-outer-name
from distutils.dir_util import copy_tree
# Align with devlib and only copy the content without metadata
copy_tree(src, dst, preserve_mode=False, preserve_times=False)
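Either branch then behaves the same from the caller's perspective; a usage sketch of the shim above (paths illustrative):

# Copies directory contents (without metadata), creating dst as needed.
copy_tree('wa/commands/templates', '/tmp/wa_templates')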
TEMPLATES_DIR = os.path.join(os.path.dirname(__file__), 'templates')
class CreateDatabaseSubcommand(SubCommand):
name = 'database'
description = """
Create a Postgresql database which is compatible with the WA Postgres
output processor.
"""
schemafilepath = os.path.join(POSTGRES_SCHEMA_DIR, 'postgres_schema.sql')
schemaupdatefilepath = os.path.join(POSTGRES_SCHEMA_DIR, 'postgres_schema_update_v{}.{}.sql')
def __init__(self, *args, **kwargs):
super(CreateDatabaseSubcommand, self).__init__(*args, **kwargs)
self.sql_commands = None
self.schema_major = None
self.schema_minor = None
self.postgres_host = None
self.postgres_port = None
self.username = None
self.password = None
self.dbname = None
self.config_file = None
self.force = None
def initialize(self, context):
self.parser.add_argument(
'-a', '--postgres-host', default='localhost',
help='The host on which to create the database.')
self.parser.add_argument(
'-k', '--postgres-port', default='5432',
help='The port on which the PostgreSQL server is running.')
self.parser.add_argument(
'-u', '--username', default='postgres',
help='The username with which to connect to the server.')
self.parser.add_argument(
'-p', '--password',
help='The password for the user account.')
self.parser.add_argument(
'-d', '--dbname', default='wa',
help='The name of the database to create.')
self.parser.add_argument(
'-f', '--force', action='store_true',
help='Force overwrite the existing database if one exists.')
self.parser.add_argument(
'-F', '--force-update-config', action='store_true',
help='Force update the config file if an entry exists.')
self.parser.add_argument(
'-r', '--config-file', default=settings.user_config_file,
help='Path to the config file to be updated.')
self.parser.add_argument(
'-x', '--schema-version', action='store_true',
help='Display the current schema version.')
self.parser.add_argument(
'-U', '--upgrade', action='store_true',
help='Upgrade the database to use the latest schema version.')
def execute(self, state, args): # pylint: disable=too-many-branches
if not psycopg2:
raise CommandError(
'The module psycopg2 is required for the wa '
+ 'create database command.')
if args.dbname == 'postgres':
raise ValueError('Database name to create cannot be postgres.')
self._parse_args(args)
self.schema_major, self.schema_minor, self.sql_commands = get_schema(self.schemafilepath)
# Display the version if needed and exit
if args.schema_version:
self.logger.info(
'The current schema version is {}.{}'.format(self.schema_major,
self.schema_minor))
return
if args.upgrade:
self.update_schema()
return
# Open user configuration
with open(self.config_file, 'r') as config_file:
config = yaml.load(config_file)
if 'postgres' in config and not args.force_update_config:
raise CommandError(
"The entry 'postgres' already exists in the config file. "
+ "Please specify the -F flag to force an update.")
possible_connection_errors = [
(
re.compile('FATAL: role ".*" does not exist'),
'Username does not exist or password is incorrect'
),
(
re.compile('FATAL: password authentication failed for user'),
'Password was incorrect'
),
(
re.compile('fe_sendauth: no password supplied'),
'Passwordless connection is not enabled. '
'Please enable trust in pg_hba for this host '
'or use a password'
),
(
re.compile('FATAL: no pg_hba.conf entry for'),
'Host is not allowed to connect to the specified database '
'using this user according to pg_hba.conf. Please change the '
'rules in pg_hba or your connection method'
),
(
re.compile('FATAL: pg_hba.conf rejects connection'),
'Connection was rejected by pg_hba.conf'
),
]
def predicate(error, handle):
if handle[0].match(str(error)):
raise CommandError(handle[1] + ': \n' + str(error))
# Attempt to create database
try:
self.create_database()
except OperationalError as e:
for handle in possible_connection_errors:
predicate(e, handle)
raise e
# Update the configuration file
self._update_configuration_file(config)
def create_database(self):
self._validate_version()
self._check_database_existence()
self._create_database_postgres()
self._apply_database_schema(self.sql_commands, self.schema_major, self.schema_minor)
self.logger.info(
"Successfully created the database {}".format(self.dbname))
def update_schema(self):
self._validate_version()
schema_major, schema_minor, _ = get_schema(self.schemafilepath)
meta_oid, current_major, current_minor = self._get_database_schema_version()
while not (schema_major == current_major and schema_minor == current_minor):
current_minor = self._update_schema_minors(current_major, current_minor, meta_oid)
current_major, current_minor = self._update_schema_major(current_major, current_minor, meta_oid)
msg = "Database schema update of '{}' to v{}.{} complete"
self.logger.info(msg.format(self.dbname, schema_major, schema_minor))
def _update_schema_minors(self, major, minor, meta_oid):
# Upgrade all available minor versions
while True:
minor += 1
schema_update = os.path.join(POSTGRES_SCHEMA_DIR,
self.schemaupdatefilepath.format(major, minor))
if not os.path.exists(schema_update):
break
_, _, sql_commands = get_schema(schema_update)
self._apply_database_schema(sql_commands, major, minor, meta_oid)
msg = "Updated the database schema to v{}.{}"
self.logger.debug(msg.format(major, minor))
# Return last existing update file version
return minor - 1
def _update_schema_major(self, current_major, current_minor, meta_oid):
current_major += 1
schema_update = os.path.join(POSTGRES_SCHEMA_DIR,
self.schemaupdatefilepath.format(current_major, 0))
if not os.path.exists(schema_update):
return (current_major - 1, current_minor)
# Reset minor to 0 with major version bump
current_minor = 0
_, _, sql_commands = get_schema(schema_update)
self._apply_database_schema(sql_commands, current_major, current_minor, meta_oid)
msg = "Updated the database schema to v{}.{}"
self.logger.debug(msg.format(current_major, current_minor))
return (current_major, current_minor)
def _validate_version(self):
conn = connect(user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
if conn.server_version < 90400:
msg = 'Postgres version too low. Please ensure that you are using at least v9.4'
raise CommandError(msg)
def _get_database_schema_version(self):
conn = connect(dbname=self.dbname, user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
cursor = conn.cursor()
cursor.execute('''SELECT
DatabaseMeta.oid,
DatabaseMeta.schema_major,
DatabaseMeta.schema_minor
FROM
DatabaseMeta;''')
return cursor.fetchone()
def _check_database_existence(self):
try:
connect(dbname=self.dbname, user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
except OperationalError as e:
# Expect an operational error (database's non-existence)
if not re.compile('FATAL: database ".*" does not exist').match(str(e)):
raise e
else:
if not self.force:
raise CommandError(
"Database {} already exists. ".format(self.dbname)
+ "Please specify the -f flag to create it from afresh."
)
def _create_database_postgres(self):
conn = connect(dbname='postgres', user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
cursor = conn.cursor()
cursor.execute('DROP DATABASE IF EXISTS ' + self.dbname)
cursor.execute('CREATE DATABASE ' + self.dbname)
conn.commit()
cursor.close()
conn.close()
def _apply_database_schema(self, sql_commands, schema_major, schema_minor, meta_uuid=None):
conn = connect(dbname=self.dbname, user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
cursor = conn.cursor()
cursor.execute(sql_commands)
if not meta_uuid:
extras.register_uuid()
meta_uuid = uuid.uuid4()
cursor.execute("INSERT INTO DatabaseMeta VALUES (%s, %s, %s)",
(meta_uuid,
schema_major,
schema_minor
))
else:
cursor.execute("UPDATE DatabaseMeta SET schema_major = %s, schema_minor = %s WHERE oid = %s;",
(schema_major,
schema_minor,
meta_uuid
))
conn.commit()
cursor.close()
conn.close()
def _update_configuration_file(self, config):
''' Update the user configuration file with the newly created database's
configuration.
'''
config['postgres'] = OrderedDict(
[('host', self.postgres_host), ('port', self.postgres_port),
('dbname', self.dbname), ('username', self.username), ('password', self.password)])
with open(self.config_file, 'w+') as config_file:
yaml.dump(config, config_file)
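The resulting 'postgres' entry can then be read back from the user config; a sketch (the config path is an assumption of the default WA location):

import os
from wa.utils.serializer import yaml  # the same serializer used above

config_path = os.path.expanduser('~/.workload_automation/config.yaml')
with open(config_path) as fh:
    cfg = yaml.load(fh)
print(cfg['postgres'])  # {'host': ..., 'port': ..., 'dbname': ..., ...}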
def _parse_args(self, args):
self.postgres_host = args.postgres_host
self.postgres_port = args.postgres_port
self.username = args.username
self.password = args.password
self.dbname = args.dbname
self.config_file = args.config_file
self.force = args.force
class CreateAgendaSubcommand(SubCommand):
name = 'agenda'
@ -51,6 +354,7 @@ class CreateAgendaSubcommand(SubCommand):
self.parser.add_argument('-o', '--output', metavar='FILE',
help='Output file. If not specified, STDOUT will be used instead.')
# pylint: disable=too-many-branches
def execute(self, state, args):
agenda = OrderedDict()
agenda['config'] = OrderedDict(augmentations=[], iterations=args.iterations)
@ -71,7 +375,15 @@ class CreateAgendaSubcommand(SubCommand):
extcls = pluginloader.get_plugin_class(name)
config = pluginloader.get_default_config(name)
if extcls.kind == 'workload':
# Handle special case for EnergyInstrumentBackends
if issubclass(extcls, EnergyInstrumentBackend):
if 'energy_measurement' not in agenda['config']['augmentations']:
energy_config = pluginloader.get_default_config('energy_measurement')
agenda['config']['augmentations'].append('energy_measurement')
agenda['config']['energy_measurement'] = energy_config
agenda['config']['energy_measurement']['instrument'] = extcls.name
agenda['config']['energy_measurement']['instrument_parameters'] = config
elif extcls.kind == 'workload':
entry = OrderedDict()
entry['name'] = extcls.name
if name != extcls.name:
@ -79,11 +391,12 @@ class CreateAgendaSubcommand(SubCommand):
entry['params'] = config
agenda['workloads'].append(entry)
else:
if extcls.kind == 'instrument':
agenda['config']['augmentations'].append(name)
if extcls.kind == 'output_processor':
agenda['config']['augmentations'].append(name)
agenda['config'][name] = config
if extcls.kind in ('instrument', 'output_processor'):
if extcls.name not in agenda['config']['augmentations']:
agenda['config']['augmentations'].append(extcls.name)
if extcls.name not in agenda['config']:
agenda['config'][extcls.name] = config
if args.output:
wfh = open(args.output, 'w')
@ -104,14 +417,14 @@ class CreateWorkloadSubcommand(SubCommand):
self.parser.add_argument('name', metavar='NAME',
help='Name of the workload to be created')
self.parser.add_argument('-p', '--path', metavar='PATH', default=None,
help='The location at which the workload will be created. If not specified, ' +
'this defaults to "~/.workload_automation/plugins".')
help='The location at which the workload will be created. If not specified, '
+ 'this defaults to "~/.workload_automation/plugins".')
self.parser.add_argument('-f', '--force', action='store_true',
help='Create the new workload even if a workload with the specified ' +
'name already exists.')
help='Create the new workload even if a workload with the specified '
+ 'name already exists.')
self.parser.add_argument('-k', '--kind', metavar='KIND', default='basic', choices=list(create_funcs.keys()),
help='The type of workload to be created. The available options ' +
'are: {}'.format(', '.join(list(create_funcs.keys()))))
help='The type of workload to be created. The available options '
+ 'are: {}'.format(', '.join(list(create_funcs.keys()))))
def execute(self, state, args): # pylint: disable=R0201
where = args.path or 'local'
@ -134,8 +447,8 @@ class CreatePackageSubcommand(SubCommand):
self.parser.add_argument('name', metavar='NAME',
help='Name of the package to be created')
self.parser.add_argument('-p', '--path', metavar='PATH', default=None,
help='The location at which the new package will be created. If not specified, ' +
'current working directory will be used.')
help='The location at which the new package will be created. If not specified, '
+ 'current working directory will be used.')
self.parser.add_argument('-f', '--force', action='store_true',
help='Create the new package even if a file or directory with the same name '
'already exists at the specified location.')
@ -170,6 +483,7 @@ class CreateCommand(ComplexCommand):
object-specific arguments.
'''
subcmd_classes = [
CreateDatabaseSubcommand,
CreateWorkloadSubcommand,
CreateAgendaSubcommand,
CreatePackageSubcommand,
@ -240,6 +554,7 @@ def create_uiauto_project(path, name):
wfh.write(render_template(os.path.join('uiauto', 'UiAutomation.java'),
{'name': name, 'package_name': package_name}))
# Mapping of workload types to their corresponding creation method
create_funcs = {
'basic': create_template_workload,
@ -266,5 +581,5 @@ def get_class_name(name, postfix=''):
def touch(path):
with open(path, 'w') as _:
with open(path, 'w') as _: # NOQA
pass

View File

@ -0,0 +1,201 @@
--!VERSION!1.6!ENDVERSION!
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "lo";
-- In future, it may be useful to implement rules on which Parameter oid fields can be null, depending on the value in the type column;
DROP TABLE IF EXISTS DatabaseMeta;
DROP TABLE IF EXISTS Parameters;
DROP TABLE IF EXISTS Classifiers;
DROP TABLE IF EXISTS LargeObjects;
DROP TABLE IF EXISTS Artifacts;
DROP TABLE IF EXISTS Metrics;
DROP TABLE IF EXISTS Augmentations;
DROP TABLE IF EXISTS Jobs_Augs;
DROP TABLE IF EXISTS ResourceGetters;
DROP TABLE IF EXISTS Resource_Getters;
DROP TABLE IF EXISTS Events;
DROP TABLE IF EXISTS Targets;
DROP TABLE IF EXISTS Jobs;
DROP TABLE IF EXISTS Runs;
DROP TYPE IF EXISTS status_enum;
DROP TYPE IF EXISTS param_enum;
CREATE TYPE status_enum AS ENUM ('UNKNOWN(0)','NEW(1)','PENDING(2)','STARTED(3)','CONNECTED(4)', 'INITIALIZED(5)', 'RUNNING(6)', 'OK(7)', 'PARTIAL(8)', 'FAILED(9)', 'ABORTED(10)', 'SKIPPED(11)');
CREATE TYPE param_enum AS ENUM ('workload', 'resource_getter', 'augmentation', 'device', 'runtime', 'boot');
-- In future, it might be useful to create an ENUM type for the artifact kind, or simply a generic enum type;
CREATE TABLE DatabaseMeta (
oid uuid NOT NULL,
schema_major int,
schema_minor int,
PRIMARY KEY (oid)
);
CREATE TABLE Runs (
oid uuid NOT NULL,
event_summary text,
basepath text,
status status_enum,
timestamp timestamp,
run_name text,
project text,
project_stage text,
retry_on_status status_enum[],
max_retries int,
bail_on_init_failure boolean,
allow_phone_home boolean,
run_uuid uuid,
start_time timestamp,
end_time timestamp,
duration float,
metadata jsonb,
_pod_version int,
_pod_serialization_version int,
state jsonb,
PRIMARY KEY (oid)
);
CREATE TABLE Jobs (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
status status_enum,
retry int,
label text,
job_id text,
iterations int,
workload_name text,
metadata jsonb,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE Targets (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
target text,
modules text[],
cpus text[],
os text,
os_version jsonb,
hostid bigint,
hostname text,
abi text,
is_rooted boolean,
kernel_version text,
kernel_release text,
kernel_sha1 text,
kernel_config text[],
sched_features text[],
page_size_kb int,
screen_resolution int[],
prop json,
android_id text,
_pod_version int,
_pod_serialization_version int,
system_id text,
PRIMARY KEY (oid)
);
CREATE TABLE Events (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
timestamp timestamp,
message text,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE Resource_Getters (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
name text,
PRIMARY KEY (oid)
);
CREATE TABLE Augmentations (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
name text,
PRIMARY KEY (oid)
);
CREATE TABLE Jobs_Augs (
oid uuid NOT NULL,
job_oid uuid NOT NULL references Jobs(oid) ON DELETE CASCADE,
augmentation_oid uuid NOT NULL references Augmentations(oid) ON DELETE CASCADE,
PRIMARY KEY (oid)
);
CREATE TABLE Metrics (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
name text,
value double precision,
units text,
lower_is_better boolean,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE LargeObjects (
oid uuid NOT NULL,
lo_oid lo NOT NULL,
PRIMARY KEY (oid)
);
-- Trigger that allows you to manage large objects from the LO table directly;
CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON LargeObjects
FOR EACH ROW EXECUTE PROCEDURE lo_manage(lo_oid);
CREATE TABLE Artifacts (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
name text,
large_object_uuid uuid NOT NULL references LargeObjects(oid),
description text,
kind text,
_pod_version int,
_pod_serialization_version int,
is_dir boolean,
PRIMARY KEY (oid)
);
CREATE RULE del_lo AS
ON DELETE TO Artifacts
DO DELETE FROM LargeObjects
WHERE LargeObjects.oid = old.large_object_uuid
;
CREATE TABLE Classifiers (
oid uuid NOT NULL,
artifact_oid uuid references Artifacts(oid) ON DELETE CASCADE,
metric_oid uuid references Metrics(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid) ON DELETE CASCADE,
run_oid uuid references Runs(oid) ON DELETE CASCADE,
key text,
value text,
PRIMARY KEY (oid)
);
CREATE TABLE Parameters (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid) ON DELETE CASCADE,
job_oid uuid references Jobs(oid),
augmentation_oid uuid references Augmentations(oid),
resource_getter_oid uuid references Resource_Getters(oid),
name text,
value text,
value_type text,
type param_enum,
PRIMARY KEY (oid)
);
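Once the schema is in place, results can be queried with psycopg2 in the usual way; a minimal sketch (credentials and the query are illustrative):

import psycopg2

conn = psycopg2.connect(dbname='wa', user='postgres',
                        host='localhost', port='5432')
with conn.cursor() as cursor:
    # List the most recent runs recorded by the postgres output processor.
    cursor.execute('SELECT run_name, status, start_time FROM Runs '
                   'ORDER BY start_time DESC LIMIT 5')
    for run_name, status, start_time in cursor.fetchall():
        print(run_name, status, start_time)
conn.close()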

View File

@ -0,0 +1,30 @@
ALTER TABLE resourcegetters RENAME TO resource_getters;
ALTER TABLE classifiers ADD COLUMN job_oid uuid references Jobs(oid);
ALTER TABLE classifiers ADD COLUMN run_oid uuid references Runs(oid);
ALTER TABLE targets ADD COLUMN page_size_kb int;
ALTER TABLE targets ADD COLUMN screen_resolution int[];
ALTER TABLE targets ADD COLUMN prop text;
ALTER TABLE targets ADD COLUMN android_id text;
ALTER TABLE targets ADD COLUMN _pod_version int;
ALTER TABLE targets ADD COLUMN _pod_serialization_version int;
ALTER TABLE jobs RENAME COLUMN retries TO retry;
ALTER TABLE jobs ADD COLUMN _pod_version int;
ALTER TABLE jobs ADD COLUMN _pod_serialization_version int;
ALTER TABLE runs ADD COLUMN project_stage text;
ALTER TABLE runs ADD COLUMN state jsonb;
ALTER TABLE runs ADD COLUMN duration float;
ALTER TABLE runs ADD COLUMN _pod_version int;
ALTER TABLE runs ADD COLUMN _pod_serialization_version int;
ALTER TABLE artifacts ADD COLUMN _pod_version int;
ALTER TABLE artifacts ADD COLUMN _pod_serialization_version int;
ALTER TABLE events ADD COLUMN _pod_version int;
ALTER TABLE events ADD COLUMN _pod_serialization_version int;
ALTER TABLE metrics ADD COLUMN _pod_version int;
ALTER TABLE metrics ADD COLUMN _pod_serialization_version int;

View File

@ -0,0 +1,3 @@
ALTER TABLE targets ADD COLUMN system_id text;
ALTER TABLE artifacts ADD COLUMN is_dir boolean;

View File

@ -0,0 +1,2 @@
ALTER TABLE targets ADD COLUMN modules text[];

View File

@ -0,0 +1 @@
ALTER TABLE targets ALTER hostid TYPE BIGINT;

View File

@ -0,0 +1,109 @@
ALTER TABLE jobs
DROP CONSTRAINT jobs_run_oid_fkey,
ADD CONSTRAINT jobs_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE targets
DROP CONSTRAINT targets_run_oid_fkey,
ADD CONSTRAINT targets_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE events
DROP CONSTRAINT events_run_oid_fkey,
ADD CONSTRAINT events_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE resource_getters
DROP CONSTRAINT resource_getters_run_oid_fkey,
ADD CONSTRAINT resource_getters_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE augmentations
DROP CONSTRAINT augmentations_run_oid_fkey,
ADD CONSTRAINT augmentations_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE jobs_augs
DROP CONSTRAINT jobs_augs_job_oid_fkey,
DROP CONSTRAINT jobs_augs_augmentation_oid_fkey,
ADD CONSTRAINT jobs_augs_job_oid_fkey
FOREIGN KEY (job_oid)
REFERENCES Jobs(oid)
ON DELETE CASCADE,
ADD CONSTRAINT jobs_augs_augmentation_oid_fkey
FOREIGN KEY (augmentation_oid)
REFERENCES Augmentations(oid)
ON DELETE CASCADE
;
ALTER TABLE metrics
DROP CONSTRAINT metrics_run_oid_fkey,
ADD CONSTRAINT metrics_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE artifacts
DROP CONSTRAINT artifacts_run_oid_fkey,
ADD CONSTRAINT artifacts_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
CREATE RULE del_lo AS
ON DELETE TO Artifacts
DO DELETE FROM LargeObjects
WHERE LargeObjects.oid = old.large_object_uuid
;
ALTER TABLE classifiers
DROP CONSTRAINT classifiers_artifact_oid_fkey,
DROP CONSTRAINT classifiers_metric_oid_fkey,
DROP CONSTRAINT classifiers_job_oid_fkey,
DROP CONSTRAINT classifiers_run_oid_fkey,
ADD CONSTRAINT classifiers_artifact_oid_fkey
FOREIGN KEY (artifact_oid)
REFERENCES artifacts(oid)
ON DELETE CASCADE,
ADD CONSTRAINT classifiers_metric_oid_fkey
FOREIGN KEY (metric_oid)
REFERENCES metrics(oid)
ON DELETE CASCADE,
ADD CONSTRAINT classifiers_job_oid_fkey
FOREIGN KEY (job_oid)
REFERENCES jobs(oid)
ON DELETE CASCADE,
ADD CONSTRAINT classifiers_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;
ALTER TABLE parameters
DROP CONSTRAINT parameters_run_oid_fkey,
ADD CONSTRAINT parameters_run_oid_fkey
FOREIGN KEY (run_oid)
REFERENCES runs(oid)
ON DELETE CASCADE
;

View File

@ -17,6 +17,7 @@ import os
from wa import Command
from wa import discover_wa_outputs
from wa.framework.configuration.core import Status
from wa.framework.exception import CommandError
from wa.framework.output import RunOutput
from wa.framework.output_processor import ProcessorManager
@ -30,6 +31,9 @@ class ProcessContext(object):
self.target_info = None
self.job_output = None
def add_augmentation(self, aug):
pass
class ProcessCommand(Command):
@ -54,8 +58,9 @@ class ProcessCommand(Command):
""")
self.parser.add_argument('-f', '--force', action='store_true',
help="""
Run processors that have already been
run. By default these will be skipped.
Run processors that have already been run. By
default these will be skipped. Also, forces
processing of in-progress runs.
""")
self.parser.add_argument('-r', '--recursive', action='store_true',
help="""
@ -64,7 +69,7 @@ class ProcessCommand(Command):
instead of just processing the root.
""")
def execute(self, config, args):
def execute(self, config, args): # pylint: disable=arguments-differ,too-many-branches,too-many-statements
process_directory = os.path.expandvars(args.directory)
self.logger.debug('Using process directory: {}'.format(process_directory))
if not os.path.exists(process_directory):
@ -73,10 +78,18 @@ class ProcessCommand(Command):
if not args.recursive:
output_list = [RunOutput(process_directory)]
else:
output_list = [output for output in discover_wa_outputs(process_directory)]
output_list = list(discover_wa_outputs(process_directory))
pc = ProcessContext()
for run_output in output_list:
if run_output.status < Status.OK and not args.force:
msg = 'Skipping {} as it has not completed -- {}'
self.logger.info(msg.format(run_output.basepath, run_output.status))
continue
pc.run_output = run_output
pc.target_info = run_output.target_info
if not args.recursive:
self.logger.info('Installing output processors')
else:
@ -92,7 +105,7 @@ class ProcessCommand(Command):
pm = ProcessorManager(loader=config.plugin_cache)
for proc in config.get_processors():
pm.install(proc, None)
pm.install(proc, pc)
if args.additional_processors:
for proc in args.additional_processors:
# Do not add any processors that are already present since
@ -100,14 +113,18 @@ class ProcessCommand(Command):
try:
pm.get_output_processor(proc)
except ValueError:
pm.install(proc, None)
pm.install(proc, pc)
pm.validate()
pm.initialize()
pm.initialize(pc)
pc.run_output = run_output
pc.target_info = run_output.target_info
for job_output in run_output.jobs:
if job_output.status < Status.OK or job_output.status in [Status.SKIPPED, Status.ABORTED]:
msg = 'Skipping job {} {} iteration {} -- {}'
self.logger.info(msg.format(job_output.id, job_output.label,
job_output.iteration, job_output.status))
continue
pc.job_output = job_output
pm.enable_all()
if not args.force:
@ -136,7 +153,8 @@ class ProcessCommand(Command):
self.logger.info('Processing run')
pm.process_run_output(pc)
pm.export_run_output(pc)
pm.finalize()
pm.finalize(pc)
run_output.write_info()
run_output.write_result()
self.logger.info('Done.')
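
A usage sketch for the updated command, assuming a directory tree containing one or more WA run outputs:

# Reprocess every run found under ./results, including nested outputs
# (--recursive) and runs that have not yet completed (--force).
wa process --recursive --force ./results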

288
wa/commands/report.py Normal file
View File

@ -0,0 +1,288 @@
from collections import Counter
from datetime import datetime, timedelta
import logging
import os
from wa import Command, settings
from wa.framework.configuration.core import Status
from wa.framework.output import RunOutput, discover_wa_outputs
from wa.utils.doc import underline
from wa.utils.log import COLOR_MAP, RESET_COLOR
from wa.utils.terminalsize import get_terminal_size
class ReportCommand(Command):
name = 'report'
description = '''
Monitor an ongoing run and provide information on its progress.
Specify the output directory of the run you would like to monitor;
alternatively, report will attempt to discover WA output directories
within the current directory. The output includes run information such as
the UUID, start time, duration, project name and a short summary of the
run's progress (number of completed jobs, the number of jobs in each
different status).
If verbose output is specified, the output includes a list of all events
labelled as not specific to any job, followed by a list of the jobs in the
order executed, with their retries (if any), current status and, if the job
is finished, a list of events that occurred during that job's execution.
This is an example of a job status line:
wk1 (exoplayer) [1] - 2, PARTIAL
It contains two entries delimited by a comma: the job's descriptor followed
by its completion status (``PARTIAL``, in this case). The descriptor
consists of the following elements:
- the job ID (``wk1``)
- the job label (which defaults to the workload name) in parentheses
- job iteration number in square brackets (``1`` in this case)
- a hyphen followed by the retry attempt number.
(note: this will only be shown if the job has been retried at least
once. If the job has not yet run, or if it completed on the first
attempt, the hyphen and retry count -- which in that case would be
zero -- will not appear).
'''
def initialize(self, context):
self.parser.add_argument('-d', '--directory',
help='''
Specify the WA output path. report will
otherwise attempt to discover output
directories in the current directory.
''')
def execute(self, state, args):
if args.directory:
output_path = args.directory
run_output = RunOutput(output_path)
else:
possible_outputs = list(discover_wa_outputs(os.getcwd()))
num_paths = len(possible_outputs)
if num_paths > 1:
print('More than one possible output directory found,'
' please choose a path from the following:'
)
for i in range(num_paths):
print("{}: {}".format(i, possible_outputs[i].basepath))
while True:
try:
select = int(input())
except ValueError:
print("Please select a valid path number")
continue
if select not in range(num_paths):
print("Please select a valid path number")
continue
break
run_output = possible_outputs[select]
else:
run_output = possible_outputs[0]
rm = RunMonitor(run_output)
print(rm.generate_output(args.verbose))
class RunMonitor:
@property
def elapsed_time(self):
if self._elapsed is None:
if self.ro.info.duration is None:
self._elapsed = datetime.utcnow() - self.ro.info.start_time
else:
self._elapsed = self.ro.info.duration
return self._elapsed
@property
def job_outputs(self):
if self._job_outputs is None:
self._job_outputs = {
(j_o.id, j_o.label, j_o.iteration): j_o for j_o in self.ro.jobs
}
return self._job_outputs
@property
def projected_duration(self):
elapsed = self.elapsed_time.total_seconds()
proj = timedelta(seconds=elapsed * (len(self.jobs) / len(self.segmented['finished'])))
return proj - self.elapsed_time
def __init__(self, ro):
self.ro = ro
self._elapsed = None
self._p_duration = None
self._job_outputs = None
self._termwidth = None
self._fmt = _simple_formatter()
self.get_data()
def get_data(self):
self.jobs = [state for label_id, state in self.ro.state.jobs.items()]
if self.jobs:
rc = self.ro.run_config
self.segmented = segment_jobs_by_state(self.jobs,
rc.max_retries,
rc.retry_on_status
)
def generate_run_header(self):
info = self.ro.info
header = underline('Run Info')
header += "UUID: {}\n".format(info.uuid)
if info.run_name:
header += "Run name: {}\n".format(info.run_name)
if info.project:
header += "Project: {}\n".format(info.project)
if info.project_stage:
header += "Project stage: {}\n".format(info.project_stage)
if info.start_time:
duration = _seconds_as_smh(self.elapsed_time.total_seconds())
header += ("Start time: {}\n"
"Duration: {:02}:{:02}:{:02}\n"
).format(info.start_time,
duration[2], duration[1], duration[0],
)
if self.segmented['finished'] and not info.end_time:
p_duration = _seconds_as_smh(self.projected_duration.total_seconds())
header += "Projected time remaining: {:02}:{:02}:{:02}\n".format(
p_duration[2], p_duration[1], p_duration[0]
)
elif self.ro.info.end_time:
header += "End time: {}\n".format(info.end_time)
return header + '\n'
def generate_job_summary(self):
total = len(self.jobs)
num_fin = len(self.segmented['finished'])
summary = underline('Job Summary')
summary += 'Total: {}, Completed: {} ({}%)\n'.format(
total, num_fin, (num_fin / total) * 100
) if total > 0 else 'No jobs created\n'
ctr = Counter()
for run_state, jobs in ((k, v) for k, v in self.segmented.items() if v):
if run_state == 'finished':
ctr.update([job.status.name.lower() for job in jobs])
else:
ctr[run_state] += len(jobs)
return summary + ', '.join(
[str(count) + ' ' + self._fmt.highlight_keyword(status) for status, count in ctr.items()]
) + '\n\n'
def generate_job_detail(self):
detail = underline('Job Detail')
for job in self.jobs:
detail += ('{} ({}) [{}]{}, {}\n').format(
job.id,
job.label,
job.iteration,
' - ' + str(job.retries) if job.retries else '',
self._fmt.highlight_keyword(str(job.status))
)
job_output = self.job_outputs[(job.id, job.label, job.iteration)]
for event in job_output.events:
detail += self._fmt.fit_term_width(
'\t{}\n'.format(event.summary)
)
return detail
def generate_run_detail(self):
detail = underline('Run Events') if self.ro.events else ''
for event in self.ro.events:
detail += '{}\n'.format(event.summary)
return detail + '\n'
def generate_output(self, verbose):
if not self.jobs:
return 'No jobs found in output directory\n'
output = self.generate_run_header()
output += self.generate_job_summary()
if verbose:
output += self.generate_run_detail()
output += self.generate_job_detail()
return output
def _seconds_as_smh(seconds):
seconds = int(seconds)
hours = seconds // 3600
minutes = (seconds % 3600) // 60
seconds = seconds % 60
return seconds, minutes, hours
def segment_jobs_by_state(jobstates, max_retries, retry_status):
finished_states = [
Status.PARTIAL, Status.FAILED,
Status.ABORTED, Status.OK, Status.SKIPPED
]
segmented = {
'finished': [], 'other': [], 'running': [],
'pending': [], 'uninitialized': []
}
for jobstate in jobstates:
if (jobstate.status in retry_status) and jobstate.retries < max_retries:
segmented['running'].append(jobstate)
elif jobstate.status in finished_states:
segmented['finished'].append(jobstate)
elif jobstate.status == Status.RUNNING:
segmented['running'].append(jobstate)
elif jobstate.status == Status.PENDING:
segmented['pending'].append(jobstate)
elif jobstate.status == Status.NEW:
segmented['uninitialized'].append(jobstate)
else:
segmented['other'].append(jobstate)
return segmented
class _simple_formatter:
color_map = {
'running': COLOR_MAP[logging.INFO],
'partial': COLOR_MAP[logging.WARNING],
'failed': COLOR_MAP[logging.CRITICAL],
'aborted': COLOR_MAP[logging.ERROR]
}
def __init__(self):
self.termwidth = get_terminal_size()[0]
self.color = settings.logging['color']
def fit_term_width(self, text):
text = text.expandtabs()
if len(text) <= self.termwidth:
return text
else:
return text[0:self.termwidth - 4] + " ...\n"
def highlight_keyword(self, kw):
if not self.color or kw.lower() not in _simple_formatter.color_map:
return kw
color = _simple_formatter.color_map[kw.lower()]
return '{}{}{}'.format(color, kw, RESET_COLOR)
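
A usage sketch for the new command (the output path is hypothetical, and -v is assumed to be the framework's global verbose flag read via args.verbose):

# Summarize a specific run directory, or discover outputs under the
# current directory; verbose output adds run events and per-job detail.
wa report -d ./wa_output
wa report -v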

View File

@ -25,10 +25,6 @@ from wa.framework.target.manager import TargetManager
from wa.utils.revent import ReventRecorder
if sys.version_info[0] == 3:
raw_input = input # pylint: disable=redefined-builtin
class RecordCommand(Command):
name = 'record'
@ -96,11 +92,11 @@ class RecordCommand(Command):
if args.workload and args.output:
self.logger.error("Output file cannot be specified with Workload")
sys.exit()
if not args.workload and (args.setup or args.extract_results or
args.teardown or args.all):
if not args.workload and (args.setup or args.extract_results
or args.teardown or args.all):
self.logger.error("Cannot specify a recording stage without a Workload")
sys.exit()
if not (args.all or args.teardown or args.extract_results or args.run or args.setup):
if args.workload and not any([args.all, args.teardown, args.extract_results, args.run, args.setup]):
self.logger.error("Please specify which workload stages you wish to record")
sys.exit()
@ -120,6 +116,7 @@ class RecordCommand(Command):
outdir = os.getcwd()
self.tm = TargetManager(device, device_config, outdir)
self.tm.initialize()
self.target = self.tm.target
self.revent_recorder = ReventRecorder(self.target)
self.revent_recorder.deploy()
@ -136,11 +133,11 @@ class RecordCommand(Command):
def record(self, revent_file, name, output_path):
msg = 'Press Enter when you are ready to record {}...'
self.logger.info(msg.format(name))
raw_input('')
input('')
self.revent_recorder.start_record(revent_file)
msg = 'Press Enter when you have finished recording {}...'
self.logger.info(msg.format(name))
raw_input('')
input('')
self.revent_recorder.stop_record()
if not os.path.isdir(output_path):
@ -261,6 +258,7 @@ class ReplayCommand(Command):
device_config = state.run_config.device_config or {}
target_manager = TargetManager(device, device_config, None)
target_manager.initialize()
self.target = target_manager.target
revent_file = self.target.path.join(self.target.working_directory,
os.path.split(args.recording)[1])

View File

@ -84,7 +84,7 @@ class RunCommand(Command):
be specified multiple times.
""")
def execute(self, config, args):
def execute(self, config, args): # pylint: disable=arguments-differ
output = self.set_up_output_directory(config, args)
log.add_file(output.logfile)
output.add_artifact('runlog', output.logfile, kind='log',
@ -97,8 +97,10 @@ class RunCommand(Command):
parser = AgendaParser()
if os.path.isfile(args.agenda):
parser.load_from_path(config, args.agenda)
includes = parser.load_from_path(config, args.agenda)
shutil.copy(args.agenda, output.raw_config_dir)
for inc in includes:
shutil.copy(inc, output.raw_config_dir)
else:
try:
pluginloader.get_plugin_class(args.agenda, kind='workload')
@ -110,6 +112,11 @@ class RunCommand(Command):
'by running "wa list workloads".'
raise ConfigError(msg.format(args.agenda))
# Update run info with newly parsed config values
output.info.project = config.run_config.project
output.info.project_stage = config.run_config.project_stage
output.info.run_name = config.run_config.run_name
executor = Executor()
executor.execute(config, output)

View File

@ -0,0 +1,28 @@
# 1
## 1.0
- First version
## 1.1
- LargeObjects table added as a substitute for the previous plan to
use the filesystem and a path reference to store artifacts. This
was done following an extended discussion and tests that verified
that the savings in processing power were not enough to warrant
the creation of a dedicated server or file handler.
## 1.2
- Rename the `resourcegetters` table to `resource_getters` for consistency.
- Add Job and Run level classifiers.
- Add missing android specific properties to targets.
- Add new POD meta data to relevant tables.
- Correct job column name from `retires` to `retry`.
- Add missing run information.
## 1.3
- Add missing "system_id" field from TargetInfo.
- Enable support for uploading Artifact that represent directories.
## 1.4
- Add "modules" field to TargetInfo to list the modules loaded by the target
during the run.
## 1.5
- Change the type of the "hostid" in TargetInfo from Int to Bigint.
## 1.6
- Add cascading deletes to most tables to allow easy deletion of a run
and its associated data.
- Add rule to delete associated large object on deletion of artifact.

View File

@ -21,6 +21,8 @@
import sys
from subprocess import call, Popen, PIPE
from devlib.utils.misc import escape_double_quotes
from wa import Command
from wa.framework import pluginloader
from wa.framework.configuration.core import MetaConfiguration, RunConfiguration
@ -31,8 +33,6 @@ from wa.utils.doc import (strip_inlined_text, get_rst_from_plugin,
get_params_rst, underline)
from wa.utils.misc import which
from devlib.utils.misc import escape_double_quotes
class ShowCommand(Command):
@ -73,11 +73,8 @@ class ShowCommand(Command):
if which('pandoc'):
p = Popen(['pandoc', '-f', 'rst', '-t', 'man'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
if sys.version_info[0] == 3:
output, _ = p.communicate(rst_output.encode(sys.stdin.encoding))
output = output.decode(sys.stdout.encoding)
else:
output, _ = p.communicate(rst_output)
output, _ = p.communicate(rst_output.encode(sys.stdin.encoding))
output = output.decode(sys.stdout.encoding)
# Make sure to double escape back slashes
output = output.replace('\\', '\\\\\\')

View File

@ -59,7 +59,7 @@ params = dict(
'Environment :: Console',
'License :: Other/Proprietary License',
'Operating System :: Unix',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
],
)

View File

@ -1,18 +1,18 @@
apply plugin: 'com.android.application'
android {
compileSdkVersion 18
buildToolsVersion '25.0.0'
compileSdkVersion 28
buildToolsVersion '28.0.0'
defaultConfig {
applicationId "${package_name}"
minSdkVersion 18
targetSdkVersion 25
targetSdkVersion 28
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$$project.buildDir/apk/${package_name}.apk")
output.outputFileName = "${package_name}.apk"
}
}
}

View File

@ -16,7 +16,7 @@ fi
# Copy base class library from wlauto dist
libs_dir=app/libs
base_class=`python -c "import os, wa; print os.path.join(os.path.dirname(wa.__file__), 'framework', 'uiauto', 'uiauto.aar')"`
base_class=`python3 -c "import os, wa; print(os.path.join(os.path.dirname(wa.__file__), 'framework', 'uiauto', 'uiauto.aar'))"`
mkdir -p $$libs_dir
cp $$base_class $$libs_dir
@ -31,8 +31,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
rm -f ../$package_name
if [[ -f app/build/apk/$package_name.apk ]]; then
cp app/build/apk/$package_name.apk ../$package_name.apk
if [[ -f app/build/outputs/apk/debug/$package_name.apk ]]; then
cp app/build/outputs/apk/debug/$package_name.apk ../$package_name.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -65,7 +65,6 @@ class SubCommand(object):
options to the command's parser). ``context`` is always ``None``.
"""
pass
def execute(self, state, args):
"""

View File

@ -13,6 +13,7 @@
# limitations under the License.
import os
import logging
from copy import copy, deepcopy
from collections import OrderedDict, defaultdict
@ -22,7 +23,7 @@ from wa.utils import log
from wa.utils.misc import (get_article, merge_config_values)
from wa.utils.types import (identifier, integer, boolean, list_of_strings,
list_of, toggle_set, obj_dict, enum)
from wa.utils.serializer import is_pod
from wa.utils.serializer import is_pod, Podable
# Mapping for kind conversion; see docs for convert_types below
@ -36,6 +37,8 @@ Status = enum(['UNKNOWN', 'NEW', 'PENDING',
'STARTED', 'CONNECTED', 'INITIALIZED', 'RUNNING',
'OK', 'PARTIAL', 'FAILED', 'ABORTED', 'SKIPPED'])
logger = logging.getLogger('config')
##########################
### CONFIG POINT TYPES ###
@ -55,10 +58,11 @@ class RebootPolicy(object):
executing the first workload spec.
:each_spec: The device will be rebooted before running a new workload spec.
:each_iteration: The device will be rebooted before each new iteration.
:run_completion: The device will be rebooted after the run has been completed.
"""
valid_policies = ['never', 'as_needed', 'initial', 'each_spec', 'each_job']
valid_policies = ['never', 'as_needed', 'initial', 'each_spec', 'each_job', 'run_completion']
@staticmethod
def from_pod(pod):
@ -89,6 +93,10 @@ class RebootPolicy(object):
def reboot_on_each_spec(self):
return self.policy == 'each_spec'
@property
def reboot_on_run_completion(self):
return self.policy == 'run_completion'
def __str__(self):
return self.policy
@ -110,7 +118,9 @@ class status_list(list):
list.append(self, str(item).upper())
class LoggingConfig(dict):
class LoggingConfig(Podable, dict):
_pod_serialization_version = 1
defaults = {
'file_format': '%(asctime)s %(levelname)-8s %(name)s: %(message)s',
@ -121,9 +131,14 @@ class LoggingConfig(dict):
@staticmethod
def from_pod(pod):
return LoggingConfig(pod)
pod = LoggingConfig._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
instance = LoggingConfig(pod)
instance._pod_version = pod_version # pylint: disable=protected-access
return instance
def __init__(self, config=None):
super(LoggingConfig, self).__init__()
dict.__init__(self)
if isinstance(config, dict):
config = {identifier(k.lower()): v for k, v in config.items()}
@ -142,7 +157,14 @@ class LoggingConfig(dict):
raise ValueError(config)
def to_pod(self):
return self
pod = super(LoggingConfig, self).to_pod()
pod.update(self)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def expanded_path(path):
@ -178,7 +200,8 @@ class ConfigurationPoint(object):
constraint=None,
merge=False,
aliases=None,
global_alias=None):
global_alias=None,
deprecated=False):
"""
Create a new Parameter object.
@ -229,10 +252,12 @@ class ConfigurationPoint(object):
:param global_alias: An alias for this parameter that can be specified at
the global level. A global_alias can map onto many
ConfigurationPoints.
:param deprecated: Specify that this parameter is deprecated and its
config should be ignored. If supplied, WA will display
a warning to the user but will continue execution.
"""
self.name = identifier(name)
if kind in KIND_MAP:
kind = KIND_MAP[kind]
kind = KIND_MAP.get(kind, kind)
if kind is not None and not callable(kind):
raise ValueError('Kind must be callable.')
self.kind = kind
@ -252,6 +277,7 @@ class ConfigurationPoint(object):
self.merge = merge
self.aliases = aliases or []
self.global_alias = global_alias
self.deprecated = deprecated
if self.default is not None:
try:
@ -267,6 +293,11 @@ class ConfigurationPoint(object):
return False
def set_value(self, obj, value=None, check_mandatory=True):
if self.deprecated:
if value is not None:
msg = 'Deprecated parameter supplied for "{}" in "{}". The value will be ignored.'
logger.warning(msg.format(self.name, obj.name))
return
if value is None:
if self.default is not None:
value = self.kind(self.default)
@ -288,6 +319,8 @@ class ConfigurationPoint(object):
setattr(obj, self.name, value)
def validate(self, obj, check_mandatory=True):
if self.deprecated:
return
value = getattr(obj, self.name, None)
if value is not None:
self.validate_value(obj.name, value)
@ -347,8 +380,9 @@ def _to_pod(cfg_point, value):
raise ValueError(msg.format(cfg_point.name, value))
class Configuration(object):
class Configuration(Podable):
_pod_serialization_version = 1
config_points = []
name = ''
@ -357,7 +391,7 @@ class Configuration(object):
@classmethod
def from_pod(cls, pod):
instance = cls()
instance = super(Configuration, cls).from_pod(pod)
for cfg_point in cls.config_points:
if cfg_point.name in pod:
value = pod.pop(cfg_point.name)
@ -370,6 +404,7 @@ class Configuration(object):
return instance
def __init__(self):
super(Configuration, self).__init__()
for confpoint in self.config_points:
confpoint.set_value(self, check_mandatory=False)
@ -393,12 +428,17 @@ class Configuration(object):
cfg_point.validate(self)
def to_pod(self):
pod = {}
pod = super(Configuration, self).to_pod()
for cfg_point in self.config_points:
value = getattr(self, cfg_point.name, None)
pod[cfg_point.name] = _to_pod(cfg_point, value)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
# This configuration for the core WA framework
class MetaConfiguration(Configuration):
@ -429,6 +469,7 @@ class MetaConfiguration(Configuration):
description="""
The local mount point for the filer hosting WA assets.
""",
default=''
),
ConfigurationPoint(
'logging',
@ -445,7 +486,6 @@ class MetaConfiguration(Configuration):
contain bash color escape codes. Set this to ``False`` if
console output will be piped somewhere that does not know
how to handle those.
""",
),
ConfigurationPoint(
@ -482,6 +522,10 @@ class MetaConfiguration(Configuration):
def plugins_directory(self):
return os.path.join(self.user_directory, 'plugins')
@property
def cache_directory(self):
return os.path.join(self.user_directory, 'cache')
@property
def plugin_paths(self):
return [self.plugins_directory] + (self.extra_plugin_paths or [])
@ -494,6 +538,14 @@ class MetaConfiguration(Configuration):
def additional_packages_file(self):
return os.path.join(self.user_directory, 'packages')
@property
def target_info_cache_file(self):
return os.path.join(self.cache_directory, 'targets.json')
@property
def apk_info_cache_file(self):
return os.path.join(self.cache_directory, 'apk_info.json')
def __init__(self, environ=None):
super(MetaConfiguration, self).__init__()
if environ is None:
@ -615,15 +667,18 @@ class RunConfiguration(Configuration):
``"each_spec"``
The device will be rebooted before running a new workload spec.
.. note:: this acts the same as each_job when execution order
.. note:: This acts the same as ``each_job`` when execution order
is set to by_iteration
``"run_completion"``
The device will be rebooted after the run has been completed.
'''),
ConfigurationPoint(
'device',
kind=str,
default='generic_android',
description='''
This setting defines what specific Device subclass will be used to
This setting defines what specific ``Device`` subclass will be used to
interact the connected device. Obviously, this must match your
setup.
''',
@ -677,6 +732,17 @@ class RunConfiguration(Configuration):
failed, but continue attempting to run others.
'''
),
ConfigurationPoint(
'bail_on_job_failure',
kind=bool,
default=False,
description='''
When a job fails during its run phase, WA will attempt to retry the
job, then continue with remaining jobs after. Setting this to
``True`` means WA will skip remaining jobs and end the run if a job
has retried the maximum number of times, and still fails.
'''
),
ConfigurationPoint(
'allow_phone_home',
kind=bool, default=True,
@ -700,8 +766,12 @@ class RunConfiguration(Configuration):
meta_pod[cfg_point.name] = pod.pop(cfg_point.name, None)
device_config = pod.pop('device_config', None)
augmentations = pod.pop('augmentations', {})
getters = pod.pop('resource_getters', {})
instance = super(RunConfiguration, cls).from_pod(pod)
instance.device_config = device_config
instance.augmentations = augmentations
instance.resource_getters = getters
for cfg_point in cls.meta_data:
cfg_point.set_value(instance, meta_pod[cfg_point.name])
@ -712,6 +782,8 @@ class RunConfiguration(Configuration):
for confpoint in self.meta_data:
confpoint.set_value(self, check_mandatory=False)
self.device_config = None
self.augmentations = {}
self.resource_getters = {}
def merge_device_config(self, plugin_cache):
"""
@ -725,9 +797,21 @@ class RunConfiguration(Configuration):
self.device_config = plugin_cache.get_plugin_config(self.device,
generic_name="device_config")
def add_augmentation(self, aug):
if aug.name in self.augmentations:
raise ValueError('Augmentation "{}" already added.'.format(aug.name))
self.augmentations[aug.name] = aug.get_config()
def add_resource_getter(self, getter):
if getter.name in self.resource_getters:
raise ValueError('Resource getter "{}" already added.'.format(getter.name))
self.resource_getters[getter.name] = getter.get_config()
def to_pod(self):
pod = super(RunConfiguration, self).to_pod()
pod['device_config'] = dict(self.device_config or {})
pod['augmentations'] = self.augmentations
pod['resource_getters'] = self.resource_getters
return pod
@ -746,12 +830,12 @@ class JobSpec(Configuration):
description='''
The name of the workload to run.
'''),
ConfigurationPoint('workload_parameters', kind=obj_dict,
ConfigurationPoint('workload_parameters', kind=obj_dict, merge=True,
aliases=["params", "workload_params", "parameters"],
description='''
Parameter to be passed to the workload
'''),
ConfigurationPoint('runtime_parameters', kind=obj_dict,
ConfigurationPoint('runtime_parameters', kind=obj_dict, merge=True,
aliases=["runtime_params"],
description='''
Runtime parameters to be set prior to running
@ -952,8 +1036,8 @@ class JobGenerator(object):
if name == "augmentations":
self.update_augmentations(value)
def add_section(self, section, workloads):
new_node = self.root_node.add_section(section)
def add_section(self, section, workloads, group):
new_node = self.root_node.add_section(section, group)
with log.indentcontext():
for workload in workloads:
new_node.add_workload(workload)
@ -1015,6 +1099,12 @@ def create_job_spec(workload_entry, sections, target_manager, plugin_cache,
# PHASE 2.1: Merge general job spec configuration
for section in sections:
job_spec.update_config(section, check_mandatory=False)
# Add classifiers for any present groups
if section.id == 'global' or section.group is None:
# Ignore global config and default group
continue
job_spec.classifiers[section.group] = section.id
job_spec.update_config(workload_entry, check_mandatory=False)
# PHASE 2.2: Merge global, section and workload entry "workload_parameters"
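
The options introduced above can be driven from an agenda. A hypothetical fragment (YAML layout assumed; the key names come from this diff):

config:
  reboot_policy: run_completion  # reboot the target once the whole run finishes
  bail_on_job_failure: true      # end the run if a job still fails after max_retries
sections:
  - id: quick
    group: load                  # recorded as a classifier ('load': 'quick') on each job spec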

View File

@ -18,31 +18,44 @@ from itertools import groupby, chain
from future.moves.itertools import zip_longest
from devlib.utils.types import identifier
from wa.framework.configuration.core import (MetaConfiguration, RunConfiguration,
JobGenerator, settings)
from wa.framework.configuration.parsers import ConfigParser
from wa.framework.configuration.plugin_cache import PluginCache
from wa.framework.exception import NotFoundError
from wa.framework.exception import NotFoundError, ConfigError
from wa.framework.job import Job
from wa.utils import log
from wa.utils.serializer import Podable
class CombinedConfig(object):
class CombinedConfig(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = CombinedConfig()
instance = super(CombinedConfig, CombinedConfig).from_pod(pod)
instance.settings = MetaConfiguration.from_pod(pod.get('settings', {}))
instance.run_config = RunConfiguration.from_pod(pod.get('run_config', {}))
return instance
def __init__(self, settings=None, run_config=None): # pylint: disable=redefined-outer-name
super(CombinedConfig, self).__init__()
self.settings = settings
self.run_config = run_config
def to_pod(self):
return {'settings': self.settings.to_pod(),
'run_config': self.run_config.to_pod()}
pod = super(CombinedConfig, self).to_pod()
pod['settings'] = self.settings.to_pod()
pod['run_config'] = self.run_config.to_pod()
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
class ConfigManager(object):
@ -90,15 +103,16 @@ class ConfigManager(object):
self.agenda = None
def load_config_file(self, filepath):
self._config_parser.load_from_path(self, filepath)
includes = self._config_parser.load_from_path(self, filepath)
self.loaded_config_sources.append(filepath)
self.loaded_config_sources.extend(includes)
def load_config(self, values, source):
self._config_parser.load(self, values, source)
self.loaded_config_sources.append(source)
def get_plugin(self, name=None, kind=None, *args, **kwargs):
return self.plugin_cache.get_plugin(name, kind, *args, **kwargs)
return self.plugin_cache.get_plugin(identifier(name), kind, *args, **kwargs)
def get_instruments(self, target):
instruments = []
@ -122,15 +136,21 @@ class ConfigManager(object):
processors.append(proc)
return processors
def get_config(self):
return CombinedConfig(self.settings, self.run_config)
def finalize(self):
if not self.agenda:
msg = 'Attempting to finalize config before agenda has been set'
raise RuntimeError(msg)
self.run_config.merge_device_config(self.plugin_cache)
return CombinedConfig(self.settings, self.run_config)
return self.get_config()
def generate_jobs(self, context):
job_specs = self.jobs_config.generate_job_specs(context.tm)
if not job_specs:
msg = 'No jobs available for running.'
raise ConfigError(msg)
exec_order = self.run_config.execution_order
log.indent()
for spec, i in permute_iterations(job_specs, exec_order):

View File

@ -18,11 +18,14 @@ import os
import logging
from functools import reduce # pylint: disable=redefined-builtin
from devlib.utils.types import identifier
from wa.framework.configuration.core import JobSpec
from wa.framework.exception import ConfigError
from wa.utils import log
from wa.utils.serializer import json, read_pod, SerializerSyntaxError
from wa.utils.types import toggle_set, counter
from wa.utils.misc import merge_config_values, isiterable
logger = logging.getLogger('config')
@ -31,7 +34,9 @@ logger = logging.getLogger('config')
class ConfigParser(object):
def load_from_path(self, state, filepath):
self.load(state, _load_file(filepath, "Config"), filepath)
raw, includes = _load_file(filepath, "Config")
self.load(state, raw, filepath)
return includes
def load(self, state, raw, source, wrap_exceptions=True): # pylint: disable=too-many-branches
logger.debug('Parsing config from "{}"'.format(source))
@ -72,8 +77,8 @@ class ConfigParser(object):
for name, values in raw.items():
# Assume that all leftover config is for a plug-in or a global
# alias it is up to PluginCache to assert this assumption
logger.debug('Caching "{}" with "{}"'.format(name, values))
state.plugin_cache.add_configs(name, values, source)
logger.debug('Caching "{}" with "{}"'.format(identifier(name), values))
state.plugin_cache.add_configs(identifier(name), values, source)
except ConfigError as e:
if wrap_exceptions:
@ -87,8 +92,9 @@ class ConfigParser(object):
class AgendaParser(object):
def load_from_path(self, state, filepath):
raw = _load_file(filepath, 'Agenda')
raw, includes = _load_file(filepath, 'Agenda')
self.load(state, raw, filepath)
return includes
def load(self, state, raw, source):
logger.debug('Parsing agenda from "{}"'.format(source))
@ -190,9 +196,10 @@ class AgendaParser(object):
raise ConfigError(msg.format(json.dumps(section, indent=None)))
section['runtime_params'] = section.pop('params')
group = section.pop('group', None)
section = _construct_valid_entry(section, seen_sect_ids,
"s", state.jobs_config)
state.jobs_config.add_section(section, workloads)
state.jobs_config.add_section(section, workloads, group)
########################
@ -222,12 +229,72 @@ def _load_file(filepath, error_name):
raise ValueError("{} does not exist".format(filepath))
try:
raw = read_pod(filepath)
includes = _process_includes(raw, filepath, error_name)
except SerializerSyntaxError as e:
raise ConfigError('Error parsing {} {}: {}'.format(error_name, filepath, e))
if not isinstance(raw, dict):
message = '{} does not contain a valid {} structure; top level must be a dict.'
raise ConfigError(message.format(filepath, error_name))
return raw
return raw, includes
def _config_values_from_includes(filepath, include_path, error_name):
source_dir = os.path.dirname(filepath)
included_files = []
if isinstance(include_path, str):
include_path = os.path.expanduser(os.path.join(source_dir, include_path))
replace_value, includes = _load_file(include_path, error_name)
included_files.append(include_path)
included_files.extend(includes)
elif isinstance(include_path, list):
replace_value = {}
for path in include_path:
include_path = os.path.expanduser(os.path.join(source_dir, path))
sub_replace_value, includes = _load_file(include_path, error_name)
for key, val in sub_replace_value.items():
replace_value[key] = merge_config_values(val, replace_value.get(key, None))
included_files.append(include_path)
included_files.extend(includes)
else:
message = "{} does not contain a valid {} structure; value for 'include#' must be a string or a list"
raise ConfigError(message.format(filepath, error_name))
return replace_value, included_files
def _process_includes(raw, filepath, error_name):
if not raw:
return []
included_files = []
replace_value = None
if hasattr(raw, 'items'):
for key, value in raw.items():
if key == 'include#':
replace_value, includes = _config_values_from_includes(filepath, value, error_name)
included_files.extend(includes)
elif hasattr(value, 'items') or isiterable(value):
includes = _process_includes(value, filepath, error_name)
included_files.extend(includes)
elif isiterable(raw):
for element in raw:
if hasattr(element, 'items') or isiterable(element):
includes = _process_includes(element, filepath, error_name)
included_files.extend(includes)
if replace_value is not None:
del raw['include#']
for key, value in replace_value.items():
raw[key] = merge_config_values(value, raw.get(key, None))
return included_files
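
For illustration, a hypothetical pair of files showing how 'include#' is resolved: the path is taken relative to the including file, and keys from the included file are merged into the includer.

# common.yaml
config:
  iterations: 3

# agenda.yaml -- pulls in everything from common.yaml
include#: common.yaml
workloads:
  - dhrystone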
def merge_augmentations(raw):
@ -257,7 +324,7 @@ def merge_augmentations(raw):
raise ConfigError(msg.format(value, n, exc))
# Make sure none of the specified aliases conflict with each other
to_check = [e for e in entries]
to_check = list(entries)
while len(to_check) > 1:
check_entry = to_check.pop()
for e in to_check:

View File

@ -84,9 +84,9 @@ class PluginCache(object):
'defined in a config file, move the entry content into the top level'
raise ConfigError(msg.format((plugin_name)))
if (not self.loader.has_plugin(plugin_name) and
plugin_name not in self.targets and
plugin_name not in GENERIC_CONFIGS):
if (not self.loader.has_plugin(plugin_name)
and plugin_name not in self.targets
and plugin_name not in GENERIC_CONFIGS):
msg = 'configuration provided for unknown plugin "{}"'
raise ConfigError(msg.format(plugin_name))
@ -95,8 +95,8 @@ class PluginCache(object):
raise ConfigError(msg.format(plugin_name, repr(values), type(values)))
for name, value in values.items():
if (plugin_name not in GENERIC_CONFIGS and
name not in self.get_plugin_parameters(plugin_name)):
if (plugin_name not in GENERIC_CONFIGS
and name not in self.get_plugin_parameters(plugin_name)):
msg = "'{}' is not a valid parameter for '{}'"
raise ConfigError(msg.format(name, plugin_name))

View File

@ -33,6 +33,7 @@ class JobSpecSource(object):
def id(self):
return self.config['id']
@property
def name(self):
raise NotImplementedError()
@ -69,14 +70,20 @@ class SectionNode(JobSpecSource):
def is_leaf(self):
return not bool(self.children)
def __init__(self, config, parent=None):
def __init__(self, config, parent=None, group=None):
super(SectionNode, self).__init__(config, parent=parent)
self.workload_entries = []
self.children = []
self.group = group
def add_section(self, section):
new_node = SectionNode(section, parent=self)
self.children.append(new_node)
def add_section(self, section, group=None):
# Each level is the same group, only need to check first
if not self.children or group == self.children[0].group:
new_node = SectionNode(section, parent=self, group=group)
self.children.append(new_node)
else:
for child in self.children:
new_node = child.add_section(section, group)
return new_node
def add_workload(self, workload_config):
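
For illustration, how grouped sections nest (a sketch calling the class directly; the config dicts are trimmed and hypothetical):

root = SectionNode({'id': 'global'})
root.add_section({'id': 'big'}, group='cluster')
root.add_section({'id': 'little'}, group='cluster')  # same group: added as a sibling
root.add_section({'id': 'perf'}, group='governor')   # new group: nested under every leaf

Sections in the same group stay at one level of the tree, so their specs never combine; a section in a different group is added under every existing leaf, producing the cross product (big+perf and little+perf).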

View File

@ -16,19 +16,25 @@
import sys
import argparse
import locale
import logging
import os
import warnings
import devlib
try:
from devlib.utils.version import version as installed_devlib_version
except ImportError:
installed_devlib_version = None
from wa.framework import pluginloader
from wa.framework.command import init_argument_parser
from wa.framework.configuration import settings
from wa.framework.configuration.execution import ConfigManager
from wa.framework.host import init_user_directory, init_config
from wa.framework.exception import ConfigError
from wa.framework.version import get_wa_version_with_commit
from wa.framework.exception import ConfigError, HostError
from wa.framework.version import (get_wa_version_with_commit, format_version,
required_devlib_version)
from wa.utils import log
from wa.utils.doc import format_body
@ -64,6 +70,27 @@ def split_joined_options(argv):
return output
# Instead of presenting an obscure error due to a version mismatch, explicitly warn the user.
def check_devlib_version():
if not installed_devlib_version or installed_devlib_version[:-1] <= required_devlib_version[:-1]:
# Check the 'dev' field separately to account for comparing with release versions.
if not installed_devlib_version or (installed_devlib_version.dev and installed_devlib_version.dev < required_devlib_version.dev):
msg = 'WA requires Devlib version >={}. Please update the currently installed version {}'
raise HostError(msg.format(format_version(required_devlib_version), devlib.__version__))
# If the default encoding is not UTF-8 warn the user as this may cause compatibility issues
# when parsing files.
def check_system_encoding():
system_encoding = locale.getpreferredencoding()
msg = 'System Encoding: {}'.format(system_encoding)
if 'UTF-8' not in system_encoding:
logger.warning(msg)
logger.warning('To prevent encoding issues please use a locale setting which supports UTF-8')
else:
logger.debug(msg)
def main():
if not os.path.exists(settings.user_directory):
init_user_directory()
@ -102,6 +129,8 @@ def main():
logger.debug('Version: {}'.format(get_wa_version_with_commit()))
logger.debug('devlib version: {}'.format(devlib.__full_version__))
logger.debug('Command Line: {}'.format(' '.join(sys.argv)))
check_devlib_version()
check_system_encoding()
# each command will add its own subparser
subparsers = parser.add_subparsers(dest='command')

View File

@ -13,7 +13,7 @@
# limitations under the License.
#
# pylint: disable=unused-import
from devlib.exception import (DevlibError, HostError, TimeoutError,
from devlib.exception import (DevlibError, HostError, TimeoutError, # pylint: disable=redefined-builtin
TargetError, TargetNotRespondingError)
from wa.utils.misc import get_traceback
@ -30,60 +30,49 @@ class WAError(Exception):
class NotFoundError(WAError):
"""Raised when the specified item is not found."""
pass
class ValidationError(WAError):
"""Raised on failure to validate an extension."""
pass
class ExecutionError(WAError):
"""Error encountered by the execution framework."""
pass
class WorkloadError(WAError):
"""General Workload error."""
pass
class JobError(WAError):
"""Job execution error."""
pass
class InstrumentError(WAError):
"""General Instrument error."""
pass
class OutputProcessorError(WAError):
"""General OutputProcessor error."""
pass
class ResourceError(WAError):
"""General Resolver error."""
pass
class CommandError(WAError):
"""Raised by commands when they have encountered an error condition
during execution."""
pass
class ToolError(WAError):
"""Raised by tools when they have encountered an error condition
during execution."""
pass
class ConfigError(WAError):
"""Raised when configuration provided is invalid. This error suggests that
the user should modify their config and try again."""
pass
class SerializerSyntaxError(Exception):

View File

@ -23,10 +23,10 @@ from copy import copy
from datetime import datetime
import wa.framework.signal as signal
from wa.framework import instrument
from wa.framework import instrument as instrumentation
from wa.framework.configuration.core import Status
from wa.framework.exception import TargetError, HostError, WorkloadError
from wa.framework.exception import TargetNotRespondingError, TimeoutError
from wa.framework.exception import TargetError, HostError, WorkloadError, ExecutionError
from wa.framework.exception import TargetNotRespondingError, TimeoutError # pylint: disable=redefined-builtin
from wa.framework.job import Job
from wa.framework.output import init_job_output
from wa.framework.output_processor import ProcessorManager
@ -100,15 +100,13 @@ class ExecutionContext(object):
self.tm = tm
self.run_output = output
self.run_state = output.state
self.logger.debug('Loading resource discoverers')
self.resolver = ResourceResolver(cm.plugin_cache)
self.resolver.load()
self.job_queue = None
self.completed_jobs = None
self.current_job = None
self.successful_jobs = 0
self.failed_jobs = 0
self.run_interrupted = False
self._load_resource_getters()
def start_run(self):
self.output.info.start_time = datetime.utcnow()
@ -130,8 +128,8 @@ class ExecutionContext(object):
self.run_state.status = status
self.run_output.status = status
self.run_output.info.end_time = datetime.utcnow()
self.run_output.info.duration = (self.run_output.info.end_time -
self.run_output.info.start_time)
self.run_output.info.duration = (self.run_output.info.end_time
- self.run_output.info.start_time)
self.write_output()
def finalize(self):
@ -143,21 +141,24 @@ class ExecutionContext(object):
self.current_job = self.job_queue.pop(0)
job_output = init_job_output(self.run_output, self.current_job)
self.current_job.set_output(job_output)
self.update_job_state(self.current_job)
return self.current_job
def end_job(self):
if not self.current_job:
raise RuntimeError('No jobs in progress')
self.completed_jobs.append(self.current_job)
self.update_job_state(self.current_job)
self.output.write_result()
self.current_job = None
def set_status(self, status, force=False):
def set_status(self, status, force=False, write=True):
if not self.current_job:
raise RuntimeError('No jobs in progress')
self.current_job.set_status(status, force)
self.set_job_status(self.current_job, status, force, write)
def set_job_status(self, job, status, force=False, write=True):
job.set_status(status, force)
if write:
self.run_output.write_state()
def extract_results(self):
self.tm.extract_results(self)
@ -165,13 +166,8 @@ class ExecutionContext(object):
def move_failed(self, job):
self.run_output.move_failed(job.output)
def update_job_state(self, job):
self.run_state.update_job(job)
self.run_output.write_state()
def skip_job(self, job):
job.status = Status.SKIPPED
self.run_state.update_job(job)
self.set_job_status(job, Status.SKIPPED, force=True)
self.completed_jobs.append(job)
def skip_remaining_jobs(self):
@ -180,6 +176,9 @@ class ExecutionContext(object):
self.skip_job(job)
self.write_state()
def write_config(self):
self.run_output.write_config(self.cm.get_config())
def write_state(self):
self.run_output.write_state()
@ -191,6 +190,9 @@ class ExecutionContext(object):
def write_job_specs(self):
self.run_output.write_job_specs(self.cm.job_specs)
def add_augmentation(self, aug):
self.cm.run_config.add_augmentation(aug)
def get_resource(self, resource, strict=True):
result = self.resolver.get(resource, strict)
if result is None:
@ -245,6 +247,11 @@ class ExecutionContext(object):
def add_event(self, message):
self.output.add_event(message)
def add_classifier(self, name, value, overwrite=False):
self.output.add_classifier(name, value, overwrite)
if self.current_job:
self.current_job.add_classifier(name, value, overwrite)
def add_metadata(self, key, *args, **kwargs):
self.output.add_metadata(key, *args, **kwargs)
@ -284,7 +291,7 @@ class ExecutionContext(object):
try:
job.initialize(self)
except WorkloadError as e:
job.set_status(Status.FAILED)
self.set_job_status(job, Status.FAILED, write=False)
log.log_error(e, self.logger)
failed_ids.append(job.id)
@ -294,6 +301,14 @@ class ExecutionContext(object):
new_queue.append(job)
self.job_queue = new_queue
self.write_state()
def _load_resource_getters(self):
self.logger.debug('Loading resource discoverers')
self.resolver = ResourceResolver(self.cm.plugin_cache)
self.resolver.load()
for getter in self.resolver.getters:
self.cm.run_config.add_resource_getter(getter)
def _get_unique_filepath(self, filename):
filepath = os.path.join(self.output_directory, filename)
@ -322,7 +337,7 @@ class Executor(object):
returning.
The initial context set up involves combining configuration from various
sources, loading of requided workloads, loading and installation of
sources, loading of required workloads, loading and installation of
instruments and output processors, etc. Static validation of the combined
configuration is also performed.
@ -338,7 +353,7 @@ class Executor(object):
def execute(self, config_manager, output):
"""
Execute the run specified by an agenda. Optionally, selectors may be
used to only selecute a subset of the specified agenda.
used to only execute a subset of the specified agenda.
Params::
@ -365,12 +380,12 @@ class Executor(object):
try:
self.do_execute(context)
except KeyboardInterrupt as e:
context.run_output.status = 'ABORTED'
context.run_output.status = Status.ABORTED
log.log_error(e, self.logger)
context.write_output()
raise
except Exception as e:
context.run_output.status = 'FAILED'
context.run_output.status = Status.FAILED
log.log_error(e, self.logger)
context.write_output()
raise
@ -388,7 +403,7 @@ class Executor(object):
attempts = context.cm.run_config.max_retries
while attempts:
try:
self.target_manager.reboot()
self.target_manager.reboot(context)
except TargetError as e:
if attempts:
attempts -= 1
@ -405,9 +420,9 @@ class Executor(object):
context.output.write_state()
self.logger.info('Installing instruments')
for instrument_name in context.cm.get_instruments(self.target_manager.target):
instrument.install(instrument_name, context)
instrument.validate()
for instrument in context.cm.get_instruments(self.target_manager.target):
instrumentation.install(instrument, context)
instrumentation.validate()
self.logger.info('Installing output processors')
pm = ProcessorManager()
@ -415,6 +430,8 @@ class Executor(object):
pm.install(proc, context)
pm.validate()
context.write_config()
self.logger.info('Starting run')
runner = Runner(context, pm)
signal.send(signal.RUN_STARTED, self, context)
@ -432,16 +449,16 @@ class Executor(object):
for status in reversed(Status.levels):
if status in counter:
parts.append('{} {}'.format(counter[status], status))
self.logger.info(status_summary + ', '.join(parts))
self.logger.info('{}{}'.format(status_summary, ', '.join(parts)))
self.logger.info('Results can be found in {}'.format(output.basepath))
if self.error_logged:
self.logger.warn('There were errors during execution.')
self.logger.warn('Please see {}'.format(output.logfile))
self.logger.warning('There were errors during execution.')
self.logger.warning('Please see {}'.format(output.logfile))
elif self.warning_logged:
self.logger.warn('There were warnings during execution.')
self.logger.warn('Please see {}'.format(output.logfile))
self.logger.warning('There were warnings during execution.')
self.logger.warning('Please see {}'.format(output.logfile))
def _error_signalled_callback(self, _):
self.error_logged = True
@ -503,7 +520,7 @@ class Runner(object):
signal.connect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.connect(self._warning_signalled_callback, signal.WARNING_LOGGED)
self.context.start_run()
self.pm.initialize()
self.pm.initialize(self.context)
with log.indentcontext():
self.context.initialize_jobs()
self.context.write_state()
@ -519,7 +536,10 @@ class Runner(object):
with signal.wrap('RUN_OUTPUT_PROCESSED', self):
self.pm.process_run_output(self.context)
self.pm.export_run_output(self.context)
self.pm.finalize()
self.pm.finalize(self.context)
if self.context.reboot_policy.reboot_on_run_completion:
self.logger.info('Rebooting target on run completion.')
self.context.tm.reboot(self.context)
signal.disconnect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.disconnect(self._warning_signalled_callback, signal.WARNING_LOGGED)
@ -539,15 +559,15 @@ class Runner(object):
with signal.wrap('JOB', self, context):
context.tm.start()
self.do_run_job(job, context)
job.set_status(Status.OK)
context.set_job_status(job, Status.OK)
except (Exception, KeyboardInterrupt) as e: # pylint: disable=broad-except
log.log_error(e, self.logger)
if isinstance(e, KeyboardInterrupt):
context.run_interrupted = True
job.set_status(Status.ABORTED)
context.set_job_status(job, Status.ABORTED)
raise e
else:
job.set_status(Status.FAILED)
context.set_job_status(job, Status.FAILED)
if isinstance(e, TargetNotRespondingError):
raise e
elif isinstance(e, TargetError):
@ -570,7 +590,7 @@ class Runner(object):
self.context.skip_job(job)
return
job.set_status(Status.RUNNING)
context.set_job_status(job, Status.RUNNING)
self.send(signal.JOB_STARTED)
job.configure_augmentations(context, self.pm)
@ -581,7 +601,7 @@ class Runner(object):
try:
job.setup(context)
except Exception as e:
job.set_status(Status.FAILED)
context.set_job_status(job, Status.FAILED)
log.log_error(e, self.logger)
if isinstance(e, (TargetError, TimeoutError)):
context.tm.verify_target_responsive(context)
@ -594,10 +614,10 @@ class Runner(object):
job.run(context)
except KeyboardInterrupt:
context.run_interrupted = True
job.set_status(Status.ABORTED)
context.set_job_status(job, Status.ABORTED)
raise
except Exception as e:
job.set_status(Status.FAILED)
context.set_job_status(job, Status.FAILED)
log.log_error(e, self.logger)
if isinstance(e, (TargetError, TimeoutError)):
context.tm.verify_target_responsive(context)
@ -610,7 +630,7 @@ class Runner(object):
self.pm.process_job_output(context)
self.pm.export_job_output(context)
except Exception as e:
job.set_status(Status.PARTIAL)
context.set_job_status(job, Status.PARTIAL)
if isinstance(e, (TargetError, TimeoutError)):
context.tm.verify_target_responsive(context)
self.context.record_ui_state('output-error')
@ -618,7 +638,7 @@ class Runner(object):
except KeyboardInterrupt:
context.run_interrupted = True
job.set_status(Status.ABORTED)
context.set_status(Status.ABORTED)
raise
finally:
# If setup was successfully completed, teardown must
@ -640,6 +660,9 @@ class Runner(object):
self.logger.error(msg.format(job.id, job.iteration, job.status))
self.context.failed_jobs += 1
self.send(signal.JOB_FAILED)
if rc.bail_on_job_failure:
raise ExecutionError('Job {} failed, bailing.'.format(job.id))
else: # status not in retry_on_status
self.logger.info('Job completed with status {}'.format(job.status))
if job.status != 'ABORTED':
@ -651,8 +674,9 @@ class Runner(object):
def retry_job(self, job):
retry_job = Job(job.spec, job.iteration, self.context)
retry_job.workload = job.workload
retry_job.state = job.state
retry_job.retries = job.retries + 1
retry_job.set_status(Status.PENDING)
self.context.set_job_status(retry_job, Status.PENDING, force=True)
self.context.job_queue.insert(0, retry_job)
self.send(signal.JOB_RESTARTED)

View File

@ -31,7 +31,7 @@ import requests
from wa import Parameter, settings, __file__ as _base_filepath
from wa.framework.resource import ResourceGetter, SourcePriority, NO_ONE
from wa.framework.exception import ResourceError
from wa.utils.misc import (ensure_directory_exists as _d,
from wa.utils.misc import (ensure_directory_exists as _d, atomic_write_path,
ensure_file_directory_exists as _f, sha256, urljoin)
from wa.utils.types import boolean, caseless_string
@ -78,15 +78,20 @@ def get_path_matches(resource, files):
return matches
# pylint: disable=too-many-return-statements
def get_from_location(basepath, resource):
if resource.kind == 'file':
path = os.path.join(basepath, resource.path)
if os.path.exists(path):
return path
elif resource.kind == 'executable':
path = os.path.join(basepath, 'bin', resource.abi, resource.filename)
if os.path.exists(path):
return path
bin_dir = os.path.join(basepath, 'bin', resource.abi)
if not os.path.exists(bin_dir):
return None
for entry in os.listdir(bin_dir):
path = os.path.join(bin_dir, entry)
if resource.match(path):
return path
elif resource.kind == 'revent':
path = os.path.join(basepath, 'revent_files')
if os.path.exists(path):
@ -234,21 +239,19 @@ class Http(ResourceGetter):
index_url = urljoin(self.url, 'index.json')
response = self.geturl(index_url)
if response.status_code != http.client.OK:
message = 'Could not fetch "{}"; recieved "{} {}"'
message = 'Could not fetch "{}"; received "{} {}"'
self.logger.error(message.format(index_url,
response.status_code,
response.reason))
return {}
if sys.version_info[0] == 3:
content = response.content.decode('utf-8')
else:
content = response.content
content = response.content.decode('utf-8')
return json.loads(content)
def download_asset(self, asset, owner_name):
url = urljoin(self.url, owner_name, asset['path'])
local_path = _f(os.path.join(settings.dependencies_directory, '__remote',
owner_name, asset['path'].replace('/', os.sep)))
if os.path.exists(local_path) and not self.always_fetch:
local_sha = sha256(local_path)
if local_sha == asset['sha256']:
@ -257,14 +260,15 @@ class Http(ResourceGetter):
self.logger.debug('Downloading {}'.format(url))
response = self.geturl(url, stream=True)
if response.status_code != http.client.OK:
message = 'Could not download asset "{}"; recieved "{} {}"'
message = 'Could not download asset "{}"; received "{} {}"'
self.logger.warning(message.format(url,
response.status_code,
response.reason))
return
with open(local_path, 'wb') as wfh:
for chunk in response.iter_content(chunk_size=self.chunk_size):
wfh.write(chunk)
with atomic_write_path(local_path) as at_path:
with open(at_path, 'wb') as wfh:
for chunk in response.iter_content(chunk_size=self.chunk_size):
wfh.write(chunk)
return local_path
def geturl(self, url, stream=False):
@ -322,7 +326,8 @@ class Filer(ResourceGetter):
"""
parameters = [
Parameter('remote_path', global_alias='remote_assets_path', default='',
Parameter('remote_path', global_alias='remote_assets_path',
default=settings.assets_repository,
description="""
Path, on the local system, where the assets are located.
"""),

View File

@ -42,6 +42,7 @@ def init_user_directory(overwrite_existing=False): # pylint: disable=R0914
os.makedirs(settings.user_directory)
os.makedirs(settings.dependencies_directory)
os.makedirs(settings.plugins_directory)
os.makedirs(settings.cache_directory)
generate_default_config(os.path.join(settings.user_directory, 'config.yaml'))
@ -49,6 +50,7 @@ def init_user_directory(overwrite_existing=False): # pylint: disable=R0914
# If running with sudo on POSIX, change the ownership to the real user.
real_user = os.getenv('SUDO_USER')
if real_user:
# pylint: disable=import-outside-toplevel
import pwd # done here as module won't import on win32
user_entry = pwd.getpwnam(real_user)
uid, gid = user_entry.pw_uid, user_entry.pw_gid

View File

@ -103,8 +103,8 @@ import inspect
from collections import OrderedDict
from wa.framework import signal
from wa.framework.plugin import Plugin
from wa.framework.exception import (TargetNotRespondingError, TimeoutError,
from wa.framework.plugin import TargetedPlugin
from wa.framework.exception import (TargetNotRespondingError, TimeoutError, # pylint: disable=redefined-builtin
WorkloadError, TargetError)
from wa.utils.log import log_error
from wa.utils.misc import isiterable
@ -324,7 +324,7 @@ def install(instrument, context):
if not callable(attr):
msg = 'Attribute {} not callable in {}.'
raise ValueError(msg.format(attr_name, instrument))
argspec = inspect.getargspec(attr)
argspec = inspect.getfullargspec(attr)
arg_num = len(argspec.args)
# Instrument callbacks will be passed exactly two arguments: self
# (the instrument instance to which the callback is bound) and
@ -345,6 +345,7 @@ def install(instrument, context):
instrument.logger.context = context
installed.append(instrument)
context.add_augmentation(instrument)
def uninstall(instrument):
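The move from the removed inspect.getargspec to inspect.getfullargspec keeps the existing contract: every instrument callback must accept exactly two arguments, the bound instance and the context. A hypothetical instrument with conforming callbacks (the name and methods are illustrative only):

    class EnergyLogger(Instrument):
        name = 'energy-logger'

        def start(self, context):  # exactly self + context, as install() verifies
            self.logger.info('measurement started')

        def stop(self, context):
            self.logger.info('measurement stopped')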
@ -416,14 +417,13 @@ def get_disabled():
return [i for i in installed if not i.is_enabled]
class Instrument(Plugin):
class Instrument(TargetedPlugin):
"""
Base class for instrument implementations.
"""
kind = "instrument"
def __init__(self, target, **kwargs):
super(Instrument, self).__init__(**kwargs)
self.target = target
def __init__(self, *args, **kwargs):
super(Instrument, self).__init__(*args, **kwargs)
self.is_enabled = True
self.is_broken = False

View File

@ -23,6 +23,7 @@ from datetime import datetime
from wa.framework import pluginloader, signal, instrument
from wa.framework.configuration.core import Status
from wa.utils.log import indentcontext
from wa.framework.run import JobState
class Job(object):
@ -37,24 +38,29 @@ class Job(object):
def label(self):
return self.spec.label
@property
def classifiers(self):
return self.spec.classifiers
@property
def status(self):
return self._status
return self.state.status
@property
def has_been_initialized(self):
return self._has_been_initialized
@property
def retries(self):
return self.state.retries
@status.setter
def status(self, value):
self._status = value
self.state.status = value
self.state.timestamp = datetime.utcnow()
if self.output:
self.output.status = value
@retries.setter
def retries(self, value):
self.state.retries = value
def __init__(self, spec, iteration, context):
self.logger = logging.getLogger('job')
self.spec = spec
@ -63,13 +69,13 @@ class Job(object):
self.workload = None
self.output = None
self.run_time = None
self.retries = 0
self.classifiers = copy(self.spec.classifiers)
self._has_been_initialized = False
self._status = Status.NEW
self.state = JobState(self.id, self.label, self.iteration, Status.NEW)
def load(self, target, loader=pluginloader):
self.logger.info('Loading job {}'.format(self))
if self.iteration == 1:
if self.id not in self._workload_cache:
self.workload = loader.get_workload(self.spec.workload_name,
target,
**self.spec.workload_parameters)
@ -91,7 +97,6 @@ class Job(object):
self.workload.initialize(context)
self.set_status(Status.PENDING)
self._has_been_initialized = True
context.update_job_state(self)
def configure_augmentations(self, context, pm):
self.logger.info('Configuring augmentations')
@ -181,6 +186,11 @@ class Job(object):
if force or self.status < status:
self.status = status
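set_status() only moves a job's status forward unless force is passed, which is why the retry path above records the new PENDING status with force=True. Assuming the usual Status ordering, where PENDING precedes the running and terminal statuses:

    job.set_status(Status.RUNNING)
    job.set_status(Status.FAILED)
    job.set_status(Status.PENDING)              # ignored: PENDING < FAILED
    job.set_status(Status.PENDING, force=True)  # applied: the retry path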
def add_classifier(self, name, value, overwrite=False):
if name in self.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier.'.format(name))
self.classifiers[name] = value
def __str__(self):
return '{} ({}) [{}]'.format(self.id, self.label, self.iteration)

View File

@ -13,23 +13,36 @@
# limitations under the License.
#
try:
import psycopg2
from psycopg2 import Error as Psycopg2Error
except ImportError:
psycopg2 = None
Psycopg2Error = None
import logging
import os
import shutil
from collections import OrderedDict
import tarfile
import tempfile
from collections import OrderedDict, defaultdict
from copy import copy, deepcopy
from datetime import datetime
from io import StringIO
import devlib
from wa.framework.configuration.core import JobSpec, Status
from wa.framework.configuration.execution import CombinedConfig
from wa.framework.exception import HostError
from wa.framework.exception import HostError, SerializerSyntaxError, ConfigError
from wa.framework.run import RunState, RunInfo
from wa.framework.target.info import TargetInfo
from wa.framework.version import get_wa_version_with_commit
from wa.utils.misc import touch, ensure_directory_exists, isiterable
from wa.utils.serializer import write_pod, read_pod
from wa.utils.doc import format_simple_table
from wa.utils.misc import (touch, ensure_directory_exists, isiterable,
format_ordered_dict, safe_extract)
from wa.utils.postgres import get_schema_versions
from wa.utils.serializer import write_pod, read_pod, Podable, json
from wa.utils.types import enum, numeric
@ -135,9 +148,10 @@ class Output(object):
if not os.path.exists(path):
msg = 'Attempting to add non-existing artifact: {}'
raise HostError(msg.format(path))
is_dir = os.path.isdir(path)
path = os.path.relpath(path, self.basepath)
self.result.add_artifact(name, path, kind, description, classifiers)
self.result.add_artifact(name, path, kind, description, classifiers, is_dir)
def add_event(self, message):
self.result.add_event(message)
@ -152,6 +166,9 @@ class Output(object):
artifact = self.get_artifact(name)
return self.get_path(artifact.path)
def add_classifier(self, name, value, overwrite=False):
self.result.add_classifier(name, value, overwrite)
def add_metadata(self, key, *args, **kwargs):
self.result.add_metadata(key, *args, **kwargs)
@ -166,7 +183,35 @@ class Output(object):
return os.path.basename(self.basepath)
class RunOutput(Output):
class RunOutputCommon(object):
''' Common functionality split out to form a second base class for
the RunOutput classes.
'''
@property
def run_config(self):
if self._combined_config:
return self._combined_config.run_config
@property
def settings(self):
if self._combined_config:
return self._combined_config.settings
def get_job_spec(self, spec_id):
for spec in self.job_specs:
if spec.id == spec_id:
return spec
return None
def list_workloads(self):
workloads = []
for job in self.jobs:
if job.label not in workloads:
workloads.append(job.label)
return workloads
class RunOutput(Output, RunOutputCommon):
kind = 'run'
@ -207,16 +252,6 @@ class RunOutput(Output):
path = os.path.join(self.basepath, '__failed')
return ensure_directory_exists(path)
@property
def run_config(self):
if self._combined_config:
return self._combined_config.run_config
@property
def settings(self):
if self._combined_config:
return self._combined_config.settings
@property
def augmentations(self):
run_augs = set([])
@ -234,8 +269,8 @@ class RunOutput(Output):
self._combined_config = None
self.jobs = []
self.job_specs = []
if (not os.path.isfile(self.statefile) or
not os.path.isfile(self.infofile)):
if (not os.path.isfile(self.statefile)
or not os.path.isfile(self.infofile)):
msg = '"{}" does not exist or is not a valid WA output directory.'
raise ValueError(msg.format(self.basepath))
self.reload()
@ -269,6 +304,7 @@ class RunOutput(Output):
write_pod(self.state.to_pod(), self.statefile)
def write_config(self, config):
self._combined_config = config
write_pod(config.to_pod(), self.configfile)
def read_config(self):
@ -301,19 +337,6 @@ class RunOutput(Output):
shutil.move(job_output.basepath, failed_path)
job_output.basepath = failed_path
def get_job_spec(self, spec_id):
for spec in self.job_specs:
if spec.id == spec_id:
return spec
return None
def list_workloads(self):
workloads = []
for job in self.jobs:
if job.label not in workloads:
workloads.append(job.label)
return workloads
class JobOutput(Output):
@ -330,13 +353,22 @@ class JobOutput(Output):
self.spec = None
self.reload()
@property
def augmentations(self):
job_augs = set([])
for aug in self.spec.augmentations:
job_augs.add(aug)
return list(job_augs)
class Result(object):
class Result(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = Result()
instance.status = Status(pod['status'])
instance = super(Result, Result).from_pod(pod)
instance.status = Status.from_pod(pod['status'])
instance.metrics = [Metric.from_pod(m) for m in pod['metrics']]
instance.artifacts = [Artifact.from_pod(a) for a in pod['artifacts']]
instance.events = [Event.from_pod(e) for e in pod['events']]
@ -346,6 +378,7 @@ class Result(object):
def __init__(self):
# pylint: disable=no-member
super(Result, self).__init__()
self.status = Status.NEW
self.metrics = []
self.artifacts = []
@ -359,9 +392,10 @@ class Result(object):
logger.debug('Adding metric: {}'.format(metric))
self.metrics.append(metric)
def add_artifact(self, name, path, kind, description=None, classifiers=None):
def add_artifact(self, name, path, kind, description=None, classifiers=None,
is_dir=False):
artifact = Artifact(name, path, kind, description=description,
classifiers=classifiers)
classifiers=classifiers, is_dir=is_dir)
logger.debug('Adding artifact: {}'.format(artifact))
self.artifacts.append(artifact)
@ -380,6 +414,21 @@ class Result(object):
return artifact
raise HostError('Artifact "{}" not found'.format(name))
def add_classifier(self, name, value, overwrite=False):
if name in self.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier.'.format(name))
self.classifiers[name] = value
for metric in self.metrics:
if name in metric.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier; clashes with {}.'.format(name, metric))
metric.classifiers[name] = value
for artifact in self.artifacts:
if name in artifact.classifiers and not overwrite:
raise ValueError('Cannot overwrite "{}" classifier; clashes with {}.'.format(name, artifact))
artifact.classifiers[name] = value
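A short usage sketch of the new add_classifier() (the key and values are made up): the classifier is recorded on the result and propagated to every metric and artifact, and an existing key is only replaced when overwrite is requested.

    result.add_classifier('device', 'boardA')                  # applied everywhere
    result.add_classifier('device', 'boardB')                  # raises ValueError
    result.add_classifier('device', 'boardB', overwrite=True)  # replaces the value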
def add_metadata(self, key, *args, **kwargs):
force = kwargs.pop('force', False)
if kwargs:
@ -429,21 +478,27 @@ class Result(object):
self.metadata[key] = args[0]
def to_pod(self):
return dict(
status=str(self.status),
metrics=[m.to_pod() for m in self.metrics],
artifacts=[a.to_pod() for a in self.artifacts],
events=[e.to_pod() for e in self.events],
classifiers=copy(self.classifiers),
metadata=deepcopy(self.metadata),
)
pod = super(Result, self).to_pod()
pod['status'] = self.status.to_pod()
pod['metrics'] = [m.to_pod() for m in self.metrics]
pod['artifacts'] = [a.to_pod() for a in self.artifacts]
pod['events'] = [e.to_pod() for e in self.events]
pod['classifiers'] = copy(self.classifiers)
pod['metadata'] = deepcopy(self.metadata)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
pod['status'] = Status(pod['status']).to_pod()
return pod
ARTIFACT_TYPES = ['log', 'meta', 'data', 'export', 'raw']
ArtifactType = enum(ARTIFACT_TYPES)
class Artifact(object):
class Artifact(Podable):
"""
This is an artifact generated during execution/post-processing of a
workload. Unlike metrics, this represents an actual artifact, such as a
@ -491,12 +546,20 @@ class Artifact(object):
"""
_pod_serialization_version = 2
@staticmethod
def from_pod(pod):
pod = Artifact._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
pod['kind'] = ArtifactType(pod['kind'])
return Artifact(**pod)
instance = Artifact(**pod)
instance._pod_version = pod_version  # pylint: disable=protected-access
instance.is_dir = pod.pop('is_dir')
return instance
def __init__(self, name, path, kind, description=None, classifiers=None):
def __init__(self, name, path, kind, description=None, classifiers=None,
is_dir=False):
""""
:param name: Name that uniquely identifies this artifact.
:param path: The *relative* path of the artifact. Depending on the
@ -512,8 +575,8 @@ class Artifact(object):
:param classifiers: A set of key-value pairs to further classify this
metric beyond current iteration (e.g. this can be
used to identify sub-tests).
"""
super(Artifact, self).__init__()
self.name = name
self.path = path.replace('/', os.sep) if path is not None else path
try:
@ -523,20 +586,34 @@ class Artifact(object):
raise ValueError(msg.format(kind, ARTIFACT_TYPES))
self.description = description
self.classifiers = classifiers or {}
self.is_dir = is_dir
def to_pod(self):
pod = copy(self.__dict__)
pod = super(Artifact, self).to_pod()
pod.update(self.__dict__)
pod['kind'] = str(self.kind)
pod['is_dir'] = self.is_dir
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
@staticmethod
def _pod_upgrade_v2(pod):
pod['is_dir'] = pod.get('is_dir', False)
return pod
def __str__(self):
return self.path
def __repr__(self):
return '{} ({}): {}'.format(self.name, self.kind, self.path)
ft = 'dir' if self.is_dir else 'file'
return '{} ({}) ({}): {}'.format(self.name, ft, self.kind, self.path)
class Metric(object):
class Metric(Podable):
"""
This is a single metric collected from executing a workload.
@ -553,15 +630,26 @@ class Metric(object):
to identify sub-tests).
"""
__slots__ = ['name', 'value', 'units', 'lower_is_better', 'classifiers']
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
return Metric(**pod)
pod = Metric._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
instance = Metric(**pod)
instance._pod_version = pod_version  # pylint: disable=protected-access
return instance
@property
def label(self):
parts = ['{}={}'.format(n, v) for n, v in self.classifiers.items()]
parts.insert(0, self.name)
return '/'.join(parts)
def __init__(self, name, value, units=None, lower_is_better=False,
classifiers=None):
super(Metric, self).__init__()
self.name = name
self.value = numeric(value)
self.units = units
@ -569,13 +657,18 @@ class Metric(object):
self.classifiers = classifiers or {}
def to_pod(self):
return dict(
name=self.name,
value=self.value,
units=self.units,
lower_is_better=self.lower_is_better,
classifiers=self.classifiers,
)
pod = super(Metric, self).to_pod()
pod['name'] = self.name
pod['value'] = self.value
pod['units'] = self.units
pod['lower_is_better'] = self.lower_is_better
pod['classifiers'] = self.classifiers
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __str__(self):
result = '{}: {}'.format(self.name, self.value)
@ -587,23 +680,27 @@ class Metric(object):
def __repr__(self):
text = self.__str__()
if self.classifiers:
return '<{} {}>'.format(text, self.classifiers)
return '<{} {}>'.format(text, format_ordered_dict(self.classifiers))
else:
return '<{}>'.format(text)
class Event(object):
class Event(Podable):
"""
An event that occurred during a run.
"""
__slots__ = ['timestamp', 'message']
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
pod = Event._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
instance = Event(pod['message'])
instance.timestamp = pod['timestamp']
instance._pod_version = pod_version  # pylint: disable=protected-access
return instance
@property
@ -615,14 +712,20 @@ class Event(object):
return result
def __init__(self, message):
super(Event, self).__init__()
self.timestamp = datetime.utcnow()
self.message = message
self.message = str(message)
def to_pod(self):
return dict(
timestamp=self.timestamp,
message=self.message,
)
pod = super(Event, self).to_pod()
pod['timestamp'] = self.timestamp
pod['message'] = self.message
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __str__(self):
return '[{}] {}'.format(self.timestamp, self.message)
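Result, Artifact, Metric and Event all follow the same Podable pattern: to_pod() stamps the pod with the class's serialization version, and from_pod() migrates old pods forward one version at a time through the _pod_upgrade_vN hooks. An assumption-based outline of the upgrade chain these classes rely on (the real base class lives in wa.utils.serializer and may differ in detail):

    class Podable(object):
        _pod_serialization_version = 0

        @classmethod
        def _upgrade_pod(cls, pod):
            # Pods written before versioning start at 0; each
            # _pod_upgrade_vN hook migrates the dict one step forward.
            version = pod.get('_pod_version', 0)
            while version < cls._pod_serialization_version:
                version += 1
                pod = getattr(cls, '_pod_upgrade_v{}'.format(version))(pod)
            return pod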
@ -674,9 +777,13 @@ def init_job_output(run_output, job):
def discover_wa_outputs(path):
for root, dirs, _ in os.walk(path):
# Use topdown=True to allow pruning dirs
for root, dirs, _ in os.walk(path, topdown=True):
if '__meta' in dirs:
yield RunOutput(root)
# Avoid recursing into the output directory's artifacts, as this can
# take a long time if a large number of files is present (sysfs dump)
dirs.clear()
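The pruning relies on a documented property of os.walk: with topdown=True, mutating the dirs list in place controls which subdirectories are visited next. A self-contained illustration:

    import os

    def shallow_find(path, marker):
        # Yield directories that contain `marker` without descending
        # into them; clearing `dirs` stops os.walk recursing below root.
        for root, dirs, _ in os.walk(path, topdown=True):
            if marker in dirs:
                yield root
                dirs.clear()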
def _save_raw_config(meta_dir, state):
@ -689,3 +796,502 @@ def _save_raw_config(meta_dir, state):
basename = os.path.basename(source)
dest_path = os.path.join(raw_config_dir, 'cfg{}-{}'.format(i, basename))
shutil.copy(source, dest_path)
class DatabaseOutput(Output):
kind = None
@property
def resultfile(self):
if self.conn is None or self.oid is None:
return {}
pod = self._get_pod_version()
pod['metrics'] = self._get_metrics()
pod['status'] = self._get_status()
pod['classifiers'] = self._get_classifiers(self.oid, 'run')
pod['events'] = self._get_events()
pod['artifacts'] = self._get_artifacts()
return pod
@staticmethod
def _build_command(columns, tables, conditions=None, joins=None):
cmd = '''SELECT\n\t{}\nFROM\n\t{}'''.format(',\n\t'.join(columns), ',\n\t'.join(tables))
if joins:
for join in joins:
cmd += '''\nLEFT JOIN {} ON {}'''.format(join[0], join[1])
if conditions:
cmd += '''\nWHERE\n\t{}'''.format('\nAND\n\t'.join(conditions))
return cmd + ';'
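For illustration, with the arguments _get_metrics() below passes for a run-level output (the oid value 42 is made up), _build_command() produces a query of this shape:

    SELECT
        metrics.name,
        metrics.value,
        metrics.units,
        metrics.lower_is_better,
        metrics.oid,
        metrics._pod_version,
        metrics._pod_serialization_version
    FROM
        metrics
    LEFT JOIN classifiers ON classifiers.metric_oid = metrics.oid
    WHERE
        metrics.run_oid = '42';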
def __init__(self, conn, oid=None, reload=True): # pylint: disable=super-init-not-called
self.conn = conn
self.oid = oid
self.result = None
if reload:
self.reload()
def __repr__(self):
return '<{} {}>'.format(self.__class__.__name__, self.oid)
def __str__(self):
return self.oid
def reload(self):
try:
self.result = Result.from_pod(self.resultfile)
except Exception as e: # pylint: disable=broad-except
self.result = Result()
self.result.status = Status.UNKNOWN
self.add_event(str(e))
def get_artifact_path(self, name):
artifact = self.get_artifact(name)
if artifact.is_dir:
return self._read_dir_artifact(artifact)
else:
return self._read_file_artifact(artifact)
def _read_dir_artifact(self, artifact):
artifact_path = tempfile.mkdtemp(prefix='wa_')
with tarfile.open(fileobj=self.conn.lobject(int(artifact.path), mode='b'), mode='r|gz') as tar_file:
safe_extract(tar_file, artifact_path)
self.conn.commit()
return artifact_path
def _read_file_artifact(self, artifact):
artifact = StringIO(self.conn.lobject(int(artifact.path)).read())
self.conn.commit()
return artifact
# pylint: disable=too-many-locals
def _read_db(self, columns, tables, conditions=None, join=None, as_dict=True):
# Automatically remove the table name from a column when using column
# names as keys, or allow a column to be aliased when retrieving the
# data by passing a (db_column_name, alias) tuple.
db_columns = []
aliases_columns = []
for column in columns:
if isinstance(column, tuple):
db_columns.append(column[0])
aliases_columns.append(column[1])
else:
db_columns.append(column)
aliases_columns.append(column.rsplit('.', 1)[-1])
cmd = self._build_command(db_columns, tables, conditions, join)
logger.debug(cmd)
with self.conn.cursor() as cursor:
cursor.execute(cmd)
results = cursor.fetchall()
self.conn.commit()
if not as_dict:
return results
# Format the output dict using column names as keys
output = []
for result in results:
entry = {}
for k, v in zip(aliases_columns, result):
entry[k] = v
output.append(entry)
return output
def _get_pod_version(self):
columns = ['_pod_version', '_pod_serialization_version']
tables = ['{}s'.format(self.kind)]
conditions = ['{}s.oid = \'{}\''.format(self.kind, self.oid)]
results = self._read_db(columns, tables, conditions)
if results:
return results[0]
else:
return None
def _populate_classifiers(self, pod, kind):
for entry in pod:
oid = entry.pop('oid')
entry['classifiers'] = self._get_classifiers(oid, kind)
return pod
def _get_classifiers(self, oid, kind):
columns = ['classifiers.key', 'classifiers.value']
tables = ['classifiers']
conditions = ['{}_oid = \'{}\''.format(kind, oid)]
results = self._read_db(columns, tables, conditions, as_dict=False)
classifiers = {}
for (k, v) in results:
classifiers[k] = v
return classifiers
def _get_metrics(self):
columns = ['metrics.name', 'metrics.value', 'metrics.units',
'metrics.lower_is_better',
'metrics.oid', 'metrics._pod_version',
'metrics._pod_serialization_version']
tables = ['metrics']
joins = [('classifiers', 'classifiers.metric_oid = metrics.oid')]
conditions = ['metrics.{}_oid = \'{}\''.format(self.kind, self.oid)]
pod = self._read_db(columns, tables, conditions, joins)
return self._populate_classifiers(pod, 'metric')
def _get_status(self):
columns = ['{}s.status'.format(self.kind)]
tables = ['{}s'.format(self.kind)]
conditions = ['{}s.oid = \'{}\''.format(self.kind, self.oid)]
results = self._read_db(columns, tables, conditions, as_dict=False)
if results:
return results[0][0]
else:
return None
def _get_artifacts(self):
columns = ['artifacts.name', 'artifacts.description', 'artifacts.kind',
('largeobjects.lo_oid', 'path'), 'artifacts.oid', 'artifacts.is_dir',
'artifacts._pod_version', 'artifacts._pod_serialization_version']
tables = ['largeobjects', 'artifacts']
joins = [('classifiers', 'classifiers.artifact_oid = artifacts.oid')]
conditions = ['artifacts.{}_oid = \'{}\''.format(self.kind, self.oid),
'artifacts.large_object_uuid = largeobjects.oid']
# If retrieving run level artifacts we want those that don't also belong to a job
if self.kind == 'run':
conditions.append('artifacts.job_oid IS NULL')
pod = self._read_db(columns, tables, conditions, joins)
for artifact in pod:
artifact['path'] = str(artifact['path'])
return self._populate_classifiers(pod, 'artifact')
def _get_events(self):
columns = ['events.message', 'events.timestamp']
tables = ['events']
conditions = ['events.{}_oid = \'{}\''.format(self.kind, self.oid)]
return self._read_db(columns, tables, conditions)
def kernel_config_from_db(raw):
kernel_config = {}
if raw:
for k, v in zip(raw[0], raw[1]):
kernel_config[k] = v
return kernel_config
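The database stores a kernel config as a pair of parallel arrays (keys, then values); zipping them restores the mapping. A worked example with made-up entries:

    kernel_config_from_db([['CONFIG_SMP', 'CONFIG_PREEMPT'], ['y', 'n']])
    # -> {'CONFIG_SMP': 'y', 'CONFIG_PREEMPT': 'n'}
    kernel_config_from_db(None)
    # -> {}  (no config recorded for this target)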
class RunDatabaseOutput(DatabaseOutput, RunOutputCommon):
kind = 'run'
@property
def basepath(self):
return 'db:({})-{}@{}:{}'.format(self.dbname, self.user,
self.host, self.port)
@property
def augmentations(self):
columns = ['augmentations.name']
tables = ['augmentations']
conditions = ['augmentations.run_oid = \'{}\''.format(self.oid)]
results = self._read_db(columns, tables, conditions, as_dict=False)
return [a for augs in results for a in augs]
@property
def _db_infofile(self):
columns = ['start_time', 'project', ('run_uuid', 'uuid'), 'end_time',
'run_name', 'duration', '_pod_version', '_pod_serialization_version']
tables = ['runs']
conditions = ['runs.run_uuid = \'{}\''.format(self.run_uuid)]
pod = self._read_db(columns, tables, conditions)
if not pod:
return {}
return pod[0]
@property
def _db_targetfile(self):
columns = ['os', 'is_rooted', 'target', 'modules', 'abi', 'cpus', 'os_version',
'hostid', 'hostname', 'kernel_version', 'kernel_release',
'kernel_sha1', 'kernel_config', 'sched_features', 'page_size_kb',
'system_id', 'screen_resolution', 'prop', 'android_id',
'_pod_version', '_pod_serialization_version']
tables = ['targets']
conditions = ['targets.run_oid = \'{}\''.format(self.oid)]
pod = self._read_db(columns, tables, conditions)
if not pod:
return {}
pod = pod[0]
try:
pod['cpus'] = [json.loads(cpu) for cpu in pod.pop('cpus')]
except SerializerSyntaxError:
pod['cpus'] = []
logger.debug('Failed to deserialize target cpu information')
pod['kernel_config'] = kernel_config_from_db(pod['kernel_config'])
return pod
@property
def _db_statefile(self):
# Read overall run information
columns = ['runs.state']
tables = ['runs']
conditions = ['runs.run_uuid = \'{}\''.format(self.run_uuid)]
pod = self._read_db(columns, tables, conditions)
pod = pod[0].get('state')
if not pod:
return {}
# Read job information
columns = ['jobs.job_id', 'jobs.oid']
tables = ['jobs']
conditions = ['jobs.run_oid = \'{}\''.format(self.oid)]
job_oids = self._read_db(columns, tables, conditions)
# Match job oid with jobs from state file
for job in pod.get('jobs', []):
for job_oid in job_oids:
if job['id'] == job_oid['job_id']:
job['oid'] = job_oid['oid']
break
return pod
@property
def _db_jobsfile(self):
workload_params = self._get_parameters('workload')
runtime_params = self._get_parameters('runtime')
columns = [('jobs.job_id', 'id'), 'jobs.label', 'jobs.workload_name',
'jobs.oid', 'jobs._pod_version', 'jobs._pod_serialization_version']
tables = ['jobs']
conditions = ['jobs.run_oid = \'{}\''.format(self.oid)]
jobs = self._read_db(columns, tables, conditions)
for job in jobs:
job['augmentations'] = self._get_job_augmentations(job['oid'])
job['workload_parameters'] = workload_params.pop(job['oid'], {})
job['runtime_parameters'] = runtime_params.pop(job['oid'], {})
job.pop('oid')
return jobs
@property
def _db_run_config(self):
pod = defaultdict(dict)
parameter_types = ['augmentation', 'resource_getter']
for parameter_type in parameter_types:
columns = ['parameters.name', 'parameters.value',
'parameters.value_type',
('{}s.name'.format(parameter_type), '{}'.format(parameter_type))]
tables = ['parameters', '{}s'.format(parameter_type)]
conditions = ['parameters.run_oid = \'{}\''.format(self.oid),
'parameters.type = \'{}\''.format(parameter_type),
'parameters.{0}_oid = {0}s.oid'.format(parameter_type)]
configs = self._read_db(columns, tables, conditions)
for config in configs:
entry = {config['name']: json.loads(config['value'])}
pod['{}s'.format(parameter_type)][config.pop(parameter_type)] = entry
# run config
columns = ['runs.max_retries', 'runs.allow_phone_home',
'runs.bail_on_init_failure', 'runs.retry_on_status']
tables = ['runs']
conditions = ['runs.oid = \'{}\''.format(self.oid)]
config = self._read_db(columns, tables, conditions)
if not config:
return {}
config = config[0]
# Convert back into a string representation of an enum list
config['retry_on_status'] = config['retry_on_status'][1:-1].split(',')
pod.update(config)
return pod
def __init__(self,
password=None,
dbname='wa',
host='localhost',
port='5432',
user='postgres',
run_uuid=None,
list_runs=False):
if psycopg2 is None:
msg = 'Please install the psycopg2 module in order to connect to postgres databases'
raise HostError(msg)
self.dbname = dbname
self.host = host
self.port = port
self.user = user
self.password = password
self.run_uuid = run_uuid
self.conn = None
self.info = None
self.state = None
self.result = None
self.target_info = None
self._combined_config = None
self.jobs = []
self.job_specs = []
self.connect()
super(RunDatabaseOutput, self).__init__(conn=self.conn, reload=False)
local_schema_version, db_schema_version = get_schema_versions(self.conn)
if local_schema_version != db_schema_version:
self.disconnect()
msg = 'The current database schema is v{} however the local ' \
'schema version is v{}. Please update your database ' \
'with the create command'
raise HostError(msg.format(db_schema_version, local_schema_version))
if list_runs:
print('Available runs are:')
self._list_runs()
self.disconnect()
return
if not self.run_uuid:
print('Please specify "Run uuid"')
self._list_runs()
self.disconnect()
return
if not self.oid:
self.oid = self._get_oid()
self.reload()
def read_job_specs(self):
job_specs = []
for job in self._db_jobsfile:
job_specs.append(JobSpec.from_pod(job))
return job_specs
def connect(self):
if self.conn and not self.conn.closed:
return
try:
self.conn = psycopg2.connect(dbname=self.dbname,
user=self.user,
host=self.host,
password=self.password,
port=self.port)
except Psycopg2Error as e:
raise HostError('Unable to connect to the database: "{}"'.format(e.args[0]))
def disconnect(self):
self.conn.commit()
self.conn.close()
def reload(self):
super(RunDatabaseOutput, self).reload()
info_pod = self._db_infofile
state_pod = self._db_statefile
if not info_pod or not state_pod:
msg = '"{}" does not appear to be a valid WA Database Output.'
raise ValueError(msg.format(self.oid))
self.info = RunInfo.from_pod(info_pod)
self.state = RunState.from_pod(state_pod)
self._combined_config = CombinedConfig.from_pod({'run_config': self._db_run_config})
self.target_info = TargetInfo.from_pod(self._db_targetfile)
self.job_specs = self.read_job_specs()
for job_state in self._db_statefile['jobs']:
job = JobDatabaseOutput(self.conn, job_state.get('oid'), job_state['id'],
job_state['label'], job_state['iteration'],
job_state['retries'])
job.status = job_state['status']
job.spec = self.get_job_spec(job.id)
if job.spec is None:
logger.warning('Could not find spec for job {}'.format(job.id))
self.jobs.append(job)
def _get_oid(self):
columns = ['{}s.oid'.format(self.kind)]
tables = ['{}s'.format(self.kind)]
conditions = ['runs.run_uuid = \'{}\''.format(self.run_uuid)]
oid = self._read_db(columns, tables, conditions, as_dict=False)
if not oid:
raise ConfigError('No matching run entries found for run_uuid {}'.format(self.run_uuid))
if len(oid) > 1:
raise ConfigError('Multiple entries found for run_uuid: {}'.format(self.run_uuid))
return oid[0][0]
def _get_parameters(self, param_type):
columns = ['parameters.job_oid', 'parameters.name', 'parameters.value']
tables = ['parameters']
conditions = ['parameters.type = \'{}\''.format(param_type),
'parameters.run_oid = \'{}\''.format(self.oid)]
params = self._read_db(columns, tables, conditions, as_dict=False)
parm_dict = defaultdict(dict)
for (job_oid, k, v) in params:
try:
parm_dict[job_oid][k] = json.loads(v)
except SerializerSyntaxError:
logger.debug('Failed to deserialize job_oid:{}-"{}":"{}"'.format(job_oid, k, v))
return parm_dict
def _get_job_augmentations(self, job_oid):
columns = ['jobs_augs.augmentation_oid', 'augmentations.name',
'augmentations.oid', 'jobs_augs.job_oid']
tables = ['jobs_augs', 'augmentations']
conditions = ['jobs_augs.job_oid = \'{}\''.format(job_oid),
'jobs_augs.augmentation_oid = augmentations.oid']
augmentations = self._read_db(columns, tables, conditions)
return [aug['name'] for aug in augmentations]
def _list_runs(self):
columns = ['runs.run_uuid', 'runs.run_name', 'runs.project',
'runs.project_stage', 'runs.status', 'runs.start_time', 'runs.end_time']
tables = ['runs']
pod = self._read_db(columns, tables)
if pod:
headers = ['Run Name', 'Project', 'Project Stage', 'Start Time', 'End Time',
'run_uuid']
run_list = []
for entry in pod:
# Format times to display better
start_time = entry['start_time']
end_time = entry['end_time']
if start_time:
start_time = start_time.strftime("%Y-%m-%d %H:%M:%S")
if end_time:
end_time = end_time.strftime("%Y-%m-%d %H:%M:%S")
run_list.append([
entry['run_name'],
entry['project'],
entry['project_stage'],
start_time,
end_time,
entry['run_uuid']])
print(format_simple_table(run_list, headers))
else:
print('No Runs Found')
class JobDatabaseOutput(DatabaseOutput):
kind = 'job'
def __init__(self, conn, oid, job_id, label, iteration, retry):
super(JobDatabaseOutput, self).__init__(conn, oid=oid)
self.id = job_id
self.label = label
self.iteration = iteration
self.retry = retry
self.result = None
self.spec = None
self.reload()
def __repr__(self):
return '<{} {}-{}-{}>'.format(self.__class__.__name__,
self.id, self.label, self.iteration)
def __str__(self):
return '{}-{}-{}'.format(self.id, self.label, self.iteration)
@property
def augmentations(self):
job_augs = set([])
if self.spec:
for aug in self.spec.augmentations:
job_augs.add(aug)
return list(job_augs)

View File

@ -40,10 +40,10 @@ class OutputProcessor(Plugin):
msg = 'Instrument "{}" is required by {}, but is not installed.'
raise ConfigError(msg.format(instrument, self.name))
def initialize(self):
def initialize(self, context):
pass
def finalize(self):
def finalize(self, context):
pass
@ -60,6 +60,7 @@ class ProcessorManager(object):
self.logger.debug('Installing {}'.format(processor.name))
processor.logger.context = context
self.processors.append(processor)
context.add_augmentation(processor)
def disable_all(self):
for output_processor in self.processors:
@ -103,13 +104,13 @@ class ProcessorManager(object):
for proc in self.processors:
proc.validate()
def initialize(self):
def initialize(self, context):
for proc in self.processors:
proc.initialize()
proc.initialize(context)
def finalize(self):
def finalize(self, context):
for proc in self.processors:
proc.finalize()
proc.finalize(context)
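With initialize() and finalize() now receiving the context, an output processor can touch run-level state at both ends of its life cycle. A hypothetical processor showing the updated signatures:

    class SummaryProcessor(OutputProcessor):
        name = 'summary'

        def initialize(self, context):   # context is now available here...
            self.entries = []

        def finalize(self, context):     # ...and here
            self.logger.info('%d entries collected', len(self.entries))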
def process_job_output(self, context):
self.do_for_each_proc('process_job_output', 'Processing using "{}"',

View File

@ -18,8 +18,6 @@
import os
import sys
import inspect
import imp
import string
import logging
from collections import OrderedDict, defaultdict
from itertools import chain
@ -32,16 +30,10 @@ from wa.framework.exception import (NotFoundError, PluginLoaderError, TargetErro
ValidationError, ConfigError, HostError)
from wa.utils import log
from wa.utils.misc import (ensure_directory_exists as _d, walk_modules, load_class,
merge_dicts_simple, get_article)
merge_dicts_simple, get_article, import_path)
from wa.utils.types import identifier
if sys.version_info[0] == 3:
MODNAME_TRANS = str.maketrans(':/\\.', '____')
else:
MODNAME_TRANS = string.maketrans(':/\\.', '____')
class AttributeCollection(object):
"""
Accumulator for plugin attribute objects (such as Parameters or Artifacts).
@ -157,6 +149,7 @@ class Alias(object):
raise ConfigError(msg.format(param, self.name, ext.name))
# pylint: disable=bad-mcs-classmethod-argument
class PluginMeta(type):
"""
This basically adds some magic to plugins to make implementing new plugins,
@ -246,7 +239,7 @@ class Plugin(with_metaclass(PluginMeta, object)):
@classmethod
def get_default_config(cls):
return {p.name: p.default for p in cls.parameters}
return {p.name: p.default for p in cls.parameters if not p.deprecated}
@property
def dependencies_directory(self):
@ -367,7 +360,7 @@ class Plugin(with_metaclass(PluginMeta, object)):
self._modules.append(module)
def __str__(self):
return self.name
return str(self.name)
def __repr__(self):
params = []
@ -383,12 +376,22 @@ class TargetedPlugin(Plugin):
"""
suppoted_targets = []
supported_targets = []
parameters = [
Parameter('cleanup_assets', kind=bool,
global_alias='cleanup_assets',
aliases=['clean_up'],
default=True,
description="""
If ``True``, assets that are deployed or created by the
plugin will be removed again from the device.
"""),
]
@classmethod
def check_compatible(cls, target):
if cls.suppoted_targets:
if target.os not in cls.suppoted_targets:
if cls.supported_targets:
if target.os not in cls.supported_targets:
msg = 'Incompatible target OS "{}" for {}'
raise TargetError(msg.format(target.os, cls.name))
@ -611,24 +614,30 @@ class PluginLoader(object):
self.logger.debug('Checking path %s', path)
if os.path.isfile(path):
self._discover_from_file(path)
for root, _, files in os.walk(path, followlinks=True):
should_skip = False
for igpath in ignore_paths:
if root.startswith(igpath):
should_skip = True
break
if should_skip:
continue
for fname in files:
if os.path.splitext(fname)[1].lower() != '.py':
elif os.path.exists(path):
for root, _, files in os.walk(path, followlinks=True):
should_skip = False
for igpath in ignore_paths:
if root.startswith(igpath):
should_skip = True
break
if should_skip:
continue
filepath = os.path.join(root, fname)
self._discover_from_file(filepath)
for fname in files:
if os.path.splitext(fname)[1].lower() != '.py':
continue
filepath = os.path.join(root, fname)
self._discover_from_file(filepath)
elif not os.path.isabs(path):
try:
for module in walk_modules(path):
self._discover_in_module(module)
except Exception: # NOQA pylint: disable=broad-except
pass
def _discover_from_file(self, filepath):
try:
modname = os.path.splitext(filepath[1:])[0].translate(MODNAME_TRANS)
module = imp.load_source(modname, filepath)
module = import_path(filepath)
self._discover_in_module(module)
except (SystemExit, ImportError) as e:
if self.keep_going:
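The removed imp.load_source is replaced by an import_path() helper from wa.utils.misc. A sketch of what such a helper can look like on Python 3, using importlib as the standard replacement (an assumption; the real helper may differ):

    import os
    import importlib.util

    def import_path(filepath):
        # Derive a module name from the file name, then load the module
        # directly from its path, mirroring the old imp.load_source().
        modname = os.path.splitext(os.path.basename(filepath))[0]
        spec = importlib.util.spec_from_file_location(modname, filepath)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module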

View File

@ -35,6 +35,7 @@ class __LoaderWrapper(object):
def reset(self):
# These imports cannot be done at top level, because of
# sys.modules manipulation below
# pylint: disable=import-outside-toplevel
from wa.framework.plugin import PluginLoader
from wa.framework.configuration.core import settings
self._loader = PluginLoader(settings.plugin_packages,

View File

@ -16,15 +16,14 @@ import logging
import os
import re
from devlib.utils.android import ApkInfo
from wa.framework import pluginloader
from wa.framework.plugin import Plugin
from wa.framework.exception import ResourceError
from wa.framework.configuration import settings
from wa.utils import log
from wa.utils.android import get_cacheable_apk_info
from wa.utils.misc import get_object_name
from wa.utils.types import enum, list_or_string, prioritylist
from wa.utils.types import enum, list_or_string, prioritylist, version_tuple
SourcePriority = enum(['package', 'remote', 'lan', 'local',
@ -142,10 +141,12 @@ class ApkFile(Resource):
def __init__(self, owner, variant=None, version=None,
package=None, uiauto=False, exact_abi=False,
supported_abi=None):
supported_abi=None, min_version=None, max_version=None):
super(ApkFile, self).__init__(owner)
self.variant = variant
self.version = version
self.max_version = max_version
self.min_version = min_version
self.package = package
self.uiauto = uiauto
self.exact_abi = exact_abi
@ -158,21 +159,25 @@ class ApkFile(Resource):
def match(self, path):
name_matches = True
version_matches = True
version_range_matches = True
package_matches = True
abi_matches = True
uiauto_matches = uiauto_test_matches(path, self.uiauto)
if self.version is not None:
if self.version:
version_matches = apk_version_matches(path, self.version)
if self.variant is not None:
if self.max_version or self.min_version:
version_range_matches = apk_version_matches_range(path, self.min_version,
self.max_version)
if self.variant:
name_matches = file_name_matches(path, self.variant)
if self.package is not None:
if self.package:
package_matches = package_name_matches(path, self.package)
if self.supported_abi is not None:
if self.supported_abi:
abi_matches = apk_abi_matches(path, self.supported_abi,
self.exact_abi)
return name_matches and version_matches and \
uiauto_matches and package_matches and \
abi_matches
version_range_matches and uiauto_matches \
and package_matches and abi_matches
def __str__(self):
text = '<{}\'s apk'.format(self.owner)
@ -273,15 +278,40 @@ class ResourceResolver(object):
def apk_version_matches(path, version):
info = ApkInfo(path)
if info.version_name == version or info.version_code == version:
return True
return loose_version_matching(version, info.version_name)
version = list_or_string(version)
info = get_cacheable_apk_info(path)
for v in version:
if v in (info.version_name, info.version_code):
return True
if loose_version_matching(v, info.version_name):
return True
return False
def apk_version_matches_range(path, min_version=None, max_version=None):
info = get_cacheable_apk_info(path)
return range_version_matching(info.version_name, min_version, max_version)
def range_version_matching(apk_version, min_version=None, max_version=None):
if not apk_version:
return False
apk_version = version_tuple(apk_version)
if max_version:
max_version = version_tuple(max_version)
if apk_version > max_version:
return False
if min_version:
min_version = version_tuple(min_version)
if apk_version < min_version:
return False
return True
def loose_version_matching(config_version, apk_version):
config_version = config_version.split('.')
apk_version = apk_version.split('.')
config_version = version_tuple(config_version)
apk_version = version_tuple(apk_version)
if len(apk_version) < len(config_version):
return False # More specific version requested than available
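Worked examples of the new matching helpers, assuming version_tuple() turns a string such as '4.3.1' into a tuple that compares element by element (all values below are made up):

    range_version_matching('4.3.1', min_version='4.2', max_version='5.0')  # True
    range_version_matching('5.1', max_version='5.0')    # False: above the range
    range_version_matching(None, min_version='4.2')     # False: no version to compare

    loose_version_matching('4.2', '4.2.7')   # True: config prefix matches the apk
    loose_version_matching('4.2.7', '4.2')   # False: config more specific than apk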
@ -302,18 +332,18 @@ def file_name_matches(path, pattern):
def uiauto_test_matches(path, uiauto):
info = ApkInfo(path)
info = get_cacheable_apk_info(path)
return uiauto == ('com.arm.wa.uiauto' in info.package)
def package_name_matches(path, package):
info = ApkInfo(path)
info = get_cacheable_apk_info(path)
return info.package == package
def apk_abi_matches(path, supported_abi, exact_abi=False):
supported_abi = list_or_string(supported_abi)
info = ApkInfo(path)
info = get_cacheable_apk_info(path)
# If no native code present, suitable for all devices.
if not info.native_code:
return True

Some files were not shown because too many files have changed in this diff.