mirror of https://github.com/ARM-software/workload-automation.git synced 2025-04-15 15:20:45 +01:00

Compare commits


109 Commits
v3.3 ... master

Author SHA1 Message Date
Sebastian Goscik
2d14c82f92 Added option to re-open files to poller.
Sometimes a sysfs/debugfs file will only generate a value on open. A subsequent seek/read will not yield any new values. This patch adds the option to reopen all files on each read.
2025-03-10 17:39:14 -05:00
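
A minimal Python sketch of the two polling strategies described above, purely illustrative (WA's poller is a native binary, so the class and names below are not its API):

    class FilePoller:
        """Illustrative stand-in for the reopen-on-read behaviour."""

        def __init__(self, paths, reopen=False):
            self.paths = list(paths)
            self.reopen = reopen
            # Keep persistent handles only when seek()/read() yields fresh values.
            self._handles = None if reopen else [open(p, 'rb') for p in self.paths]

        def read(self):
            samples = {}
            if self.reopen:
                # Some sysfs/debugfs entries only produce a value on open(), so
                # each sample needs a fresh open() rather than a seek(0).
                for path in self.paths:
                    with open(path, 'rb') as f:
                        samples[path] = f.read().strip()
            else:
                for path, f in zip(self.paths, self._handles):
                    f.seek(0)
                    samples[path] = f.read().strip()
            return samples

        def close(self):
            for f in self._handles or []:
                f.close()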
Metin Kaya
8598d1ba3c speedometer: Add version 3.0 support
Port v3.0 of Speedometer from Webkit [1] repo and update tarballs.

"version" field can be added to the workload agenda file to specify
which version of Speedomer should be used.

The v3.0 tarball is around 12 MB when the compression type is gzip.
Thus, in order to reduce the total size of the repo, compress the
Speedometer archives in LZMA format.

1. https://github.com/WebKit/WebKit/tree/main/Websites/browserbench.org/Speedometer3.0

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-03-10 17:35:32 -05:00
Metin Kaya
523fb3f659 speedometer: Introduce trivial cleanups
- Remove unused imports
- Handle the case that @candidate_files may be undefined
- Customize the log message regarding Speedometer timeout

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-03-10 17:35:32 -05:00
Elif Topuz
0732fa9cf0 workloads/geekbench: Add support for Geekbench command-line build
Add Geekbench command-line build workload for Android targets.
Geekbench apks allow the user to run the tests all together. Using the
command-line build, a single test or multiple tests can be specified.

Signed-off-by: Elif Topuz <elif.topuz@arm.com>
2025-03-05 17:24:46 -06:00
Metin Kaya
b03f28d1d5 instruments/trace_cmd: Add tracing mode support to TraceCmdInstrument()
Implement tracing mode support (mainly for write-to-disk mode) in
TraceCmdInstrument, enabling efficient collection of large trace
datasets without encountering memory limitations.

This feature is particularly useful for scenarios requiring extensive
trace data.

Additional changes:
- Replace hardcoded strings with corresponding string literals for
  improved maintainability.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-03-01 16:57:49 -06:00
Marc Bonnici
f125fd340d version: Bump required devlib version
Bump the required devlib version that exports the new trace-command
functionality.
2025-03-01 16:52:19 -06:00
Marc Bonnici
75cfb56b38 Remove dependency on distutils
Align with devlib and remove dependencies on distutils.

[1] https://github.com/ARM-software/devlib/pull/631/
2025-03-01 16:40:54 -06:00
Marc Bonnici
b734e90de1 ci: Bump python versions and pin ubuntu version
Update CI to run with later python versions and align with the latest
available versions provided by GitHub Actions.

Pin to ubuntu version 22.04 as this is the latest that supports all
python versions.
2025-03-01 16:40:54 -06:00
Metin Kaya
5670e571e1 workloads/speedometer: Fix SyntaxWarning exceptions in regex pattern
The regex pattern for extracting speedometer score causes these
exceptions due to unescaped \d and \/ sequences:

wa/workloads/speedometer/__init__.py:109: SyntaxWarning: invalid escape
sequence '\d'
  '(?:text|content-desc)="(?P<value>\d+.\d+)"[^>]*'
wa/workloads/speedometer/__init__.py:110: SyntaxWarning: invalid escape
sequence '\/'
  '(?(Z)|resource-id="result-number")[^>]*\/>'

Fix the problem by defining the regex pattern as a raw string literal
so that the backslashes are preserved.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2025-02-17 16:41:58 -06:00
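
The warning and the raw-string fix can be reproduced in isolation with a simplified fragment of the pattern quoted above (the full conditional-group pattern is omitted here):

    import re

    # Without the r prefix, '\d' is an invalid escape sequence in a normal
    # string literal and triggers "SyntaxWarning: invalid escape sequence '\d'"
    # on recent Python versions. A raw string keeps the backslash intact.
    score_re = re.compile(r'(?:text|content-desc)="(?P<value>\d+\.\d+)"')

    match = score_re.search('<node text="123.45" resource-id="result-number"/>')
    assert match and match.group('value') == '123.45'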
dependabot[bot]
45f09a66be build(deps): bump cryptography from 42.0.4 to 43.0.1
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.4 to 43.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/42.0.4...43.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 17:57:09 -05:00
dependabot[bot]
9638a084f9 build(deps): bump certifi from 2023.7.22 to 2024.7.4
Bumps [certifi](https://github.com/certifi/python-certifi) from 2023.7.22 to 2024.7.4.
- [Commits](https://github.com/certifi/python-certifi/compare/2023.07.22...2024.07.04)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 17:57:00 -05:00
dependabot[bot]
4da8b0691f build(deps): bump urllib3 from 1.26.18 to 1.26.19
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.18 to 1.26.19.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/1.26.19/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.18...1.26.19)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 17:56:50 -05:00
Metin Kaya
412a785068 target/descriptor: Support adb_port parameter
The devlib AdbConnection class supports customizing the ADB port number.
Enable that feature on the WA side.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-07-11 18:55:56 -05:00
dependabot[bot]
6fc5340f2f ---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 17:09:04 -05:00
dependabot[bot]
da667b58ac build(deps): bump idna from 3.4 to 3.7
Bumps [idna](https://github.com/kjd/idna) from 3.4 to 3.7.
- [Release notes](https://github.com/kjd/idna/releases)
- [Changelog](https://github.com/kjd/idna/blob/master/HISTORY.rst)
- [Commits](https://github.com/kjd/idna/compare/v3.4...v3.7)

---
updated-dependencies:
- dependency-name: idna
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 17:08:15 -05:00
Elif Topuz
4e9d402c24 workloads/speedometer: Dismiss notification pop-up
When speedometer was run in a Chrome package on Android 14, a pop-up
window was showing on the screen. The Chrome preferences file is
modified to dismiss the window.
2024-05-30 12:56:53 -05:00
Ola Olsson
e0bf7668b8 Adding support for not validating PMU counters 2024-04-02 14:32:30 -05:00
Fabian Gruber
4839ab354f configuration: Allow including multiple files into one mapping.
The key for 'include#' can now be either a scalar or a list.
A scalar triggers the same behaviour as before.
If the value is a list it must be a list of scalars (filepaths).
The paths will be loaded and merged in order, and finally the resulting
dict is included into the current scope.
2024-03-28 20:06:30 -05:00
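
A hedged sketch of the scalar-or-list behaviour described above, assuming PyYAML-style loading; the real logic lives in WA's configuration parser and its merge is more involved than the shallow update shown here:

    import yaml

    def load_include(value):
        """Accept a single path (scalar) or a list of paths for 'include#' and
        merge the loaded mappings in order, later files overriding earlier ones."""
        paths = [value] if isinstance(value, str) else list(value)
        merged = {}
        for path in paths:
            with open(path) as fh:
                data = yaml.safe_load(fh) or {}
            if not isinstance(data, dict):
                raise ValueError('{} does not contain a mapping'.format(path))
            merged.update(data)  # shallow merge for illustration only
        return merged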
Elif Topuz
b6ecc18763 workloads/speedometer: Edit regex search to get the score 2024-03-20 12:17:30 +00:00
Marc Bonnici
7315041e90 fw/uiauto: Fix uiauto builds and update apks
The version of gradle being used was out of date, update to a later
version to fix building of uiauto apks.
2024-03-20 12:17:10 +00:00
dependabot[bot]
adbb647fa7 build(deps): bump cryptography from 41.0.6 to 42.0.4
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.6 to 42.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.6...42.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-20 12:16:59 +00:00
Metin Kaya
366f59ebf7 utils/misc: Clean duplicated code
- ``load_struct_from_yaml()`` has been moved to devlib [1].
- ``LoadSyntaxError()`` is already implemented in devlib.
- Remove ``load_struct_from_file()`` and ``RAND_MOD_NAME_LEN`` since
  they are not used at all.

[1] https://github.com/ARM-software/devlib/commit/591825834028

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-02-23 12:49:03 -08:00
dependabot[bot]
0eb17bf8f0 build(deps): bump paramiko from 3.1.0 to 3.4.0
Bumps [paramiko](https://github.com/paramiko/paramiko) from 3.1.0 to 3.4.0.
- [Commits](https://github.com/paramiko/paramiko/compare/3.1.0...3.4.0)

---
updated-dependencies:
- dependency-name: paramiko
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-09 12:24:16 -08:00
Douglas Raillard
f166ac742e utils/misc: Fix linters violation
A mix of pylint and PEP8 violations that GitHub action enforces.
2024-01-09 12:20:26 -08:00
Douglas Raillard
6fe4bce68d Remove Python 2 support
Python 2 is long dead and devlib does not support it anymore, so clean
up old Python 2-only code.
2024-01-09 12:20:26 -08:00
Douglas Raillard
28b78a93f1 utils/misc: Replace deprecated __import__ by importlib
Use importlib.import_module instead of __import__ as per Python doc
recommendation.

This will also fix the case where the class is in a
package's submodule (since __import__ returns the top-level package),
e.g. "foo.bar.Class".
2024-01-09 12:20:26 -08:00
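
The "foo.bar.Class" case mentioned above, shown with the standard-library call the commit switches to (a generic sketch, not necessarily WA's exact code):

    import importlib

    def load_class(dotted_path):
        """Resolve a dotted path such as 'foo.bar.Class' to the class object."""
        module_name, _, class_name = dotted_path.rpartition('.')
        # __import__('foo.bar') returns the top-level 'foo' package, so the
        # submodule would have to be dug out of its attributes; import_module
        # returns the 'foo.bar' module itself.
        module = importlib.import_module(module_name)
        return getattr(module, class_name)

    # e.g. load_class('collections.abc.Mapping') returns the Mapping ABC.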
Douglas Raillard
77ebefba08 wa: Remove dependency on "imp" module
Python 3.12 removed the "imp" module, so use importlib instead.
2024-01-09 12:20:26 -08:00
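
For reference, the usual importlib replacement for the removed imp.load_source() pattern looks like this (illustrative; the commit does not show WA's exact code):

    import importlib.util

    def load_module_from_path(name, path):
        """importlib-based equivalent of the old imp.load_source(name, path)."""
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module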
Metin Kaya
41f7984243 fw/rt_config: Add unlock_screen config option in runtime_parameters
Introduce the 'unlock_screen' option in order to help automate Android
tests by unlocking the device screen automatically. Naturally, this
works only if no passcode is set.

The 'unlock_screen' option implicitly requires turning on the screen. In
other words, it will override the value of the 'screen_on' option.

'diagonal', 'vertical' and 'horizontal' are valid values for
'unlock_screen' option as of now.

Note that this patch depends on
https://github.com/ARM-software/devlib/pull/659 in devlib repo.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-09 07:42:53 -08:00
Metin Kaya
23fcb2c120 framework/plugin: Fix typo at suppoted_targets
Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2023-12-20 08:26:13 -08:00
dependabot[bot]
e38b51b242 build(deps): bump cryptography from 41.0.4 to 41.0.6
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.4 to 41.0.6.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.4...41.0.6)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-15 13:10:50 -08:00
dependabot[bot]
ea08a4f9e6 build(deps): bump urllib3 from 1.26.17 to 1.26.18
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.17...1.26.18)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-15 13:10:40 -08:00
Elif Topuz
5b56210d5f UIBenchJankTests: modification to support Android 14
The "--user <USER_ID>" option (current user: 0) is added to the
activity manager (am) command because of an "Invalid userId" error.
Tested with other benchmarks (geekbench) as well.
2023-12-04 16:23:19 -08:00
dependabot[bot]
0179202c90 build(deps): bump urllib3 from 1.26.15 to 1.26.17
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.15 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.15...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 16:31:02 -07:00
dependabot[bot]
617306fdda build(deps): bump cryptography from 41.0.3 to 41.0.4
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.3 to 41.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.3...41.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 16:30:49 -07:00
Kajetan Puchalski
8d4fe9556b instruments: Add Perfetto instrument
Add an instrument that uses devlib's PerfettoCollector to collect a
Perfetto trace during the execution of a workload.
The instrument takes a path to a Perfetto config file which specifies
how Perfetto should be configured for the tracing.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-09-07 17:29:27 -05:00
Marc Bonnici
775b24f7a3 docs: fix RTD build
- Bump python version used when building documentation.
- `setuptools` is a deprecated installation method on RTD so switch to `pip`.
- Add additional dependency on devlib master branch as RTD env is not
  respecting dependency_links during installation.
- Bump copyright year.
2023-08-21 16:53:53 -05:00
Kajetan Puchalski
13f9c64513 target/descriptor: Expose adb_as_root for AdbConnection
Expose devlib's AdbConnection `adb_as_root` parameter in the target
descriptor.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-21 16:41:23 -05:00
Kajetan Puchalski
6cd1c60715 workloads/speedometer: Add Bromite to package names
Bromite is a fork of Chromium that's easily available for Android. Apart
from small changes it works the same as Chromium and works with this
speedometer workload. Add it to the 'package_names' list to allow using
it as an option.

https://www.bromite.org/

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-18 17:36:17 -05:00
Douglas Raillard
05eab42f27 workloads/rt-app: Update rt-app binaries
Update rt-app binaries to the latest version of the "lisa" branch in
douglas-raillard-arm GitHub fork. This tracks the upstream master branch
with a number of critical patches required notably to work with uclamp.
2023-08-15 17:50:43 -05:00
Kajetan Puchalski
b113a8b351 instruments/trace_cmd: Allow setting top_buffer_size
Allow optionally setting the top level ftrace buffer size separately
from the devlib buffer size. The parameter will be passed to the devlib
FtraceCollector and take effect there.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-15 17:50:26 -05:00
Douglas Raillard
d67d9bd2a4 fw/plugin: Try to load plugin paths as Python module name
WA_PLUGIN_PATHS currently contains a list of filesystem paths to scan
for plugins. This is appropriate for end-user plugins, but it is
problematic for plugins distributed by a 3rd party, such as a plugin
installed from PyPI.

In those cases, the path to the sources is unknown and typically depends
on the specific Python version, local setup, etc. What is constant is
the Python name of the package, e.g. "lisa.wa.plugins".

Extend the input allowed in WA_PLUGIN_PATHS by trying to load entries as
a Python package name if:
    * There is no filesystem path with that name
    * The entry is a "relative path" (from an fs point of view)
2023-08-14 19:23:56 -05:00
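
A hedged sketch of the resolution order described above; the function name and return convention are illustrative, not WA's API:

    import os
    import importlib

    def resolve_plugin_entry(entry):
        """Treat a WA_PLUGIN_PATHS entry as a directory if it exists on disk;
        otherwise, if it is a relative path, try importing it as a package name."""
        if os.path.exists(entry):
            return ('path', entry)
        if not os.path.isabs(entry):
            try:
                package = importlib.import_module(entry)
                return ('package', os.path.dirname(package.__file__))
            except ImportError:
                pass
        raise ValueError('cannot resolve plugin entry: {}'.format(entry))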
Kajetan Puchalski
11374aae3f workloads/drarm: Set view for FPS instrument
Set the view parameter so that the FPS instrument can collect frame data
from the workload.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-14 19:15:05 -05:00
Douglas Raillard
839242d636 target/descriptor: Add max_async generic target parameter 2023-08-09 17:35:51 -05:00
dependabot[bot]
b9b02f83fc build(deps): bump cryptography from 41.0.0 to 41.0.3
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.0 to 41.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.0...41.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-09 16:45:37 -05:00
dependabot[bot]
6aa1caad94 build(deps): bump certifi from 2022.12.7 to 2023.7.22
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
- [Commits](https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-09 16:45:23 -05:00
Kajetan Puchalski
bf72a576e6 uibenchjanktests: Rework to allow listing subtests
Rework the uibenchjanktests workload to allow specifying a list of
subtests. The activity will be re-launched for each provided subtest. If
none are specified, all available tests will be run in alphabetical order.

The workload output will now include metrics with their respective test
names as classifiers.

Add a 'full' parameter to revert back to the old default 'full run'
behaviour with restarts between subtests.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-06-30 12:29:21 -05:00
Kajetan Puchalski
951eec991c drarm: Add DrArm workload
Add a workload for the Dr Arm demo app. Includes functionality for
automatically pulling the ADPF FPS report file from the target if one
was generated by the app.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-06-08 12:38:13 -05:00
dependabot[bot]
0b64b51259 build(deps): bump cryptography from 40.0.2 to 41.0.0
Bumps [cryptography](https://github.com/pyca/cryptography) from 40.0.2 to 41.0.0.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/40.0.2...41.0.0)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-02 16:53:29 -05:00
dependabot[bot]
f4ebca39a1 build(deps): bump requests from 2.29.0 to 2.31.0
Bumps [requests](https://github.com/psf/requests) from 2.29.0 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.29.0...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-30 17:38:26 -05:00
Kajetan Puchalski
88b085c11b perf: Fix instrument for Android 13
The simpleperf included with Android 13 now does not show the percentage
when no counter multiplexing took place. This causes the perf instrument
to crash when processing the output. This fix checks whether the percentage
exists before trying to extract it.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-05-30 17:38:09 -05:00
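
The gist of the check, on an illustrative simpleperf stat line (the real parser in wa/instruments/perf.py handles more fields):

    def parse_stat_line(line):
        """Split a 'simpleperf stat' text line into (event, count, percentage);
        percentage is None when simpleperf omits the multiplexing suffix."""
        parts = line.split()
        count = int(parts[0].replace(',', ''))
        event = parts[1]
        percentage = None
        # Android 13's simpleperf drops the '(NN%)' suffix when no counter
        # multiplexing took place, so only extract it when it is present.
        if parts[-1].startswith('(') and parts[-1].endswith('%)'):
            percentage = float(parts[-1].strip('(%)'))
        return event, count, percentage

    # parse_stat_line('1,234,567  cpu-cycles')        -> ('cpu-cycles', 1234567, None)
    # parse_stat_line('1,234,567  cpu-cycles  (60%)') -> ('cpu-cycles', 1234567, 60.0)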
Marc Bonnici
36a909dda2 fw/ApkWorkload: Allow workloads to provide apk arguments
In some workloads it is beneficial to be able to provide arguments
when launching the required APK. Add a new `apk_arguments` property
to allow a workload to provide a dict of parameter names and values
that should be used when launching the APK.

The python types of the parameters are used to determine the data
type provided to the APK. Currently supported types are string, bool,
int and float.
2023-04-29 17:54:41 -05:00
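
The type-to-flag idea can be illustrated with the standard `am start` extras options (--es/--ez/--ei/--ef). Whether WA renders the arguments exactly this way is not shown in the commit, so treat this as an assumption-laden sketch:

    def format_apk_arguments(apk_arguments):
        """Render a dict of launch arguments as 'am start' extras, choosing the
        flag from the Python type (string, bool, int, float)."""
        flags = {str: '--es', bool: '--ez', int: '--ei', float: '--ef'}
        parts = []
        for name, value in apk_arguments.items():
            flag = flags[type(value)]
            rendered = str(value).lower() if isinstance(value, bool) else str(value)
            parts.extend([flag, name, rendered])
        return ' '.join(parts)

    # format_apk_arguments({'duration': 30, 'offscreen': True})
    #   -> '--ei duration 30 --ez offscreen true'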
Marc Bonnici
3228a3187c Mitigate CVE-2007-4995
Prevent potential directory path traversal attacks (see
https://www.trellix.com/en-us/about/newsroom/stories/research/tarfile-exploiting-the-world.html)
2023-04-29 17:35:54 -05:00
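
The standard mitigation pattern for the tarfile path-traversal issue described in the linked write-up, as a generic sketch rather than WA's exact code:

    import os
    import tarfile

    def safe_extract(archive, dest):
        """Extract a tar archive, refusing members that would escape `dest`."""
        dest = os.path.realpath(dest)
        with tarfile.open(archive) as tar:
            for member in tar.getmembers():
                target = os.path.realpath(os.path.join(dest, member.name))
                if os.path.commonpath([dest, target]) != dest:
                    raise RuntimeError('blocked path traversal: {}'.format(member.name))
            tar.extractall(dest)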
Marc Bonnici
5e0c59babb version: Bump minor version number
Bump the minor version to prepare for dropping Python < 3.7 support.
2023-04-29 17:29:43 -05:00
Marc Bonnici
dc2fc99e98 fw/version: Bump release versions 2023-04-29 17:29:43 -05:00
Marc Bonnici
46ff6e1f62 Dockerfile: Bump release version 2023-04-29 17:29:43 -05:00
Marc Bonnici
8b3f58e726 doc/changes: Update the changelog for v3.3.1 2023-04-29 17:29:43 -05:00
Marc Bonnici
fe7a88e43e requirements.txt: Update to latest tested versions 2023-04-29 17:29:43 -05:00
Marc Bonnici
61bb162350 workloads/gmail: Update workload to latest apk version
The Google services required in the old apk appear to no
longer be available. Update to support a newer version.
2023-04-29 17:29:43 -05:00
Marc Bonnici
d1e960e9b0 workloads/googleplaybooks: Fix ui match
The resource id of the book list cannot always be discovered, search
for the only scrollable list instead.
2023-04-29 17:29:43 -05:00
Marc Bonnici
29a5a7fd43 fw/config: Fix RunConfiguration descriptions
Fix quotations in the RunConfiguration description.
2023-04-29 17:29:43 -05:00
Marc Bonnici
37346fe1b1 instruments/trace_cmd: Handle setup failure
In the case that the trace_cmd collector fails to
initialise, do not attempt to use the collector in
subsequent methods.
2023-04-29 17:29:43 -05:00
Kajetan Puchalski
40a118c8cd geekbench: Add support for Geekbench 6
Add support for Geekbench 6 as a workload on Android.
This commit adds 6.*.* as a valid version for the Geekbench workload and
updates the UIAuto apk accordingly.

It also refactors the update_result function: the one originally used
for GB4 can now be used for versions 4, 5 and 6, so it makes more sense
to treat it as a 'generic' update_result function. The functionality
should stay the same.

Backwards compatibility with GB2 & GB3 should be maintained.
2023-03-06 19:28:40 -06:00
Marc Bonnici
c4535320fa docs: Update plugin How To Guides
Fix the example instrument code and, to improve clarity, add a note
indicating where new plugins should be stored so that they are detected
by WA's default configuration.
2023-01-13 10:14:15 +00:00
dependabot[bot]
08b87291f8 build(deps): bump certifi from 2020.12.5 to 2022.12.7
Bumps [certifi](https://github.com/certifi/python-certifi) from 2020.12.5 to 2022.12.7.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2020.12.05...2022.12.07)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-08 09:26:41 +00:00
Steven Schaus
a3eacb877c Load RuntimeConfig from plugins
Implements support for dynamically loading additional RuntimeConfig
and associated RuntimeParameter that are defined in a plugin.

Currently, the various RuntimeConfig's are hard coded in a list within
WA. This patch extends RuntimeParameterManager to use PluginLoader to
load RuntimeConfig classes and append to the hard coded list.

The implementation, as written, does not allow loading RuntimeConfig
from a plugin if it has the same name as one of the hard coded
RuntimeConfig. This is meant to prevent conflicts and unexpected
behavior.
2022-12-08 09:26:29 +00:00
Kajetan Puchalski
48152224a8 schbench: Add support for schbench
Add support for running schbench as a workload.
Includes the arm64 binary for use on Android devices.
2022-08-23 12:54:57 +01:00
Marc Bonnici
095d6bc100 docs/.readthedocs: Bump the python version
Bump the python version used to build the documentation
to prevent installation errors in the build environment.
2022-08-18 18:46:59 +01:00
Marc Bonnici
8b94ed972d ci: fix the version of pylint to a known version
Later versions of pylint include additional suggestions, including
features from the latest versions of Python. Until we can update our
code base to make use of the new features, pin pylint to a known
version.
2022-08-18 17:19:43 +01:00
Marc Bonnici
276f146c1e pylint: Update for newer versions of pylint
In later versions of pylint the 'version' attribute has been moved,
therefore check both options while attempting to detect the installed
version number.
2022-08-18 17:19:43 +01:00
Marc Bonnici
3b9fcd8001 ci: Bump the python version used for running tests
The latest devlib master branch requires Python >=3.7 therefore
bump the python version used to run the CI tests.
2022-08-18 17:19:43 +01:00
dependabot[bot]
88fb1de62b build(deps): bump paramiko from 2.7.2 to 2.10.1
Bumps [paramiko](https://github.com/paramiko/paramiko) from 2.7.2 to 2.10.1.
- [Release notes](https://github.com/paramiko/paramiko/releases)
- [Changelog](https://github.com/paramiko/paramiko/blob/main/NEWS)
- [Commits](https://github.com/paramiko/paramiko/compare/2.7.2...2.10.1)

---
updated-dependencies:
- dependency-name: paramiko
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-08 11:10:52 +01:00
dependabot[bot]
7dc337b7d0 build(deps): bump numpy from 1.19.4 to 1.22.0
Bumps [numpy](https://github.com/numpy/numpy) from 1.19.4 to 1.22.0.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst)
- [Commits](https://github.com/numpy/numpy/compare/v1.19.4...v1.22.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-08 11:10:41 +01:00
Kajetan Puchalski
b0f9072830 perf: Fix processing simpleperf stats
Currently, when processing the output of 'simpleperf stat', wa does not
skip the header and tries to process part of it as a number, leading
to type errors. This change skips the header (line starting with '#').

Furthermore, some events (e.g. cpu-clock or task-clock) include "(ms)"
in their count value and are floats instead of integers. Because of
this, when either of those is included, processing metrics fails due to
assuming every metric is an integer. Then another error happens when the
code tries to split the line on '(' assuming that there's only one set
of those around the percentage.

This change removes "(ms)" from the line
before it's processed and properly determines whether 'count' is an
integer or a float before attempting to convert it.
2022-08-08 11:04:13 +01:00
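
A standalone sketch of the two quirks described above (the '#' header line and the '(ms)'-suffixed float counts for cpu-clock/task-clock), assuming a count,event,... column layout; the instrument's real parser does more than this:

    def to_count(raw):
        # cpu-clock/task-clock counts look like '1234.56(ms)' and are floats.
        raw = raw.replace('(ms)', '')
        return float(raw) if '.' in raw else int(raw)

    def parse_stat_csv(lines):
        """Illustrative parser for 'simpleperf stat --csv' output."""
        metrics = {}
        for line in lines:
            if not line.strip() or line.startswith('#'):
                continue                     # skip the header line
            fields = line.split(',')
            try:
                metrics[fields[1]] = to_count(fields[0])
            except ValueError:
                continue                     # e.g. 'Total test time' summary rows
        return metrics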
Qais Yousef
b109acac05 geekbench: Add/fix support for Geekbench5
The non-corporate version of geekbench5 didn't work although the code
had everything needed, except for a number of tiny required tweaks:

1. Add '5' to the supported versions in __init__.py
2. Fix the name of the android package in __init__.py and
   UiAutomation.java
3. Improve handling of minorVersion to fix a potential exception when we
   don't specify the minorVersion number in the yaml file. Launching
   geekbench5 works fine when it's the only one installed. But if you
   have multiple versions, then specifying just '5' as the version
   string in the yaml agenda threw an out-of-bound exception because we
   assume '5.X' as input. No reason I'm aware of to force support for a
   specific version of geekbench5. So keep it relaxed until we know for
   sure it breaks with a specific version.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:52 +00:00
Qais Yousef
9c7bae3440 gfxbench: Update uiauto APK
To support the new corporate version change.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Qais Yousef
7b5ffafbda gfxbench: Add a non corporate version
Which works like the corporate version except for a difference in
package name and a set of Fixed Time Tests that don't exist in the free
version.

Only support v4.X and v5.X as that's what's available.

Note there's a clash with glbenchmark package name. glbenchmark is an
ancient version provided by the same developers but was superseded by
gfxbench.  The version checks in both workloads should ensure we get the
right one in the unlikely case both are installed.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Qais Yousef
be02ad649c pcmark: Update uiauto APK
To include the new fix for the long delays to start the run.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Qais Yousef
5a121983fc pcmark: Check for description instead of text in installbenchmark
The test was hanging for a long time waiting for RUN text. Checking for
description first prevents that.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2022-03-01 08:25:35 +00:00
Marc Bonnici
69795628ed doc: Fix typo in requirements.txt 2021-10-28 10:59:45 +01:00
Marc Bonnici
7a332dfd5b docs: Update readthedocs package versions
The latest sphinx and docutils versions currently used on readthedocs
are no longer compatible. Explicitly list the package versions that
should be used when building the documentation.
2021-10-28 10:56:05 +01:00
Peter Collingbourne
4bad433670 Add support for Antutu 9.1.6.
The code for reading the results is almost the same as for Antutu 8,
but it needs to be adjusted to account for a slightly different set
of benchmarks.

At least on my device, Antutu 9 takes over 10 minutes to run, so increase
the timeout to 20 minutes.
2021-10-04 09:12:08 +01:00
Peter Collingbourne
0b558e408c Upgrade Gradle to 7.2 and Android Gradle plugin to 4.2.
The older versions of the plugin caused problems building with newer
NDK versions due to a lack of MIPS support.

This also required upgrading to a version of Gradle that knows about
the Google Maven repository.
2021-09-29 09:46:51 +01:00
setrofim
c023b9859c doc: update coding guide with APK rebuild info
Update the Contributing/Code section with instructions to rebuild the UI
automation APK if the corresponding source has been updated.
2021-09-28 09:07:49 +01:00
Vladislav Ivanishin
284cc60b00 docs: Fix typo
as *with* all WA configuration, the more specific settings will take
precedence over the less specific ones
2021-08-25 11:02:12 +01:00
Kajetan Puchalski
06b508107b workloads/pcmark: Add PCMark 3.0 support
Add support for PCMark for Android version 3.
Use a 'version' Parameter to maintain support for v2.
2021-07-19 11:06:11 +01:00
dependabot[bot]
cb1107df8f build(deps): bump urllib3 from 1.26.4 to 1.26.5
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.4 to 1.26.5.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.4...1.26.5)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-02 18:30:33 +01:00
Javi Merino
789e150b0a perf: report a config error if stat is combined with report options
When running perf/simpleperf stat, the report_option_string,
report_sample_options and run_report_sample configuration parameters
don't make sense.  Instead of quietly ignoring them, raise a
ConfigError so that the user can fix the agenda.
2021-04-30 13:35:08 +01:00
Javi Merino
43cb80d854 perf: support report-sample
devlib learnt to use report-sample in
ARM-software/devlib@fe2fe3ae04 ("collector/perf: run simpleperf
report-sample in the target if requested").  Adapt the perf instrument
to use the new parameters of the PerfCollector.
2021-04-30 13:35:08 +01:00
Marc Bonnici
31d306c23a docs: Fix typo in variable name 2021-04-19 16:19:49 +01:00
Benjamin Mordaunt
591c85edec Remove pkg-resources
See https://github.com/pypa/pip/issues/4022
2021-04-09 18:44:35 +01:00
Stephen Kyle
72298ff9ac speedometer: address pylint complaints 2021-04-08 14:34:44 +01:00
Stephen Kyle
f08770884a speedometer: fix @once methods which declare attributes
We need to make those attributes class-attributes, to make sure they are still
defined in subsequent jobs. We still access them through 'self', however.
2021-04-08 14:34:44 +01:00
Stephen Kyle
a5e5920aca speedometer: ensure adb reverse works across reboots
Before this patch, if you used reboot_policy: "each_job", the adb reverse
connection would be lost.
2021-04-08 14:34:44 +01:00
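
Re-establishing the reverse connection after a reboot amounts to re-running 'adb reverse'; a hedged sketch using plain subprocess (WA drives this through devlib rather than calling adb directly, so this is illustrative only):

    import subprocess

    def ensure_adb_reverse(device_port, host_port, serial=None):
        """(Re)create an 'adb reverse' mapping from the device to the host."""
        cmd = ['adb']
        if serial:
            cmd += ['-s', serial]
        cmd += ['reverse', 'tcp:{}'.format(device_port), 'tcp:{}'.format(host_port)]
        subprocess.check_call(cmd)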
Marc Bonnici
5558d43ddd version: Bump required devlib version 2021-04-07 18:26:38 +01:00
Marc Bonnici
c8ea525a00 fw/target/info: Utilise target properties
Use hostname and hostid target properties instead
of existing calls.
2021-04-07 18:26:38 +01:00
dependabot[bot]
c4c0230958 build(deps): bump urllib3 from 1.26.3 to 1.26.4
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.3 to 1.26.4.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.3...1.26.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-04-07 10:56:00 +01:00
Javi Merino
b65a371b9d docs/installation: Fix package name in pypi
WA is called wlauto in pypi. "pip install wa" installs workflow
automation, a very different project.
2021-04-07 09:15:29 +01:00
dependabot[bot]
7f0a6da86b build(deps): bump pyyaml from 5.3.1 to 5.4
Bumps [pyyaml](https://github.com/yaml/pyyaml) from 5.3.1 to 5.4.
- [Release notes](https://github.com/yaml/pyyaml/releases)
- [Changelog](https://github.com/yaml/pyyaml/blob/master/CHANGES)
- [Commits](https://github.com/yaml/pyyaml/compare/5.3.1...5.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-26 16:43:24 +00:00
dependabot[bot]
75a70ad181 build(deps): bump urllib3 from 1.26.2 to 1.26.3
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.2 to 1.26.3.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.2...1.26.3)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-22 18:26:11 +00:00
dependabot[bot]
84b5ea8a56 build(deps): bump cryptography from 3.3.1 to 3.3.2
Bumps [cryptography](https://github.com/pyca/cryptography) from 3.3.1 to 3.3.2.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/3.3.1...3.3.2)

Signed-off-by: dependabot[bot] <support@github.com>
2021-02-23 09:41:23 +00:00
Marc Bonnici
4b54e17020 fw/job: Fix workload cache check
Don't assume the first job iteration is already in the workload cache.
This may not always be the case, for example with the random execution
order a later iteration can be processed first.
Instead check to see if the job id is present or not.
2021-02-01 18:00:34 +00:00
Marc Bonnici
da4d10d4e7 fw/rt_config: Avoid querying online cpus if hotplug disabled
If the device does not have the hotplug module installed, avoid
unnecessarily querying the device to check the number of cpus, which
can cause issues with some devices.
2021-01-29 18:19:48 +00:00
Marc Bonnici
8882feed84 Dockerfile: Set a default TZ and non-interactive install
Set DEBIAN_FRONTEND to prevent waiting on user input and provide
a default timezone before package installation.
2021-01-25 09:52:30 +00:00
Marc Bonnici
7f82480a26 Dockerfile: Update base image to 20.04
19.10 is now EOL so update to use 20.04 LTS as a base instead.
2021-01-25 09:52:30 +00:00
Marc Bonnici
e4be2b73ef fw/version: Prevent installation failure on systems without git
On systems that do not have git installed, WA will currently fail to
install with a FileNotFound exception. If git is not present then we
will not have a commit hash, so just ignore this error.
2021-01-12 17:53:46 +00:00
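
The fix amounts to tolerating a missing git binary when recording the commit hash; a minimal sketch of that pattern (not WA's exact code):

    import subprocess

    def get_commit_hash():
        """Best-effort commit hash; returns None if git is missing
        (FileNotFoundError) or the sources are not a git checkout."""
        try:
            out = subprocess.check_output(['git', 'rev-parse', 'HEAD'],
                                          stderr=subprocess.DEVNULL)
        except (FileNotFoundError, subprocess.CalledProcessError):
            return None
        return out.decode('ascii').strip()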
Javi Merino
22750b15c7 perf: correctly parse csv when using "--csv --interval-only-values"
With the perf instrument configured as:

    perf:
      perf_type: simpleperf
      command: stat
      optionstring: '-a --interval-only-values --csv'

WA fails to parse simpleperf's output:

    INFO             Extracting reports from target...
    ERROR            Error in instrument perf
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/instrument.py", line 272, in __call__
    ERROR                self.callback(context)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 142, in update_output
    ERROR                self._process_simpleperf_output(context)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 155, in _process_simpleperf_output
    ERROR                self._process_simpleperf_stat_output(context)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 233, in _process_simpleperf_stat_output
    ERROR                self._process_simpleperf_stat_from_csv(stat_file, context, label)
    ERROR              File "/work/workload_automation/workload-automation/wa/instruments/perf.py", line 245, in _process_simpleperf_stat_from_csv
    ERROR                context.add_metric('{}_{}'.format(label, row[1]), row[0], 'count', classifiers=classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/execution.py", line 222, in add_metric
    ERROR                self.output.add_metric(name, value, units, lower_is_better, classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/output.py", line 142, in add_metric
    ERROR                self.result.add_metric(name, value, units, lower_is_better, classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/output.py", line 390, in add_metric
    ERROR                metric = Metric(name, value, units, lower_is_better, classifiers)
    ERROR              File "/work/workload_automation/workload-automation/wa/framework/output.py", line 653, in __init__
    ERROR                self.value = numeric(value)
    ERROR              File "/work/workload_automation/devlib/devlib/utils/types.py", line 88, in numeric
    ERROR                raise ValueError('Not numeric: {}'.format(value))
    ERROR
    ERROR            ValueError(Not numeric: Performance counter statistics)

With the above options, the csv that simpleperf produces looks like
this:

    Performance counter statistics,
    123456789,raw-l1-dtlb,,(60%),
    42424242,raw-l1-itlb,,(60%),
    Total test time,1.001079,seconds,
    Performance counter statistics,
    123456789,raw-l1-dtlb,,(60%),
    42424242,raw-l1-itlb,,(60%),
    Total test time,2.001178,seconds,
    Performance counter statistics,
    [...]

That is, with "--interval-only-values", the "Performance counter
statistics," header is repeated every interval.  WA currently expects
it only in the first line.  Modify the condition so that it is ignored
every time we find it in the file and not just the first time.
2021-01-11 17:48:27 +00:00
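
The condition change can be illustrated against the sample output quoted above: skip the header wherever it reappears rather than only on the first line (a sketch, not the instrument's actual code):

    def iter_simpleperf_rows(lines):
        """Yield (count, event) pairs from 'simpleperf stat --csv
        --interval-only-values' output, ignoring the 'Performance counter
        statistics' header wherever it reappears (once per interval)."""
        for line in lines:
            fields = line.rstrip(',\n').split(',')
            if fields[0] in ('Performance counter statistics', 'Total test time'):
                continue
            yield int(fields[0]), fields[1]

    sample = [
        'Performance counter statistics,',
        '123456789,raw-l1-dtlb,,(60%),',
        'Total test time,1.001079,seconds,',
        'Performance counter statistics,',
        '42424242,raw-l1-itlb,,(60%),',
    ]
    assert list(iter_simpleperf_rows(sample)) == [(123456789, 'raw-l1-dtlb'),
                                                  (42424242, 'raw-l1-itlb')]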
Marc Bonnici
e3703f0e1e utils/doc: Fix display of Falsey default parameters
Explicitly check with `is None` to determine whether a default value is
not present or just a Falsey value.
2021-01-11 15:32:34 +00:00
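
The distinction in one line, as a hedged illustration rather than the actual utils/doc code:

    def describe_default(default):
        # `if default:` would hide Falsey defaults such as 0, False or ''.
        return 'no default' if default is None else 'default: {!r}'.format(default)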
Marc Bonnici
4ddd610149 Migrate to Github Actions
Now that Travis has stopped providing a free tier, migrate the initial
tests over to use Github Actions.
2020-12-16 09:51:09 +00:00
Marc Bonnici
c5e3a421b1 fw/version: Dev version bump 2020-12-11 16:43:12 +00:00
184 changed files with 4931 additions and 638 deletions

.github/workflows/main.yml (new file)
View File

@ -0,0 +1,92 @@
name: WA Test Suite
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
types: [opened, synchronize, reopened, ready_for_review]
schedule:
- cron: 0 2 * * *
# Allows runing this workflow manually from the Actions tab
workflow_dispatch:
jobs:
Run-Linters-and-Tests:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.8.18
uses: actions/setup-python@v2
with:
python-version: 3.8.18
- name: git-bash
uses: pkg-src/github-action-git-bash@v1.1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
cd /tmp && git clone https://github.com/ARM-software/devlib.git && cd devlib && pip install .
cd $GITHUB_WORKSPACE && pip install .[test]
python -m pip install pylint==2.6.2 pep8 flake8 mock nose
- name: Run pylint
run: |
cd $GITHUB_WORKSPACE && ./dev_scripts/pylint wa/
- name: Run PEP8
run: |
cd $GITHUB_WORKSPACE && ./dev_scripts/pep8 wa
- name: Run nose tests
run: |
nosetests
Execute-Test-Workload-and-Process:
runs-on: ubuntu-22.04
strategy:
matrix:
python-version: [3.7.17, 3.8.18, 3.9.21, 3.10.16, 3.13.2]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: git-bash
uses: pkg-src/github-action-git-bash@v1.1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
cd /tmp && git clone https://github.com/ARM-software/devlib.git && cd devlib && pip install .
cd $GITHUB_WORKSPACE && pip install .
- name: Run test workload
run: |
cd /tmp && wa run $GITHUB_WORKSPACE/tests/ci/idle_agenda.yaml -v -d idle_workload
- name: Test Process Command
run: |
cd /tmp && wa process -f -p csv idle_workload
Test-WA-Commands:
runs-on: ubuntu-22.04
strategy:
matrix:
python-version: [3.7.17, 3.8.18, 3.9.21, 3.10.16, 3.13.2]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: git-bash
uses: pkg-src/github-action-git-bash@v1.1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
cd /tmp && git clone https://github.com/ARM-software/devlib.git && cd devlib && pip install .
cd $GITHUB_WORKSPACE && pip install .
- name: Test Show Command
run: |
wa show dhrystone && wa show generic_android && wa show trace-cmd && wa show csv
- name: Test List Command
run: |
wa list all
- name: Test Create Command
run: |
wa create agenda dhrystone generic_android csv trace_cmd && wa create package test && wa create workload test

View File

@ -13,9 +13,16 @@ sphinx:
# Build the docs in additional formats such as PDF and ePub
formats: all
# Set the version of Python and requirements required to build your docs
# Configure the build environment
build:
os: ubuntu-22.04
tools:
python: "3.11"
# Ensure doc dependencies are installed before building
python:
version: 3.7
install:
- method: setuptools
- requirements: doc/requirements.txt
- method: pip
path: .

View File

@ -1,46 +0,0 @@
# Copyright 2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
language: python
python:
- "3.6"
install:
- pip install nose
- pip install nose2
- pip install flake8
- pip install pylint==2.6.0
- git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && cd /tmp/devlib && python setup.py install
- cd $TRAVIS_BUILD_DIR && python setup.py install
env:
global:
- PYLINT="cd $TRAVIS_BUILD_DIR && ./dev_scripts/pylint wa"
- PEP8="cd $TRAVIS_BUILD_DIR && ./dev_scripts/pep8 wa"
- NOSETESTS="nose2 -s $TRAVIS_BUILD_DIR/tests"
- WORKLOAD="cd /tmp && wa run $TRAVIS_BUILD_DIR/tests/travis/idle_agenda.yaml -v -d idle_workload"
- PROCESS_CMD="$WORKLOAD && wa process -f -p csv idle_workload"
- SHOW_CMD="wa show dhrystone && wa show generic_android && wa show trace-cmd && wa show csv"
- LIST_CMD="wa list all"
- CREATE_CMD="wa create agenda dhrystone generic_android csv trace_cmd && wa create package test && wa create workload test"
matrix:
- TEST=$PYLINT
- TEST=$PEP8
- TEST=$NOSETESTS
- TEST=$WORKLOAD
- TEST="$PROCESS_CMD && $SHOW_CMD && $LIST_CMD && $CREATE_CMD"
script:
- echo $TEST && eval $TEST

View File

@ -36,6 +36,9 @@ pylint_version=$(python -c 'from pylint.__pkginfo__ import version; print(versio
if [ "x$pylint_version" == "x" ]; then
pylint_version=$(python3 -c 'from pylint.__pkginfo__ import version; print(version)' 2>/dev/null)
fi
if [ "x$pylint_version" == "x" ]; then
pylint_version=$(python3 -c 'from pylint import version; print(version)' 2>/dev/null)
fi
if [ "x$pylint_version" == "x" ]; then
echo "ERROR: no pylint verison found; is it installed?"
exit 1

View File

@ -32,17 +32,11 @@ def transform(mod):
if b'pylint:' in text[0]:
msg = 'pylint directive found on the first line of {}; please move to below copyright header'
raise RuntimeError(msg.format(mod.name))
if sys.version_info[0] == 3:
char = chr(text[0][0])
else:
char = text[0][0]
char = chr(text[0][0])
if text[0].strip() and char != '#':
msg = 'first line of {} is not a comment; is the copyright header missing?'
raise RuntimeError(msg.format(mod.name))
if sys.version_info[0] == 3:
text[0] = '# pylint: disable={}'.format(','.join(errors)).encode('utf-8')
else:
text[0] = '# pylint: disable={}'.format(','.join(errors))
text[0] = '# pylint: disable={}'.format(','.join(errors)).encode('utf-8')
mod.file_bytes = b'\n'.join(text)
# This is what *should* happen, but doesn't work.

View File

@ -1,4 +1,7 @@
nose
numpy
pandas
sphinx_rtd_theme>=0.3.1
sphinx_rtd_theme==1.0.0
sphinx==4.2
docutils<0.18
devlib @ git+https://github.com/ARM-software/devlib@master

View File

@ -2,6 +2,62 @@
What's New in Workload Automation
=================================
***********
Version 3.3.1
***********
.. warning:: This is the last release supporting Python 3.5 and Python 3.6.
Subsequent releases will support Python 3.7+.
New Features:
==============
Commands:
---------
Instruments:
------------
- ``perf``: Add support for ``report-sample``.
Workloads:
----------------
- ``PCMark``: Add support for PCMark 3.0.
- ``Antutu``: Add support for 9.1.6.
- ``Geekbench``: Add support for Geekbench5.
- ``gfxbench``: Support the non corporate version.
Fixes/Improvements
==================
Framework:
----------
- Fix installation on systems without git installed.
- Avoid querying online cpus if hotplug is disabled.
Dockerfile:
-----------
- Update base image to Ubuntu 20.04.
Instruments:
------------
- ``perf``: Fix parsing csv with using interval-only-values.
- ``perf``: Improve error reporting of an invalid agenda.
Output Processors:
------------------
- ``postgres``: Fixed SQL command when creating a new event.
Workloads:
----------
- ``speedometer``: Fix adb reverse when rebooting a device.
- ``googleplaybook``: Support newer apk version.
- ``googlephotos``: Support newer apk version.
- ``gmail``: Support newer apk version.
Other:
------
- Upgrade Android Gradle to 7.2 and Gradle plugin to 4.2.
***********
Version 3.3
***********

View File

@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright 2018 ARM Limited
# Copyright 2023 ARM Limited
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@ -68,7 +68,7 @@ master_doc = 'index'
# General information about the project.
project = u'wa'
copyright = u'2018, ARM Limited'
copyright = u'2023, ARM Limited'
author = u'ARM Limited'
# The version info for the project you're documenting, acts as replacement for

View File

@ -47,6 +47,10 @@ submitting a pull request:
- If significant additions have been made to the framework, unit
tests should be added to cover the new functionality.
- If modifications have been made to the UI Automation source of a workload, the
corresponding APK should be rebuilt and submitted as part of the same pull
request. This can be done via the ``build.sh`` script in the relevant
``uiauto`` subdirectory.
- If modifications have been made to documentation (this includes description
attributes for Parameters and Extensions), documentation should be built to
make sure no errors or warning during build process, and a visual inspection

View File

@ -183,7 +183,7 @@ allow it to decide whether to keep the file or not.
# Pull the results file to the host
self.host_outfile = os.path.join(context.output_directory, 'timing_results')
self.target.pull(self.target_outfile, self.host_outfile)
context.add_artifact('ziptest-results', host_output_file, kind='raw')
context.add_artifact('ziptest-results', self.host_outfile, kind='raw')
The ``update_output`` method we can do any generation of metrics that we wish to
for our workload. In this case we are going to simply convert the times reported
@ -259,7 +259,7 @@ The full implementation of this workload would look something like:
# Pull the results file to the host
self.host_outfile = os.path.join(context.output_directory, 'timing_results')
self.target.pull(self.target_outfile, self.host_outfile)
context.add_artifact('ziptest-results', host_output_file, kind='raw')
context.add_artifact('ziptest-results', self.host_outfile, kind='raw')
def update_output(self, context):
super(ZipTestWorkload, self).update_output(context)
@ -492,9 +492,10 @@ Adding an Instrument
====================
This is an example of how we would create a instrument which will trace device
errors using a custom "trace" binary file. For more detailed information please see the
:ref:`Instrument Reference <instrument-reference>`. The first thing to do is to subclass
:class:`Instrument`, overwrite the variable name with what we want our instrument
to be called and locate our binary for our instrument.
:ref:`Instrument Reference <instrument-reference>`. The first thing to do is to create
a new file under ``$WA_USER_DIRECTORY/plugins/`` and subclass
:class:`Instrument`. Make sure to overwrite the variable name with what we want our instrument
to be called and then locate our binary for the instrument.
::
@ -502,8 +503,8 @@ to be called and locate our binary for our instrument.
name = 'trace-errors'
def __init__(self, target):
super(TraceErrorsInstrument, self).__init__(target)
def __init__(self, target, **kwargs):
super(TraceErrorsInstrument, self).__init__(target, **kwargs)
self.binary_name = 'trace'
self.binary_file = os.path.join(os.path.dirname(__file__), self.binary_name)
self.trace_on_target = None
@ -550,8 +551,9 @@ workload. The method can be passed 4 params, which are the metric `key`,
def update_output(self, context):
# pull the trace file from the target
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
self.outfile = os.path.join(context.output_directory, 'trace.txt')
self.target.pull(self.result, self.outfile)
context.add_artifact('error_trace', self.outfile, kind='export')
# parse the file if needs to be parsed, or add result directly to
# context.
@ -572,12 +574,14 @@ At the very end of the run we would want to uninstall the binary we deployed ear
So the full example would look something like::
from wa import Instrument
class TraceErrorsInstrument(Instrument):
name = 'trace-errors'
def __init__(self, target):
super(TraceErrorsInstrument, self).__init__(target)
def __init__(self, target, **kwargs):
super(TraceErrorsInstrument, self).__init__(target, **kwargs)
self.binary_name = 'trace'
self.binary_file = os.path.join(os.path.dirname(__file__), self.binary_name)
self.trace_on_target = None
@ -595,8 +599,9 @@ So the full example would look something like::
def update_output(self, context):
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
self.outfile = os.path.join(context.output_directory, 'trace.txt')
self.target.pull(self.result, self.outfile)
context.add_artifact('error_trace', self.outfile, kind='export')
metric = # ..
context.add_metric('number_of_errors', metric, lower_is_better=True
@ -613,8 +618,9 @@ Adding an Output Processor
==========================
This is an example of how we would create an output processor which will format
the run metrics as a column-aligned table. The first thing to do is to subclass
:class:`OutputProcessor` and overwrite the variable name with what we want our
the run metrics as a column-aligned table. The first thing to do is to create
a new file under ``$WA_USER_DIRECTORY/plugins/`` and subclass
:class:`OutputProcessor`. Make sure to overwrite the variable name with what we want our
processor to be called and provide a short description.
Next we need to implement any relevant methods, (please see

View File

@ -690,7 +690,7 @@ Workload-specific augmentation
It is possible to enable or disable (but not configure) augmentations at
workload or section level, as well as in the global config, in which case, the
augmentations would only be enabled/disabled for that workload/section. If the
same augmentation is enabled at one level and disabled at another, as will all
same augmentation is enabled at one level and disabled at another, as with all
WA configuration, the more specific settings will take precedence over the less
specific ones (i.e. workloads override sections that, in turn, override global
config).

View File

@ -192,12 +192,12 @@ Installing
Installing the latest released version from PyPI (Python Package Index)::
sudo -H pip install wa
sudo -H pip install wlauto
This will install WA along with its mandatory dependencies. If you would like to
install all optional dependencies at the same time, do the following instead::
sudo -H pip install wa[all]
sudo -H pip install wlauto[all]
Alternatively, you can also install the latest development version from GitHub

View File

@ -373,6 +373,7 @@ below:
device: generic_android
device_config:
adb_server: null
adb_port: null
big_core: null
core_clusters: null
core_names: null
@ -399,6 +400,7 @@ below:
no_install: false
report: true
report_on_target: false
mode: write-to-memory
csv:
extra_columns: null
use_all_classifiers: false

View File

@ -45,6 +45,7 @@ An example agenda can be seen here:
no_install: false
report: true
report_on_target: false
mode: write-to-disk
csv: # Provide config for the csv augmentation
use_all_classifiers: true

View File

@ -33,6 +33,7 @@ states.
iterations: 1
runtime_parameters:
screen_on: false
unlock_screen: 'vertical'
- name: benchmarkpi
iterations: 1
sections:
@ -208,6 +209,13 @@ Android Specific Runtime Parameters
:screen_on: A ``boolean`` to specify whether the devices screen should be
turned on. Defaults to ``True``.
:unlock_screen: A ``String`` to specify how the devices screen should be
unlocked. Unlocking screen is disabled by default. ``vertical``, ``diagonal``
and ``horizontal`` are the supported values (see :meth:`devlib.AndroidTarget.swipe_to_unlock`).
Note that unlocking succeeds when no passcode is set. Since unlocking screen
requires turning on the screen, this option overrides value of ``screen_on``
option.
.. _setting-sysfiles:
Setting Sysfiles

View File

@ -50,14 +50,18 @@
#
# We want to make sure to base this on a recent ubuntu release
FROM ubuntu:19.10
FROM ubuntu:20.04
# Please update the references below to use different versions of
# devlib, WA or the Android SDK
ARG DEVLIB_REF=v1.3
ARG WA_REF=v3.3
ARG DEVLIB_REF=v1.3.4
ARG WA_REF=v3.3.1
ARG ANDROID_SDK_URL=https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
# Set a default timezone to use
ENV TZ=Europe/London
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
apache2-utils \
bison \

View File

@ -1,29 +1,30 @@
bcrypt==3.2.0
certifi==2020.12.5
cffi==1.14.4
chardet==3.0.4
colorama==0.4.4
cryptography==3.3.1
devlib==1.3.0
future==0.18.2
idna==2.10
bcrypt==4.0.1
certifi==2024.7.4
cffi==1.15.1
charset-normalizer==3.1.0
colorama==0.4.6
cryptography==43.0.1
devlib==1.3.4
future==0.18.3
idna==3.7
Louie-latest==1.3.1
lxml==4.9.2
nose==1.3.7
numpy==1.19.4
pandas==1.1.5
paramiko==2.7.2
numpy==1.24.3
pandas==2.0.1
paramiko==3.4.0
pexpect==4.8.0
pkg-resources==0.0.0
ptyprocess==0.6.0
pycparser==2.20
PyNaCl==1.4.0
ptyprocess==0.7.0
pycparser==2.21
PyNaCl==1.5.0
pyserial==3.5
python-dateutil==2.8.1
pytz==2020.4
PyYAML==5.3.1
requests==2.25.0
scp==0.13.3
six==1.15.0
urllib3==1.26.2
wlauto==3.0.0
wrapt==1.12.1
python-dateutil==2.8.2
pytz==2023.3
PyYAML==6.0
requests==2.32.0
scp==0.14.5
six==1.16.0
tzdata==2023.3
urllib3==1.26.19
wlauto==3.3.1
wrapt==1.15.0

View File

@ -79,6 +79,7 @@ params = dict(
license='Apache v2',
maintainer='ARM Architecture & Technology Device Lab',
maintainer_email='workload-automation@arm.com',
python_requires='>= 3.7',
setup_requires=[
'numpy<=1.16.4; python_version<"3"',
'numpy; python_version>="3"',

View File

@ -23,7 +23,6 @@ import re
import uuid
import getpass
from collections import OrderedDict
from distutils.dir_util import copy_tree # pylint: disable=no-name-in-module, import-error
from devlib.utils.types import identifier
try:
@ -43,6 +42,24 @@ from wa.utils.misc import (ensure_directory_exists as _d, capitalize,
from wa.utils.postgres import get_schema, POSTGRES_SCHEMA_DIR
from wa.utils.serializer import yaml
if sys.version_info >= (3, 8):
def copy_tree(src, dst):
from shutil import copy, copytree # pylint: disable=import-outside-toplevel
copytree(
src,
dst,
# dirs_exist_ok=True only exists in Python >= 3.8
dirs_exist_ok=True,
# Align with devlib and only copy the content without metadata
copy_function=copy
)
else:
def copy_tree(src, dst):
# pylint: disable=import-outside-toplevel, redefined-outer-name
from distutils.dir_util import copy_tree
# Align with devlib and only copy the content without metadata
copy_tree(src, dst, preserve_mode=False, preserve_times=False)
TEMPLATES_DIR = os.path.join(os.path.dirname(__file__), 'templates')

View File

@ -25,10 +25,6 @@ from wa.framework.target.manager import TargetManager
from wa.utils.revent import ReventRecorder
if sys.version_info[0] == 3:
raw_input = input # pylint: disable=redefined-builtin
class RecordCommand(Command):
name = 'record'
@ -137,11 +133,11 @@ class RecordCommand(Command):
def record(self, revent_file, name, output_path):
msg = 'Press Enter when you are ready to record {}...'
self.logger.info(msg.format(name))
raw_input('')
input('')
self.revent_recorder.start_record(revent_file)
msg = 'Press Enter when you have finished recording {}...'
self.logger.info(msg.format(name))
raw_input('')
input('')
self.revent_recorder.stop_record()
if not os.path.isdir(output_path):

View File

@ -73,11 +73,8 @@ class ShowCommand(Command):
if which('pandoc'):
p = Popen(['pandoc', '-f', 'rst', '-t', 'man'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
if sys.version_info[0] == 3:
output, _ = p.communicate(rst_output.encode(sys.stdin.encoding))
output = output.decode(sys.stdout.encoding)
else:
output, _ = p.communicate(rst_output)
output, _ = p.communicate(rst_output.encode(sys.stdin.encoding))
output = output.decode(sys.stdout.encoding)
# Make sure to double escape back slashes
output = output.replace('\\', '\\\\\\')

View File

@ -12,7 +12,7 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$$project.buildDir/apk/${package_name}.apk")
output.outputFileName = "${package_name}.apk"
}
}
}

View File

@ -31,8 +31,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
rm -f ../$package_name
if [[ -f app/build/apk/$package_name.apk ]]; then
cp app/build/apk/$package_name.apk ../$package_name.apk
if [[ -f app/build/outputs/apk/debug/$package_name.apk ]]; then
cp app/build/outputs/apk/debug/$package_name.apk ../$package_name.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -667,18 +667,18 @@ class RunConfiguration(Configuration):
``"each_spec"``
The device will be rebooted before running a new workload spec.
.. note:: this acts the same as each_job when execution order
.. note:: This acts the same as ``each_job`` when execution order
is set to by_iteration
''"run_completion"''
The device will be reboot after the run has been completed.
``"run_completion"``
The device will be rebooted after the run has been completed.
'''),
ConfigurationPoint(
'device',
kind=str,
default='generic_android',
description='''
This setting defines what specific Device subclass will be used to
This setting defines what specific ``Device`` subclass will be used to
interact with the connected device. Obviously, this must match your
setup.
''',
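As a hedged illustration of how the two settings documented above might appear in a user config file (a sketch only; the values simply echo the documented options):

    # config.yaml (sketch)
    reboot_policy: run_completion   # reboot once the whole run has completed
    device: generic_android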

View File

@ -238,20 +238,47 @@ def _load_file(filepath, error_name):
return raw, includes
def _config_values_from_includes(filepath, include_path, error_name):
source_dir = os.path.dirname(filepath)
included_files = []
if isinstance(include_path, str):
include_path = os.path.expanduser(os.path.join(source_dir, include_path))
replace_value, includes = _load_file(include_path, error_name)
included_files.append(include_path)
included_files.extend(includes)
elif isinstance(include_path, list):
replace_value = {}
for path in include_path:
include_path = os.path.expanduser(os.path.join(source_dir, path))
sub_replace_value, includes = _load_file(include_path, error_name)
for key, val in sub_replace_value.items():
replace_value[key] = merge_config_values(val, replace_value.get(key, None))
included_files.append(include_path)
included_files.extend(includes)
else:
message = "{} does not contain a valid {} structure; value for 'include#' must be a string or a list"
raise ConfigError(message.format(filepath, error_name))
return replace_value, included_files
def _process_includes(raw, filepath, error_name):
if not raw:
return []
source_dir = os.path.dirname(filepath)
included_files = []
replace_value = None
if hasattr(raw, 'items'):
for key, value in raw.items():
if key == 'include#':
include_path = os.path.expanduser(os.path.join(source_dir, value))
included_files.append(include_path)
replace_value, includes = _load_file(include_path, error_name)
replace_value, includes = _config_values_from_includes(filepath, value, error_name)
included_files.extend(includes)
elif hasattr(value, 'items') or isiterable(value):
includes = _process_includes(value, filepath, error_name)
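A short sketch of the list form handled above (the file names are hypothetical): each listed file is resolved relative to the including file and its values are merged into the final configuration.

    # config.yaml (sketch)
    include#:
        - common-settings.yaml
        - device-overrides.yaml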

View File

@ -244,10 +244,7 @@ class Http(ResourceGetter):
response.status_code,
response.reason))
return {}
if sys.version_info[0] == 3:
content = response.content.decode('utf-8')
else:
content = response.content
content = response.content.decode('utf-8')
return json.loads(content)
def download_asset(self, asset, owner_name):

View File

@ -98,7 +98,6 @@ and the code to clear these file goes in teardown method. ::
"""
import sys
import logging
import inspect
from collections import OrderedDict
@ -325,10 +324,7 @@ def install(instrument, context):
if not callable(attr):
msg = 'Attribute {} not callable in {}.'
raise ValueError(msg.format(attr_name, instrument))
if sys.version_info[0] == 3:
argspec = inspect.getfullargspec(attr)
else:
argspec = inspect.getargspec(attr) # pylint: disable=deprecated-method
argspec = inspect.getfullargspec(attr)
arg_num = len(argspec.args)
# Instrument callbacks will be passed exactly two arguments: self
# (the instrument instance to which the callback is bound) and

View File

@ -75,7 +75,7 @@ class Job(object):
def load(self, target, loader=pluginloader):
self.logger.info('Loading job {}'.format(self))
if self.iteration == 1:
if self.id not in self._workload_cache:
self.workload = loader.get_workload(self.spec.workload_name,
target,
**self.spec.workload_parameters)

View File

@ -39,7 +39,8 @@ from wa.framework.run import RunState, RunInfo
from wa.framework.target.info import TargetInfo
from wa.framework.version import get_wa_version_with_commit
from wa.utils.doc import format_simple_table
from wa.utils.misc import touch, ensure_directory_exists, isiterable, format_ordered_dict
from wa.utils.misc import (touch, ensure_directory_exists, isiterable,
format_ordered_dict, safe_extract)
from wa.utils.postgres import get_schema_versions
from wa.utils.serializer import write_pod, read_pod, Podable, json
from wa.utils.types import enum, numeric
@ -854,7 +855,7 @@ class DatabaseOutput(Output):
def _read_dir_artifact(self, artifact):
artifact_path = tempfile.mkdtemp(prefix='wa_')
with tarfile.open(fileobj=self.conn.lobject(int(artifact.path), mode='b'), mode='r|gz') as tar_file:
tar_file.extractall(artifact_path)
safe_extract(tar_file, artifact_path)
self.conn.commit()
return artifact_path

View File

@ -18,8 +18,6 @@
import os
import sys
import inspect
import imp
import string
import logging
from collections import OrderedDict, defaultdict
from itertools import chain
@ -32,16 +30,10 @@ from wa.framework.exception import (NotFoundError, PluginLoaderError, TargetErro
ValidationError, ConfigError, HostError)
from wa.utils import log
from wa.utils.misc import (ensure_directory_exists as _d, walk_modules, load_class,
merge_dicts_simple, get_article)
merge_dicts_simple, get_article, import_path)
from wa.utils.types import identifier
if sys.version_info[0] == 3:
MODNAME_TRANS = str.maketrans(':/\\.', '____')
else:
MODNAME_TRANS = string.maketrans(':/\\.', '____')
class AttributeCollection(object):
"""
Accumulator for plugin attribute objects (such as Parameters or Artifacts).
@ -384,7 +376,7 @@ class TargetedPlugin(Plugin):
"""
suppoted_targets = []
supported_targets = []
parameters = [
Parameter('cleanup_assets', kind=bool,
global_alias='cleanup_assets',
@ -398,8 +390,8 @@ class TargetedPlugin(Plugin):
@classmethod
def check_compatible(cls, target):
if cls.suppoted_targets:
if target.os not in cls.suppoted_targets:
if cls.supported_targets:
if target.os not in cls.supported_targets:
msg = 'Incompatible target OS "{}" for {}'
raise TargetError(msg.format(target.os, cls.name))
@ -622,24 +614,30 @@ class PluginLoader(object):
self.logger.debug('Checking path %s', path)
if os.path.isfile(path):
self._discover_from_file(path)
for root, _, files in os.walk(path, followlinks=True):
should_skip = False
for igpath in ignore_paths:
if root.startswith(igpath):
should_skip = True
break
if should_skip:
continue
for fname in files:
if os.path.splitext(fname)[1].lower() != '.py':
elif os.path.exists(path):
for root, _, files in os.walk(path, followlinks=True):
should_skip = False
for igpath in ignore_paths:
if root.startswith(igpath):
should_skip = True
break
if should_skip:
continue
filepath = os.path.join(root, fname)
self._discover_from_file(filepath)
for fname in files:
if os.path.splitext(fname)[1].lower() != '.py':
continue
filepath = os.path.join(root, fname)
self._discover_from_file(filepath)
elif not os.path.isabs(path):
try:
for module in walk_modules(path):
self._discover_in_module(module)
except Exception: # NOQA pylint: disable=broad-except
pass
def _discover_from_file(self, filepath):
try:
modname = os.path.splitext(filepath[1:])[0].translate(MODNAME_TRANS)
module = imp.load_source(modname, filepath)
module = import_path(filepath)
self._discover_in_module(module)
except (SystemExit, ImportError) as e:
if self.keep_going:

View File

@ -198,6 +198,12 @@ COMMON_TARGET_PARAMS = [
description='''
A regex that matches the shell prompt on the target.
'''),
Parameter('max_async', kind=int, default=50,
description='''
The maximum number of concurrent asynchronous connections to the
target maintained at any time.
'''),
]
COMMON_PLATFORM_PARAMS = [
@ -302,6 +308,11 @@ CONNECTION_PARAMS = {
description="""
ADB server to connect to.
"""),
Parameter(
'adb_port', kind=int,
description="""
ADB port to connect to.
"""),
Parameter(
'poll_transfers', kind=bool,
default=True,
@ -333,6 +344,12 @@ CONNECTION_PARAMS = {
the destination size to appear the same over one or more sample
periods, causing improper transfer cancellation.
"""),
Parameter(
'adb_as_root', kind=bool,
default=False,
description="""
Specify whether the adb server should be started in root mode.
""")
],
SshConnection: [
Parameter(

View File

@ -230,16 +230,15 @@ def get_target_info(target):
info.is_rooted = target.is_rooted
info.kernel_version = target.kernel_version
info.kernel_config = target.config
info.hostname = target.hostname
info.hostid = target.hostid
try:
info.sched_features = target.read_value('/sys/kernel/debug/sched_features').split()
except TargetError:
# best effort -- debugfs might not be mounted
pass
hostid_string = target.execute('{} hostid'.format(target.busybox)).strip()
info.hostid = int(hostid_string, 16)
info.hostname = target.execute('{} hostname'.format(target.busybox)).strip()
for i, name in enumerate(target.cpuinfo.cpu_names):
cpu = CpuInfo()
cpu.id = i

View File

@ -178,7 +178,7 @@ class HotplugRuntimeConfig(RuntimeConfig):
raise TargetError('Target does not appear to support hotplug')
def validate_parameters(self):
if len(self.num_cores) == self.target.number_of_cpus:
if self.num_cores and len(self.num_cores) == self.target.number_of_cpus:
if all(v is False for v in list(self.num_cores.values())):
raise ValueError('Cannot set number of all cores to 0')
@ -878,6 +878,11 @@ class AndroidRuntimeConfig(RuntimeConfig):
if value is not None:
obj.config['screen_on'] = value
@staticmethod
def set_unlock_screen(obj, value):
if value is not None:
obj.config['unlock_screen'] = value
def __init__(self, target):
self.config = defaultdict(dict)
super(AndroidRuntimeConfig, self).__init__(target)
@ -930,6 +935,16 @@ class AndroidRuntimeConfig(RuntimeConfig):
Specify whether the device screen should be on
""")
param_name = 'unlock_screen'
self._runtime_params[param_name] = \
RuntimeParameter(
param_name, kind=str,
default=None,
setter=self.set_unlock_screen,
description="""
Specify how the device screen should be unlocked (e.g., vertical)
""")
def check_target(self):
if self.target.os != 'android' and self.target.os != 'chromeos':
raise ConfigError('Target does not appear to be running Android')
@ -940,6 +955,7 @@ class AndroidRuntimeConfig(RuntimeConfig):
pass
def commit(self):
# pylint: disable=too-many-branches
if 'airplane_mode' in self.config:
new_airplane_mode = self.config['airplane_mode']
old_airplane_mode = self.target.get_airplane_mode()
@ -964,13 +980,20 @@ class AndroidRuntimeConfig(RuntimeConfig):
if 'brightness' in self.config:
self.target.set_brightness(self.config['brightness'])
if 'rotation' in self.config:
self.target.set_rotation(self.config['rotation'])
if 'screen_on' in self.config:
if self.config['screen_on']:
self.target.ensure_screen_is_on()
else:
self.target.ensure_screen_is_off()
if self.config.get('unlock_screen'):
self.target.ensure_screen_is_on()
if self.target.is_screen_locked():
self.target.swipe_to_unlock(self.config['unlock_screen'])
def clear(self):
self.config = {}

View File

@ -22,6 +22,7 @@ from wa.framework.target.runtime_config import (SysfileValuesRuntimeConfig,
CpuidleRuntimeConfig,
AndroidRuntimeConfig)
from wa.utils.types import obj_dict, caseless_string
from wa.framework import pluginloader
class RuntimeParameterManager(object):
@ -37,9 +38,16 @@ class RuntimeParameterManager(object):
def __init__(self, target):
self.target = target
self.runtime_configs = [cls(self.target) for cls in self.runtime_config_cls]
self.runtime_params = {}
try:
for rt_cls in pluginloader.list_plugins(kind='runtime-config'):
if rt_cls not in self.runtime_config_cls:
self.runtime_config_cls.append(rt_cls)
except ValueError:
pass
self.runtime_configs = [cls(self.target) for cls in self.runtime_config_cls]
runtime_parameter = namedtuple('RuntimeParameter', 'cfg_point, rt_config')
for cfg in self.runtime_configs:
for param in cfg.supported_parameters:

View File

@ -11,8 +11,8 @@ android {
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
}

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.2'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
@ -16,6 +17,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

Binary file not shown.

View File

@ -21,9 +21,9 @@ from subprocess import Popen, PIPE
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev'])
version = VersionTuple(3, 3, 0, '')
version = VersionTuple(3, 4, 0, 'dev1')
required_devlib_version = VersionTuple(1, 3, 0, '')
required_devlib_version = VersionTuple(1, 4, 0, 'dev3')
def format_version(v):
@ -48,13 +48,13 @@ def get_wa_version_with_commit():
def get_commit():
p = Popen(['git', 'rev-parse', 'HEAD'],
cwd=os.path.dirname(__file__), stdout=PIPE, stderr=PIPE)
try:
p = Popen(['git', 'rev-parse', 'HEAD'],
cwd=os.path.dirname(__file__), stdout=PIPE, stderr=PIPE)
except FileNotFoundError:
return None
std, _ = p.communicate()
p.wait()
if p.returncode:
return None
if sys.version_info[0] == 3 and isinstance(std, bytes):
return std[:8].decode(sys.stdout.encoding or 'utf-8')
else:
return std[:8]
return std[:8].decode(sys.stdout.encoding or 'utf-8')

View File

@ -23,7 +23,7 @@ except ImportError:
from pipes import quote
from wa.utils.android import get_cacheable_apk_info
from wa.utils.android import get_cacheable_apk_info, build_apk_launch_command
from wa.framework.plugin import TargetedPlugin, Parameter
from wa.framework.resource import (ApkFile, ReventFile,
File, loose_version_matching,
@ -180,6 +180,7 @@ class ApkWorkload(Workload):
activity = None
view = None
clear_data_on_reset = True
apk_arguments = {}
# Set this to True to mark that this workload requires the target apk to be run
# for initialisation purposes before the main run is performed.
@ -289,7 +290,8 @@ class ApkWorkload(Workload):
clear_data_on_reset=self.clear_data_on_reset,
activity=self.activity,
min_version=self.min_version,
max_version=self.max_version)
max_version=self.max_version,
apk_arguments=self.apk_arguments)
def validate(self):
if self.min_version and self.max_version:
@ -686,7 +688,7 @@ class PackageHandler(object):
def __init__(self, owner, install_timeout=300, version=None, variant=None,
package_name=None, strict=False, force_install=False, uninstall=False,
exact_abi=False, prefer_host_package=True, clear_data_on_reset=True,
activity=None, min_version=None, max_version=None):
activity=None, min_version=None, max_version=None, apk_arguments=None):
self.logger = logging.getLogger('apk')
self.owner = owner
self.target = self.owner.target
@ -709,6 +711,7 @@ class PackageHandler(object):
self.apk_version = None
self.logcat_log = None
self.error_msg = None
self.apk_arguments = apk_arguments
def initialize(self, context):
self.resolve_package(context)
@ -853,11 +856,10 @@ class PackageHandler(object):
self.apk_version = host_version
def start_activity(self):
if not self.activity:
cmd = 'am start -W {}'.format(self.apk_info.package)
else:
cmd = 'am start -W -n {}/{}'.format(self.apk_info.package,
self.activity)
cmd = build_apk_launch_command(self.apk_info.package, self.activity,
self.apk_arguments)
output = self.target.execute(cmd)
if 'Error:' in output:
# this will dismiss any error dialogs
@ -943,7 +945,7 @@ class TestPackageHandler(PackageHandler):
def setup(self, context):
self.initialize_package(context)
words = ['am', 'instrument']
words = ['am', 'instrument', '--user', '0']
if self.raw:
words.append('-r')
if self.wait:

View File

@ -37,7 +37,7 @@ from wa import Instrument, Parameter, very_fast
from wa.framework.exception import ConfigError
from wa.framework.instrument import slow
from wa.utils.diff import diff_sysfs_dirs, diff_interrupt_files
from wa.utils.misc import as_relative
from wa.utils.misc import as_relative, safe_extract
from wa.utils.misc import ensure_file_directory_exists as _f
from wa.utils.misc import ensure_directory_exists as _d
from wa.utils.types import list_of_strings
@ -162,7 +162,7 @@ class SysfsExtractor(Instrument):
self.target.execute('chmod 0777 {}'.format(on_device_tarball), as_root=True)
self.target.pull(on_device_tarball, on_host_tarball)
with tarfile.open(on_host_tarball, 'r:gz') as tf:
tf.extractall(context.output_directory)
safe_extract(tf, context.output_directory)
self.target.remove(on_device_tarball)
os.remove(on_host_tarball)

View File

@ -21,7 +21,7 @@ import re
from devlib.collector.perf import PerfCollector
from wa import Instrument, Parameter
from wa import Instrument, Parameter, ConfigError
from wa.utils.types import list_or_string, list_of_strs, numeric
PERF_COUNT_REGEX = re.compile(r'^(CPU\d+)?\s*(\d+)\s*(.*?)\s*(\[\s*\d+\.\d+%\s*\])?\s*$')
@ -31,7 +31,7 @@ class PerfInstrument(Instrument):
name = 'perf'
description = """
Perf is a Linux profiling with performance counters.
Perf is a Linux profiling tool with performance counters.
Simpleperf is an Android profiling tool with performance counters.
It is highly recommended to use perf_type = simpleperf when using this instrument
@ -95,6 +95,11 @@ class PerfInstrument(Instrument):
description="""Specifies options to be used to gather report when record command
is used. It's highly recommended to use perf_type simpleperf when running on
android devices as reporting options are unstable with perf"""),
Parameter('run_report_sample', kind=bool, default=False, description="""If true, run
'perf/simpleperf report-sample'. It only works with the record command."""),
Parameter('report_sample_options', kind=str, default=None,
description="""Specifies options to pass to report-samples when run_report_sample
is true."""),
Parameter('labels', kind=list_of_strs, default=None,
global_alias='perf_labels',
description="""Provides labels for perf/simpleperf output for each optionstring.
@ -104,6 +109,10 @@ class PerfInstrument(Instrument):
description="""
always install perf binary even if perf is already present on the device.
"""),
Parameter('validate_pmu_events', kind=bool, default=True,
description="""
Query the hardware capabilities to verify the specified PMU events.
"""),
]
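A hedged agenda sketch for the new report-sample parameters (values are illustrative only); as the validate() method below enforces, they only apply when the command is record.

    config:
        augmentations:
            - perf
        perf:
            perf_type: simpleperf
            command: record
            run_report_sample: true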
def __init__(self, target, **kwargs):
@ -111,15 +120,29 @@ class PerfInstrument(Instrument):
self.collector = None
self.outdir = None
def validate(self):
if self.report_option_string and (self.command != "record"):
raise ConfigError("report_option_string only works with perf/simpleperf record. Set command to record or remove report_option_string")
if self.report_sample_options and (self.command != "record"):
raise ConfigError("report_sample_options only works with perf/simpleperf record. Set command to record or remove report_sample_options")
if self.run_report_sample and (self.command != "record"):
raise ConfigError("run_report_sample only works with perf/simpleperf record. Set command to record or remove run_report_sample")
def initialize(self, context):
if self.report_sample_options:
self.run_report_sample = True
self.collector = PerfCollector(self.target,
self.perf_type,
self.command,
self.events,
self.optionstring,
self.report_option_string,
self.run_report_sample,
self.report_sample_options,
self.labels,
self.force_install)
self.force_install,
self.validate_pmu_events)
def setup(self, context):
self.outdir = os.path.join(context.output_directory, self.perf_type)
@ -240,8 +263,10 @@ class PerfInstrument(Instrument):
readCSV = csv.reader(csv_file, delimiter=',')
line_num = 0
for row in readCSV:
if line_num > 0 and 'Total test time' not in row:
classifiers = {'scaled from(%)': row[len(row) - 2].replace('(', '').replace(')', '').replace('%', '')}
if 'Performance counter statistics' not in row and 'Total test time' not in row:
classifiers = {}
if '%' in row:
classifiers['scaled from(%)'] = row[len(row) - 2].replace('(', '').replace(')', '').replace('%', '')
context.add_metric('{}_{}'.format(label, row[1]), row[0], 'count', classifiers=classifiers)
line_num += 1
@ -249,15 +274,21 @@ class PerfInstrument(Instrument):
def _process_simpleperf_stat_from_raw(stat_file, context, label):
with open(stat_file) as fh:
for line in fh:
if '#' in line:
if '#' in line and not line.startswith('#'):
units = 'count'
if "(ms)" in line:
line = line.replace("(ms)", "")
units = 'ms'
tmp_line = line.split('#')[0]
tmp_line = line.strip()
count, metric = tmp_line.split(' ')[0], tmp_line.split(' ')[2]
count = int(count.replace(',', ''))
scaled_percentage = line.split('(')[1].strip().replace(')', '').replace('%', '')
scaled_percentage = int(scaled_percentage)
count = float(count) if "." in count else int(count.replace(',', ''))
classifiers = {}
if '%' in line:
scaled_percentage = line.split('(')[1].strip().replace(')', '').replace('%', '')
classifiers['scaled from(%)'] = int(scaled_percentage)
metric = '{}_{}'.format(label, metric)
context.add_metric(metric, count, 'count', classifiers={'scaled from(%)': scaled_percentage})
context.add_metric(metric, count, units, classifiers=classifiers)
def _process_simpleperf_record_output(self, context):
for host_file in os.listdir(self.outdir):

wa/instruments/perfetto.py (new file, 101 lines)
View File

@ -0,0 +1,101 @@
# Copyright 2023 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
from devlib import PerfettoCollector
from wa import Instrument, Parameter
from wa.framework.instrument import very_slow, is_installed
from wa.framework.exception import InstrumentError
OUTPUT_PERFETTO_TRACE = 'devlib-trace.perfetto-trace'
PERFETTO_CONFIG_FILE = 'config.pbtx'
class PerfettoInstrument(Instrument):
name = 'perfetto'
description = """
perfetto is an instrument that interacts with Google's Perfetto tracing
infrastructure.
From Perfetto's website:
Perfetto is a production-grade open-source stack for performance instrumentation and trace analysis.
It offers services and libraries for recording system-level and app-level traces, native + java heap profiling,
a library for analyzing traces using SQL and a web-based UI to visualize and explore multi-GB traces.
The instrument either requires Perfetto to be present on the target device or the standalone tracebox binary
to be built from source and included in devlib's Package Bin directory.
For more information, consult the PerfettoCollector documentation in devlib.
More information can be found on https://perfetto.dev/
"""
parameters = [
Parameter('config', kind=str, mandatory=True,
description="""
Path to the Perfetto trace config file.
All the Perfetto-specific tracing configuration should be done inside
that file. This config option should just take a full
filesystem path to where the config can be found.
"""),
Parameter('force_tracebox', kind=bool, default=False,
description="""
Install tracebox even if traced is already running on the target device.
If set to true, the tracebox binary needs to be placed in devlib's Package Bin directory.
""")
]
def __init__(self, target, **kwargs):
super(PerfettoInstrument, self).__init__(target, **kwargs)
self.collector = None
def initialize(self, context): # pylint: disable=unused-argument
self.target_config = self.target.path.join(self.target.working_directory, PERFETTO_CONFIG_FILE)
# push the config file to target
self.target.push(self.config, self.target_config)
collector_params = dict(
config=self.target_config,
force_tracebox=self.force_tracebox
)
self.collector = PerfettoCollector(self.target, **collector_params)
@very_slow
def start(self, context): # pylint: disable=unused-argument
self.collector.start()
@very_slow
def stop(self, context): # pylint: disable=unused-argument
self.collector.stop()
def update_output(self, context):
self.logger.info('Extracting Perfetto trace from target...')
outfile = os.path.join(context.output_directory, OUTPUT_PERFETTO_TRACE)
self.collector.set_output(outfile)
self.collector.get_data()
context.add_artifact('perfetto-bin', outfile, 'data')
def teardown(self, context): # pylint: disable=unused-argument
self.target.remove(self.collector.target_output_file)
def finalize(self, context): # pylint: disable=unused-argument
self.target.remove(self.target_config)
def validate(self):
if is_installed('trace-cmd'):
raise InstrumentError('perfetto cannot be used at the same time as trace-cmd')
if not os.path.isfile(self.config):
raise InstrumentError('perfetto config file not found at "{}"'.format(self.config))
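As a usage sketch (the config file path and workload are placeholders), the instrument could be enabled from an agenda as follows:

    config:
        augmentations:
            - perfetto
        perfetto:
            config: /home/user/traces/config.pbtx
    workloads:
        - name: idle
          iterations: 1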

View File

@ -59,6 +59,12 @@ class FilePoller(Instrument):
Whether or not the poller will be run as root. This should be
used when the file you need to poll can only be accessed by root.
"""),
Parameter('reopen', kind=bool, default=False,
description="""
When enabled, files will be re-opened on each read. This is
useful for some sysfs/debugfs entries that only generate a
value when opened.
"""),
]
def validate(self):
@ -91,13 +97,17 @@ class FilePoller(Instrument):
if self.align_with_ftrace:
marker_option = '-m'
signal.connect(self._adjust_timestamps, signal.AFTER_JOB_OUTPUT_PROCESSED)
self.command = '{} -t {} {} -l {} {} > {} 2>{}'.format(target_poller,
self.sample_interval * 1000,
marker_option,
','.join(self.labels),
' '.join(self.files),
self.target_output_path,
self.target_log_path)
reopen_option = ''
if self.reopen:
reopen_option = '-r'
self.command = '{} {} -t {} {} -l {} {} > {} 2>{}'.format(target_poller,
reopen_option,
self.sample_interval * 1000,
marker_option,
','.join(self.labels),
' '.join(self.files),
self.target_output_path,
self.target_log_path)
def start(self, context):
self.target.kick_off(self.command, as_root=self.as_root)
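A hedged agenda sketch for the new flag, assuming the instrument is enabled under its registered name (shown here as file_poller) and polling an example sysfs file:

    config:
        augmentations:
            - file_poller
        file_poller:
            files:
                - /sys/class/thermal/thermal_zone0/temp
            reopen: true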

View File

@ -77,9 +77,10 @@ int main(int argc, char ** argv) {
char *labels;
int labelCount = 0;
int should_write_marker = 0;
int reopen_files = 0;
int ret;
static char usage[] = "usage: %s [-h] [-m] [-t INTERVAL] FILE [FILE ...]\n"
static char usage[] = "usage: %s [-h] [-m] [-r] [-t INTERVAL] FILE [FILE ...]\n"
"polls FILE(s) every INTERVAL microseconds and outputs\n"
"the results in CSV format including a timestamp to STDOUT\n"
"\n"
@ -87,6 +88,7 @@ int main(int argc, char ** argv) {
" -m Insert a marker into ftrace at the time of the first\n"
" sample. This marker may be used to align the timestamps\n"
" produced by the poller with those of ftrace events.\n"
" -r Reopen files on each read (needed for some sysfs/debugfs files)\n"
" -t The polling sample interval in microseconds\n"
" Defaults to 1000000 (1 second)\n"
" -l Comma separated list of labels to use in the CSV\n"
@ -94,7 +96,7 @@ int main(int argc, char ** argv) {
//Handling command line arguments
while ((c = getopt(argc, argv, "hmt:l:")) != -1)
while ((c = getopt(argc, argv, "hmrt:l:")) != -1)
{
switch(c) {
case 'h':
@ -104,7 +106,10 @@ int main(int argc, char ** argv) {
break;
case 'm':
should_write_marker = 1;
break;
break;
case 'r':
reopen_files = 1;
break;
case 't':
interval = (useconds_t)atoi(optarg);
break;
@ -184,7 +189,20 @@ int main(int argc, char ** argv) {
time_float += ((double)current_time.tv_nsec)/1000/1000/1000;
printf("%f", time_float);
for (i = 0; i < num_files; i++) {
lseek(files_to_poll[i].fd, 0, SEEK_SET);
if (reopen_files) {
// Close and reopen the file to get fresh data
close(files_to_poll[i].fd);
files_to_poll[i].fd = open(files_to_poll[i].path, O_RDONLY);
if (files_to_poll[i].fd == -1) {
fprintf(stderr, "WARNING: Could not reopen \"%s\", got: %s\n",
files_to_poll[i].path, strerror(errno));
printf(",");
continue;
}
} else {
lseek(files_to_poll[i].fd, 0, SEEK_SET);
}
bytes_read = read(files_to_poll[i].fd, buf, 1024);
if (bytes_read < 0) {

View File

@ -22,7 +22,7 @@ from devlib import FtraceCollector
from wa import Instrument, Parameter
from wa.framework import signal
from wa.framework.instrument import very_slow
from wa.framework.instrument import very_slow, is_installed
from wa.framework.exception import InstrumentError
from wa.utils.types import list_of_strings
from wa.utils.misc import which
@ -125,6 +125,13 @@ class TraceCmdInstrument(Instrument):
value by going down from the specified size in
``buffer_size_step`` intervals.
"""),
Parameter('top_buffer_size', kind=int, default=None,
global_alias='trace_top_buffer_size',
description="""
The same as buffer_size except it sets the size of the
top-level buffer instead of the devlib one. If left unset,
it will default to the same as the devlib buffer size.
"""),
Parameter('buffer_size_step', kind=int, default=1000,
global_alias='trace_buffer_size_step',
description="""
@ -155,6 +162,13 @@ class TraceCmdInstrument(Instrument):
installed on the host (the one in your
distribution's repos may be too old).
"""),
Parameter('mode', kind=str, default='write-to-memory',
allowed_values=['write-to-disk', 'write-to-memory'],
description="""
Specifies whether collected traces should be kept in memory or written to disk.
Trace-heavy or long-running workloads may run into out-of-memory issues, in
which case the write-to-disk mode can help.
"""),
]
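A hedged agenda sketch combining the new options above (the event list and buffer sizes are illustrative only):

    config:
        augmentations:
            - trace-cmd
        trace-cmd:
            events: ['sched*', 'power*']
            buffer_size: 80000
            top_buffer_size: 80000
            mode: write-to-disk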
def __init__(self, target, **kwargs):
@ -168,6 +182,7 @@ class TraceCmdInstrument(Instrument):
events=self.events,
functions=self.functions,
buffer_size=self.buffer_size,
top_buffer_size=self.top_buffer_size,
buffer_size_step=1000,
automark=False,
autoreport=True,
@ -175,6 +190,7 @@ class TraceCmdInstrument(Instrument):
no_install=self.no_install,
strict=False,
report_on_target=False,
mode=self.mode,
)
if self.report and self.report_on_target:
collector_params['autoreport'] = True
@ -190,24 +206,31 @@ class TraceCmdInstrument(Instrument):
signal.connect(self.mark_stop, signal.AFTER_WORKLOAD_EXECUTION, priority=11)
def setup(self, context):
self.collector.reset()
if self.collector:
self.collector.reset()
@very_slow
def start(self, context):
self.collector.start()
if self.collector:
self.collector.start()
@very_slow
def stop(self, context):
self.collector.stop()
if self.collector:
self.collector.stop()
def update_output(self, context): # NOQA pylint: disable=R0912
if not self.collector:
return
self.logger.info('Extracting trace from target...')
outfile = os.path.join(context.output_directory, 'trace.dat')
outfile = os.path.join(context.output_directory, OUTPUT_TRACE_FILE)
self.collector.set_output(outfile)
self.collector.get_data()
context.add_artifact('trace-cmd-bin', outfile, 'data')
if self.report:
textfile = os.path.join(context.output_directory, 'trace.txt')
textfile = os.path.join(context.output_directory, OUTPUT_TEXT_FILE)
if not self.report_on_target:
self.collector.report(outfile, textfile)
context.add_artifact('trace-cmd-txt', textfile, 'export')
@ -222,6 +245,8 @@ class TraceCmdInstrument(Instrument):
def validate(self):
if self.report and not self.report_on_target and not which('trace-cmd'):
raise InstrumentError('trace-cmd is not in PATH; is it installed?')
if is_installed('perfetto'):
raise InstrumentError('trace-cmd cannot be used at the same time as perfetto')
def mark_start(self, context):
if self.is_enabled:

View File

@ -16,6 +16,7 @@
import logging
import os
from datetime import datetime
from shlex import quote
from devlib.utils.android import ApkInfo as _ApkInfo
@ -196,3 +197,29 @@ def get_cacheable_apk_info(path):
apk_info_cache = ApkInfoCache()
def build_apk_launch_command(package, activity=None, apk_args=None):
args_string = ''
if apk_args:
for k, v in apk_args.items():
if isinstance(v, str):
arg = '--es'
v = quote(v)
elif isinstance(v, float):
arg = '--ef'
elif isinstance(v, bool):
arg = '--ez'
elif isinstance(v, int):
arg = '--ei'
else:
raise ValueError('Unable to encode {} {}'.format(v, type(v)))
args_string = '{} {} {} {}'.format(args_string, arg, k, v)
if not activity:
cmd = 'am start -W {} {}'.format(package, args_string)
else:
cmd = 'am start -W -n {}/{} {}'.format(package, activity, args_string)
return cmd

View File

@ -285,7 +285,7 @@ def get_params_rst(parameters):
text += indent('\nallowed values: {}\n'.format(', '.join(map(format_literal, param.allowed_values))))
elif param.constraint:
text += indent('\nconstraint: ``{}``\n'.format(get_type_name(param.constraint)))
if param.default:
if param.default is not None:
value = param.default
if isinstance(value, str) and value.startswith(USER_HOME):
value = value.replace(USER_HOME, '~')

View File

@ -21,10 +21,12 @@ Miscellaneous functions that don't fit anywhere else.
import errno
import hashlib
import imp
import importlib
import inspect
import logging
import math
import os
import pathlib
import random
import re
import shutil
@ -39,13 +41,14 @@ from functools import reduce # pylint: disable=redefined-builtin
from operator import mul
from tempfile import gettempdir, NamedTemporaryFile
from time import sleep
if sys.version_info[0] == 3:
from io import StringIO
else:
from io import BytesIO as StringIO
from io import StringIO
# pylint: disable=wrong-import-position,unused-import
from itertools import chain, cycle
from distutils.spawn import find_executable # pylint: disable=no-name-in-module, import-error
try:
from shutil import which as find_executable
except ImportError:
from distutils.spawn import find_executable # pylint: disable=no-name-in-module, import-error
from dateutil import tz
@ -56,7 +59,7 @@ from devlib.utils.misc import (ABI_MAP, check_output, walk_modules,
normalize, convert_new_lines, get_cpu_mask, unique,
isiterable, getch, as_relative, ranges_to_list, memoized,
list_to_ranges, list_to_mask, mask_to_list, which,
to_identifier)
to_identifier, safe_extract, LoadSyntaxError)
check_output_logger = logging.getLogger('check_output')
@ -234,7 +237,12 @@ def load_class(classpath):
"""Loads the specified Python class. ``classpath`` must be a fully-qualified
class name (i.e. namespaced under module/package)."""
modname, clsname = classpath.rsplit('.', 1)
return getattr(__import__(modname), clsname)
mod = importlib.import_module(modname)
cls = getattr(mod, clsname)
if isinstance(cls, type):
return cls
else:
raise ValueError(f'The classpath "{classpath}" does not point at a class: {cls}')
def get_pager():
@ -285,7 +293,7 @@ def get_article(word):
in all cases; e.g. this will return ``"a hour"``.
"""
return'an' if word[0] in 'aoeiu' else 'a'
return 'an' if word[0] in 'aoeiu' else 'a'
def get_random_string(length):
@ -293,80 +301,52 @@ def get_random_string(length):
return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(length))
class LoadSyntaxError(Exception):
def import_path(filepath, module_name=None):
"""
Programmatically import the given Python source file under the name
``module_name``. If ``module_name`` is not provided, a stable name based on
``filepath`` will be created. Note that this module name cannot be relied
on, so don't make write import statements assuming this will be stable in
the future.
"""
if not module_name:
path = pathlib.Path(filepath).resolve()
id_ = to_identifier(str(path))
module_name = f'wa._user_import.{id_}'
def __init__(self, message, filepath, lineno):
super(LoadSyntaxError, self).__init__(message)
self.filepath = filepath
self.lineno = lineno
def __str__(self):
message = 'Syntax Error in {}, line {}:\n\t{}'
return message.format(self.filepath, self.lineno, self.message)
try:
return sys.modules[module_name]
except KeyError:
spec = importlib.util.spec_from_file_location(module_name, filepath)
module = importlib.util.module_from_spec(spec)
try:
sys.modules[module_name] = module
spec.loader.exec_module(module)
except BaseException:
sys.modules.pop(module_name, None)
raise
else:
# We could return the "module" object, but that would not take into
# account any manipulation the module did on sys.modules when
# executing. To be consistent with the import statement, re-lookup
# the module name.
return sys.modules[module_name]
RAND_MOD_NAME_LEN = 30
def load_struct_from_python(filepath=None, text=None):
def load_struct_from_python(filepath):
"""Parses a config structure from a .py file. The structure should be composed
of basic Python types (strings, ints, lists, dicts, etc.)."""
if not (filepath or text) or (filepath and text):
raise ValueError('Exactly one of filepath or text must be specified.')
try:
if filepath:
modname = to_identifier(filepath)
mod = imp.load_source(modname, filepath)
else:
modname = get_random_string(RAND_MOD_NAME_LEN)
while modname in sys.modules: # highly unlikely, but...
modname = get_random_string(RAND_MOD_NAME_LEN)
mod = imp.new_module(modname)
exec(text, mod.__dict__) # pylint: disable=exec-used
return dict((k, v)
for k, v in mod.__dict__.items()
if not k.startswith('_'))
mod = import_path(filepath)
except SyntaxError as e:
raise LoadSyntaxError(e.message, filepath, e.lineno)
def load_struct_from_yaml(filepath=None, text=None):
"""Parses a config structure from a .yaml file. The structure should be composed
of basic Python types (strings, ints, lists, dicts, etc.)."""
# Import here to avoid circular imports
# pylint: disable=wrong-import-position,cyclic-import, import-outside-toplevel
from wa.utils.serializer import yaml
if not (filepath or text) or (filepath and text):
raise ValueError('Exactly one of filepath or text must be specified.')
try:
if filepath:
with open(filepath) as fh:
return yaml.load(fh)
else:
return yaml.load(text)
except yaml.YAMLError as e:
lineno = None
if hasattr(e, 'problem_mark'):
lineno = e.problem_mark.line # pylint: disable=no-member
raise LoadSyntaxError(e.message, filepath=filepath, lineno=lineno)
def load_struct_from_file(filepath):
"""
Attempts to parse a Python structure consisting of basic types from the specified file.
Raises a ``ValueError`` if the specified file is of unknown format; ``LoadSyntaxError`` if
there is an issue parsing the file.
"""
extn = os.path.splitext(filepath)[1].lower()
if extn in ('.py', '.pyc', '.pyo'):
return load_struct_from_python(filepath)
elif extn == '.yaml':
return load_struct_from_yaml(filepath)
else:
raise ValueError('Unknown format "{}": {}'.format(extn, filepath))
return {
k: v
for k, v in inspect.getmembers(mod)
if not k.startswith('_')
}
def open_file(filepath):

View File

@ -29,19 +29,14 @@ import os
import re
import numbers
import shlex
import sys
from bisect import insort
if sys.version_info[0] == 3:
from urllib.parse import quote, unquote # pylint: disable=no-name-in-module, import-error
from past.builtins import basestring # pylint: disable=redefined-builtin
long = int # pylint: disable=redefined-builtin
else:
from urllib import quote, unquote # pylint: disable=no-name-in-module
from urllib.parse import quote, unquote # pylint: disable=no-name-in-module, import-error
# pylint: disable=wrong-import-position
from collections import defaultdict
from collections.abc import MutableMapping
from functools import total_ordering
from past.builtins import basestring # pylint: disable=redefined-builtin
from future.utils import with_metaclass
from devlib.utils.types import identifier, boolean, integer, numeric, caseless_string
@ -710,8 +705,6 @@ class ParameterDict(dict):
prefix = 'f'
elif isinstance(obj, bool):
prefix = 'b'
elif isinstance(obj, long):
prefix = 'i'
elif isinstance(obj, int):
prefix = 'i'
elif obj is None:
@ -792,10 +785,7 @@ class ParameterDict(dict):
return (key, self._decode(value))
def iter_encoded_items(self):
if sys.version_info[0] == 3:
return dict.items(self)
else:
return dict.iteritems(self)
return dict.items(self)
def get_encoded_value(self, name):
return dict.__getitem__(self, name)

View File

@ -14,18 +14,18 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
output.outputFileName = "${packageName}.apk"
}
}
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext: 'aar')
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation(name: 'uiauto', ext: 'aar')
}
repositories {

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -47,8 +47,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
package=com.arm.wa.uiauto.adobereader
rm -f ../$package
if [[ -f app/build/apk/$package.apk ]]; then
cp app/build/apk/$package.apk ../$package.apk
if [[ -f app/build/outputs/apk/debug/$package.apk ]]; then
cp app/build/outputs/apk/debug/$package.apk ../$package.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -14,18 +14,18 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
output.outputFileName = "${packageName}.apk"
}
}
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext: 'aar')
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation(name: 'uiauto', ext: 'aar')
}
repositories {

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -47,8 +47,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
package=com.arm.wa.uiauto.aitutu
rm -f ../$package
if [[ -f app/build/apk/$package.apk ]]; then
cp app/build/apk/$package.apk ../$package.apk
if [[ -f app/build/outputs/apk/debug/$package.apk ]]; then
cp app/build/outputs/apk/debug/$package.apk ../$package.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -14,18 +14,18 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
output.outputFileName = "${packageName}.apk"
}
}
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext: 'aar')
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation(name: 'uiauto', ext: 'aar')
}
repositories {

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -47,8 +47,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
package=com.arm.wa.uiauto.androbench
rm -f ../$package
if [[ -f app/build/apk/$package.apk ]]; then
cp app/build/apk/$package.apk ../$package.apk
if [[ -f app/build/outputs/apk/debug/$package.apk ]]; then
cp app/build/outputs/apk/debug/$package.apk ../$package.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -48,6 +48,23 @@ class Antutu(ApkUiautoWorkload):
re.compile(r'ROM Sequential Read Score (.+)'),
re.compile(r'ROM Sequential Write Score (.+)'),
re.compile(r'ROM Random Access Score (.+)')]
regex_matches_v9 = [re.compile(r'CPU Mathematical Operations Score (.+)'),
re.compile(r'CPU Common Algorithms Score (.+)'),
re.compile(r'CPU Multi-Core Score (.+)'),
re.compile(r'GPU Terracotta Score (.+)'),
re.compile(r'GPU Swordsman Score (.+)'),
re.compile(r'GPU Refinery Score (.+)'),
re.compile(r'Data Security Score (.+)'),
re.compile(r'Data Processing Score (.+)'),
re.compile(r'Image Processing Score (.+)'),
re.compile(r'User Experience Score (.+)'),
re.compile(r'Video CTS Score (.+)'),
re.compile(r'Video Decode Score (.+)'),
re.compile(r'RAM Access Score (.+)'),
re.compile(r'ROM APP IO Score (.+)'),
re.compile(r'ROM Sequential Read Score (.+)'),
re.compile(r'ROM Sequential Write Score (.+)'),
re.compile(r'ROM Random Access Score (.+)')]
description = '''
Executes Antutu 3D, UX, CPU and Memory tests
@ -58,7 +75,7 @@ class Antutu(ApkUiautoWorkload):
Known working APK version: 8.0.4
'''
supported_versions = ['7.0.4', '7.2.0', '8.0.4', '8.1.9']
supported_versions = ['7.0.4', '7.2.0', '8.0.4', '8.1.9', '9.1.6']
parameters = [
Parameter('version', kind=str, allowed_values=supported_versions, override=True,
@ -69,6 +86,10 @@ class Antutu(ApkUiautoWorkload):
)
]
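A short agenda sketch selecting the newly supported version (assuming the workload is invoked under the name antutu):

    workloads:
        - name: antutu
          iterations: 1
          workload_parameters:
              version: '9.1.6'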
def __init__(self, device, **kwargs):
super(Antutu, self).__init__(device, **kwargs)
self.gui.timeout = 1200
def setup(self, context):
self.gui.uiauto_params['version'] = self.version
super(Antutu, self).setup(context)
@ -95,6 +116,8 @@ class Antutu(ApkUiautoWorkload):
def update_output(self, context):
super(Antutu, self).update_output(context)
if self.version.startswith('9'):
self.extract_scores(context, self.regex_matches_v9)
if self.version.startswith('8'):
self.extract_scores(context, self.regex_matches_v8)
if self.version.startswith('7'):

View File

@ -20,18 +20,18 @@ android {
}
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
output.outputFileName = "${packageName}.apk"
}
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext:'aar')
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation(name: 'uiauto', ext:'aar')
}
repositories {

View File

@ -65,7 +65,9 @@ public class UiAutomation extends BaseUiAutomation {
@Test
public void extractResults() throws Exception{
if (version.startsWith("8")){
if (version.startsWith("9")){
getScoresv9();
} else if (version.startsWith("8")){
getScoresv8();
} else {
getScoresv7();
@ -273,4 +275,126 @@ public class UiAutomation extends BaseUiAutomation {
memscores.click();
}
public void getScoresv9() throws Exception {
UiScrollable list = new UiScrollable(new UiSelector().scrollable(true));
//Expand, Extract and Close CPU sub scores
UiObject cpuscores =
mDevice.findObject(new UiSelector().resourceId("com.antutu.ABenchMark:id/result_details_recyclerView"))
.getChild(new UiSelector().index(2))
.getChild(new UiSelector().index(4));
cpuscores.click();
UiObject cpumaths =
mDevice.findObject(new UiSelector().text("CPU Mathematical Operations").fromParent(new UiSelector().index(1)));
UiObject cpucommon =
mDevice.findObject(new UiSelector().text("CPU Common Algorithms").fromParent(new UiSelector().index(1)));
UiObject cpumulti =
mDevice.findObject(new UiSelector().text("CPU Multi-Core").fromParent(new UiSelector().index(1)));
Log.d(TAG, "CPU Mathematical Operations Score " + cpumaths.getText());
Log.d(TAG, "CPU Common Algorithms Score " + cpucommon.getText());
Log.d(TAG, "CPU Multi-Core Score " + cpumulti.getText());
cpuscores.click();
//Expand, Extract and Close GPU sub scores
UiObject gpuscores =
mDevice.findObject(new UiSelector().resourceId("com.antutu.ABenchMark:id/result_details_recyclerView"))
.getChild(new UiSelector().index(3))
.getChild(new UiSelector().index(4));
gpuscores.click();
UiObject gputerracotta =
mDevice.findObject(new UiSelector().text("Terracotta - Vulkan").fromParent(new UiSelector().index(1)));
UiObject gpuswordsman =
mDevice.findObject(new UiSelector().text("Swordsman - Vulkan").fromParent(new UiSelector().index(1)));
UiObject gpurefinery =
mDevice.findObject(new UiSelector().text("Refinery - OpenGL ES3.1+AEP").fromParent(new UiSelector().index(1)));
Log.d(TAG, "GPU Terracotta Score " + gputerracotta.getText());
Log.d(TAG, "GPU Swordsman Score " + gpuswordsman.getText());
Log.d(TAG, "GPU Refinery Score " + gpurefinery.getText());
gpuscores.click();
//Expand, Extract and Close UX sub scores
UiObject uxscores =
mDevice.findObject(new UiSelector().resourceId("com.antutu.ABenchMark:id/result_details_recyclerView"))
.getChild(new UiSelector().index(5))
.getChild(new UiSelector().index(4));
uxscores.click();
UiObject security =
mDevice.findObject(new UiSelector().text("Data Security").fromParent(new UiSelector().index(1)));
UiObject dataprocessing =
mDevice.findObject(new UiSelector().text("Data Processing").fromParent(new UiSelector().index(1)));
UiObject imageprocessing =
mDevice.findObject(new UiSelector().text("Image Processing").fromParent(new UiSelector().index(1)));
UiObject uxscore =
mDevice.findObject(new UiSelector().text("User Experience").fromParent(new UiSelector().index(1)));
UiObject videocts =
mDevice.findObject(new UiSelector().text("Video CTS").fromParent(new UiSelector().index(1)));
UiObject videodecode =
mDevice.findObject(new UiSelector().text("Video Decode").fromParent(new UiSelector().index(1)));
if (!security.exists() && list.waitForExists(60)) {
list.scrollIntoView(security);
}
Log.d(TAG, "Data Security Score " + security.getText());
if (!dataprocessing.exists() && list.waitForExists(60)) {
list.scrollIntoView(dataprocessing);
}
Log.d(TAG, "Data Processing Score " + dataprocessing.getText());
if (!imageprocessing.exists() && list.waitForExists(60)) {
list.scrollIntoView(imageprocessing);
}
Log.d(TAG, "Image Processing Score " + imageprocessing.getText());
if (!uxscore.exists() && list.waitForExists(60)) {
list.scrollIntoView(uxscore);
}
Log.d(TAG, "User Experience Score " + uxscore.getText());
if (!videocts.exists() && list.waitForExists(60)) {
list.scrollIntoView(videocts);
}
Log.d(TAG, "Video CTS Score " + videocts.getText());
if (!videodecode.exists() && list.waitForExists(60)) {
list.scrollIntoView(videodecode);
}
Log.d(TAG, "Video Decode Score " + videodecode.getText());
list.scrollToBeginning(10);
uxscores.click();
//Expand, Extract and Close MEM sub scores
UiObject memscores =
mDevice.findObject(new UiSelector().resourceId("com.antutu.ABenchMark:id/result_details_recyclerView"))
.getChild(new UiSelector().index(4))
.getChild(new UiSelector().index(4));
memscores.click();
UiObject ramaccess =
mDevice.findObject(new UiSelector().text("RAM Access").fromParent(new UiSelector().index(1)));
UiObject romapp =
mDevice.findObject(new UiSelector().text("ROM APP IO").fromParent(new UiSelector().index(1)));
UiObject romread =
mDevice.findObject(new UiSelector().text("ROM Sequential Read").fromParent(new UiSelector().index(1)));
UiObject romwrite =
mDevice.findObject(new UiSelector().text("ROM Sequential Write").fromParent(new UiSelector().index(1)));
UiObject romaccess =
mDevice.findObject(new UiSelector().text("ROM Random Access").fromParent(new UiSelector().index(1)));
if (!ramaccess.exists() && list.waitForExists(60)) {
list.scrollIntoView(ramaccess);
}
Log.d(TAG, "RAM Access Score " + ramaccess.getText());
if (!romapp.exists() && list.waitForExists(60)) {
list.scrollIntoView(romapp);
}
Log.d(TAG, "ROM APP IO Score " + romapp.getText());
if (!romread.exists() && list.waitForExists(60)) {
list.scrollIntoView(romread);
}
Log.d(TAG, "ROM Sequential Read Score " + romread.getText());
if (!romwrite.exists() && list.waitForExists(60)) {
list.scrollIntoView(romwrite);
}
Log.d(TAG, "ROM Sequential Write Score " + romwrite.getText());
if (!romaccess.exists() && list.waitForExists(60)) {
list.scrollIntoView(romaccess);
}
Log.d(TAG, "ROM Random Access Score " + romaccess.getText());
list.scrollToBeginning(10);
memscores.click();
}
}

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.2'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -47,8 +47,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
package=com.arm.wa.uiauto.antutu
rm -f ../$package
if [[ -f app/build/apk/$package.apk ]]; then
cp app/build/apk/$package.apk ../$package.apk
if [[ -f app/build/outputs/apk/debug/$package.apk ]]; then
cp app/build/outputs/apk/debug/$package.apk ../$package.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -14,18 +14,18 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
output.outputFileName = "${packageName}.apk"
}
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext:'aar')
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation(name: 'uiauto', ext:'aar')
}
repositories {

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.2'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -47,8 +47,8 @@ fi
package=com.arm.wa.uiauto.applaunch
rm -f ../$package
if [[ -f app/build/apk/$package.apk ]]; then
cp app/build/apk/$package.apk ../$package.apk
if [[ -f app/build/outputs/apk/debug/$package.apk ]]; then
cp app/build/outputs/apk/debug/$package.apk ../$package.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -14,18 +14,18 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
output.outputFileName = "${packageName}.apk"
}
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext:'aar')
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation(name: 'uiauto', ext:'aar')
}
repositories {

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.2'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -47,8 +47,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
package=com.arm.wa.uiauto.benchmarkpi
rm -f ../$package
if [[ -f app/build/apk/$package.apk ]]; then
cp app/build/apk/$package.apk ../$package.apk
if [[ -f app/build/outputs/apk/debug/$package.apk ]]; then
cp app/build/outputs/apk/debug/$package.apk ../$package.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,5 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -12,18 +12,18 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/com.arm.wa.uiauto.chrome.apk")
output.outputFileName = "com.arm.wa.uiauto.chrome.apk"
}
}
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext: 'aar')
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support.test:runner:0.5'
implementation 'com.android.support.test:rules:0.5'
implementation 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
implementation(name: 'uiauto', ext: 'aar')
}
repositories {

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
classpath 'com.android.tools.build:gradle:7.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -46,8 +46,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
rm -f ../com.arm.wa.uiauto.chrome
if [[ -f app/build/apk/com.arm.wa.uiauto.chrome.apk ]]; then
cp app/build/apk/com.arm.wa.uiauto.chrome.apk ../com.arm.wa.uiauto.chrome.apk
if [[ -f app/build/outputs/apk/debug/com.arm.wa.uiauto.chrome.apk ]]; then
cp app/build/outputs/apk/debug/com.arm.wa.uiauto.chrome.apk ../com.arm.wa.uiauto.chrome.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip

View File

@ -0,0 +1,112 @@
# Copyright 2023 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os

import pandas as pd

from wa import ApkWorkload, Parameter
from devlib.exception import TargetStableCalledProcessError


class DrArm(ApkWorkload):

    name = 'drarm'
    package_names = ['com.Arm.DrArm']
    activity = 'com.unity3d.player.UnityPlayerActivity'
    view = "SurfaceView[com.Arm.DrArm/com.unity3d.player.UnityPlayerActivity](BLAST)"
    install_timeout = 200

    description = """
    Dr. Arm's Amazing Adventures is a souls-like mobile action role-playing game developed at Arm.
    """
    parameters = [
        Parameter('timeout', kind=int, default=126,
                  description='The amount of time, in seconds, the game should run for'),
        Parameter('auto_demo', kind=bool, default=False,
                  description='Start the demo automatically'),
        Parameter('show_fps', kind=bool, default=False,
                  description='Show the FPS count window in-game'),
        Parameter('adpf', kind=bool, default=True,
                  description='Enable ADPF'),
        Parameter('adpf_auto', kind=bool, default=True,
                  description='Enable automatic ADPF mode'),
        Parameter('adpf_logging', kind=bool, default=False,
                  description='Enable ADPF logging'),
        Parameter('verbose_log', kind=bool, default=False,
                  description='Emit reported stats as debug logs'),
        Parameter('adpf_interventions', kind=bool, default=True,
                  description='Enable ADPF interventions'),
        Parameter('target_vsyncs', kind=int, default=1,
                  description='The number of vsyncs to target per frame (1 = current display rate)'),
        Parameter('target_framerate', kind=int, default=None,
                  description='Target framerate for the application'),
        Parameter('fps_report_file', kind=str, default=None,
                  description='File name that the ADPF FPS report should use.'),
        Parameter('fixed_time_step', kind=float, default=None,
                  description='Time, in seconds, used to advance the simulation each step'),
    ]

    @property
    def apk_arguments(self):
        args = {
            'showFPS': int(self.show_fps),
            'doAdpf': int(self.adpf),
            'adpfMode': int(self.adpf_auto),
            'adpfLogging': int(self.adpf_logging),
            'verboseLog': int(self.verbose_log),
            'adpfInterventions': int(self.adpf_interventions),
            'targetVsyncs': self.target_vsyncs,
            'autoDemo': int(self.auto_demo),
        }
        if self.target_framerate is not None:
            args['targetFramerate'] = self.target_framerate
        if self.fixed_time_step is not None:
            args['fixedTimeStep'] = self.fixed_time_step
        if self.fps_report_file is not None:
            args['fpsReportFileName'] = self.fps_report_file
        return args
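    # Illustration (not part of the patch): with the default parameter values above,
    # apk_arguments evaluates to
    #   {'showFPS': 0, 'doAdpf': 1, 'adpfMode': 1, 'adpfLogging': 0,
    #    'verboseLog': 0, 'adpfInterventions': 1, 'targetVsyncs': 1, 'autoDemo': 0}
    # and the three optional keys are omitted because their parameters default to None.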
    def run(self, context):
        self.target.sleep(self.timeout)

    def update_output(self, context):
        super(DrArm, self).update_output(context)
        outfile_glob = self.target.path.join(
            self.target.external_storage_app_dir, self.apk.package, 'files', '*.csv'
        )
        try:
            ls_output = self.target.execute('ls {}'.format(outfile_glob))
        except TargetStableCalledProcessError:
            self.logger.warning('Failed to find the ADPF report file.')
            return
        on_target_output_files = [f.strip() for f in ls_output.split('\n') if f]
        self.logger.info('Extracting the ADPF FPS report from target...')
        for file in on_target_output_files:
            host_output_file = os.path.join(context.output_directory, os.path.basename(file))
            self.target.pull(file, host_output_file)
            context.add_artifact('adpf', host_output_file, kind='data',
                                 description='ADPF report log in CSV format.')
            adpf_df = pd.read_csv(host_output_file)
            if not adpf_df.empty:
                context.add_metric('Average FPS', round(adpf_df['average fps'].mean(), 2))
                context.add_metric('Frame count', int(adpf_df['# frame count'].iloc[-1]))

View File

@ -1,4 +1,4 @@
# Copyright 2013-2018 ARM Limited
# Copyright 2013-2025 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -20,10 +20,11 @@ import tempfile
import json
from collections import defaultdict
from wa import ApkUiautoWorkload, Parameter
from wa import Workload, ApkUiautoWorkload, Parameter
from wa.framework.exception import ConfigError, WorkloadError
from wa.utils.misc import capitalize
from wa.utils.types import version_tuple
from wa.utils.types import version_tuple, list_or_integer
from wa.utils.exec_control import once
class Geekbench(ApkUiautoWorkload):
@ -53,8 +54,8 @@ class Geekbench(ApkUiautoWorkload):
"""
summary_metrics = ['score', 'multicore_score']
supported_versions = ['4.4.2', '4.4.0', '4.3.4', '4.3.2', '4.3.1', '4.2.0', '4.0.1', '3.4.1', '3.0.0', '2']
package_names = ['com.primatelabs.geekbench', 'com.primatelabs.geekbench3', 'ca.primatelabs.geekbench2']
supported_versions = ['6', '5', '4.4.2', '4.4.0', '4.3.4', '4.3.2', '4.3.1', '4.2.0', '4.0.1', '3.4.1', '3.0.0', '2']
package_names = ['com.primatelabs.geekbench6', 'com.primatelabs.geekbench5', 'com.primatelabs.geekbench', 'com.primatelabs.geekbench3', 'ca.primatelabs.geekbench2']
begin_regex = re.compile(r'^\s*D/WebViewClassic.loadDataWithBaseURL\(\s*\d+\s*\)'
r'\s*:\s*(?P<content>\<.*)\s*$')
@ -137,7 +138,7 @@ class Geekbench(ApkUiautoWorkload):
context.add_metric(namemify(section['name'] + '_multicore_score', i),
section['multicore_score'])
def update_result_4(self, context):
def update_result(self, context):
outfile_glob = self.target.path.join(self.target.package_data_directory, self.apk.package, 'files', '*gb*')
on_target_output_files = [f.strip() for f in self.target.execute('ls {}'.format(outfile_glob),
as_root=True).split('\n') if f]
@ -151,7 +152,7 @@ class Geekbench(ApkUiautoWorkload):
with open(host_output_file, 'w') as wfh:
json.dump(data, wfh, indent=4)
context.add_artifact('geekout', host_output_file, kind='data',
description='Geekbench 4 output from target.')
description='Geekbench output from target.')
context.add_metric(namemify('score', i), data['score'])
context.add_metric(namemify('multicore_score', i), data['multicore_score'])
for section in data['sections']:
@ -161,7 +162,9 @@ class Geekbench(ApkUiautoWorkload):
context.add_metric(namemify(section['name'] + '_' + workload_name + '_score', i),
workloads['score'])
update_result_5 = update_result_4
update_result_4 = update_result
update_result_5 = update_result
update_result_6 = update_result
class GBWorkload(object):
@ -368,3 +371,233 @@ class GeekbenchCorproate(Geekbench): # pylint: disable=too-many-ancestors
def namemify(basename, i):
return basename + (' {}'.format(i) if i else '')
class GeekbenchCmdline(Workload):

    name = "geekbench_cli"
    description = "Workload for running the command-line version of Geekbench"

    gb6_workloads = {
        # Single-Core and Multi-Core
        101: 'File Compression',
        102: 'Navigation',
        103: 'HTML5 Browser',
        104: 'PDF Renderer',
        105: 'Photo Library',
        201: 'Clang',
        202: 'Text Processing',
        203: 'Asset Compression',
        301: 'Object Detection',
        402: 'Object Remover',
        403: 'HDR',
        404: 'Photo Filter',
        501: 'Ray Tracer',
        502: 'Structure from Motion',
        # OpenCL and Vulkan
        303: 'Face Detection',
        406: 'Edge Detection',
        407: 'Gaussian Blur',
        503: 'Feature Matching',
        504: 'Stereo Matching',
        601: 'Particle Physics',
        # Single-Core, Multi-Core, OpenCL, and Vulkan
        302: 'Background Blur',
        401: 'Horizon Detection',
    }

    gb5_workloads = {
        # Single-Core and Multi-Core
        101: 'AES-XTS',
        201: 'Text Compression',
        202: 'Image Compression',
        203: 'Navigation',
        204: 'HTML5',
        205: 'SQLite',
        206: 'PDF Rendering',
        207: 'Text Rendering',
        208: 'Clang',
        209: 'Camera',
        301: 'N-Body Physics',
        302: 'Rigid Body Physics',
        307: 'Image Inpainting',
        308: 'HDR',
        309: 'Ray Tracing',
        310: 'Structure from Motion',
        312: 'Speech Recognition',
        313: 'Machine Learning',
        # OpenCL and Vulkan
        220: 'Sobel',
        221: 'Canny',
        222: 'Stereo Matching',
        230: 'Histogram Equalization',
        304: 'Depth of Field',
        311: 'Feature Matching',
        320: 'Particle Physics',
        321: 'SFFT',
        # Single-Core, Multi-Core, OpenCL, and Vulkan
        303: 'Gaussian Blur',
        305: 'Face Detection',
        306: 'Horizon Detection',
    }
    binary_name = 'geekbench_aarch64'
    allowed_extensions = ['json', 'csv', 'xml', 'html', 'text']

    parameters = [
        Parameter('cpumask', kind=str, default='',
                  description='CPU mask used by taskset.'),
        Parameter('section', kind=int, default=1, allowed_values=[1, 4, 9],
                  description="""Run the specified section. It should be 1 for CPU benchmarks,
                  4 for OpenCL benchmarks and 9 for Vulkan benchmarks."""),
        Parameter('upload', kind=bool, default=False,
                  description='Upload results to the Geekbench Browser'),
        Parameter('is_single_core', kind=bool, default=True,
                  description='Run the workload in single-core (True) or multi-core (False) mode.'),
        Parameter('workload', kind=list_or_integer, default=301,
                  description='Workload ID(s) to run; see the workload tables above.'),
        Parameter('iterations', kind=int, default=5,
                  description='Number of iterations'),
        Parameter('workload_gap', kind=int, default=2000,
                  description='Gap between workloads, in milliseconds'),
        Parameter('output_file', kind=str, default='gb_cli.json',
                  description=f"""Name of the output results file. Its extension determines
                  the export format and must be one of: {', '.join(allowed_extensions)}.
                  By default a JSON file is generated."""),
        Parameter('timeout', kind=int, default=2000,
                  description="""Benchmark execution timeout, in seconds, passed to the target's
                  execute() call. Increase it for large iteration counts."""),
        Parameter('version', kind=str, default='6.3.0',
                  description='Specifies which version of Geekbench should run.'),
    ]
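    # Illustration (not part of the patch): the 'workload' parameter accepts a single ID
    # or a list of IDs from the tables above, e.g. [301, 302] selects Object Detection
    # and Background Blur with Geekbench 6, or N-Body Physics and Rigid Body Physics
    # when a version starting with '5' is configured.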
    def __init__(self, target, **kwargs):
        super(GeekbenchCmdline, self).__init__(target, **kwargs)
        self.target_result_json = None
        self.host_result_json = None
        self.workloads = self.gb6_workloads
        self.params = ''
        self.output = ''
        self.target_exec_directory = ''
        self.tar_file_src = ''
        self.tar_file_dst = ''
        self.file_exists = False

    def init_resources(self, context):
        """
        Retrieve the files needed to run the benchmark, packaged as a TAR archive.
        WA looks for a ``gb_cli_artifacts_<version>.tar`` file and deploys it to the
        working directory. If no version is specified, version 6.3.0 is used by default.
        """
        self.deployable_assets = [''.join(['gb_cli_artifacts', '_', self.version, '.tar'])]
        # Create an executables directory
        self.target_exec_directory = self.target.path.join(self.target.executables_directory, f'gb_cli-{self.version}')
        self.target.execute("mkdir -p {}".format(self.target_exec_directory))
        # Source and destination paths for the artifacts tar file
        self.tar_file_src = self.target.path.join(self.target.working_directory, self.deployable_assets[0])
        self.tar_file_dst = self.target.path.join(self.target_exec_directory, self.deployable_assets[0])
        # Check whether the tar file already exists on the target
        if self.target.file_exists(self.tar_file_dst):
            self.file_exists = True
        else:
            # Get the assets file
            super(GeekbenchCmdline, self).init_resources(context)
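    # Illustration (not part of the patch): with the default version '6.3.0' the asset
    # name resolves to 'gb_cli_artifacts_6.3.0.tar' and it is deployed under
    # <executables_directory>/gb_cli-6.3.0/ on the target.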
    @once
    def initialize(self, context):
        if self.version[0] == '5':
            self.workloads = self.gb5_workloads
        # If the tar file does not exist on the target, deploy the assets
        if not self.file_exists:
            super(GeekbenchCmdline, self).initialize(context)
            # Move the tar file to the executables directory
            self.target.execute(
                '{} mv {} {}'.format(
                    self.target.busybox, self.tar_file_src, self.tar_file_dst))
            # Extract the tar file
            self.target.execute(
                '{} tar -xf {} -C {}'.format(
                    self.target.busybox, self.tar_file_dst, self.target_exec_directory))

    def setup(self, context):
        super(GeekbenchCmdline, self).setup(context)
        self.params = ''
        self.params += '--section {} '.format(self.section)
        if self.section == 1:
            self.params += '--single-core ' if self.is_single_core else '--multi-core '
        self.params += '--upload ' if self.upload else '--no-upload '
        known_workloads = '\n'.join("{}: {}".format(k, v) for k, v in self.workloads.items())
        if any([t not in self.workloads.keys() for t in self.workload]):
            msg = 'Unknown workload(s) specified. Known workloads: {}'
            raise ValueError(msg.format(known_workloads))
        self.params += '--workload {} '.format(''.join("{},".format(i) for i in self.workload))
        if self.iterations:
            self.params += '--iterations {} '.format(self.iterations)
        if self.workload_gap:
            self.params += '--workload-gap {} '.format(self.workload_gap)
        extension = os.path.splitext(self.output_file)[1][1:]
        if self.output_file and extension not in self.allowed_extensions:
            msg = f"Unsupported output file extension specified. Allowed extensions: {', '.join(self.allowed_extensions)}"
            raise ValueError(msg)
        elif self.output_file:
            # Output results file with the given name and extension
            self.target_result_json = os.path.join(self.target_exec_directory, self.output_file)
            self.params += '--export-{} {}'.format(extension, self.target_result_json)
            self.host_result_json = os.path.join(context.output_directory, self.output_file)
        else:
            # The output file is not specified
            self.target_result_json = os.path.join(self.target_exec_directory, self.output_file)
            self.params += '--save {}'.format(self.target_result_json)
            self.host_result_json = os.path.join(context.output_directory, self.output_file)
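    # Illustration (not part of the patch): with the defaults above (section=1,
    # is_single_core=True, workload=301, iterations=5, workload_gap=2000,
    # output_file='gb_cli.json'), setup() assembles roughly:
    #   --section 1 --single-core --no-upload --workload 301, --iterations 5
    #   --workload-gap 2000 --export-json <exec_dir>/gb_cli.json
    # (the trailing comma after the workload ID is an artifact of the join above).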
    def run(self, context):
        super(GeekbenchCmdline, self).run(context)
        taskset = f"taskset {self.cpumask}" if self.cpumask else ""
        binary = self.target.path.join(self.target_exec_directory, self.binary_name)
        cmd = '{} {} {}'.format(taskset, binary, self.params)
        try:
            self.output = self.target.execute(cmd, timeout=self.timeout, as_root=True)
        except KeyboardInterrupt:
            self.target.killall(self.binary_name)
            raise

    def update_output(self, context):
        super(GeekbenchCmdline, self).update_output(context)
        if not self.output:
            return
        for workload in self.workload:
            scores = []
            matches = re.findall(self.workloads[workload] + r'(.+\d)', self.output)
            for match in matches:
                scores.append(int(re.search(r'\d+', match).group(0)))
            if self.section == 4:
                context.add_metric("OpenCL Score " + self.workloads[workload], scores[0])
            elif self.section == 9:
                context.add_metric("Vulkan Score " + self.workloads[workload], scores[0])
            else:
                context.add_metric("Single-Core Score " + self.workloads[workload], scores[0])
                if not self.is_single_core:
                    context.add_metric("Multi-Core Score " + self.workloads[workload], scores[1])
    def extract_results(self, context):
        # Extract results on the target
        super(GeekbenchCmdline, self).extract_results(context)
        self.target.pull(self.target_result_json, self.host_result_json)
        context.add_artifact('GeekbenchCmdline_results', self.host_result_json, kind='raw')

    @once
    def finalize(self, context):
        if self.cleanup_assets:
            self.target.remove(self.target_exec_directory)
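# A minimal agenda sketch (not part of the patch) showing how this workload might be
# configured; the agenda field names ('workloads', 'name', 'params') follow the usual
# WA conventions and are assumed rather than taken from this diff:
#
#   workloads:
#     - name: geekbench_cli
#       params:
#         version: '6.3.0'
#         section: 1
#         is_single_core: true
#         workload: [301, 302]
#         iterations: 5
#         output_file: gb_cli.json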

View File

@ -14,7 +14,7 @@ android {
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
output.outputFileName = "${packageName}.apk"
}
}
}

View File

@ -53,7 +53,8 @@ public class UiAutomation extends BaseUiAutomation {
params = getParams();
version = params.getString("version").split("\\.");
majorVersion = Integer.parseInt(version[0]);
minorVersion = Integer.parseInt(version[1]);
if (version.length > 1)
minorVersion = Integer.parseInt(version[1]);
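// Newer releases may be specified by their major version alone (e.g. "6"), in which case there is no minor component to parse.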
isCorporate = params.getBoolean("is_corporate");
loops = params.getInt("loops");
}
@ -95,6 +96,7 @@ public class UiAutomation extends BaseUiAutomation {
break;
case 4:
case 5:
case 6:
runCpuBenchmarks(isCorporate);
waitForResultsv3onwards();
break;
@ -140,7 +142,7 @@ public class UiAutomation extends BaseUiAutomation {
scrollPage();
String packageName = isCorporate ? "com.primatelabs.geekbench.*.corporate"
: "com.primatelabs.geekbench";
: "com.primatelabs.geekbench.*";
UiObject runButton =
mDevice.findObject(new UiSelector().resourceIdMatches(packageName + ":id/runCpuBenchmarks"));

View File

@ -3,9 +3,10 @@
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.2'
classpath 'com.android.tools.build:gradle:4.2.0'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
@ -15,6 +16,7 @@ buildscript {
allprojects {
repositories {
jcenter()
google()
}
}

View File

@ -50,8 +50,8 @@ fi
# If successful move APK file to workload folder (overwrite previous)
package=com.arm.wa.uiauto.geekbench
rm -f ../$package
if [[ -f app/build/apk/$package.apk ]]; then
cp app/build/apk/$package.apk ../$package.apk
if [[ -f app/build/outputs/apk/debug/$package.apk ]]; then
cp app/build/outputs/apk/debug/$package.apk ../$package.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9

View File

@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.2-all.zip

Some files were not shown because too many files have changed in this diff.