mirror of https://github.com/ARM-software/workload-automation.git synced 2025-09-04 20:32:36 +01:00

300 Commits

Author SHA1 Message Date
Marc Bonnici
e1e5d466a2 version: Version Bump to 2.7.0 2018-07-06 14:44:08 +01:00
Marc Bonnici
3a863fa143 doc/requirements: Force sphinx to be an older version.
Readthedocs has updated their default sphinx version to be
incompatible with how our API documentation is generated, so force Read the Docs
to use the latest known working version (1.6.5).
2018-05-29 11:49:39 +01:00
Sergei Trofimov
e548e75017 Add a banner to readme pointing everyone to "next" 2018-05-25 17:27:08 +01:00
Aníbal Limón
1b0799bcc5 wlauto/utils/ssh.py: Fix set terminal window size
Ensure that the terminal window size is set to 500x200; otherwise long
commands (prompt + command > 80 chars) will fail, because pexpect
searches for the command string in order to get the result, but the
command is wrapped at the first 80 chars.

For example, when checking for a file it executes:

...
if [ -f '/sys/kernel/debug/sched_features' ]; then echo 1; else echo 0; fi
...
File
\"/usr/local/lib/python2.7/dist-packages/wlauto/common/linux/device.py\",
line 228, in get_properties
if self.is_file(propfile):
File
\"/usr/local/lib/python2.7/dist-packages/wlauto/common/linux/device.py\",
line 215, in is_file
return boolean(output.split()[-1])  # pylint: disable=maybe-no-member

IndexError(list index out of range)
...

To fix this scenario, enable checkwinsize in the shell and use
stty to set the new terminal window size.

Signed-off-by: Aníbal Limón <anibal.limon@linaro.org>
2018-04-16 14:16:06 +01:00
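The fix above can be sketched as the shell initialisation sequence below (a minimal illustration assuming a bash-compatible shell; `shell_init_commands` is a hypothetical helper, not the actual wlauto/utils/ssh.py code):

```python
def shell_init_commands(rows=200, cols=500):
    """Commands to send to a freshly opened remote shell so that long
    command lines are not wrapped at the default 80-column width."""
    return [
        'shopt -s checkwinsize',                       # let bash track window size changes
        'stty rows {} columns {}'.format(rows, cols),  # enlarge the pseudo-terminal
    ]
```

Each command would be sent over the connection before any workload commands, so that pexpect can match the full, unwrapped command string in the output.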
Javi Merino
ea22bc062e recentfling: Add the ability to specify the device_name
Not all devices are supported by recentfling, but there is usually a
supported device with a similar screen size. Let the
recentfling workload specify the device so that we can run it on more
devices.
2018-03-29 11:29:32 +01:00
Marc Bonnici
ba6889ea7f android/workload: Move parameter from GameWorkload to ApkWorkload Class
In addition to `GameWorkloads` some benchmarks also download additional
data upon initialisation. Moving this parameter to the `ApkWorkload`
parent class allows all APK based workloads to indicate that their data
should not be cleared prior to running the workload.
2018-02-08 15:02:42 +00:00
Jose Marinho
9a9e8395cf instrumentation/fps: Fix spaces in SurfaceFlinger.
When parsing the SurfaceFlinger list output, spaces were used as
separators, but some view names contain spaces.
This would lead to views with spaces in their names not being correctly
detected and, as a consequence, their fps stats not being collected.
2018-02-02 13:52:48 +00:00
Marc Bonnici
0995e57689 instrumentation/fps: Allow measuring of views containing spaces
Ensure the view name is passed in quotes to allow for views containing
spaces.
2018-02-01 12:28:00 +00:00
Marc Bonnici
7c920f953c resource_getters: Add support for matching apks via package name
Allows distinguishing between apks based on the package name specified
in the workload.
2018-01-23 13:08:39 +00:00
Marc Bonnici
e5417e5b7f resource_getters: Change ExtensionAssetGetter to look in local env
Previously `ExtensionAssetGetter` subclassed `DependencyFileGetter`;
commit 6e7087ee88 changed its
functionality to use a cached location instead of the local resource
location. This commit changes the `ExtensionAssetGetter` to subclass
`EnvironmentDependencyGetter` so it checks the local resource location
again.
2018-01-17 18:32:22 +00:00
Sergei Trofimov
d295b7d92d external/revent: make inlines static
Add "static" qualifier to inline functions. This avoids linking errors
when compiling without -O flags. The "static" forces the inlining as
described here:

https://gcc.gnu.org/onlinedocs/gcc/Inline.html
2017-12-04 11:17:24 +00:00
Sergei Trofimov
e33245ac72 external/revent: replace bzero with memset
Replace calls to bzero with equivalent calls to memset, as bzero is
deprecated and may not be supported by all tool chains.
2017-11-30 17:52:54 +00:00
Yingshiuan Pan
57a8e62be9 util/android, get_apk_versions: try to find 'aapt' in $PATH as well
Some Linux distros provide android build-tools in packages, so we should also try
to find 'aapt' in $PATH if it cannot be found in $ANDROID_HOME.
2017-11-28 09:14:50 +00:00
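The lookup order described above can be sketched as follows (an illustrative sketch, not the actual wlauto implementation; `find_aapt` is a hypothetical name):

```python
import os
import shutil

def find_aapt():
    """Locate the aapt binary: first under $ANDROID_HOME/build-tools,
    then fall back to searching $PATH. Returns None if not found."""
    android_home = os.environ.get('ANDROID_HOME')
    if android_home:
        build_tools = os.path.join(android_home, 'build-tools')
        if os.path.isdir(build_tools):
            # Prefer the newest build-tools version directory.
            for version in sorted(os.listdir(build_tools), reverse=True):
                candidate = os.path.join(build_tools, version, 'aapt')
                if os.path.isfile(candidate):
                    return candidate
    # Some distros package build-tools, so aapt may simply be on $PATH.
    return shutil.which('aapt')
```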
Sergei Trofimov
7ce1044eff trace-cmd: Update armeabi binary to LSB version.
The armeabi version of the trace-cmd binary was erroneously built as
MSB, which prevented it from being used.
2017-11-06 16:24:57 +00:00
setrofim
5a70ab8534 Merge pull request #513 from jimboatarm/androbench_update
Androbench: Updating Versions
2017-10-11 15:36:08 +01:00
scott
486c0c6887 Androbench: Updating Versions
Updating the workload to work with version 5.0 of the Androbench application.
2017-10-11 15:22:22 +01:00
setrofim
9e498a911e Merge pull request #486 from setrofim/master
applaunch: do not attempt to uninstall uiauto apk
2017-09-21 17:15:35 +01:00
Sergei Trofimov
3d80e4ef34 applaunch: do not attempt to uninstall uiauto apk
applaunch uses the UI automation of the guest workload to detect
launch completion. However, the guest's automation apk is pushed to the
target and loaded via DexLoader; it is never actually installed;
therefore, it should not be uninstalled when running the guest's teardown.
2017-09-21 16:53:13 +01:00
setrofim
713fcc44d9 Merge pull request #485 from setrofim/master
gmail: add missing import
2017-09-21 15:26:28 +01:00
Sergei Trofimov
39bc8eae20 gmail: add missing import 2017-09-21 15:25:58 +01:00
setrofim
adf13feb81 Merge pull request #477 from setrofim/master
utils/ipython: handle nbconvert import
2017-09-14 08:37:52 +01:00
Sergei Trofimov
b312bf317d utils/ipython: handle nbconvert import
nbconvert has been split into a separate package and is installed as a
dependency for Jupyter rather than IPython. Add a proper import guard
to prevent issues for those that don't need ipython_converter
functionality, and set the appropriate error message for those that do.
2017-09-14 08:35:41 +01:00
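The import-guard pattern described above can be sketched like this (a minimal illustration; `notebook_to_html` and `NBCONVERT_ERROR` are hypothetical names, not wlauto's actual API):

```python
# Import nbconvert guardedly and remember why it failed, so only code
# paths that actually need it raise a helpful error.
try:
    import nbconvert
    NBCONVERT_ERROR = None
except ImportError as e:
    nbconvert = None
    NBCONVERT_ERROR = 'nbconvert is not installed: {}'.format(e)

def notebook_to_html(nb):
    """Only raises if the caller actually needs the converter."""
    if nbconvert is None:
        raise RuntimeError(NBCONVERT_ERROR)
    return nbconvert.HTMLExporter().from_notebook_node(nb)
```

Callers that never touch `notebook_to_html` are unaffected by a missing nbconvert; those that do get the real reason rather than a bare ImportError at module load time.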
setrofim
d8a5d2fa37 Merge pull request #475 from setrofim/master
utils/ssh: fix telnet connection
2017-09-13 11:25:03 +01:00
Sergei Trofimov
a3d042e054 utils/ssh: fix telnet connection
This fixes the ability to connect over telnet rather than SSH which was
broken by commit 51db53d9

	 ssh: Back-port ssh_get_shell from devlib
2017-09-13 11:22:29 +01:00
setrofim
afe3c16908 Merge pull request #472 from bjackman/config-example-update
config_example: Update comment on available devices
2017-09-05 15:33:56 +01:00
setrofim
5eee03c689 Merge pull request #473 from bjackman/ipynb-exporter-tweaks
Ipynb exporter tweaks
2017-09-05 15:33:25 +01:00
Brendan Jackman
6266edad6f ipython: Add support for IPython 5
This does not seem to require any change beyond incrementing the
recognised version number.
2017-09-05 14:55:59 +01:00
Brendan Jackman
64426601fe ipynb_exporter: Use file_path to allow '~' in path parameters 2017-09-05 14:55:58 +01:00
Brendan Jackman
cea39a6193 utils/types: Add file_path type
This can be used to allow extension parameters that are paths to use
'~' to refer to the home directory.
2017-09-05 14:55:56 +01:00
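A parameter type like the one described above is essentially a thin wrapper over `os.path.expanduser` (illustrative sketch of the idea, not the exact wlauto `file_path` implementation):

```python
import os

def file_path(value):
    """A parameter 'type' callable that expands '~' so users can write
    paths like '~/results' in extension parameters."""
    return os.path.expanduser(value)
```

Used as a parameter `kind`, it lets a config value of `'~/results'` resolve to an absolute path under the user's home directory, while absolute paths pass through unchanged.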
Brendan Jackman
c952b2f0d4 config_example: Update comment on available devices 2017-09-05 12:23:37 +01:00
setrofim
ea05022f5f Merge pull request #471 from setrofim/master
spec2000: fix cpumask for generic
2017-08-22 15:40:22 +01:00
Sergei Trofimov
cdc7c96cdf spec2000: fix cpumask for generic
spec2000 expects binaries to be optimised for particular cores and uses
Device's core_names to figure out which cores the benchmark should run
on.

There is one special case, "generic", which is not optimised
for a particular uarch. The cpumask for this was resolved the same way,
failing the lookup and resulting in the invalid mask 0x0.

To fix this, "generic" is now handled by specifying the mask for all
available CPUs.
2017-08-22 15:38:27 +01:00
setrofim
2f683e59d2 Merge pull request #465 from marcbonnici/FPS_Fix
Fps fix
2017-08-17 11:00:02 +01:00
setrofim
6915de98f1 Merge pull request #468 from marcbonnici/documentation
README: Update documentation link to ReadTheDocs
2017-08-16 17:09:57 +01:00
Marc Bonnici
27fd2a81d3 README: Update documentation link to ReadTheDocs 2017-08-16 16:41:57 +01:00
setrofim
a1c19b55b8 Merge pull request #467 from marcbonnici/documentation
Documentation Update
2017-08-16 16:35:07 +01:00
Marc Bonnici
890f4309d4 Documentation: Adds a requirements.txt
In order to build the test documentation the python package `nose` is
required; therefore this commit adds a requirements.txt to ensure the
package is available when building the documentation on RTD.
2017-08-16 16:24:38 +01:00
Marc Bonnici
c8cb153f8e Documentation: Fixes missing links to Invocation label 2017-08-16 16:24:38 +01:00
Marc Bonnici
10c1f64216 Documentation/build_instrumentation: Cleanup 2017-08-16 16:24:38 +01:00
Marc Bonnici
f194ef281b Documentation/Builds: Delete previously generated docs
As the Makefile is no longer responsible for calling the extension and
instrumentation documentation generation, ensure any previously generated
documentation is deleted from the respective build file.
2017-08-16 16:24:38 +01:00
Marc Bonnici
4d607b46c5 Documentation/Makefile: Removed additional calls to doc generation
The documentation build process is now automatically called during the
sphinx build, therefore the additional calls from the Makefile have been
removed.
2017-08-16 16:24:38 +01:00
Marc Bonnici
cdd0834447 Documentation: Modified conf.py to allow building on ReadTheDocs
This commit now automatically calls the extension and
instrumentation documentation generator and the Sphinx API tool
to allow documentation generation on ReadTheDocs.
2017-08-16 16:24:38 +01:00
Marc Bonnici
4c0d3f8d20 Bootstrap: Fixes not using newly created config files
If the $WA_USER_DIRECTORY folder is not present on the user's
system, it is automatically created and a default config.py file is
created. Previously this newly created file would not be loaded into
WA, causing it to crash.
2017-08-16 16:24:38 +01:00
Marc Bonnici
8dcf1ba6a5 Bootstrap: Make sure that _env_root is an absolute path
If the environment variable $WA_USER_DIRECTORY is not set, `_env_root`
is automatically generated from the path expansion of `~`. In some cases,
if a home directory is not available for the user, this may result in an
empty string. This commit ensures that `_env_root` is still an absolute
path, anchored at the current working directory.
2017-08-16 16:24:13 +01:00
Marc Bonnici
081358769d Workload/Manual: Update "view" parameter description 2017-08-15 10:34:37 +01:00
Marc Bonnici
10a614ff04 Workload/Manual: Adds package parameter to workload
Adds the `package` parameter to allow the workload to be used
with the gfxinfo method of collecting data for the FPS instrument.
2017-08-15 10:34:37 +01:00
Marc Bonnici
58a8ea9051 Instrumentation/FPS: Fixes attribute checking
Previously only the requirements for using SurfaceFlinger were checked,
regardless of the FPS method being used.
This commit now only ensures that a `View` attribute is present when
using SurfaceFlinger and that a `package` name is available when using
gfxinfo, otherwise falling back to SurfaceFlinger.
2017-08-15 10:22:59 +01:00
marcbonnici
335fff2a6f Merge pull request #464 from sdpenguin/master
revent.py: Fix handling of zero-event files
2017-08-11 17:43:06 +01:00
Waleed El-Geresy
cd0863d7fa revent.py: Fix handling of zero-event files
Previously the try clause worked to catch StopIteration exceptions correctly;
however, upon catching the exception, another statement, which set self._duration
to (last.time - first.time), was being run regardless, which defeated the point
of the try clause.

This has been fixed by introducing an else clause to contain said statement.
2017-08-11 16:58:44 +01:00
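The try/except/else structure described above can be sketched as follows (a stand-in for revent's duration computation; here events are plain timestamps rather than revent event objects):

```python
def compute_duration(events):
    """Duration of an event stream, or None when the stream is empty.
    The 'else' clause only runs when no StopIteration was raised, so an
    empty recording no longer produces a bogus duration."""
    iterator = iter(events)
    try:
        first = last = next(iterator)  # raises StopIteration if empty
        for last in iterator:
            pass
    except StopIteration:
        return None
    else:
        # Only reached when the stream had at least one event.
        return last - first
```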
marcbonnici
5cd9bc756c Merge pull request #462 from setrofim/master
revent: fix playback timing
2017-08-10 09:44:00 +01:00
Sergei Trofimov
362e93c4cb revent: fix playback timing
This fixes an issue introduced by commit 5965956

	revent: fix off-by-one in replay

This moved the updating of the current event to the beginning of the
body of the loop, after the check of the while loop, to prevent attempting
to assign past the end of the array. The problem is that one of the conditions
in the check relies on the event being updated, so it needs to happen
before that part of the loop condition check.

This moves the event update back to the end of the loop body, but it moves
the array bounds check from the while loop condition into the body, just
before the update but after the counter is incremented. This satisfies
both requirements: the counter is bounds-checked before it is used, and
the event is updated to the next one to be played before the timing
check in the loop condition.
2017-08-09 17:50:54 +01:00
setrofim
3f7d44de28 Merge pull request #458 from sdpenguin/revent_integrated
ReventWorkload becomes a Linux (not Android) workload + fixes.
2017-08-09 14:22:35 +01:00
Waleed El-Geresy
f35c444ddb generic_linux: Remove forbidden chars from device_model
Fix for Chromebook Plus and possibly other devices - removes forbidden
characters, such as the null character, from the device_model, which was
causing problems for getters building file names that include device_model.
2017-08-09 14:18:38 +01:00
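A sanitiser like the one described above might look like this (hypothetical helper mirroring the fix; the exact forbidden-character set is an assumption):

```python
def sanitize_device_model(raw):
    """Strip characters that are invalid in file names (e.g. the NUL
    byte some devices embed) from a reported device model string."""
    forbidden = '\0/\\:*?"<>|'
    return ''.join(c for c in raw if c not in forbidden)
```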
Waleed El-Geresy
e35d0f2959 wa record: Update -h description for ReventWorkload
Added information pertaining to the new .teardown file and several
rewordings.
2017-08-09 14:18:38 +01:00
Waleed El-Geresy
6d9a03ad8f ReventWorkload: Move class to linux and add features
The ReventWorkload class has been moved to the linux directory and
two new features have been added: the option to run an idle workload
and the option to specify a .teardown revent file as well as a .setup file
which runs in the eponymous stage.
2017-08-09 14:18:38 +01:00
marcbonnici
c6cdd8c0e6 Merge pull request #459 from Sticklyman1936/patch-1
gem5: fixed a typo
2017-08-08 16:48:58 +01:00
Sascha Bischoff
8cf4c259c0 gem5: fixed a typo
Changed "self.longdelay" to "self.long_delay".
2017-08-08 16:34:52 +01:00
setrofim
77bb14331f Merge pull request #456 from marcbonnici/param_dict_fix
Types: Fix parameter dict get method
2017-08-08 08:28:04 +01:00
setrofim
dd8843d24a Merge pull request #455 from marcbonnici/documentation
Documentation fixes
2017-08-04 16:20:35 +01:00
Marc Bonnici
16b302679a workloads/appshare: Fixes missing new line in doc string 2017-08-04 16:12:22 +01:00
Marc Bonnici
f9db0a3798 workloads/rt-app: Fixes typos 2017-08-04 16:12:20 +01:00
Marc Bonnici
548eb99bdc workloads/rt-app: Removed absolute path in documentation
Removed the dynamically populated existing-configs path from the doc string, as
this is also used to generate the online documentation and therefore should
not be an absolute path.
2017-08-04 16:11:53 +01:00
marcbonnici
f4669e2b20 Merge pull request #454 from jimboatarm/scojac01-updated-workloads
AdobeReader: Updated to handle latest version
2017-08-04 13:08:29 +01:00
scott
0578f26bc7 AdobeReader: Updated to handle latest version
The workload has been updated to handle the latest version,
which no longer has the welcome view page. This has been
done in a way that still provides backwards compatibility
with previous versions.
2017-08-04 10:16:14 +01:00
setrofim
12230da959 Merge pull request #445 from jimboatarm/scojac01-updated-workloads
Reworking the Gmail and GooglePlayBooks workloads to work with the la…
2017-08-03 10:00:19 +01:00
scott
afa2e307cc Gmail: Updated to work with Android 7
Android 7 no longer has the broadcast functions, which
means we have to refresh our image directory another way.
We now use Google Photos to do this, but only if the
current method is unable to find the correct test images.
2017-08-03 09:56:47 +01:00
scott
d662498ea8 GooglePlayBooks: Updated workload to work with the latest version
GooglePlayBooks UI has changed significantly in the latest
version. This includes extra dialog boxes upon first use
which need to be handled.
2017-08-03 09:56:47 +01:00
marcbonnici
9e934c6733 Merge pull request #453 from setrofim/master
revent: fix off-by-one in replay
2017-08-02 17:57:32 +01:00
Sergei Trofimov
596595698d revent: fix off-by-one in replay
Update idx and ev to the next event after the while check (which makes
sure that the next index is within bounds) to avoid a potential
access past the end of the array.
2017-08-02 17:52:04 +01:00
marcbonnici
220e0962e1 Merge pull request #452 from setrofim/master
Device: raise RuntimeError in _check_ready
2017-08-02 13:43:06 +01:00
Sergei Trofimov
be3c91b131 Device: raise RuntimeError in _check_ready
Previously, an AttributeError was raised. This causes issues when
attempting to access some properties that rely on invoking commands on
the device. The error would get swallowed up in Python's attribute
resolution machinery, resulting in an error claiming a missing
attribute.

For example, for AndroidDevice, "abi" is a property that internally
calls getprop(), which executes on the device, and thus requires a
connection. If attempting to access device.abi before device.connect()
has been invoked, the following sequence takes place:

1. Caller tries to access attribute "abi" of device, which resolves to
   the device.abi property defined in the class.
2. device.abi calls device.getprop()
3. ...which calls device.execute()
4. ...which calls device._check_ready()
5. device._check_ready() raises AttributeError('device not ready.')
6. That gets propagated all the way up to 1., which gets interpreted
   as attribute not being found.
7. Since AndroidDevice defines a __getattr__(), that gets called next
8. __getattr__() looks for a loaded Device module that has an "abi"
   attribute. Since there isn't one, it raises AttributeError('abi').

The result is that the error reports a missing "abi" attribute, rather
than "device not ready", leading to some fun debugging.

Raising RuntimeError (which is more appropriate for the circumstances
anyway) does not trigger __getattr__, so the correct error message is
reported to the user. The text of the message has also been adjusted to
make it clearer what has likely gone wrong.
2017-08-02 13:39:55 +01:00
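The gotcha described above can be reproduced with a minimal sketch (illustrative, not the actual wlauto Device class; the 'arm64-v8a' value is a placeholder):

```python
class Device:
    """An AttributeError raised inside a property makes Python fall back
    to __getattr__, masking the real error; RuntimeError propagates."""
    connected = False

    def _check_ready(self):
        if not self.connected:
            # An AttributeError here would be swallowed by attribute
            # resolution and reported as "no attribute 'abi'" instead.
            raise RuntimeError('device not ready; have you called connect()?')

    @property
    def abi(self):
        self._check_ready()  # requires a live connection
        return 'arm64-v8a'

    def __getattr__(self, name):
        # Fallback lookup (e.g. into loaded Device modules).
        raise AttributeError(name)
```

With the property raising RuntimeError, accessing `abi` before connecting reports "device not ready" rather than a bogus missing-attribute error.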
marcbonnici
efad084b24 Merge pull request #446 from setrofim/master
utils/trace_cmd: expect ': ' in task name
2017-07-24 16:40:09 +01:00
Sergei Trofimov
6da0550b98 utils/trace_cmd: expect ': ' in task name
': ' is used as the delimiter for the different parts of the event
line; unfortunately, it is also a valid character sequence to appear in
a task name.

This change attempts to perform the event line split correctly by
ensuring that the cpu id is present in the first part.
2017-07-24 09:31:31 +01:00
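The "ensure the cpu id is present in the first part" idea can be sketched like this (an illustrative parser, not the exact wlauto/utils/trace_cmd code; the preamble format assumed is `task [cpu] timestamp`):

```python
import re

def split_event_line(line):
    """Split a trace event line into (preamble, body) on ': ', retrying
    when the task name itself contains ': ' -- a candidate preamble is
    only accepted if it ends with a cpu id (e.g. '[002]') and a
    timestamp."""
    parts = line.split(': ')
    for i in range(1, len(parts)):
        preamble = ': '.join(parts[:i])
        if re.search(r'\[\d+\]\s+[\d.]+$', preamble):
            return preamble, ': '.join(parts[i:])
    return None, line  # no valid preamble found
```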
marcbonnici
9500bcd914 Merge pull request #443 from setrofim/master
utils/trace_cmd: fix event parsing
2017-07-20 17:10:21 +01:00
Sergei Trofimov
7f8e7fed4b utils/trace_cmd: fix event parsing
Make event preamble parsing more robust in cases where there are multiple
instances of ' [' (e.g. as part of a thread name) by splitting exactly
once from the right. The right-most columns are the timestamp and the
cpu id, which are much more restricted (and therefore predictable) in
their formatting.
2017-07-20 15:52:31 +01:00
marcbonnici
87972e2605 Merge pull request #441 from setrofim/trace-cmd-optimization
Optimize parsing of trace-cmd output
2017-07-20 11:48:03 +01:00
Sergei Trofimov
33874a8f71 utils/trace_cmd: reduce regex use
Do not attempt to regex match each line for dropped events/preamble. Use
substring search to detect them first, and only use regex on relevant
lines.
2017-07-20 11:12:54 +01:00
Sergei Trofimov
93dba8b35d utils/trace_cmd: optimize event parsing
Use string searching and splitting instead of regex matching to parse
trace events. This significantly speeds up trace parsing.
2017-07-20 11:12:53 +01:00
Sergei Trofimov
6f629601be utils/trace_cmd: lazy evaluation of event fields
Parsing a trace event's body text into fields is potentially
expensive, so only do it if the fields are being accessed.
2017-07-20 11:12:53 +01:00
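The lazy-field pattern described above can be sketched as follows (illustrative; the real trace-cmd event class has more structure, and the `key=value` body format is a simplifying assumption):

```python
class TraceEvent:
    """Event whose body text is only parsed into fields on first access,
    since most events' fields are never inspected."""
    def __init__(self, name, body):
        self.name = name
        self.body = body
        self._fields = None  # parsed lazily

    @property
    def fields(self):
        if self._fields is None:  # parse at most once
            self._fields = dict(pair.split('=', 1)
                                for pair in self.body.split())
        return self._fields
```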
marcbonnici
565f352114 Merge pull request #439 from setrofim/master
AndroidDevice: adjust APK install timeout to 5min
2017-07-19 10:02:18 +01:00
Sergei Trofimov
920d80ad6e AndroidDevice: adjust APK install timeout to 5min
install_apk() was using the default timeout of 30 seconds; this can
cause problems for some applications on slower targets. Since
installation is expected to be a long-ish operation, there is no reason
for the timeout to be this short.

This commit adjusts the timeout to 5 minutes, which should be a more
reasonable value for most applications. For some apps with particularly
large APKs, this may not be enough, but in those cases the default
should already be overridden on a per-workload basis.
2017-07-19 10:01:00 +01:00
setrofim
9962b413d7 Merge pull request #438 from yln/master
Update docs
2017-07-13 08:08:13 +01:00
setrofim
fdff80a72b Merge pull request #437 from marcbonnici/fire_adobereader
Misc Fixes
2017-07-13 08:06:30 +01:00
Julian Lettner
6cc4626f50 Update docs: $WA_HOME -> $WA_USER_DIRECTORY 2017-07-12 16:02:02 -07:00
setrofim
665d76456b Merge pull request #436 from setrofim/master
utils/android: better error message when no exit code found
2017-07-12 13:36:09 +01:00
Sergei Trofimov
f4b91b6c9e utils/android: better error message when no exit code found
Provide the stdout/stderr text as part of the message in the
exception raised when no exit code can be identified from the adb
command output.
2017-07-12 13:34:20 +01:00
Marc Bonnici
4730ac2989 AndroidWorkload: Removal of explicit parameter setting
'exact_abi' was being explicitly set without proper checking; however,
this should already be taken care of by the super's automagic parameter
setting.
2017-07-12 09:48:03 +01:00
Marc Bonnici
6737fda24e AdobeReader: Removes use of device search button
On Amazon Fire tablets the search button triggers a custom-implemented
search dialog instead of performing a search within the app.
2017-07-12 09:48:03 +01:00
Marc Bonnici
f72bf95612 Types: Fix parameter dict get method
The overridden get method was incorrectly using __getitem__() when decoding the
data instead of get().
2017-07-10 11:30:54 +01:00
setrofim
2a07ed92fe Merge pull request #433 from marcbonnici/sysfs_fix
Instrumentation: Fixes SysfsExtractor with tmpfs
2017-07-10 08:24:00 +01:00
Marc Bonnici
a4c7340746 Instrumentation: Fixes SysfsExtractor with tmpfs
Previously, due to WA automagic, the initialization method would only be
called once globally, meaning that it was called for cpufreq extraction
but not for sysfiles. This commit splits SysfsExtractor into a base
FsExtractor and a SysfsExtractor subclass, and the initialization methods
are explicitly called by both children.
2017-07-07 17:43:43 +01:00
setrofim
6eb4e59129 Merge pull request #432 from bjackman/geekbench-corporate
geekbench: Fix running multiple 'times' on Geekbench 4
2017-07-06 11:29:27 +01:00
Brendan Jackman
d65561e3e9 geekbench: Fix running multiple 'times' on Geekbench 4
You only have to press the back button once to get back to the "run
benchmarks" screen.
2017-07-06 11:12:19 +01:00
setrofim
64d3ddc8e6 Merge pull request #428 from bjackman/geekbench-corporate
Geekbench: Add support for Corporate version
2017-07-05 11:45:32 +01:00
setrofim
579b940d75 Merge pull request #431 from setrofim/master
trace-cmd: documentation fixes.
2017-07-05 08:39:54 +01:00
Brendan Jackman
0656e61bc2 Geekbench: Add support for Corporate version
The Corporate version is a specialised version of Geekbench. It has
different package names and does not require dismissing a EULA. The
new corporate version is added as a distinct
benchmark ("geekbench-corporate" vs "geekbench").

Note that this changes the wait-for-results UiAutomator snippet from
looking for "Running Benchmarks..." to "*Running*". This is because
the version I've tested updates the text widget with the name of each
benchmark phase as it is run. However I don't know if this is a
feature of the Corporate version or simply of Geekbench 4.1.0.
2017-07-04 18:05:20 +01:00
Sergei Trofimov
13e787b5d0 trace-cmd: documentation fixes.
- Remove the reference to default events from the overall workload
  documentation. It had recently become outdated, and was also
  redundant, as the actual defaults will be in the parameter-specific
  documentation.
- Remove reference to Android-specific trace-cmd binary -- this was not
  true for a long time.
- Clarify that the on-host trace-cmd binary is now optional due to the
  report_on_target config.
- Fix a few misc typos.
2017-07-04 16:53:32 +01:00
setrofim
4d7ef408c9 Merge pull request #429 from bjackman/remove-cpufreq-interactive
trace-cmd: remove cpufreq_interactive from default events
2017-07-04 13:23:33 +01:00
Brendan Jackman
a533680e49 trace-cmd: remove cpufreq_interactive from default events
The interactive governor isn't standard any more (and was
Android-only anyway). Remove this default so you don't get errors for
kernels that don't support it.
2017-07-04 12:15:18 +01:00
Sergei Trofimov
4ddb6f44fc geekbench: correct v3 version number
Current APK discovery rules will match against the full APK version string,
so it must be "3.0.0" rather than just "3".
2017-07-03 17:50:44 +01:00
Sergei Trofimov
1238cb71f0 ssh: raise correct error on EOF
Raise DeviceError, rather than TargetError, on EOF inside ssh_get_shell.
This is a fix for an issue introduced by a devlib backport in 51db53d9.
2017-07-03 11:01:11 +01:00
marcbonnici
53203bfb50 Merge pull request #426 from setrofim/master
AndroidDevice: more robust getprop parsing
2017-06-28 17:23:56 +01:00
Sergei Trofimov
11859af894 AndroidDevice: more robust getprop parsing
Ran into a development target on which one of the values in the getprop
output contained a newline. Update the getprop parsing logic to handle
such cases by switching to a regex.
2017-06-28 17:05:47 +01:00
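A regex-based getprop parser along these lines can handle values that span lines (an illustrative sketch; wlauto's actual pattern may differ, and values containing ']' are not handled):

```python
import re

def parse_getprop(output):
    """Parse 'adb shell getprop' output ('[key]: [value]' lines) into a
    dict. The lookahead anchors each value on the start of the next
    property (or end of input), so values containing newlines are
    captured intact."""
    pattern = re.compile(r'\[([^\]]+)\]:\s+\[(.*?)\]\s*(?=\n\[|\Z)',
                         re.DOTALL)
    return dict(pattern.findall(output))
```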
marcbonnici
1f5f9cf478 Merge pull request #425 from setrofim/master
acmecape: echo command that is going to be used
2017-06-28 11:52:57 +01:00
Sergei Trofimov
52a07160b9 acmecape: echo command that is going to be used 2017-06-28 11:49:20 +01:00
marcbonnici
0d8204121d Merge pull request #420 from setrofim/master
ApkWorkload: only uninstall if package is already installed
2017-06-22 14:43:45 +01:00
Sergei Trofimov
a47d6c6521 ApkWorkload: only uninstall if package is already installed
Attempting to uninstall a non-existent package will result in an error.
So, when replace=True for install_apk(), only attempt to uninstall if
the package is already present on the target.
2017-06-22 14:31:25 +01:00
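The conditional-uninstall logic can be sketched as below (illustrative; the `device` object with `list_packages()`/`execute()` is a hypothetical interface, not wlauto's actual AndroidDevice API):

```python
def install_apk(device, apk_path, package, replace=False):
    """Install an APK, uninstalling a previous copy only when one is
    actually present -- 'adb uninstall' fails for missing packages."""
    if replace and package in device.list_packages():
        device.execute('uninstall {}'.format(package))
    device.execute('install {}'.format(apk_path))
```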
setrofim
d391b28d2e Merge pull request #419 from AnthonyARM/master
Fixed missing comma in list
2017-06-21 16:43:35 +01:00
Anthony Barbier
e42099851b Fixed missing comma in list 2017-06-21 16:41:12 +01:00
marcbonnici
0c0ccb10d9 Merge pull request #418 from setrofim/master
cpustates: workaround for disabled cpuidle
2017-06-21 15:31:47 +01:00
Sergei Trofimov
486494f1f2 cpustates: workaround for disabled cpuidle
Add "no_idle" parameter to work around the problem where cores are
reported to be in "unknown" state if there are no cpuidle transitions in
the trace. If this parameter is set, cpustates will assume cores are
active without waiting for an idle transition.
2017-06-21 15:30:43 +01:00
marcbonnici
919a44baec Merge pull request #417 from setrofim/master
Revert "cpustates: fix idle state assumption on freq transition"
2017-06-21 14:22:58 +01:00
Sergei Trofimov
4f09c3118a Revert "cpustates: fix idle state assumption on freq transition"
This reverts commit 9bd745b347.

When a frequency is changed on a core, cpufreq reports a frequency
transition for all cores in the frequency domain. This means it is not
safe to assume a core is not idling just because there is a frequency
transition reported for it (that just means that at least one
core on that frequency domain is not idling). Moreover, the
transitions are always reported in the same order, so there is no way to
infer which core triggered them.
2017-06-21 14:13:24 +01:00
setrofim
b6b1b259dc Merge pull request #410 from setrofim/acmecape
instrumentation: add support for BayLibre ACME cape
2017-06-21 09:30:47 +01:00
marcbonnici
f1b63b239c Merge pull request #415 from setrofim/master
cpustates: fix idle state assumption on freq transition
2017-06-20 16:03:36 +01:00
setrofim
797cb3c9ea Merge pull request #414 from marcbonnici/uiauto_install
AndroidWorkload: Uninstall uiauto apks before reinstalling
2017-06-20 15:58:42 +01:00
Sergei Trofimov
9bd745b347 cpustates: fix idle state assumption on freq transition
If a frequency transition is observed before an idle state can be
established, assume idle state is -1 (i.e. the core is running). This
will ensure correct stats for platforms with disabled cpuidle.
2017-06-20 15:48:51 +01:00
Marc Bonnici
9f3ecd3715 AndroidWorkload: Uninstall uiauto apks before reinstalling
Because uiauto apk files built on different machines will
be signed with different keys, adb will fail to overwrite a previous
version even when set to replace. This commit will now uninstall the
previous uiauto apk file, if present, before attempting to install the
new version.
2017-06-20 15:19:19 +01:00
setrofim
1a177dd52f Merge pull request #413 from jimboatarm/skype-timeoutincrease
Skype: Change timeout value for endcall
2017-06-20 15:05:21 +01:00
Michael McGeagh
4010618631 Skype: Change timeout value for endcall
New override function added which uses uiAutoTimeout if no timeout value
has been specified.
Checking against the endcall button now uses this method.

Tested on a Huawei Mate 8 Device with just core 0 enabled (slowing it
down enough that the previous timeout of 0.5s caused the workload to
fail)
2017-06-20 15:01:05 +01:00
setrofim
1168c1ada8 Merge pull request #412 from marcbonnici/googleslides
GoogleSlides and Books Fixes
2017-06-20 10:35:09 +01:00
Marc Bonnici
8b7db92ab8 Googleslides: Fixes images not being found on some chromebooks.
On some chromebooks the "images" tab does not appear when attempting to
insert an image, instead the desired image needs to be found via local
storage.
2017-06-20 10:04:13 +01:00
Marc Bonnici
088102a81d GooglePlayBooks: Fixes
Updated to add in workarounds for cases where the workload would fail to
find elements correctly.
2017-06-19 16:33:49 +01:00
Sergei Trofimov
87ce1fd0ae instrumentation: add support for BayLibre ACME cape
Add an instrument for collecting power and energy from BayLibre
ACME cape probe.
2017-06-16 11:32:14 +01:00
Marc Bonnici
4fac39ada9 GoogleSlides: Fixed bug where settings could not be opened
For some reason, although the settings element could be found, clicking
on it did not have the desired effect; to solve this, the element is now
found as a UiObject2 instead.
2017-06-15 17:22:57 +01:00
setrofim
81fa5fdf81 Merge pull request #411 from bjackman/gfxinfo-ignore-extra-fields
fps: Ignore additional fields in gfxinfo frame data
2017-06-15 15:47:03 +01:00
Brendan Jackman
2702731532 fps: Ignore additional fields in gfxinfo frame data
Some versions of Android include additional fields in gfxinfo which we
don't care about. The existing fields have the same order, so simply
ignore the extra ones.
2017-06-15 15:20:12 +01:00
setrofim
b95ea60213 Merge pull request #409 from marcbonnici/uiauto_timeout
Workload: Increased default uiautomator workload timeout
2017-06-15 11:27:32 +01:00
Marc Bonnici
8acebe12bd Workload: Increased default uiautomator workload timeout
After the upgrade to uiauto2 some workloads seem to take slightly longer
than previously.
2017-06-15 11:23:38 +01:00
setrofim
2172db5833 Merge pull request #406 from AnthonyARM/master
Replaced active_cpus by online_cpus in common/linux/device.py
2017-06-15 09:58:47 +01:00
Anthony Barbier
ff67ed1697 Replaced active_cpus by online_cpus in common/linux/device.py 2017-06-14 14:25:17 +01:00
setrofim
ae1a5019dc Merge pull request #399 from setrofim/master
ssh: Back-port ssh_get_shell from devlib
2017-06-14 13:13:23 +01:00
Sergei Trofimov
51db53d979 ssh: Back-port ssh_get_shell from devlib
This back-ports the ssh_get_shell implementation from devlib. It includes
the following changes:

- original_prompt for the Telnet version of the connection can now be passed
  as an argument.
- Multiple attempts to connect with a timeout.
- Some additional setup of the tty, including setting its size.
2017-06-13 09:14:26 +01:00
setrofim
42a4831092 Merge pull request #405 from marcbonnici/online_cores
Online cores
2017-06-13 08:55:47 +01:00
Marc Bonnici
8e94b79ed8 LinuxDevice: Now raises an error if trying to hotplug all cores.
Previously this would try to automatically enable an additional core to ensure
that all cores were not hot-plugged; however, it would do this unnecessarily if
another core that wasn't the first of its type was already online.
2017-06-12 17:56:06 +01:00
Marc Bonnici
37d99e4a5d LinuxDevice: Remove duplicate method and rename for consistency
These functions were duplicates of each other, so one has been removed, the
remaining methods renamed for consistency with WA terminology, and the relevant
calls updated.
2017-06-12 17:27:21 +01:00
marcbonnici
0683427acd Merge pull request #404 from marcbonnici/signal_fix
EntryPoint: Fixes local package being used over system level
2017-06-09 12:59:28 +01:00
Marc Bonnici
64574b1d4c EntryPoint: Fixes local package being used over system level
Due to a local file named 'signal.py', this was being imported instead of the system-level 'signal' package. This commit ensures that the system-level package is used instead.
2017-06-09 11:10:44 +01:00
setrofim
3653e1b507 Merge pull request #403 from marcbonnici/uiauto2
Uiauto2 - Fixes
2017-06-08 17:52:38 +01:00
Marc Bonnici
e62262dbb3 Uiauto2 Workloads: Fixes applaunch bug in android N
In order to work around a bug in applaunch running on android N, all of the
workloads have been updated to the latest gradle build system, the timeout in
the base class has been changed from a TimeUnit to a regular long, and a
duplicate declared parameter bundle has been removed.
2017-06-08 16:55:47 +01:00
Marc Bonnici
18e47c89c5 Skype: Fixes launch command for instrumented testing.
Without specifying user `-3` for the launch command, the application is by
default launched with permissions that are only grantable to system apps.
2017-06-08 16:55:41 +01:00
marcbonnici
389a1ecf76 Merge pull request #402 from AnthonyARM/master
Convert TERM signals into INT to allow WA to exit cleanly
2017-06-08 15:16:48 +01:00
Anthony Barbier
260e9563b3 Convert TERM signals into INT to allow WA to exit cleanly 2017-06-08 14:09:15 +01:00
setrofim
48d0e1196c Merge pull request #401 from marcbonnici/uiauto2
Uiauto2: Fixes leftover references to `.uiautoapk`
2017-06-08 11:41:51 +01:00
Marc Bonnici
b64c615e8e Uiauto2: Fixes leftover references to .uiautoapk 2017-06-08 11:23:45 +01:00
setrofim
a4e27128e5 Merge pull request #397 from marcbonnici/uiauto2
Uiautomator2: Fix for newer adb versions.
2017-06-08 10:57:49 +01:00
Marc Bonnici
242cf7e33a Revert "Android Device: Add force flag to install_apk"
This reverts commit 5d8305cfdc.
2017-06-05 17:00:40 +01:00
Marc Bonnici
cc641e8dac UiautoWorkloads: Updated to use apk files for uiautomation tests. 2017-06-05 17:00:40 +01:00
Marc Bonnici
bb17590689 ResourceGetter: Updated to check if an apk is a uiautomation test apk.
The getter now checks the apk package name to distinguish between a regular
android apk and a uiautomator apk.
2017-06-05 17:00:40 +01:00
Marc Bonnici
4ce38fcd55 Revert "ResourceGetters: Added support for finding uiautoapk files."
This reverts commit b948d28c62.
2017-06-05 16:55:47 +01:00
Marc Bonnici
bf694ffdf1 Revert "AndroidResource: Add a UiautoApk resource type."
This reverts commit bc6af25366.
2017-06-05 16:55:47 +01:00
Marc Bonnici
b36e0061e1 AndroidWorkload: Updated to use an 'APK' file instead of a 'uiautoapk' 2017-06-05 16:55:47 +01:00
Marc Bonnici
526ad15c01 AndroidResource: Updated APKFile to add support for uiautomator tests.
In newer versions of adb, files cannot be installed unless they use the
`.apk` extension; therefore we need to be able to distinguish between
regular apks and instrumented test files.
2017-06-05 16:55:38 +01:00
Sergei Trofimov
0694ab6792 LinuxDevice: rename get_number_of_online_cpus arg
Renamed "c" to "core", as that gets passed as a keyword argument inside
get_runtime_parameters().
2017-06-01 14:29:19 +01:00
setrofim
2fd7614d97 Merge pull request #392 from marcbonnici/uiauto2
Upgrading to UiAutomator2
2017-05-31 11:22:45 +01:00
marcbonnici
c75c102cd2 Merge pull request #393 from setrofim/master
utils/trace_cmd: add parsers for more sched events
2017-05-31 10:48:14 +01:00
Sergei Trofimov
c86ce64408 utils/trace_cmd: add parsers for more sched events
Added parsers for sched_wakeup(_new) and sched_stat_* events that have
non-standard text.
2017-05-31 10:42:55 +01:00
Marc Bonnici
d6909c5e6a PowerUtils: Pylint Fix 2017-05-31 10:36:38 +01:00
Marc Bonnici
55e140f20a Caffeinemark: Updated to Uiautomator2 2017-05-31 10:36:38 +01:00
Marc Bonnici
0c7a7e4dca BenchmarkPi: Updated to uiautomator2 2017-05-31 10:36:38 +01:00
Marc Bonnici
a7ed00d9ed Googleslides: Add workaround for opening navigation drawer
On Android N running under the instrumentation uiautomator appears to have
trouble retrieving the root node for the home screen of google slides. Therefore
we open the navigation drawer via a swipe which allows the node to be found again.
2017-05-31 10:36:38 +01:00
Marc Bonnici
89aa3e1208 GoogleSlides: Updated to uiautomator2
The latest version of uiautomator2 seems to have an issue with google slides not
being able to interact with any elements on the slide; therefore we are using a
slightly older version which doesn't have this issue.
2017-05-31 10:36:38 +01:00
Marc Bonnici
f33fd97705 Videostreaming: Updated to uiautomator2 2017-05-31 10:36:38 +01:00
Marc Bonnici
bf598d1873 Camerarecord: Updated to only use root if device is rooted.
Previously the workload would always try and use root to pull framestats file
causing it to fail on unrooted devices.
2017-05-31 10:36:38 +01:00
Marc Bonnici
c094a47f12 Camerarecord: Updated to uiautomator2 2017-05-31 10:36:38 +01:00
Marc Bonnici
6dab2b48ab Cameracapture: Updated to uiautomator2 2017-05-31 10:36:38 +01:00
Marc Bonnici
1809def83f Peacekeeper: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
1158c62c55 Appshare: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
d13defe242 Glbenchmark: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
a9c5ab9bc8 Facebook: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
27af97b795 Quadrant: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
a0a189044e Applaunch: Updated to Uiautomator2
Certain configurations of this workload require root, and therefore on some
android devices this will prompt to grant the app su permissions by default. To
ensure this does not interfere with the run, either make sure that 're-request
permission after reinstall/upgrade' is not selected, or grant access by default.
2017-05-31 10:36:37 +01:00
Marc Bonnici
eca6c10a75 Andebench: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
01efbe1807 Googlephotos: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
81f7f92b84 real_linpack: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
95c98fd458 Antutu: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
428abfb6a6 Geekbench: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
a8bf8458ec Youtube: Updated to Uiautomator2 2017-05-31 10:36:37 +01:00
Marc Bonnici
862eeadb39 Vellamo: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
ec66e1d166 Sqlitebm: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
3e4ffa8cd5 Smartbench: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
dc14e6ed11 Linpack: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
47ccae1a6b Googleplaybooks: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
a2eef179ef Gmail: Updated to work on new app version. 2017-05-31 10:36:36 +01:00
Marc Bonnici
7eb36b959b Gmail: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
b5563113b1 Cfbench: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
5720d3f214 Androbench: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
5b82b90939 Skype: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
bfcb829ab0 AdobeReader: Updated to Uiautomator2 2017-05-31 10:36:36 +01:00
Marc Bonnici
70f646f87d AndroidWorkload: Added unchangeable android permissions
Some android permissions are categorized as 'unchangeable', so we don't need to
try to set them.
2017-05-31 10:36:35 +01:00
Marc Bonnici
3f03dec7af CreateCommand: Updated to support new build system
Using the `android` command to create a new project has been deprecated in favour
of using Android Studio. To avoid this constraint, a template project has been
added; this is simply duplicated, with the relevant files populated, when
creating a new project.
2017-05-31 10:36:35 +01:00
Marc Bonnici
076772da4d Uiautomator: Upgraded uiautomator base classes to Uiauto2
Uiautomator1 has been deprecated, therefore the uiautomation base classes have
been upgraded to use uiautomator2 and the new gradle build system.
Altered to retrieve `package_name` rather than `package` as per the previous commit.
2017-05-30 17:26:14 +01:00
Marc Bonnici
138c95745e AndroidWorkload: Renamed package parameter to package_name
Using the new instrumentation-based testing method, `package` is a reserved
parameter and is not passed to the workload correctly.
2017-05-30 12:06:50 +01:00
Marc Bonnici
f517e6c4da .gitignore: Updated for new uiauto2 gradle based builds 2017-05-30 12:06:50 +01:00
Marc Bonnici
8125c3e0a2 Android Workload: Updated to install and use Uiautomator 2 tests.
The new method of using uiautomation 2 uses instrumented apk files rather
than JAR files. This commit updates the base workload to
install/uninstall the new uiautomation APK files and to invoke the relevant
instrumentation.
2017-05-30 12:06:50 +01:00
Marc Bonnici
81bdc7251b AndroidDevice: Pylint Fix 2017-05-30 12:06:36 +01:00
Marc Bonnici
b948d28c62 ResourceGetters: Added support for finding uiautoapk files.
Also checks for a `.` to signify the start of a file extension, to prevent
extensions with the same ending from being mixed up.
2017-05-23 11:16:18 +01:00
Marc Bonnici
bc6af25366 AndroidResource: Add a UiautoApk resource type.
When moving to Uiautomation 2, tests are now compiled into apk files rather than
jar files. To avoid conflicts with regular workload apks, a new resource type is
added to retrieve the test files, which will be renamed to have the extension
.uiautoapk
2017-05-23 11:15:55 +01:00
Marc Bonnici
5d8305cfdc Android Device: Add force flag to install_apk
A force flag has been added to the `install_apk` method which
bypasses the check that the specified file is an apk. This is to support
the new UiAutomation apks, which have been given the extension .uiautoapk.
2017-05-17 13:47:39 +01:00
setrofim
e038fb924a Merge pull request #386 from marcbonnici/broadcast
Update media broadcast to work with android N
2017-05-12 08:01:04 +01:00
Marc Bonnici
51c92cb2f5 Workloads: Updated to use new media refresh method
Updated the base android workload and google photos workload to pass a list of
updated files rather than their directory.
2017-05-11 18:17:38 +01:00
Marc Bonnici
e98b653b3e AndroidDevice: Add refresh_device_files method
Adds a method to determine the appropriate way of triggering a media refresh
for a given list of files based on the device's android version and root status.
If a device is running android marshmallow or below, or has root, trigger a
refresh of the files' containing folder; otherwise trigger a refresh of each
individual file.
2017-05-11 18:17:38 +01:00
Marc Bonnici
b1da0fe958 AndroidDevice: Add a common_base_path utility function
Adds a utility function to determine the lowest common base path of a passed list
of files.
2017-05-11 18:17:29 +01:00
Marc Bonnici
b10b5970c3 AndroidDevice: Add a broadcast_media_scan_file method
In android N it is no longer allowed to trigger a media refresh of a directory
without root; therefore this method has been added to trigger a refresh of an
individual file.
2017-05-11 09:44:49 +01:00
Marc Bonnici
e866cfed12 AndroidDevice: Add as_root flag to broadcast_media_mounted
Allows the `MEDIA_MOUNTED` broadcast to be performed as root as
this now requires elevated permission in android N.
2017-05-11 09:11:24 +01:00
Marc Bonnici
f5b40e3d64 AndroidUtils: Updated Android_Version_Map
Added Marshmallow and Nougat SDK versions codes to the version map.
2017-05-11 09:11:23 +01:00
Sergei Trofimov
cc3a815cb7 uxperf workloads: only broadcast media mounted if needed
Previously AndroidPerfWorkload broadcast media mounted unconditionally
inside push_assets and delete_assets. This change makes it so the
broadcast only happens if something was actually pushed or needs to be
deleted.
2017-05-10 15:42:10 +01:00
setrofim
9ef8bddc4b Merge pull request #383 from jimboatarm/gbfix
Fix geekbench to use new ParameterDict types
2017-04-26 08:15:06 +01:00
James Hartley
a3011accf8 Fix geekbench to use new ParameterDict types
Tested using Huawei P10
2017-04-25 18:12:48 +01:00
Sergei Trofimov
317da0a1c4 trace-cmd: updating trace-cmd binary to a more recent version
The one we currently have is over two years old. This is, in particular,
for the following fixes:

https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git/commit/?id=e5e453917f247faa7d37728d59001759e7005258
https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git/commit/?id=709b47c3f7d4e41e89ee58a85cd106456f779445
2017-04-20 16:25:57 +01:00
Sergei Trofimov
7c79c24865 rt-app: fix classifier update
rt-app inserts its own classifiers into the results. Previously, it was
replacing the existing classifiers if there were any. Now, classifiers
are updated, rather than replaced.
2017-04-07 16:33:24 +01:00
Sergei Trofimov
6c5e52ad56 freqsweep: fixing additional classifiers
Properly handle the case where additional classifiers are not specified.
2017-04-07 10:05:29 +01:00
setrofim
1019b30174 Merge pull request #379 from marcbonnici/binary_sysfile
Device: Updated to set and retrieve binary data from sysfiles
2017-04-06 16:11:13 +01:00
Marc Bonnici
50236b0661 Device: Updated to set and retrieve binary data from sysfiles
`set_sysfile_values` now accepts a `^` symbol prefixed to the
file path to indicate that the value should be written as binary data.
To accommodate this, an extra `binary` flag has been added to the `set_sysfile_value` and `get_sysfile_value` methods to write and retrieve binary data respectively.
2017-04-06 15:59:01 +01:00
setrofim
6cfcb89827 Merge pull request #378 from setrofim/master
freqsweep: added support for additional classifiers
2017-04-06 15:01:07 +01:00
Sergei Trofimov
cf0a4d3ff3 freqsweep: added support for additional classifiers 2017-04-05 14:48:11 +01:00
setrofim
06764dff00 Merge pull request #375 from setrofim/master
cpustates state tracking fix and a new output option
2017-03-29 10:02:48 +01:00
Sergei Trofimov
8f2ff4d2f8 cpustates: fix cluster shutdown tracking
This commit resolves a couple of issues that were causing improper
cluster shutdown tracking in the cpustates script.

- requested_states in the PowerStateProcessor was being initialized to -1
  instead of None if no state was requested; the later checks are
  against None.
- requested_states was not being set if the request could be satisfied
  immediately, and was being cleared upon being satisfied at a later
  time. This caused a problem when a core leaves idle and then tries to
  re-enter cluster shutdown. Here is an example sequence of events that
  illustrates the issue (assume core0 and core1 are the only two cores
  on a cluster):

  	1. both cores are running
	2. core0 requests cluster-sleep. As core1 is running, it
	   is put into core-sleep instead, and its request is saved.
	3. core1 requests cluster-sleep. As core0 has a pending request
	   for cluster-sleep, both cores are put into cluster-sleep and
	   the pending request is cleared.
	4. core1 becomes active. core0 is elevated to core-sleep.
	5. core1 tries to enter cluster-sleep. Since core0 is currently
	   in core-sleep (and its prior request has already been
	   cleared), core1 is put into core-sleep instead, and its
	   request is saved. This is an ERROR as both cores have in fact
	   requested cluster-sleep at this stage.

  If, in step 4., core0 becomes active instead, exactly the same
  situation will result, as core1 was put into cluster-sleep immediately
  and its request was never saved.
  Idle state requests are now always tracked on entry and are only
  cleared when the core leaves idle.
- Also removed a pointless identity assignment.
2017-03-24 08:55:47 +00:00
Sergei Trofimov
9a9de99c5f cpustates: added transition timeline option
Added an option to generate a timeline of state transitions, in
contrast to the existing timeline of computed states. This is primarily
intended for debugging of the latter, but may also be useful for certain
kinds of analyses.
2017-03-23 14:01:14 +00:00
setrofim
f52aa0f7e9 Merge pull request #372 from jimboatarm/appsharefix
Appshare .jar rebuild
2017-03-22 12:25:36 +00:00
scott
935edcdd01 Appshare .jar rebuild
Rebuilding the .jar file as it is not up to date with the current source.
2017-03-22 12:23:35 +00:00
Sergei Trofimov
869c0bc6bd meizumx6: fix: changing is_rooted to be a property 2017-03-16 13:40:39 +00:00
setrofim
57ab97df89 Merge pull request #367 from jimboatarm/googleplaybooks_intfix
Googleplaybooks: Updating workload to work with change in parameter types
2017-03-09 17:11:18 +00:00
scott
773bf26ff2 Googleplaybooks: Updating workload to work with change in parameter types

Due to a change in the parameter kind we have to update the googleplaybooks workload to work with integers for page numbers as opposed to strings.

This was missed in the previous update: 8f12066
2017-03-09 16:50:26 +00:00
setrofim
2822949517 Merge pull request #363 from marcbonnici/facebook_fix
Facebook: Updated workload to work with server changes.
2017-03-06 08:13:10 +00:00
Marc Bonnici
4dc0aaa724 Facebook: Updated workload to work with server changes.
Fixed being unable to find the 'update status' box.
Now selects a 'do you know' notification rather than the latest,
as the latest is now a link on how to upgrade the app, which opens in an
external browser.
2017-03-03 16:38:23 +00:00
setrofim
35df2fff30 Merge pull request #362 from Sticklyman1936/device_sleep
Extend device with sleep functionality
2017-03-02 15:02:04 +00:00
Sascha Bischoff
4abbe7602a Extend device with sleep functionality
This changeset adds the ability to sleep on the device via a
device.sleep() method. This invokes sleep on the target device. This
is useful for situations where the passage of time on the target
device does not match that of the host, e.g., gem5.

This changeset also updates a number of workloads to use this new
sleep method.
2017-03-02 14:22:05 +00:00
marcbonnici
55e40ded1c Merge pull request #361 from Sticklyman1936/audio_fix
Remove stale browser-specific commands from audio workload
2017-03-02 10:46:18 +00:00
Sascha Bischoff
441ecfa98c Remove stale browser-specific commands from audio workload
In this changeset we remove some left over browser specific commands
which are no longer required as part of the audio workload.
2017-03-02 10:40:11 +00:00
Sergei Trofimov
7130b3a4ab Removing browser launch from audio and video
For historical reasons audio and video workloads were launching the
browser as part of their setup. This is no longer necessary. Not only
that, since on recent devices the default Android browser is missing,
this causes problems with the workloads. This commit removes the browser
launch.
2017-03-02 10:29:00 +00:00
setrofim
510abf4666 Merge pull request #359 from jimboatarm/fps-override
Allow user to override the method of collecting FPS data
2017-02-24 16:33:31 +00:00
Michael McGeagh
70b265dd34 fps: Added parameter to override FPS collection method
In Android M and above, there is a new method of collecting fps
statistics, using gfxinfo rather than SurfaceFlinger; the
SurfaceFlinger method is still available, however.
This parameter lets the user decide whether to always use SurfaceFlinger,
or to allow the instrument to pick the best method based on
the Android version it detects.
2017-02-24 16:22:16 +00:00
setrofim
ddbd94b2a8 Merge pull request #360 from jimboatarm/nanologger
Changed ActionLogger class to use nano timestamps.
2017-02-24 16:20:04 +00:00
Michael McGeagh
c4025cad66 Changed ActionLogger class to use nano timestamps. This is because the fps instrument collects data in nanoseconds as well, so it is possible to match the two. 2017-02-24 16:11:35 +00:00
setrofim
38b368eec2 Merge pull request #358 from marcbonnici/uiautomator
UXPerfUiAutomation and AppShare Workload Updated
2017-02-23 08:44:11 +00:00
Marc Bonnici
2e97bcce70 Appshare: Fixed typo 2017-02-22 16:25:59 +00:00
Marc Bonnici
9b13f65895 AppShare: Added check to see if additional back press is required.
On some devices, after signing into skype, the back button press only
hides the keyboard and a second press is required to return to the
previous app.
2017-02-22 16:25:55 +00:00
Marc Bonnici
78c1a80a51 Appshare: Updated to work with new sharing selector name.
On the nexus 10 the app share selector has a new name; this commit
checks for either and uses the appropriate one.
2017-02-22 16:22:55 +00:00
Marc Bonnici
7f85da6633 AppShare: Updated to use new method of uiautomator parameter passing. 2017-02-22 16:22:55 +00:00
Marc Bonnici
f76d9f9a1f Appshare: Updated to use new methods in UxPerf.
The appshare workload has been updated to use the new getter/setter
methods added to UxPerf rather than accessing protected variables.
2017-02-22 16:21:37 +00:00
Marc Bonnici
6ffed382cf UxPerfUiAutomation: Added setWorkloadParameters and getPackageID.
To work around appshare requiring access to protected variables of a workload,
a `setWorkloadParameters` method has been added to manually supply a parameter
bundle, and a `getPackageID` method to retrieve a workload's package ID.
2017-02-22 16:21:10 +00:00
Sergei Trofimov
6e7087ee88 resource: remote resource resolution fixes
- "remote" getter priority reduced to be below "environment" so that
  resources placed into "~/.workload_automation/dependencies" by the user
  take priority over those pulled from remote locations.
- "filer" getter now uses a cache location for the resource rather than
  downloading to the local resource location.
2017-02-21 16:35:16 +00:00
setrofim
96c6f010f8 Merge pull request #356 from marcbonnici/uiautomator
UIAutomator
2017-02-20 17:47:03 +00:00
Marc Bonnici
8f1206678a UiAutomatorWorkloads: Updated to use the new parameter passing functionality.
Each workload has been modified to remove the old manual parameter conversion
and instead retrieves the desired type from the parameter bundle directly.
2017-02-20 16:44:24 +00:00
Marc Bonnici
d10e51e30b UiAutomator: Updated to decode provided parameters from ParameterDict
To prevent parameters having to be converted individually for each workload,
the getParams() function has been overridden to perform the required type and
url decoding on the passed parameter bundle before passing the correctly typed
bundle to the workloads.
2017-02-20 16:44:24 +00:00
Marc Bonnici
bd0a6ac3c8 UiAutomatorWorkload: Changed to use a ParameterDict
Due to the limitations of UiAutomator, parameters are not allowed to contain
certain characters, including spaces and newlines, when passed on the command line.

The python UiAutomatorWorkload base class has been updated to use a
ParameterDict when storing workload parameters.
2017-02-20 16:44:24 +00:00
Marc Bonnici
9ba30c27df WA Types: Added 'ParameterDict' type.
A new 'ParameterDict' has been added that automatically encodes and
decodes values when they are stored in a dictionary. The dictionary uses 2 characters
prefixed to each value to store the original type information, e.g. 'fl' -> list of
floats, before being passed through a url encoder. The reverse process happens on retrieval.
To access the encoded values, `iterEncodedItems` and `getEncodedValue` methods have been added.

The appropriate unit tests have also been added.
2017-02-20 16:44:24 +00:00
Marc Bonnici
8f40e6aa40 PEP8 / Pylint Fixes 2017-02-20 16:44:24 +00:00
Marc Bonnici
8c9ae6db53 AndroidWorkload: Split up setup_workload_apk method.
Split up `setup_workload_apk` method into smaller methods to improve readability.
2017-02-20 16:44:24 +00:00
marcbonnici
6ac0c20619 Merge pull request #355 from jimboatarm/scojac01-appshareupdate
Removed surplus back command that was causing AppShare workload to fail
2017-02-20 15:03:20 +00:00
Scott Jackson
cfa1827445 Removing surplus spaces that were added with previous commit 2017-02-20 14:43:01 +00:00
Scott Jackson
821cbaff4e Removed surplus back command that was causing AppShare workload to fail 2017-02-20 12:00:49 +00:00
setrofim
c7db9a4543 Merge pull request #354 from jimboatarm/scojac01-slidesupdate
Updated googleslides workload to work with latest supported APK
2017-02-17 15:33:57 +00:00
Scott Jackson
6f67f00834 Added functions to work with the latest supported APK: handling the pop-up message prompting the user to update, and handling the issue where the save/insert new slide button is not visible when running in a maximised window on certain devices. 2017-02-17 15:16:41 +00:00
Sergei Trofimov
40db6a9933 meizumx6: always unrooted
This implements a workaround for the Meizu MX6. "su" in the rooted
version is broken from WA's perspective so all devices should be
considered unrooted.
2017-02-16 13:25:58 +00:00
setrofim
543228f7c3 Merge pull request #352 from marcbonnici/staleclasses
UiAutomator: Removed stale class files.
2017-02-10 08:15:07 +00:00
setrofim
58b97305d5 Merge pull request #350 from jimboatarm/skype-interface
Skype: implement ApplaunchInterface
2017-02-07 13:46:27 +00:00
setrofim
0246302814 Merge pull request #348 from jimboatarm/googleplaybooks-interface
Googleplaybooks: implement ApplaunchInterface
2017-02-07 13:44:15 +00:00
setrofim
0aee99a79c Merge pull request #347 from jimboatarm/youtube-interface
Youtube: implement ApplaunchInterface
2017-02-07 13:43:29 +00:00
setrofim
5012ab5f1b Merge pull request #346 from jimboatarm/googlephotos-interface
Googlephotos: implement ApplaunchInterface
2017-02-07 13:42:42 +00:00
setrofim
b7b2406a25 Merge pull request #349 from jimboatarm/fps_issue
Applaunch: Update package to measured workload pkg.
2017-02-07 13:32:09 +00:00
jummp01
cd05c36250 Skype: implement ApplaunchInterface
Adapts skype workload to support applaunch workload by implementing
the required methods of the interface.

Tested on Huawei Mate8 and Samsung S7.
2017-02-07 10:45:29 +00:00
jummp01
b169b45e57 Applaunch: Update package to measured workload pkg.
Application launch workload runs the package of the workload which is being
instrumented. Fps instrumentation requires the package name and hence the
package name is changed to the workload package name during initialization.

Tested on Huawei Mate8 with fps instrumentation ON.
2017-02-06 17:01:09 +00:00
setrofim
40cc830de1 Merge pull request #344 from marcbonnici/adobereaderfix
AdobeReader: Updated workload to work with latest version.
2017-02-06 08:10:27 +00:00
jummp01
1ebe56ca76 Googlephotos: implement ApplaunchInterface
Adapt googlephotos to support applaunch workload by implementing the methods
required by the interface.

Tested on Huawei Mate8 and Samsung S7.
2017-02-05 12:28:37 +00:00
jummp01
79f58e74b9 Googleplaybooks: implement ApplaunchInterface
Adapt googleplaybooks to support applaunch workload by implementing the methods
required by the interface.

Tested on Huawei Mate8 and Samsung S7.
2017-02-05 12:18:31 +00:00
jummp01
7fe699eb41 Youtube: implement ApplaunchInterface
Adapt youtube to support applaunch workload by implementing the required methods of the interface.

Tested on Huawei Mate8 and Samsung S7.
2017-02-05 11:52:20 +00:00
Marc Bonnici
2bd77f0833 UiAutomator: Removed unnecessary class files.
Removed stale classes from the repository.
2017-02-03 17:14:33 +00:00
Marc Bonnici
7b7eb05e7f AdobeReader: Updated workload to work with latest version.
The 'My Documents' element that the workload used to check for completed
setup has been removed in the latest version, now checks for the
text itself which is also present in previous versions.
2017-02-03 15:42:53 +00:00
marcbonnici
ae0c9fa60b Merge pull request #342 from jimboatarm/applaunch-interface-v2
Applaunch Workload
2017-02-03 15:25:16 +00:00
jummp01
75b56e462c Adobereader: implement ApplaunchInterface
Adapts adobereader to support uxperfapplaunch workload by implementing the required methods
of the interface.

Tested on Huawei Mate8 and Samsung S7.
2017-02-03 15:15:05 +00:00
jummp01
d8438c5ae8 Gmail: implement ApplaunchInterface
Adapts gmail workload to support applaunch workload by implementing the
required methods of the interface.

Tested on Huawei Mate8 and Samsung S7.
2017-02-03 15:14:56 +00:00
jummp01
ca7a1abe78 Add new applaunch workload.
The workload supports launch time measurement of other uxperf workloads that
implement the ApplicationlaunchInterface. It takes a uxperf workload as a parameter
and helps to instrument the application launch time in two modes.

a)launch_from_background
b)launch_from_long_idle

The workload executes in two major phases.
1- Setup phase: clears initial dialogues on the first launch of an application.
2- Run phase: Runs multiple iterations of the application launch and measures
the time taken to launch. The number of iterations can be specified via the applaunch_iterations parameter.

Applaunch measurements are captured in the logcat file.
2017-02-03 15:14:47 +00:00
jummp01
bd37973442 Remove existing applaunch workload
The existing applaunch workload is removed, to be replaced by another implementation.
2017-02-03 15:14:37 +00:00
jummp01
c38dc287ab Move UxperfParser into utils
The UxperfParser class is moved from the UxperfResultProcessor class into a new
python module in utils. This will help to use the UxperfParser even when
the result processor is not configured.
2017-02-03 15:14:27 +00:00
jummp01
3feb702898 fps: move VSYNC_INTERVAL into utils
It has nothing to do with the instrument, but is a generic constant,
and this way it can be used by other parts of the codebase.
2017-02-03 15:14:16 +00:00
jummp01
0f57dee6bf Add ApplaunchInterface class
In order to support the application launch workload, a new interface is created
which has methods that need to be implemented by all workloads that
support application launch time instrumentation.
2017-02-03 15:14:03 +00:00
jummp01
99b46b4630 Add UiAutoUtils class
UiAutoUtils class added to support utility functions used for UiAutomation.
This class can help in refactoring some existing functionality into utility functions.

Launch command generation is added as a utility class method which can be used
by workloads to construct their launch command from parameters passed.
2017-02-03 15:13:50 +00:00
jummp01
4ce20e2d3f Move package parameters to UxperfUiAutomation
All the uxperf workloads get some common package parameters.
These are moved to the parent class and a new method is introduced to fill
these parameter values. All the uxperf workloads can call this method to resolve
the package parameters.
2017-02-03 15:13:32 +00:00
jummp01
65aa00483b Add pressHome() method in the Base class
This function presses the Home button on the Android user interface. 2017-02-03 15:12:35 +00:00
2017-02-03 15:12:35 +00:00
setrofim
da7a3276ed Merge pull request #339 from marcbonnici/revent_fix
Revent fix / improvements
2017-02-01 18:10:32 +00:00
Marc Bonnici
b89d9bbc4b StateDetection: Moved check for missing phase definition earlier.
Now checks to see if a phase is correctly defined before image matching
so that it is only performed if required.
2017-02-01 18:04:05 +00:00
Marc Bonnici
54bffb45ab Revent: Fixed uninitialized variable. 2017-02-01 18:04:05 +00:00
Marc Bonnici
e7a47f602d Revent: Removed check for 'wait_for_stdin'.
Revent is terminated from WA via a SIGINT; in order for revent to
receive the signal and deal with it accordingly, it always needs to be
listening on STDIN regardless of the 'wait_for_stdin' flag.
2017-02-01 18:03:28 +00:00
Sergei Trofimov
490dd404ae AndroidDevice: write "dumpsys window" output on host
On Android targets, WA collects device display information by running
"dumpsys window" during run initialisation. Previously, this was
redirectied into on-device file (under the working directory) and then
pulled from the target.

It looks like on Android-on-ChromeOS devices the redirect leads to an
"Unknown Error" and the resulting file is empty. To get around that,
this commit modifies the dumpsys command so that the output is collected
directly from the shell's stdout and then written on the host.
2017-01-24 17:52:51 +00:00
setrofim
60f52c2187 Merge pull request #265 from jimboatarm/multiapp-workload
Multiapp Workload: Workload to test how responsive a device is when…
2017-01-16 16:41:35 +00:00
setrofim
fbb9908298 Merge pull request #330 from jimboatarm/break_setup
Splits ApkWorkload setup() into short methods.
2017-01-13 15:39:11 +00:00
jummp01
01fa0a3571 Splits ApkWorkload setup() into short methods.
The ApkWorkload setup phase performed many tasks in a single
method; it is now broken down into short methods that can be
called individually when relevant use cases arise.
2017-01-13 11:56:39 +00:00
setrofim
be2b14a575 Merge pull request #331 from jimboatarm/skype_fix
Create a function for launching skype application.
2017-01-13 05:28:00 +00:00
setrofim
dab2bbb1c7 Merge pull request #329 from jimboatarm/epochtime
Changes Action Logger to give epoch time
2017-01-13 05:26:28 +00:00
setrofim
340ae72ca2 Merge pull request #332 from siddhanathan/typo
Fix typo
2017-01-13 05:25:02 +00:00
Siddhanathan Shanmugam
e64dc8bc26 Fix typo
Instrumention -> Instrumentation
2017-01-12 12:13:05 -08:00
Sergei Trofimov
cb48ece77f daq: fixing energy metric calculation
The summary energy metric was being calculated incorrectly. Instead of
dividing power by the sampling rate, it was being multiplied by it and
divided by a million for some reason.
2017-01-12 11:29:44 +00:00
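The corrected relationship — energy is the sum of the power samples divided by the sampling rate, since each sample covers 1/rate seconds — in a minimal sketch (illustrative, not the instrument's actual code):

```python
def energy_from_power(power_samples, sampling_rate_hz):
    # E = sum(P_i) * dt, where dt = 1 / sampling_rate, so
    # E = sum(P_i) / sampling_rate (joules, for power in watts).
    return sum(power_samples) / float(sampling_rate_hz)
```

For example, ten samples of 2 W at 10 Hz (one second of data) yield 2 J.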
jummp01
8f67b7f94b Create a function for launching skype application.
Skype has a unique launch command which is called in the setup
phase of the workload. The launch command is split into a standalone
method which can be called separately if required.

This can be used as common method if more applications in future
require their own customized launch command.
2017-01-12 11:27:42 +00:00
jummp01
fa553ee430 Changes Action Logger to give epoch time
Log time changed to produce epoch time in milliseconds.
The nanosecond-to-millisecond conversion done in the uxperf
result processor is removed.

Tested on Mate8; the time obtained has been verified.
2017-01-12 10:30:07 +00:00
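Emitting epoch milliseconds directly, as the logger now does, is a one-liner; a sketch with an illustrative name:

```python
import time

def epoch_millis():
    # Epoch time in milliseconds, removing the need for a separate
    # nanosecond-to-millisecond conversion in the result processor.
    return int(round(time.time() * 1000))
```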
Sergei Trofimov
01c9c88e79 perf: do not force root on Android in stop()
perf instrument was forcing killall() to run as root on Android devices.
This constraint was preventing perf from being used on unrooted devices.
However, it appears that it is possible for killall() to succeed on at
least some devices as a regular user.

This commit removes the constraint. Since killall() will default to
running as root whenever possible, the instrument will still behave
correctly on rooted Android devices where root is required.
2017-01-11 13:44:55 +00:00
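A sketch of the behaviour described above — defaulting to root whenever possible without requiring it — using hypothetical helper names (not WA's actual API):

```python
def killall(run_shell, process, as_root=None, device_is_rooted=False):
    # Default to running as root whenever possible, but do not require
    # it: on unrooted devices the command is issued as a regular user.
    if as_root is None:
        as_root = device_is_rooted
    command = 'killall {}'.format(process)
    if as_root:
        command = 'su -c "{}"'.format(command)
    return run_shell(command)
```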
setrofim
0c32e39ce0 Merge pull request #317 from jimboatarm/minapkremoved
Removed min_apk_version from UXperf workloads
2017-01-09 15:13:04 +00:00
Michael McGeagh
51f07c4473 Multiapp Workload: Workload to test how responsive a device is when context switching between application tasks. It combines workflows from googlephotos, gmail and skype.
Added tagName to give unique logger name for multiple share attempts. Turn off markers_enabled for subclasses

Renamed multiapp to appshare. Description is now more accurate

Changed appshare to use class instances for each sub workload. This allows APKs to be setup correctly on the device

Photos changed function name
2016-12-15 16:19:19 +00:00
Michael McGeagh
ffde7401ef Removed min_apk_version from UXperf workloads. The known working version is now part of the description instead. 2016-12-15 16:13:58 +00:00
518 changed files with 14808 additions and 5566 deletions

15
.gitignore vendored

@@ -3,6 +3,7 @@
*.bak
*.o
*.cmd
*.iml
Module.symvers
modules.order
*~
@@ -14,9 +15,12 @@ wa_output/
doc/source/api/
doc/source/extensions/
MANIFEST
wlauto/external/uiauto/bin/
wlauto/external/uiauto/**/build/
wlauto/external/uiauto/*.properties
wlauto/external/uiauto/build.xml
wlauto/external/uiauto/app/libs/
wlauto/external/uiauto/app/proguard-rules.pro
wlauto/external/uiauto/**/.gradle
wlauto/external/uiauto/**/.idea
*.orig
local.properties
wlauto/external/revent/libs/
@@ -27,4 +31,9 @@ pmu_logger.mod.c
.tmp_versions
obj/
libs/armeabi
wlauto/workloads/*/uiauto/bin/
wlauto/workloads/*/uiauto/**/build/
wlauto/workloads/*/uiauto/*.properties
wlauto/workloads/*/uiauto/app/libs/
wlauto/workloads/*/uiauto/app/proguard-rules.pro
wlauto/workloads/*/uiauto/**/.gradle
wlauto/workloads/*/uiauto/**/.idea


@@ -1,6 +1,19 @@
Workload Automation
+++++++++++++++++++
.. raw:: html
<span style="border: solid red 3px">
<p align="center">
<font color="red">DO NOT USE "MASTER" BRANCH</font>. WA2 has been
deprecated in favor of WA3, which is currently on "next" branch.
</p>
<p align="center">
The master branch will be aligned with next shortly. In the meantime,
please <font style="bold">USE "NEXT" BRANCH</font> instead.
</p>
</span>
Workload Automation (WA) is a framework for executing workloads and collecting
measurements on Android and Linux devices. WA includes automation for nearly 50
workloads (mostly Android), some common instrumentation (ftrace, ARM
@@ -46,7 +59,8 @@ documentation.
Documentation
=============
You can view pre-built HTML documentation `here <http://pythonhosted.org/wlauto/>`_.
You can view pre-built HTML documentation
`here <http://workload-automation.readthedocs.io/en/latest/>`_.
Documentation in reStructuredText format may be found under ``doc/source``. To
compile it into cross-linked HTML, make sure you have `Sphinx


@@ -10,22 +10,29 @@ sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from wlauto.exceptions import WAError, ToolError
from wlauto.utils.doc import format_simple_table
from wlauto.utils.misc import check_output, get_null, which
def get_aapt_path():
"""Return the full path to aapt tool."""
sdk_path = os.getenv('ANDROID_HOME')
if not sdk_path:
raise ToolError('Please make sure you have Android SDK installed and have ANDROID_HOME set.')
build_tools_directory = os.path.join(sdk_path, 'build-tools')
versions = os.listdir(build_tools_directory)
for version in reversed(sorted(versions)):
aapt_path = os.path.join(build_tools_directory, version, 'aapt')
if os.path.isfile(aapt_path):
logging.debug('Found aapt for version {}'.format(version))
return aapt_path
if sdk_path:
build_tools_directory = os.path.join(sdk_path, 'build-tools')
versions = os.listdir(build_tools_directory)
for version in reversed(sorted(versions)):
aapt_path = os.path.join(build_tools_directory, version, 'aapt')
if os.path.isfile(aapt_path):
logging.debug('Found aapt for version {}'.format(version))
return aapt_path
else:
raise ToolError('aapt not found. Please make sure at least one Android platform is installed.')
else:
raise ToolError('aapt not found. Please make sure at least one Android platform is installed.')
logging.debug("ANDROID_HOME is not set, try to find in $PATH.")
aapt_path = which('aapt')
if aapt_path:
logging.debug('Using aapt from {}'.format(aapt_path))
return aapt_path
raise ToolError('Please make sure you have Android SDK installed and have ANDROID_HOME set.')
def get_apks(path):


@@ -7,18 +7,11 @@ SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
SPHINXAPI = sphinx-apidoc
SPHINXAPIOPTS =
WAEXT = ./build_extension_docs.py
WAEXTOPTS = source/extensions ../wlauto ../wlauto/external ../wlauto/tests
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
ALLSPHINXAPIOPTS = -f $(SPHINXAPIOPTS) -o source/api ../wlauto
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
@@ -58,52 +51,38 @@ coverage:
@echo
@echo "Build finished. The coverage reports are in $(BUILDDIR)/coverage."
api: ../wlauto
rm -rf source/api/*
$(SPHINXAPI) $(ALLSPHINXAPIOPTS)
waext: ../wlauto
rm -rf source/extensions
mkdir -p source/extensions
$(WAEXT) $(WAEXTOPTS)
sigtab: ../wlauto/core/instrumentation.py source/instrumentation_method_map.template
rm -rf source/instrumentation_method_map.rst
./build_instrumentation_method_map.py source/instrumentation_method_map.rst
html: api waext sigtab
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml: api waext sigtab
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml: api waext sigtab
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle: api waext sigtab
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json: api waext sigtab
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp: api waext sigtab
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp: api waext sigtab
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
@@ -112,7 +91,7 @@ qthelp: api waext sigtab
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/WorkloadAutomation2.qhc"
devhelp: api
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@@ -121,64 +100,64 @@ devhelp: api
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/WorkloadAutomation2"
@echo "# devhelp"
epub: api waext sigtab
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex: api waext sigtab
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf: api waext sigtab
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text: api waext sigtab
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man: api waext sigtab
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo: api waext sigtab
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info: api waext sigtab
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext: api waext sigtab
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes: api waext sigtab
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck: api waext sigtab
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest: api waext sigtab
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."


@@ -17,6 +17,7 @@
import os
import sys
import shutil
from wlauto import ExtensionLoader
from wlauto.utils.doc import get_rst_from_extension, underline
@@ -30,6 +31,9 @@ def generate_extension_documentation(source_dir, outdir, ignore_paths):
loader = ExtensionLoader(keep_going=True)
loader.clear()
loader.update(paths=[source_dir], ignore_paths=ignore_paths)
if os.path.exists(outdir):
shutil.rmtree(outdir)
os.makedirs(outdir)
for ext_type in loader.extension_kinds:
if not ext_type in GENERATE_FOR:
continue


@@ -16,7 +16,6 @@
import os
import sys
import string
from copy import copy
from wlauto.core.instrumentation import SIGNAL_MAP, PRIORITY_MAP
from wlauto.utils.doc import format_simple_table
@@ -25,7 +24,7 @@ from wlauto.utils.doc import format_simple_table
CONVINIENCE_ALIASES = ['initialize', 'setup', 'start', 'stop', 'process_workload_result',
'update_result', 'teardown', 'finalize']
OUTPUT_TEMPLATE_FILE = os.path.join(os.path.dirname(__file__), 'source', 'instrumentation_method_map.template')
OUTPUT_TEMPLATE_FILE = os.path.join(os.path.dirname(__file__), 'source', 'instrumentation_method_map.template')
def escape_trailing_underscore(value):
@@ -37,7 +36,9 @@ def generate_instrumentation_method_map(outfile):
signal_table = format_simple_table([(k, v) for k, v in SIGNAL_MAP.iteritems()],
headers=['method name', 'signal'], align='<<')
priority_table = format_simple_table([(escape_trailing_underscore(k), v) for k, v in PRIORITY_MAP.iteritems()],
headers=['prefix', 'priority'], align='<>')
headers=['prefix', 'priority'], align='<>')
if os.path.isfile(outfile):
os.unlink(outfile)
with open(OUTPUT_TEMPLATE_FILE) as fh:
template = string.Template(fh.read())
with open(outfile, 'w') as wfh:

2
doc/requirements.txt Normal file

@@ -0,0 +1,2 @@
nose
sphinx==1.6.5


@@ -147,7 +147,7 @@ to 15 million. You can specify this using dhrystone's parameters:
wa show dhrystone
see the :ref:`Invocation` section for details.
see the :ref:`Invocation <invocation>` section for details.
In addition to configuring the workload itself, we can also specify
configuration for the underlying device. This can be done by setting runtime


@@ -7,8 +7,8 @@ APK resolution
--------------
WA has various resource getters that can be configured to locate APK files but for most people APK files
should be kept in the ``$WA_HOME/dependencies/SOME_WORKLOAD/`` directory. (by default
``~/.workload_automation/dependencies/SOME_WORKLOAD/``). The ``WA_HOME`` environment variable can be used
should be kept in the ``$WA_USER_DIRECTORY/dependencies/SOME_WORKLOAD/`` directory. (by default
``~/.workload_automation/dependencies/SOME_WORKLOAD/``). The ``WA_USER_DIRECTORY`` environment variable can be used
to change the location of this folder. The APK files need to be put into the corresponding directories
for the workload they belong to. The name of the file can be anything but as explained below may need
to contain certain pieces of information.
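As a concrete illustration of the layout described above (using a scratch directory and a hypothetical workload name; by default ``WA_USER_DIRECTORY`` is ``~/.workload_automation``):

```shell
# Point WA_USER_DIRECTORY at a scratch directory for illustration
export WA_USER_DIRECTORY="$(mktemp -d)"

# APKs live under dependencies/<workload_name>/; the file name is
# free-form, but the directory must match the workload's name.
mkdir -p "$WA_USER_DIRECTORY/dependencies/dhrystone"
touch "$WA_USER_DIRECTORY/dependencies/dhrystone/dhrystone.apk"
```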


@@ -28,12 +28,16 @@
import sys, os
import warnings
from sphinx.apidoc import main
warnings.filterwarnings('ignore', "Module louie was already imported")
this_dir = os.path.dirname(__file__)
sys.path.insert(0, os.path.join(this_dir, '..'))
sys.path.insert(0, os.path.join(this_dir, '../..'))
import wlauto
from build_extension_docs import generate_extension_documentation
from build_instrumentation_method_map import generate_instrumentation_method_map
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
@@ -264,7 +268,21 @@ texinfo_documents = [
#texinfo_show_urls = 'footnote'
def run_apidoc(_):
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
cur_dir = os.path.abspath(os.path.dirname(__file__))
api_output = os.path.join(cur_dir, 'api')
module = os.path.join(cur_dir, '..', '..', 'wlauto')
main(['-f', '-o', api_output, module, '--force'])
def setup(app):
module_dir = os.path.join('..', '..', 'wlauto')
excluded_extensions = [os.path.join(module_dir, 'external'),
os.path.join(module_dir, 'tests')]
os.chdir(os.path.dirname(__file__))
app.connect('builder-inited', run_apidoc)
generate_instrumentation_method_map('instrumentation_method_map.rst')
generate_extension_documentation(module_dir, 'extensions', excluded_extensions)
app.add_object_type('confval', 'confval',
objname='configuration value',
indextemplate='pair: %s; configuration value')


@@ -964,7 +964,7 @@ that accompanies them, in addition to what has been outlined here, should
provide enough guidance.
:commands: This allows extending WA with additional sub-commands (to supplement
exiting ones outlined in the :ref:`invocation` section).
exiting ones outlined in the :ref:`invocation <invocation>` section).
:modules: Modules are "extensions for extensions". They can be loaded by other
extensions to expand their functionality (for example, a flashing
module maybe loaded by a device in order to support flashing).


@@ -22,7 +22,6 @@ import textwrap
import argparse
import shutil
import getpass
import subprocess
from collections import OrderedDict
import yaml
@@ -30,8 +29,9 @@ import yaml
from wlauto import ExtensionLoader, Command, settings
from wlauto.exceptions import CommandError, ConfigError
from wlauto.utils.cli import init_argument_parser
from wlauto.utils.misc import (capitalize, check_output,
ensure_file_directory_exists as _f, ensure_directory_exists as _d)
from wlauto.utils.misc import (capitalize,
ensure_file_directory_exists as _f,
ensure_directory_exists as _d)
from wlauto.utils.types import identifier
from wlauto.utils.doc import format_body
@@ -41,20 +41,6 @@ __all__ = ['create_workload']
TEMPLATES_DIR = os.path.join(os.path.dirname(__file__), 'templates')
UIAUTO_BUILD_SCRIPT = """#!/bin/bash
class_dir=bin/classes/com/arm/wlauto/uiauto
base_class=`python -c "import os, wlauto; print os.path.join(os.path.dirname(wlauto.__file__), 'common', 'android', 'BaseUiAutomation.class')"`
mkdir -p $$class_dir
cp $$base_class $$class_dir
ant build
if [[ -f bin/${package_name}.jar ]]; then
cp bin/${package_name}.jar ..
fi
"""
class CreateSubcommand(object):
@@ -321,7 +307,7 @@ def create_basic_workload(path, name, class_name):
def create_uiautomator_workload(path, name, class_name):
uiauto_path = _d(os.path.join(path, 'uiauto'))
uiauto_path = os.path.join(path, 'uiauto')
create_uiauto_project(uiauto_path, name)
source_file = os.path.join(path, '__init__.py')
with open(source_file, 'w') as wfh:
@@ -335,37 +321,34 @@ def create_android_benchmark(path, name, class_name):
def create_android_uiauto_benchmark(path, name, class_name):
uiauto_path = _d(os.path.join(path, 'uiauto'))
uiauto_path = os.path.join(path, 'uiauto')
create_uiauto_project(uiauto_path, name)
source_file = os.path.join(path, '__init__.py')
with open(source_file, 'w') as wfh:
wfh.write(render_template('android_uiauto_benchmark', {'name': name, 'class_name': class_name}))
def create_uiauto_project(path, name, target='1'):
sdk_path = get_sdk_path()
android_path = os.path.join(sdk_path, 'tools', 'android')
def create_uiauto_project(path, name):
package_name = 'com.arm.wlauto.uiauto.' + name.lower()
# ${ANDROID_HOME}/tools/android create uitest-project -n com.arm.wlauto.uiauto.linpack -t 1 -p ../test2
command = '{} create uitest-project --name {} --target {} --path {}'.format(android_path,
package_name,
target,
path)
try:
check_output(command, shell=True)
except subprocess.CalledProcessError as e:
if 'is is not valid' in e.output:
message = 'No Android SDK target found; have you run "{} update sdk" and download a platform?'
raise CommandError(message.format(android_path))
shutil.copytree(os.path.join(TEMPLATES_DIR, 'uiauto_template'), path)
manifest_path = os.path.join(path, 'app', 'src', 'main')
mainifest = os.path.join(_d(manifest_path), 'AndroidManifest.xml')
with open(mainifest, 'w') as wfh:
wfh.write(render_template('uiauto_AndroidManifest.xml', {'package_name': package_name}))
build_gradle_path = os.path.join(path, 'app')
build_gradle = os.path.join(_d(build_gradle_path), 'build.gradle')
with open(build_gradle, 'w') as wfh:
wfh.write(render_template('uiauto_build.gradle', {'package_name': package_name}))
build_script = os.path.join(path, 'build.sh')
with open(build_script, 'w') as wfh:
template = string.Template(UIAUTO_BUILD_SCRIPT)
wfh.write(template.substitute({'package_name': package_name}))
wfh.write(render_template('uiauto_build_script', {'package_name': package_name}))
os.chmod(build_script, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
source_file = _f(os.path.join(path, 'src',
source_file = _f(os.path.join(path, 'app', 'src', 'main', 'java',
os.sep.join(package_name.split('.')[:-1]),
'UiAutomation.java'))
with open(source_file, 'w') as wfh:


@@ -74,7 +74,7 @@ class RecordCommand(ReventCommand):
name = 'record'
description = '''Performs a revent recording
This command helps making revent recordings. It will automatically
This command helps create revent recordings. It will automatically
deploy revent and even has the option of automatically opening apps.
Revent allows you to record raw inputs such as screen swipes or button presses.
@@ -82,14 +82,14 @@ class RecordCommand(ReventCommand):
have XML UI layouts that can be used with UIAutomator. As a drawback from this,
revent recordings are specific to the device type they were recorded on.
WA uses two parts to the names of revent recordings in the format,
{device_name}.{suffix}.revent.
WA uses two parts to form the names of revent recordings in the format,
{device_name}.{suffix}.revent
- device_name can either be specified manually with the ``-d`` argument or
it can be automatically determined. On Android device it will be obtained
else the name of the device will be automatically determined. On an Android device it is obtained
from ``build.prop``, on Linux devices it is obtained from ``/proc/device-tree/model``.
- suffix is used by WA to determine which part of the app execution the
recording is for, currently these are either ``setup`` or ``run``. This
recording is for, currently either ``setup``, ``run`` or ``teardown``. This
should be specified with the ``-s`` argument.


@@ -2,23 +2,30 @@ package ${package_name};
import android.app.Activity;
import android.os.Bundle;
import org.junit.Test;
import org.junit.runner.RunWith;
import android.support.test.runner.AndroidJUnit4;
import android.util.Log;
import android.view.KeyEvent;
// Import the uiautomator libraries
import com.android.uiautomator.core.UiObject;
import com.android.uiautomator.core.UiObjectNotFoundException;
import com.android.uiautomator.core.UiScrollable;
import com.android.uiautomator.core.UiSelector;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;
import android.support.test.uiautomator.UiObject;
import android.support.test.uiautomator.UiObjectNotFoundException;
import android.support.test.uiautomator.UiScrollable;
import android.support.test.uiautomator.UiSelector;
import com.arm.wlauto.uiauto.BaseUiAutomation;
public class UiAutomation extends BaseUiAutomation {
@RunWith(AndroidJUnit4.class)
public class UiAutomation extends BaseUiAutomation {
public static String TAG = "${name}";
public void runUiAutomation() throws Exception {
@Test
public void runUiAutomation() throws Exception {
initialize_instrumentation();
// UI Automation code goes here
}


@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="${package_name}"
android:versionCode="1"
android:versionName="1.0">
<instrumentation
android:name="android.support.test.runner.AndroidJUnitRunner"
android:targetPackage="${package_name}"/>
</manifest>


@@ -0,0 +1,33 @@
apply plugin: 'com.android.application'
android {
compileSdkVersion 18
buildToolsVersion '25.0.0'
defaultConfig {
applicationId "${package_name}"
minSdkVersion 18
targetSdkVersion 25
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$$project.buildDir/apk/${package_name}.apk")
}
}
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext: 'aar')
}
repositories {
flatDir {
dirs 'libs'
}
}


@@ -0,0 +1,39 @@
#!/bin/bash
# CD into build dir if possible - allows building from any directory
script_path='.'
if `readlink -f $$0 &>/dev/null`; then
script_path=`readlink -f $$0 2>/dev/null`
fi
script_dir=`dirname $$script_path`
cd $$script_dir
# Ensure gradlew exists before starting
if [[ ! -f gradlew ]]; then
echo 'gradlew file not found! Check that you are in the right directory.'
exit 9
fi
# Copy base class library from wlauto dist
libs_dir=app/libs
base_classes=`python -c "import os, wlauto; print os.path.join(os.path.dirname(wlauto.__file__), 'common', 'android', '*.aar')"`
mkdir -p $$libs_dir
cp $$base_classes $$libs_dir
# Build and return appropriate exit code if failed
# gradle build
./gradlew clean :app:assembleDebug
exit_code=$$?
if [[ $$exit_code -ne 0 ]]; then
echo "ERROR: 'gradle build' exited with code $$exit_code"
exit $$exit_code
fi
# If successful move APK file to workload folder (overwrite previous)
rm -f ../$package_name
if [[ -f app/build/apk/$package_name.apk ]]; then
cp app/build/apk/$package_name.apk ../$package_name.apk
else
echo 'ERROR: UiAutomator apk could not be found!'
exit 9
fi


@@ -0,0 +1,23 @@
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}


@@ -0,0 +1,6 @@
#Wed May 03 15:42:44 BST 2017
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip


@@ -0,0 +1,160 @@
#!/usr/bin/env bash
##############################################################################
##
## Gradle start up script for UN*X
##
##############################################################################
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS=""
APP_NAME="Gradle"
APP_BASE_NAME=`basename "$0"`
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"
warn ( ) {
echo "$*"
}
die ( ) {
echo
echo "$*"
echo
exit 1
}
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
case "`uname`" in
CYGWIN* )
cygwin=true
;;
Darwin* )
darwin=true
;;
MINGW* )
msys=true
;;
esac
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`"/$link"
fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/" >/dev/null
APP_HOME="`pwd -P`"
cd "$SAVED" >/dev/null
CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD="java"
which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" ] ; then
MAX_FD_LIMIT=`ulimit -H -n`
if [ $? -eq 0 ] ; then
if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
MAX_FD="$MAX_FD_LIMIT"
fi
ulimit -n $MAX_FD
if [ $? -ne 0 ] ; then
warn "Could not set maximum file descriptor limit: $MAX_FD"
fi
else
warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
fi
fi
# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi
# For Cygwin, switch paths to Windows format before running java
if $cygwin ; then
APP_HOME=`cygpath --path --mixed "$APP_HOME"`
CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
JAVACMD=`cygpath --unix "$JAVACMD"`
# We build the pattern for arguments to be converted via cygpath
ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
SEP=""
for dir in $ROOTDIRSRAW ; do
ROOTDIRS="$ROOTDIRS$SEP$dir"
SEP="|"
done
OURCYGPATTERN="(^($ROOTDIRS))"
# Add a user-defined pattern to the cygpath arguments
if [ "$GRADLE_CYGPATTERN" != "" ] ; then
OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
fi
# Now convert the arguments - kludge to limit ourselves to /bin/sh
i=0
for arg in "$@" ; do
CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option
if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition
eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
else
eval `echo args$i`="\"$arg\""
fi
i=$((i+1))
done
case $i in
(0) set -- ;;
(1) set -- "$args0" ;;
(2) set -- "$args0" "$args1" ;;
(3) set -- "$args0" "$args1" "$args2" ;;
(4) set -- "$args0" "$args1" "$args2" "$args3" ;;
(5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
(6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
(7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
(8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
(9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
esac
fi
# Split up the JVM_OPTS And GRADLE_OPTS values into an array, following the shell quoting and substitution rules
function splitJvmOpts() {
JVM_OPTS=("$@")
}
eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS
JVM_OPTS[${#JVM_OPTS[*]}]="-Dorg.gradle.appname=$APP_BASE_NAME"
exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" org.gradle.wrapper.GradleWrapperMain "$@"


@@ -0,0 +1,90 @@
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS=
set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto init
echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto init
echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:init
@rem Get command-line arguments, handling Windowz variants
if not "%OS%" == "Windows_NT" goto win9xME_args
if "%@eval[2+2]" == "4" goto 4NT_args
:win9xME_args
@rem Slurp the command line arguments.
set CMD_LINE_ARGS=
set _SKIP=2
:win9xME_args_slurp
if "x%~1" == "x" goto execute
set CMD_LINE_ARGS=%*
goto execute
:4NT_args
@rem Get arguments from the 4NT Shell from JP Software
set CMD_LINE_ARGS=%$
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%
:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega


@@ -0,0 +1 @@
include ':app'


@@ -30,7 +30,7 @@ from wlauto.common.resources import Executable
from wlauto.core.resource import NO_ONE
from wlauto.common.linux.device import BaseLinuxDevice, PsEntry
from wlauto.exceptions import DeviceError, WorkerThreadError, TimeoutError, DeviceNotRespondingError
from wlauto.utils.misc import convert_new_lines, ABI_MAP
from wlauto.utils.misc import convert_new_lines, ABI_MAP, commonprefix
from wlauto.utils.types import boolean, regex
from wlauto.utils.android import (adb_shell, adb_background_shell, adb_list_devices,
adb_command, AndroidProperties, ANDROID_VERSION_MAP)
@@ -390,7 +390,7 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
else:
return self.install_executable(filepath, with_name)
def install_apk(self, filepath, timeout=default_timeout, replace=False, allow_downgrade=False): # pylint: disable=W0221
def install_apk(self, filepath, timeout=300, replace=False, allow_downgrade=False): # pylint: disable=W0221
self._check_ready()
ext = os.path.splitext(filepath)[1].lower()
if ext == '.apk':
@@ -550,11 +550,10 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
props['android_id'] = self.get_android_id()
self._update_build_properties(props)
dumpsys_target_file = self.path.join(self.working_directory, 'window.dumpsys')
dumpsys_host_file = os.path.join(context.host_working_directory, 'window.dumpsys')
self.execute('{} > {}'.format('dumpsys window', dumpsys_target_file))
self.pull_file(dumpsys_target_file, dumpsys_host_file)
context.add_run_artifact('dumpsys_window', dumpsys_host_file, 'meta')
with open(dumpsys_host_file, 'w') as wfh:
wfh.write(self.execute('dumpsys window'))
context.add_run_artifact('dumpsys_window', dumpsys_host_file, 'meta')
prop_file = os.path.join(context.host_working_directory, 'android-props.json')
with open(prop_file, 'w') as wfh:
@@ -717,23 +716,40 @@ class AndroidDevice(BaseLinuxDevice): # pylint: disable=W0223
except KeyError:
return None
def broadcast_media_mounted(self, dirpath):
def refresh_device_files(self, file_list):
"""
Depending on the device's Android version and root status, determine the
appropriate method of forcing a re-index of the mediaserver cache for a given
list of files.
"""
if self.device.is_rooted or self.device.get_sdk_version() < 24: # MM and below
common_path = commonprefix(file_list, sep=self.device.path.sep)
self.broadcast_media_mounted(common_path, self.device.is_rooted)
else:
for f in file_list:
self.broadcast_media_scan_file(f)
def broadcast_media_scan_file(self, filepath):
"""
Force a re-index of the mediaserver cache for the specified file.
"""
command = 'am broadcast -a android.intent.action.MEDIA_SCANNER_SCAN_FILE -d file://'
self.execute(command + filepath)
def broadcast_media_mounted(self, dirpath, as_root=False):
"""
Force a re-index of the mediaserver cache for the specified directory.
"""
command = 'am broadcast -a android.intent.action.MEDIA_MOUNTED -d file://'
self.execute(command + dirpath)
command = 'am broadcast -a android.intent.action.MEDIA_MOUNTED -d file://'
self.execute(command + dirpath, as_root=as_root)
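The version/root dispatch implemented by ``refresh_device_files`` above can be sketched as a pure function returning the ``am broadcast`` commands that would be issued. This is a simplified sketch: ``posixpath.commonprefix`` is a character-wise approximation of the ``commonprefix`` helper imported from ``wlauto.utils.misc``, and the SDK/root inputs stand in for the real device attributes.

```python
import posixpath

def media_refresh_commands(file_list, sdk_version, is_rooted):
    """Return the 'am broadcast' commands needed to re-index file_list.

    Mirrors the selection logic above: a single MEDIA_MOUNTED broadcast on the
    common parent directory for rooted or pre-Nougat (SDK < 24) devices,
    otherwise one MEDIA_SCANNER_SCAN_FILE broadcast per file.
    """
    if is_rooted or sdk_version < 24:
        # Approximate the wlauto commonprefix helper with a character-wise
        # common prefix trimmed back to the last path separator.
        common = posixpath.commonprefix(file_list).rpartition('/')[0] or '/'
        return ['am broadcast -a android.intent.action.MEDIA_MOUNTED '
                '-d file://{}'.format(common)]
    return ['am broadcast -a android.intent.action.MEDIA_SCANNER_SCAN_FILE '
            '-d file://{}'.format(f) for f in file_list]
```

On an unrooted Nougat device this degrades to one broadcast per file, which is why the common-prefix path is preferred whenever it is available.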
# Internal methods: do not use outside of the class.
def _update_build_properties(self, props):
try:
def strip(somestring):
return somestring.strip().replace('[', '').replace(']', '')
for line in self.execute("getprop").splitlines():
key, value = line.split(':', 1)
key = strip(key)
value = strip(value)
regex = re.compile(r'\[([^\]]+)\]\s*:\s*\[([^\]]+)\]')
for match in regex.finditer(self.execute("getprop")):
key = match.group(1).strip()
value = match.group(2).strip()
props[key] = value
except ValueError:
self.logger.warning('Could not parse build.prop.')
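The regex-based ``getprop`` parsing introduced above can be exercised in isolation. The sample output below is illustrative (``getprop`` prints each property as ``[key]: [value]``); the regex is the same one used in the diff.

```python
import re

# Illustrative 'getprop' output; real output has one [key]: [value] pair per line.
GETPROP_OUTPUT = """\
[ro.build.version.release]: [7.0]
[ro.build.version.sdk]: [24]
[ro.product.model]: [Pixel XL]
"""

def parse_getprop(text):
    """Parse getprop output into a dict using the same regex as above."""
    regex = re.compile(r'\[([^\]]+)\]\s*:\s*\[([^\]]+)\]')
    return {m.group(1).strip(): m.group(2).strip()
            for m in regex.finditer(text)}

props = parse_getprop(GETPROP_OUTPUT)
```

Matching bracketed pairs with ``finditer`` is more robust than the old ``line.split(':', 1)`` approach, which raised ``ValueError`` on values containing no colon delimiter or spanning unusual formatting.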


@@ -35,9 +35,12 @@ class ApkFile(FileResource):
name = 'apk'
def __init__(self, owner, platform=None):
def __init__(self, owner, platform=None, uiauto=False, package=None):
super(ApkFile, self).__init__(owner)
self.platform = platform
self.uiauto = uiauto
self.package = package
def __str__(self):
return '<{}\'s {} APK>'.format(self.owner, self.platform)
apk_type = 'uiautomator ' if self.uiauto else ''
return '<{}\'s {} {}APK>'.format(self.owner, self.platform, apk_type)


@@ -16,25 +16,24 @@
import os
import sys
import time
from math import ceil
from distutils.version import LooseVersion
from wlauto.core.extension import Parameter, ExtensionMeta, ListCollection
from wlauto.core.workload import Workload
from wlauto.core.resource import NO_ONE
from wlauto.common.android.resources import ApkFile, ReventFile
from wlauto.common.resources import ExtensionAsset, Executable, File
from wlauto.common.android.resources import ApkFile
from wlauto.common.resources import ExtensionAsset, File
from wlauto.exceptions import WorkloadError, ResourceError, DeviceError
from wlauto.utils.android import ApkInfo, ANDROID_NORMAL_PERMISSIONS, UNSUPPORTED_PACKAGES
from wlauto.utils.types import boolean
from wlauto.utils.revent import ReventRecording
from wlauto.utils.android import (ApkInfo, ANDROID_NORMAL_PERMISSIONS,
ANDROID_UNCHANGEABLE_PERMISSIONS, UNSUPPORTED_PACKAGES)
from wlauto.utils.types import boolean, ParameterDict
import wlauto.utils.statedetect as state_detector
import wlauto.common.android.resources
from wlauto.common.linux.workload import ReventWorkload
DELAY = 5
# Due to the way `super` works you have to call it at every level, but WA executes some
# methods conditionally and so has to call them directly via the class. This breaks super
# and causes it to run things multiple times etc. As a workaround for this, until workloads
@@ -43,12 +42,12 @@ DELAY = 5
class UiAutomatorWorkload(Workload):
"""
Base class for all workloads that rely on a UI Automator JAR file.
Base class for all workloads that rely on a UI Automator APK file.
This class should be subclassed by workloads that rely on android UiAutomator
to work. This class handles transferring the UI Automator JAR file to the device
and invoking it to run the workload. By default, it will look for the JAR file in
the same directory as the .py file for the workload (this can be changed by overriding
to work. This class handles installing the UI Automator APK to the device
and invoking it to run the workload. By default, it will look for the ``*.apk`` file
in the same directory as the .py file for the workload (this can be changed by overriding
the ``uiauto_file`` property in the subclassing workload).
To initiate UI Automation, the fully-qualified name of the Java class and the
@@ -60,7 +59,7 @@ class UiAutomatorWorkload(Workload):
match what is expected, or you could override ``uiauto_package``, ``uiauto_class`` and
``uiauto_method`` class attributes with the value that match your Java code.
You can also pass parameters to the JAR file. To do this add the parameters to
You can also pass parameters to the APK file. To do this add the parameters to
``self.uiauto_params`` dict inside your class's ``__init__`` or ``setup`` methods.
"""
@@ -69,39 +68,43 @@ class UiAutomatorWorkload(Workload):
uiauto_package = ''
uiauto_class = 'UiAutomation'
uiauto_method = 'runUiAutomation'
uiauto_method = 'android.support.test.runner.AndroidJUnitRunner'
# Can be overridden by subclasses to adjust to the run time of specific
# benchmarks.
run_timeout = 4 * 60 # seconds
run_timeout = 10 * 60 # seconds
uninstall_uiauto_apk = True
def __init__(self, device, _call_super=True, **kwargs): # pylint: disable=W0613
if _call_super:
Workload.__init__(self, device, **kwargs)
self.uiauto_file = None
self.device_uiauto_file = None
self.command = None
self.uiauto_params = {}
self.uiauto_params = ParameterDict()
def init_resources(self, context):
self.uiauto_file = context.resolver.get(wlauto.common.android.resources.JarFile(self))
self.uiauto_file = context.resolver.get(ApkFile(self, uiauto=True))
if not self.uiauto_file:
raise ResourceError('No UI automation JAR file found for workload {}.'.format(self.name))
self.device_uiauto_file = self.device.path.join(self.device.working_directory,
os.path.basename(self.uiauto_file))
raise ResourceError('No UI automation APK file found for workload {}.'.format(self.name))
if not self.uiauto_package:
self.uiauto_package = os.path.splitext(os.path.basename(self.uiauto_file))[0]
def setup(self, context):
Workload.setup(self, context)
method_string = '{}.{}#{}'.format(self.uiauto_package, self.uiauto_class, self.uiauto_method)
params_dict = self.uiauto_params
params_dict['workdir'] = self.device.working_directory
params = ''
for k, v in self.uiauto_params.iteritems():
for k, v in self.uiauto_params.iter_encoded_items():
params += ' -e {} "{}"'.format(k, v)
self.command = 'uiautomator runtest {}{} -c {}'.format(self.device_uiauto_file, params, method_string)
self.device.push_file(self.uiauto_file, self.device_uiauto_file)
if self.device.package_is_installed(self.uiauto_package):
self.device.uninstall(self.uiauto_package)
self.device.install_apk(self.uiauto_file)
instrumention_string = 'am instrument -w -r {} -e class {}.{} {}/{}'
self.command = instrumention_string.format(params, self.uiauto_package,
self.uiauto_class, self.uiauto_package,
self.uiauto_method)
self.device.killall('uiautomator')
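The ``am instrument`` command assembled in ``setup`` above can be sketched as a standalone helper. The runner name matches the ``uiauto_method`` default shown in the diff; the package name used in the test is purely hypothetical.

```python
def build_instrument_command(params, package,
                             klass='UiAutomation',
                             runner='android.support.test.runner.AndroidJUnitRunner'):
    """Build the 'am instrument' invocation used to launch a uiauto APK.

    params is a mapping of extras, each passed as '-e key value'; '-w' waits
    for instrumentation to finish and '-r' requests raw result output.
    """
    extras = ''.join(' -e {} "{}"'.format(k, v) for k, v in sorted(params.items()))
    return 'am instrument -w -r{} -e class {}.{} {}/{}'.format(
        extras, package, klass, package, runner)
```

This replaces the old ``uiautomator runtest <jar>`` invocation: with APK-based automation the test class is resolved from the installed package rather than a pushed JAR file.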
def run(self, context):
@@ -116,11 +119,12 @@ class UiAutomatorWorkload(Workload):
pass
def teardown(self, context):
self.device.delete_file(self.device_uiauto_file)
if self.uninstall_uiauto_apk:
self.device.uninstall(self.uiauto_package)
def validate(self):
if not self.uiauto_file:
raise WorkloadError('No UI automation JAR file found for workload {}.'.format(self.name))
raise WorkloadError('No UI automation APK file found for workload {}.'.format(self.name))
if not self.uiauto_package:
raise WorkloadError('No UI automation package specified for workload {}.'.format(self.name))
@@ -191,6 +195,11 @@ class ApkWorkload(Workload):
If ``True``, workload will check that the APK matches the target
device ABI, otherwise any APK found will be used.
'''),
Parameter('clear_data_on_reset', kind=bool, default=True,
description="""
If set to ``False``, this will prevent WA from clearing package
data for this workload prior to running it.
"""),
]
def __init__(self, device, _call_super=True, **kwargs):
@@ -200,11 +209,15 @@ class ApkWorkload(Workload):
self.apk_version = None
self.logcat_log = None
self.exact_apk_version = None
self.exact_abi = kwargs.get('exact_abi')
def setup(self, context): # pylint: disable=too-many-branches
Workload.setup(self, context)
self.setup_workload_apk(context)
self.launch_application()
self.kill_background()
self.device.clear_logcat()
def setup_workload_apk(self, context):
# Get target version
target_version = self.device.get_installed_package_version(self.package)
if target_version:
@@ -212,7 +225,8 @@ class ApkWorkload(Workload):
self.logger.debug("Found version '{}' on target device".format(target_version))
# Get host version
self.apk_file = context.resolver.get(ApkFile(self, self.device.abi),
self.apk_file = context.resolver.get(ApkFile(self, self.device.abi,
package=getattr(self, 'package', None)),
version=getattr(self, 'version', None),
variant_name=getattr(self, 'variant_name', None),
strict=False)
@@ -224,7 +238,8 @@ class ApkWorkload(Workload):
# Get host version, primary abi is first, and then try to find supported.
for abi in self.device.supported_abi:
self.apk_file = context.resolver.get(ApkFile(self, abi),
self.apk_file = context.resolver.get(ApkFile(self, abi,
package=getattr(self, 'package', None)),
version=getattr(self, 'version', None),
variant_name=getattr(self, 'variant_name', None),
strict=False)
@@ -233,13 +248,30 @@ class ApkWorkload(Workload):
if self.apk_file or self.exact_abi:
break
host_version = self.check_host_version()
self.verify_apk_version(target_version, target_abi, host_version)
if self.force_install:
self.force_install_apk(context, host_version)
elif self.check_apk:
self.prefer_host_apk(context, host_version, target_version)
else:
self.prefer_target_apk(context, host_version, target_version)
self.reset(context)
self.apk_version = self.device.get_installed_package_version(self.package)
context.add_classifiers(apk_version=self.apk_version)
def check_host_version(self):
host_version = None
if self.apk_file is not None:
host_version = ApkInfo(self.apk_file).version_name
if host_version:
host_version = LooseVersion(host_version)
self.logger.debug("Found version '{}' on host".format(host_version))
return host_version
def verify_apk_version(self, target_version, target_abi, host_version):
# Error if apk was not found anywhere
if target_version is None and host_version is None:
msg = "Could not find APK for '{}' on the host or target device"
@@ -256,22 +288,12 @@ class ApkWorkload(Workload):
msg = "APK abi '{}' not found on the host and target is '{}'"
raise ResourceError(msg.format(self.device.abi, target_abi))
# Ensure the apk is setup on the device
if self.force_install:
self.force_install_apk(context, host_version)
elif self.check_apk:
self.prefer_host_apk(context, host_version, target_version)
else:
self.prefer_target_apk(context, host_version, target_version)
self.reset(context)
self.apk_version = self.device.get_installed_package_version(self.package)
context.add_classifiers(apk_version=self.apk_version)
def launch_application(self):
if self.launch_main:
self.launch_package() # launch default activity without intent data
def kill_background(self):
self.device.execute('am kill-all') # kill all *background* activities
self.device.clear_logcat()
def force_install_apk(self, context, host_version):
if host_version is None:
@@ -387,7 +409,8 @@ class ApkWorkload(Workload):
def reset(self, context): # pylint: disable=W0613
self.device.execute('am force-stop {}'.format(self.package))
self.device.execute('pm clear {}'.format(self.package))
if self.clear_data_on_reset:
self.device.execute('pm clear {}'.format(self.package))
# As of android API level 23, apps can request permissions at runtime,
# this will grant all of them so requests do not pop up when running the app
@@ -397,7 +420,7 @@ class ApkWorkload(Workload):
def install_apk(self, context, replace=False):
success = False
if replace:
if replace and self.device.package_is_installed(self.package):
self.device.uninstall(self.package)
output = self.device.install_apk(self.apk_file, timeout=self.install_timeout,
replace=replace, allow_downgrade=True)
@@ -432,16 +455,18 @@
# "Normal" Permissions are automatically granted and cannot be changed
permission_name = permission.rsplit('.', 1)[1]
if permission_name not in ANDROID_NORMAL_PERMISSIONS:
# On some API 23+ devices, this may fail with a SecurityException
# on previously granted permissions. In that case, just skip as it
# is not fatal to the workload execution
try:
self.device.execute("pm grant {} {}".format(self.package, permission))
except DeviceError as e:
if "changeable permission" in e.message or "Unknown permission" in e.message:
self.logger.debug(e)
else:
raise e
# Some permissions are not allowed to be "changed"
if permission_name not in ANDROID_UNCHANGEABLE_PERMISSIONS:
# On some API 23+ devices, this may fail with a SecurityException
# on previously granted permissions. In that case, just skip as it
# is not fatal to the workload execution
try:
self.device.execute("pm grant {} {}".format(self.package, permission))
except DeviceError as e:
if "changeable permission" in e.message or "Unknown permission" in e.message:
self.logger.debug(e)
else:
raise e
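The permission filtering above (skip "normal" permissions, which are auto-granted, and "unchangeable" ones, which ``pm grant`` rejects) can be sketched as a pure filter. The permission sets in the test are illustrative stand-ins for ``ANDROID_NORMAL_PERMISSIONS`` and ``ANDROID_UNCHANGEABLE_PERMISSIONS``.

```python
def permissions_to_grant(requested, normal, unchangeable):
    """Filter an APK's requested permissions down to those worth attempting
    to grant with 'pm grant', mirroring the two checks above.

    normal/unchangeable are sets of bare permission names (the part after
    the final '.'), as in the ANDROID_* constants used in the diff.
    """
    grantable = []
    for permission in requested:
        name = permission.rsplit('.', 1)[-1]
        if name not in normal and name not in unchangeable:
            grantable.append(permission)
    return grantable
```

Even after this filtering, the grant itself stays wrapped in a try/except, since some API 23+ devices still raise a ``SecurityException`` for previously granted permissions.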
def do_post_install(self, context):
""" May be overwritten by derived classes."""
@@ -463,102 +488,8 @@ class ApkWorkload(Workload):
if self.uninstall_apk:
self.device.uninstall(self.package)
AndroidBenchmark = ApkWorkload # backward compatibility
class ReventWorkload(Workload):
# pylint: disable=attribute-defined-outside-init
def __init__(self, device, _call_super=True, **kwargs):
if _call_super:
Workload.__init__(self, device, **kwargs)
devpath = self.device.path
self.on_device_revent_binary = devpath.join(self.device.binaries_directory, 'revent')
self.setup_timeout = kwargs.get('setup_timeout', None)
self.run_timeout = kwargs.get('run_timeout', None)
self.revent_setup_file = None
self.revent_run_file = None
self.on_device_setup_revent = None
self.on_device_run_revent = None
self.statedefs_dir = None
if self.check_states:
state_detector.check_match_state_dependencies()
def setup(self, context):
self.revent_setup_file = context.resolver.get(ReventFile(self, 'setup'))
self.revent_run_file = context.resolver.get(ReventFile(self, 'run'))
devpath = self.device.path
self.on_device_setup_revent = devpath.join(self.device.working_directory,
os.path.split(self.revent_setup_file)[-1])
self.on_device_run_revent = devpath.join(self.device.working_directory,
os.path.split(self.revent_run_file)[-1])
self._check_revent_files(context)
default_setup_timeout = ceil(ReventRecording(self.revent_setup_file).duration) + 30
default_run_timeout = ceil(ReventRecording(self.revent_run_file).duration) + 30
self.setup_timeout = self.setup_timeout or default_setup_timeout
self.run_timeout = self.run_timeout or default_run_timeout
Workload.setup(self, context)
self.device.killall('revent')
command = '{} replay {}'.format(self.on_device_revent_binary, self.on_device_setup_revent)
self.device.execute(command, timeout=self.setup_timeout)
def run(self, context):
command = '{} replay {}'.format(self.on_device_revent_binary, self.on_device_run_revent)
self.logger.debug('Replaying {}'.format(os.path.basename(self.on_device_run_revent)))
self.device.execute(command, timeout=self.run_timeout)
self.logger.debug('Replay completed.')
def update_result(self, context):
pass
def teardown(self, context):
self.device.killall('revent')
self.device.delete_file(self.on_device_setup_revent)
self.device.delete_file(self.on_device_run_revent)
def _check_revent_files(self, context):
# check the revent binary
revent_binary = context.resolver.get(Executable(NO_ONE, self.device.abi, 'revent'))
if not os.path.isfile(revent_binary):
message = '{} does not exist. '.format(revent_binary)
message += 'Please build revent for your system and place it in that location'
raise WorkloadError(message)
if not self.revent_setup_file:
# pylint: disable=too-few-format-args
message = '{0}.setup.revent file does not exist, Please provide one for your device, {0}'.format(self.device.name)
raise WorkloadError(message)
if not self.revent_run_file:
# pylint: disable=too-few-format-args
message = '{0}.run.revent file does not exist, Please provide one for your device, {0}'.format(self.device.name)
raise WorkloadError(message)
self.on_device_revent_binary = self.device.install_executable(revent_binary)
self.device.push_file(self.revent_run_file, self.on_device_run_revent)
self.device.push_file(self.revent_setup_file, self.on_device_setup_revent)
def _check_statedetection_files(self, context):
try:
self.statedefs_dir = context.resolver.get(File(self, 'state_definitions'))
except ResourceError:
self.logger.warning("State definitions directory not found. Disabling state detection.")
self.check_states = False
def check_state(self, context, phase):
try:
self.logger.info("\tChecking workload state...")
screenshotPath = os.path.join(context.output_directory, "screen.png")
self.device.capture_screen(screenshotPath)
stateCheck = state_detector.verify_state(screenshotPath, self.statedefs_dir, phase)
if not stateCheck:
raise WorkloadError("Unexpected state after setup")
except state_detector.StateDefinitionError as e:
msg = "State definitions or template files missing or invalid ({}). Skipping state detection."
self.logger.warning(msg.format(e.message))
class AndroidUiAutoBenchmark(UiAutomatorWorkload, AndroidBenchmark):
supported_platforms = ['android']
@@ -639,17 +570,27 @@ class AndroidUxPerfWorkload(AndroidUiAutoBenchmark):
return self.device.path.join(dirname, fname)
def push_assets(self, context):
pushed = False
file_list = []
for f in self.deployable_assets:
fpath = context.resolver.get(File(self, f))
device_path = self._path_on_device(fpath)
if self.force_push_assets or not self.device.file_exists(device_path):
self.device.push_file(fpath, device_path, timeout=300)
self.device.broadcast_media_mounted(self.device.working_directory)
file_list.append(device_path)
pushed = True
if pushed:
self.device.refresh_device_files(file_list)
def delete_assets(self):
for f in self.deployable_assets:
self.device.delete_file(self._path_on_device(f))
self.device.broadcast_media_mounted(self.device.working_directory)
if self.deployable_assets:
file_list = []
for f in self.deployable_assets:
f = self._path_on_device(f)
self.device.delete_file(f)
file_list.append(f)
self.device.delete_file(f)
self.device.refresh_device_files(file_list)
def __init__(self, device, **kwargs):
super(AndroidUxPerfWorkload, self).__init__(device, **kwargs)
@@ -658,7 +599,7 @@ class AndroidUxPerfWorkload(AndroidUiAutoBenchmark):
def validate(self):
super(AndroidUxPerfWorkload, self).validate()
self.uiauto_params['package'] = self.package
self.uiauto_params['package_name'] = self.package
self.uiauto_params['markers_enabled'] = self.markers_enabled
def setup(self, context):
@@ -703,6 +644,7 @@ class GameWorkload(ApkWorkload, ReventWorkload):
view = 'SurfaceView'
loading_time = 10
supported_platforms = ['android']
setup_required = True
parameters = [
Parameter('install_timeout', default=500, override=True),
@@ -711,16 +653,13 @@ class GameWorkload(ApkWorkload, ReventWorkload):
after setup and run"""),
Parameter('assets_push_timeout', kind=int, default=500,
description='Timeout used during deployment of the assets package (if there is one).'),
Parameter('clear_data_on_reset', kind=bool, default=True,
description="""
If set to ``False``, this will prevent WA from clearing package
data for this workload prior to running it.
"""),
]
def __init__(self, device, **kwargs): # pylint: disable=W0613
ApkWorkload.__init__(self, device, **kwargs)
ReventWorkload.__init__(self, device, _call_super=False, **kwargs)
if self.check_states:
state_detector.check_match_state_dependencies()
self.logcat_process = None
self.module_dir = os.path.dirname(sys.modules[self.__module__].__file__)
self.revent_dir = os.path.join(self.module_dir, 'revent_files')
@@ -795,3 +734,22 @@ class GameWorkload(ApkWorkload, ReventWorkload):
self.device.busybox,
ondevice_cache)
self.device.execute(deploy_command, timeout=timeout, as_root=True)
def _check_statedetection_files(self, context):
try:
self.statedefs_dir = context.resolver.get(File(self, 'state_definitions'))
except ResourceError:
self.logger.warning("State definitions directory not found. Disabling state detection.")
self.check_states = False # pylint: disable=W0201
def check_state(self, context, phase):
try:
self.logger.info("\tChecking workload state...")
screenshotPath = os.path.join(context.output_directory, "screen.png")
self.device.capture_screen(screenshotPath)
stateCheck = state_detector.verify_state(screenshotPath, self.statedefs_dir, phase)
if not stateCheck:
raise WorkloadError("Unexpected state after setup")
except state_detector.StateDefinitionError as e:
msg = "State definitions or template files missing or invalid ({}). Skipping state detection."
self.logger.warning(msg.format(e.message))


@@ -462,7 +462,7 @@ class BaseGem5Device(object):
# gem5 might be slow. Hence, we need to make the ping timeout very long.
def ping(self):
self.logger.debug("Pinging gem5 to see if it is still alive")
self.gem5_shell('ls /', timeout=self.longdelay)
self.gem5_shell('ls /', timeout=self.long_delay)
# Additional Android-specific methods.
def forward_port(self, _): # pylint: disable=R0201


@@ -17,6 +17,7 @@
import os
import re
import time
import base64
import socket
from collections import namedtuple
from subprocess import CalledProcessError
@@ -105,7 +106,7 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
runtime_parameters = [
RuntimeParameter('sysfile_values', 'get_sysfile_values', 'set_sysfile_values', value_name='params'),
CoreParameter('${core}_cores', 'get_number_of_online_cpus', 'set_number_of_online_cpus',
CoreParameter('${core}_cores', 'get_number_of_online_cores', 'set_number_of_online_cores',
value_name='number'),
CoreParameter('${core}_min_frequency', 'get_core_min_frequency', 'set_core_min_frequency',
value_name='freq'),
@@ -241,7 +242,7 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
self.logger.debug('Could not pull property file "{}"'.format(propfile))
return {}
def get_sysfile_value(self, sysfile, kind=None):
def get_sysfile_value(self, sysfile, kind=None, binary=False):
"""
Get the contents of the specified sysfile.
@@ -251,28 +252,49 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
be any Python callable that takes a single str argument.
If not specified or is None, the contents will be returned
as a string.
:param binary: Whether the contents should be read via base64 encoding
in order to handle binary data.
"""
output = self.execute('cat \'{}\''.format(sysfile), as_root=self.is_rooted).strip() # pylint: disable=E1103
if binary:
output = self.execute('{} base64 {}'.format(self.busybox, sysfile), as_root=self.is_rooted).strip()
output = output.decode('base64')
else:
output = self.execute('cat \'{}\''.format(sysfile), as_root=self.is_rooted).strip() # pylint: disable=E1103
if kind:
return kind(output)
else:
return output
def set_sysfile_value(self, sysfile, value, verify=True):
def set_sysfile_value(self, sysfile, value, verify=True, binary=False):
"""
Set the value of the specified sysfile. By default, the value will be checked afterwards.
Can be overridden by setting ``verify`` parameter to ``False``.
Can be overridden by setting the ``verify`` parameter to ``False``. By default, binary
values will not be written correctly; this can be changed by setting the ``binary``
parameter to ``True``.
"""
value = str(value)
self.execute('echo {} > \'{}\''.format(value, sysfile), check_exit_code=False, as_root=True)
if binary:
# Value is already string encoded, so need to decode before encoding in base64
try:
value = str(value.decode('string_escape'))
except ValueError as e:
msg = 'Can not interpret value "{}" for "{}": {}'
raise ValueError(msg.format(value, sysfile, e.message))
encoded_value = base64.b64encode(value)
cmd = 'echo {} | {} base64 -d > \'{}\''.format(encoded_value, self.busybox, sysfile)
else:
cmd = 'echo {} > \'{}\''.format(value, sysfile)
self.execute(cmd, check_exit_code=False, as_root=True)
if verify:
output = self.get_sysfile_value(sysfile)
output = self.get_sysfile_value(sysfile, binary=binary)
if output.strip() != value: # pylint: disable=E1103
message = 'Could not set the value of {} to {}'.format(sysfile, value)
raise DeviceError(message)
self._written_sysfiles.append(sysfile)
self._written_sysfiles.append((sysfile, binary))
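The base64 round-trip used above for binary sysfs values can be illustrated host-side. This is a Python 3 sketch (the diffed code is Python 2, hence ``string_escape``); the on-device halves of the pipeline are ``echo <encoded> | busybox base64 -d > <sysfile>`` for writes and ``busybox base64 <sysfile>`` for reads.

```python
import base64

def encode_for_sysfs_write(value):
    """Encode raw bytes as base64 text that is safe to embed in a shell
    'echo' command (no NULs, quotes, or newlines in the payload)."""
    return base64.b64encode(value).decode('ascii')

def decode_sysfs_read(encoded):
    """Reverse step applied to the output of 'busybox base64 <sysfile>'."""
    return base64.b64decode(encoded)
```

Going through base64 on both sides is what makes verification work for binary values: the read-back comparison happens on decoded bytes rather than whatever a plain ``cat`` would mangle.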
def get_sysfile_values(self):
"""
@@ -281,21 +303,24 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
"""
values = {}
for sysfile in self._written_sysfiles:
values[sysfile] = self.get_sysfile_value(sysfile)
for sysfile, binary in self._written_sysfiles:
values[sysfile] = self.get_sysfile_value(sysfile, binary=binary)
return values
def set_sysfile_values(self, params):
"""
The plural version of ``set_sysfile_value``. Takes a single parameter which is a mapping of
file paths to values to be set. By default, every value written will be verified. The can
be disabled for individual paths by appending ``'!'`` to them.
file paths to values to be set. By default, every value written will be verified. This can
be disabled for individual paths by appending ``'!'`` to them. To enable values being
written as binary data, a ``'^'`` can be prefixed to the path.
"""
for sysfile, value in params.iteritems():
verify = not sysfile.endswith('!')
sysfile = sysfile.rstrip('!')
self.set_sysfile_value(sysfile, value, verify=verify)
binary = sysfile.startswith('^')
sysfile = sysfile.lstrip('^')
self.set_sysfile_value(sysfile, value, verify=verify, binary=binary)
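The path-marker parsing above (a trailing ``'!'`` disables verification, a leading ``'^'`` marks the value as binary) can be factored into a small helper; this is a sketch mirroring the strip order used in the diff.

```python
def parse_sysfile_key(key):
    """Split a sysfile-values key into (path, verify, binary).

    A trailing '!' disables post-write verification; a leading '^' marks
    the value as binary data.
    """
    verify = not key.endswith('!')
    key = key.rstrip('!')
    binary = key.startswith('^')
    return key.lstrip('^'), verify, binary
```

Both markers compose, so ``'^/sys/.../data!'`` requests an unverified binary write.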
def deploy_busybox(self, context, force=False):
"""
@@ -452,20 +477,6 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
else:
raise ValueError(c)
def get_number_of_online_cpus(self, c):
return len(self.get_online_cpus(c))
def set_number_of_online_cpus(self, core, number):
core_ids = [i for i, c in enumerate(self.core_names) if c == core]
max_cores = len(core_ids)
if number > max_cores:
message = 'Attempting to set the number of active {} to {}; maximum is {}'
raise ValueError(message.format(core, number, max_cores))
for i in xrange(0, number):
self.enable_cpu(core_ids[i])
for i in xrange(number, max_cores):
self.disable_cpu(core_ids[i])
# hotplug
def enable_cpu(self, cpu):
@@ -504,17 +515,17 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
sysfile = '/sys/devices/system/cpu/{}/online'.format(cpu)
self.set_sysfile_value(sysfile, status)
def get_number_of_active_cores(self, core):
def get_number_of_online_cores(self, core):
if core not in self.core_names:
raise ValueError('Unexpected core: {}; must be in {}'.format(core, list(set(self.core_names))))
active_cpus = self.active_cpus
online_cpus = self.online_cpus
num_active_cores = 0
for i, c in enumerate(self.core_names):
if c == core and i in active_cpus:
if c == core and i in online_cpus:
num_active_cores += 1
return num_active_cores
def set_number_of_active_cores(self, core, number): # NOQA
def set_number_of_online_cores(self, core, number): # NOQA
if core not in self.core_names:
raise ValueError('Unexpected core: {}; must be in {}'.format(core, list(set(self.core_names))))
core_ids = [i for i, c in enumerate(self.core_names) if c == core]
@@ -527,8 +538,7 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
# make sure at least one other core is enabled to avoid trying to
# hotplug everything.
for i, c in enumerate(self.core_names):
if c != core:
self.enable_cpu(i)
if c != core and i in self.online_cpus:
break
else: # did not find one
raise ValueError('Cannot hotplug all cpus on the device!')
@@ -580,7 +590,8 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
def get_device_model(self):
if self.file_exists("/proc/device-tree/model"):
raw_model = self.execute("cat /proc/device-tree/model")
return '_'.join(raw_model.split()[:2])
device_model_to_return = '_'.join(raw_model.split()[:2])
return device_model_to_return.rstrip(' \t\r\n\0')
# Right now we don't know any other way to get device model
# info in linux on arm platforms
return None
@@ -589,7 +600,7 @@ class BaseLinuxDevice(Device): # pylint: disable=abstract-method
def _check_ready(self):
if not self._is_ready:
raise AttributeError('Device not ready.')
raise RuntimeError('Device not ready (has connect() been called?)')
def _get_core_cluster(self, core):
"""Returns the first cluster that has cores of the specified type. Raises


@@ -0,0 +1,165 @@
# Copyright 2017 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
from math import ceil
from wlauto.core.extension import Parameter
from wlauto.core.workload import Workload
from wlauto.core.resource import NO_ONE
from wlauto.common.resources import Executable
from wlauto.common.android.resources import ReventFile
from wlauto.exceptions import WorkloadError
from wlauto.utils.revent import ReventRecording
class ReventWorkload(Workload):
# pylint: disable=attribute-defined-outside-init
description = """
A workload for playing back revent recordings. You can supply three
different files:
1. {device_model}.setup.revent
2. {device_model}.run.revent
3. {device_model}.teardown.revent
You may generate these files with the ``wa record`` command, using the -s flag
to specify the stage (``setup``, ``run``, ``teardown``).
You may also supply an ``idle_time`` in seconds in place of the run file.
The ``run`` file may only be omitted when idling in this way; ``setup`` and
``teardown`` files may still be supplied alongside it.
To use a ``setup`` or ``teardown`` file set the setup_required and/or
teardown_required class attributes to True (default: False).
N.B. This is the default description. You may override it in your
workload to include more specific information.
"""
setup_required = False
teardown_required = False
parameters = [
Parameter(
'idle_time', kind=int, default=None,
description='''
The time you wish the device to remain idle for (if a value is
given then this overrides any .run revent file).
'''),
]
def __init__(self, device, _call_super=True, **kwargs):
if _call_super:
Workload.__init__(self, device, **kwargs)
self.setup_timeout = kwargs.get('setup_timeout', None)
self.run_timeout = kwargs.get('run_timeout', None)
self.teardown_timeout = kwargs.get('teardown_timeout', None)
self.revent_setup_file = None
self.revent_run_file = None
self.revent_teardown_file = None
self.on_device_setup_revent = None
self.on_device_run_revent = None
self.on_device_teardown_revent = None
self.statedefs_dir = None
def initialize(self, context):
devpath = self.device.path
self.on_device_revent_binary = devpath.join(self.device.binaries_directory, 'revent')
def setup(self, context):
devpath = self.device.path
if self.setup_required:
self.revent_setup_file = context.resolver.get(ReventFile(self, 'setup'))
if self.revent_setup_file:
self.on_device_setup_revent = devpath.join(self.device.working_directory,
os.path.split(self.revent_setup_file)[-1])
duration = ReventRecording(self.revent_setup_file).duration
self.default_setup_timeout = ceil(duration) + 30
if not self.idle_time:
self.revent_run_file = context.resolver.get(ReventFile(self, 'run'))
if self.revent_run_file:
self.on_device_run_revent = devpath.join(self.device.working_directory,
os.path.split(self.revent_run_file)[-1])
self.default_run_timeout = ceil(ReventRecording(self.revent_run_file).duration) + 30
if self.teardown_required:
self.revent_teardown_file = context.resolver.get(ReventFile(self, 'teardown'))
if self.revent_teardown_file:
self.on_device_teardown_revent = devpath.join(self.device.working_directory,
os.path.split(self.revent_teardown_file)[-1])
duration = ReventRecording(self.revent_teardown_file).duration
self.default_teardown_timeout = ceil(duration) + 30
self._check_revent_files(context)
Workload.setup(self, context)
if self.revent_setup_file is not None:
self.setup_timeout = self.setup_timeout or self.default_setup_timeout
self.device.killall('revent')
command = '{} replay {}'.format(self.on_device_revent_binary, self.on_device_setup_revent)
self.device.execute(command, timeout=self.setup_timeout)
self.logger.debug('Revent setup completed.')
def run(self, context):
if not self.idle_time:
self.run_timeout = self.run_timeout or self.default_run_timeout
command = '{} replay {}'.format(self.on_device_revent_binary, self.on_device_run_revent)
self.logger.debug('Replaying {}'.format(os.path.basename(self.on_device_run_revent)))
self.device.execute(command, timeout=self.run_timeout)
self.logger.debug('Replay completed.')
else:
self.logger.info('Idling for ' + str(self.idle_time) + ' seconds.')
self.device.sleep(self.idle_time)
self.logger.info('Successfully did nothing for ' + str(self.idle_time) + ' seconds!')
def update_result(self, context):
pass
def teardown(self, context):
if self.revent_teardown_file is not None:
self.teardown_timeout = self.teardown_timeout or self.default_teardown_timeout
command = '{} replay {}'.format(self.on_device_revent_binary,
self.on_device_teardown_revent)
self.device.execute(command, timeout=self.teardown_timeout)
self.logger.debug('Replay completed.')
self.device.killall('revent')
if self.revent_setup_file is not None:
self.device.delete_file(self.on_device_setup_revent)
if not self.idle_time:
self.device.delete_file(self.on_device_run_revent)
if self.revent_teardown_file is not None:
self.device.delete_file(self.on_device_teardown_revent)
def _check_revent_files(self, context):
# check the revent binary
revent_binary = context.resolver.get(Executable(NO_ONE, self.device.abi, 'revent'))
if not os.path.isfile(revent_binary):
message = '{} does not exist. '.format(revent_binary)
message += 'Please build revent for your system and place it in that location'
raise WorkloadError(message)
if not self.revent_run_file and not self.idle_time:
# pylint: disable=too-few-format-args
message = 'It seems a {0}.run.revent file does not exist. '\
'Please provide one for your device: {0}.'
raise WorkloadError(message.format(self.device.name))
self.on_device_revent_binary = self.device.install_executable(revent_binary)
if self.revent_setup_file is not None:
self.device.push_file(self.revent_setup_file, self.on_device_setup_revent)
if not self.idle_time:
self.device.push_file(self.revent_run_file, self.on_device_run_revent)
if self.revent_teardown_file is not None:
self.device.push_file(self.revent_teardown_file, self.on_device_teardown_revent)


@@ -61,7 +61,8 @@ clean_up = False
######################################### Device Settings ##########################################
####################################################################################################
# Specify the device you want to run workload automation on. This must be a #
# string with the ID of the device. At the moment, only 'TC2' is supported. #
# string with the ID of the device. Common options are 'generic_android' and #
# 'generic_linux'. Run ``wa list devices`` to see all available options. #
# #
device = 'generic_android'
@@ -87,7 +88,7 @@ device_config = dict(
####################################################################################################
################################### Instrumention Configuration ####################################
################################### Instrumentation Configuration ####################################
####################################################################################################
# This defines the additional instrumentation that will be enabled during workload execution, #
# which in turn determines what additional data (such as /proc/interrupts content or Streamline #
@@ -192,7 +193,7 @@ logging = {
####################################################################################################
#################################### Instruments Configuration #####################################
####################################################################################################
# Instrumention Configuration is related to specific instrument's settings. Some of the #
# Instrumentation Configuration is related to specific instrument's settings. Some of the #
# instrumentations require specific settings in order for them to work. These settings are #
# specified here. #
# Note that these settings only take effect if the corresponding instrument is


@@ -152,7 +152,8 @@ def init_environment(env_root, dep_dir, extension_paths, overwrite_existing=Fals
os.makedirs(env_root)
with open(os.path.join(_this_dir, '..', 'config_example.py')) as rf:
text = re.sub(r'""".*?"""', '', rf.read(), 1, re.DOTALL)
with open(os.path.join(_env_root, 'config.py'), 'w') as wf:
config_path = os.path.join(env_root, 'config.py')
with open(config_path, 'w') as wf:
wf.write(text)
os.makedirs(dep_dir)
@@ -173,9 +174,11 @@ def init_environment(env_root, dep_dir, extension_paths, overwrite_existing=Fals
os.chown(os.path.join(root, d), uid, gid)
for f in files: # pylint: disable=W0621
os.chown(os.path.join(root, f), uid, gid)
return config_path
_env_root = os.getenv('WA_USER_DIRECTORY', os.path.join(_user_home, '.workload_automation'))
_env_root = os.path.abspath(_env_root)
_dep_dir = os.path.join(_env_root, 'dependencies')
_extension_paths = [os.path.join(_env_root, ext.default_path) for ext in _extensions]
_env_var_paths = os.getenv('WA_EXTENSION_PATHS', '')
@@ -189,7 +192,8 @@ for filename in ['config.py', 'config.yaml']:
_env_configs.append(filepath)
if not os.path.isdir(_env_root):
init_environment(_env_root, _dep_dir, _extension_paths)
cfg_path = init_environment(_env_root, _dep_dir, _extension_paths)
_env_configs.append(cfg_path)
elif not _env_configs:
filepath = os.path.join(_env_root, 'config.py')
with open(os.path.join(_this_dir, '..', 'config_example.py')) as f:


@@ -381,7 +381,20 @@ class Device(Extension):
"""
raise NotImplementedError()
def set_sysfile_value(self, filepath, value, verify=True):
def sleep(self, seconds):
"""Sleep for the specified time on the target device.
:param seconds: Time in seconds to sleep on the device
The sleep is executed on the device using self.execute(). We
set the timeout for this command to be 10 seconds longer than
the sleep itself to make sure the command has time to complete
before we time out.
"""
self.execute("sleep {}".format(seconds), timeout=seconds + 10)
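The docstring above explains the timeout margin: the host-side timeout is set 10 seconds longer than the on-device sleep so the command can finish before the host gives up. A minimal sketch of that pattern, assuming only an object exposing ``execute(command, timeout=...)`` — ``FakeDevice`` and ``device_sleep`` are illustrative names, not part of the WA API:

```python
def device_sleep(device, seconds, margin=10):
    """Sleep on the device, allowing `margin` extra seconds for the
    command itself to complete before the host-side timeout fires."""
    device.execute("sleep {}".format(seconds), timeout=seconds + margin)

class FakeDevice:
    # Records the command and timeout instead of talking to hardware.
    def execute(self, command, timeout=None):
        self.last = (command, timeout)

dev = FakeDevice()
device_sleep(dev, 5)
print(dev.last)  # ('sleep 5', 15)
```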
def set_sysfile_value(self, filepath, value, verify=True, binary=False):
"""
Write the specified value to the specified file on the device
and verify that the value has actually been written.
@@ -390,13 +403,14 @@ class Device(Extension):
:param value: The value to be written to the file. Must be
an int or a string convertible to an int.
:param verify: Specifies whether the value should be verified, once written.
:param binary: Specifies whether the value should be written as binary data.
Should raise DeviceError if the value could not be written.
"""
raise NotImplementedError()
def get_sysfile_value(self, sysfile, kind=None):
def get_sysfile_value(self, sysfile, kind=None, binary=False):
"""
Get the contents of the specified sysfile.
@@ -406,6 +420,8 @@ class Device(Extension):
be any Python callable that takes a single str argument.
If not specified or is None, the contents will be returned
as a string.
:param binary: Whether the contents should be base64-encoded when read,
so that binary data survives the shell round trip.
"""
raise NotImplementedError()
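The new ``binary`` flag suggests sysfs contents are shuttled through the shell as base64 text so NUL bytes and other non-printable characters survive the round trip. A minimal sketch of that idea only — the helper names are hypothetical and this makes no claim about WA's actual implementation:

```python
import base64

def encode_for_shell(value):
    # Binary payload -> ASCII-safe text that any shell can carry.
    return base64.b64encode(value).decode('ascii')

def decode_from_shell(text):
    # Reverse the encoding on the host side.
    return base64.b64decode(text)

payload = b'\x00\x01\xffdata'
wire = encode_for_shell(payload)
assert decode_from_shell(wire) == payload  # lossless round trip
```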


@@ -14,10 +14,12 @@
#
from __future__ import absolute_import
import sys
import argparse
import logging
import os
import signal
import subprocess
import warnings
@@ -41,6 +43,9 @@ def load_commands(subparsers):
for command in ext_loader.list_commands():
settings.commands[command.name] = ext_loader.get_command(command.name, subparsers=subparsers)
def convert_TERM_into_INT_handler(signal, frame):
logger.critical("TERM received, aborting")
raise KeyboardInterrupt()
def main():
try:
@@ -62,6 +67,7 @@ def main():
settings.update(args.config)
init_logging(settings.verbosity)
signal.signal(signal.SIGTERM, convert_TERM_into_INT_handler)
command = settings.commands[args.command]
sys.exit(command.execute(args))
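The handler above converts SIGTERM into a KeyboardInterrupt so that the existing Ctrl-C cleanup path also runs when the process is terminated by ``kill``. A self-contained sketch of the same pattern:

```python
import os
import signal

def term_to_int_handler(signum, frame):
    # Re-raise as KeyboardInterrupt so normal interrupt handling applies.
    raise KeyboardInterrupt()

signal.signal(signal.SIGTERM, term_to_int_handler)

caught = False
try:
    # Deliver SIGTERM to ourselves; the handler raises in the main thread.
    os.kill(os.getpid(), signal.SIGTERM)
except KeyboardInterrupt:
    caught = True
print(caught)  # True
```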


@@ -642,7 +642,7 @@ class Runner(object):
job.iteration = self.context.current_iteration
if job.result.status in self.config.retry_on_status:
if job.retry >= self.config.max_retries:
self.logger.error('Exceeded maxium number of retries. Abandoning job.')
self.logger.error('Exceeded maximum number of retries. Abandoning job.')
else:
self.logger.info('Job status was {}. Retrying...'.format(job.result.status))
retry_job = RunnerJob(job.spec, job.retry + 1)


@@ -38,8 +38,8 @@ class GetterPriority(object):
"""
cached = 20
preferred = 10
remote = 5
environment = 0
remote = -4
external_package = -5
package = -10


@@ -18,7 +18,7 @@ from collections import namedtuple
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision'])
version = VersionTuple(2, 6, 0)
version = VersionTuple(2, 7, 0)
def get_wa_version():


@@ -0,0 +1,14 @@
from wlauto import AndroidDevice
class MeizuMX6(AndroidDevice):
name = 'meizumx6'
@property
def is_rooted(self):
# The "su" executable on a rooted Meizu MX6 is targeted
# specifically at Android applications and cannot
# be used to execute a command line shell. This makes it
# "broken" from WA's perspective.
return False


@@ -183,7 +183,7 @@ off_t get_file_size(const char *filename) {
}
int get_device_info(int fd, device_info_t *info) {
bzero(info, sizeof(device_info_t));
memset(info, 0, sizeof(device_info_t));
if (ioctl(fd, EVIOCGID, &info->id) < 0)
return errno;
@@ -226,31 +226,31 @@ void destroy_replay_device(int fd)
die("Could not destroy replay device");
}
inline void set_evbit(int fd, int bit)
static inline void set_evbit(int fd, int bit)
{
if(ioctl(fd, UI_SET_EVBIT, bit) < 0)
die("Could not set EVBIT %i", bit);
}
inline void set_keybit(int fd, int bit)
static inline void set_keybit(int fd, int bit)
{
if(ioctl(fd, UI_SET_KEYBIT, bit) < 0)
die("Could not set KEYBIT %i", bit);
}
inline void set_absbit(int fd, int bit)
static inline void set_absbit(int fd, int bit)
{
if(ioctl(fd, UI_SET_ABSBIT, bit) < 0)
die("Could not set ABSBIT %i", bit);
}
inline void set_relbit(int fd, int bit)
static inline void set_relbit(int fd, int bit)
{
if(ioctl(fd, UI_SET_RELBIT, bit) < 0)
die("Could not set RELBIT %i", bit);
}
inline void block_sigterm(sigset_t *oldset)
static inline void block_sigterm(sigset_t *oldset)
{
sigset_t sigset;
sigemptyset(&sigset);
@@ -295,7 +295,7 @@ int write_record_header(int fd, const revent_record_desc_t *desc)
if (ret < sizeof(uint16_t))
return errno;
bzero(padding, HEADER_PADDING_SIZE);
memset(padding, 0, HEADER_PADDING_SIZE);
ret = write(fd, padding, HEADER_PADDING_SIZE);
if (ret < HEADER_PADDING_SIZE)
return errno;
@@ -711,7 +711,7 @@ int init_general_input_devices(input_devices_t *devices)
uint32_t num, i, path_len;
char paths[INPDEV_MAX_DEVICES][INPDEV_MAX_PATH];
int fds[INPDEV_MAX_DEVICES];
int max_fd;
int max_fd = 0;
num = 0;
for(i = 0; i < INPDEV_MAX_DEVICES; ++i) {
@@ -941,7 +941,7 @@ int create_replay_device_or_die(const device_info_t *info)
return fd;
}
inline void read_revent_recording_or_die(const char *filepath, revent_recording_t *recording)
static inline void read_revent_recording_or_die(const char *filepath, revent_recording_t *recording)
{
int ret;
FILE *fin;
@@ -1083,7 +1083,7 @@ void record(const char *filepath, int delay, recording_mode_t mode)
die("Could not initialise event count: %s", strerror(errno));
char padding[EVENT_PADDING_SIZE];
bzero(padding, EVENT_PADDING_SIZE);
memset(padding, 0, EVENT_PADDING_SIZE);
fd_set readfds;
struct timespec tout;
@@ -1099,8 +1099,7 @@ void record(const char *filepath, int delay, recording_mode_t mode)
while(1)
{
FD_ZERO(&readfds);
if (wait_for_stdin)
FD_SET(STDIN_FILENO, &readfds);
FD_SET(STDIN_FILENO, &readfds);
for (i=0; i < devices.num; i++)
FD_SET(devices.fds[i], &readfds);
@@ -1241,7 +1240,7 @@ void replay(const char *filepath)
adjust_event_times(&recording);
struct timeval start_time, now, desired_time, last_event_delta, delta;
bzero(&last_event_delta, sizeof(struct timeval));
memset(&last_event_delta, 0, sizeof(struct timeval));
gettimeofday(&start_time, NULL);
int ret;
@@ -1265,13 +1264,16 @@ void replay(const char *filepath)
int32_t idx = (recording.events[i]).dev_idx;
struct input_event ev = (recording.events[i]).event;
while((i < recording.num_events) && !timercmp(&ev.time, &last_event_delta, !=)) {
while(!timercmp(&ev.time, &last_event_delta, !=)) {
ret = write(recording.devices.fds[idx], &ev, sizeof(ev));
if (ret != sizeof(ev))
die("Could not replay event");
dprintf("replayed event: type %d code %d value %d\n", ev.type, ev.code, ev.value);
i++;
if (i >= recording.num_events) {
break;
}
idx = recording.events[i].dev_idx;
ev = recording.events[i].event;
}
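The change above adds a bounds check so the inner loop — which replays every event sharing a timestamp with the current one — stops cleanly at the end of the recording instead of reading past it. The same logic sketched in Python (function and variable names are illustrative):

```python
def burst_indices(times, start):
    """Return indices of the run of equal timestamps starting at `start`,
    never reading past the end of the list."""
    out = [start]
    i = start + 1
    while i < len(times) and times[i] == times[start]:
        out.append(i)
        i += 1
    return out

print(burst_indices([0, 5, 5, 5, 9], 1))  # [1, 2, 3]
print(burst_indices([0, 5, 5], 1))        # [1, 2] -- burst ends at list end
```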

wlauto/external/uiauto/app/build.gradle

@@ -0,0 +1,18 @@
apply plugin: 'com.android.library'
android {
compileSdkVersion 25
buildToolsVersion '25.0.3'
defaultConfig {
minSdkVersion 18
targetSdkVersion 25
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
}


@@ -0,0 +1,9 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.arm.wlauto.uiauto">
<uses-permission android:name="android.permission.READ_LOGS"/>
<application>
<uses-library android:name="android.test.runner"/>
</application>
</manifest>


@@ -0,0 +1,58 @@
/* Copyright 2013-2016 ARM Limited
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.arm.wlauto.uiauto;
import android.os.Bundle;
import android.support.test.uiautomator.UiObject;
// Import the uiautomator libraries
/**
* ApplaunchInterface.java
* Interface used for enabling uxperfapplaunch workload.
* This interface gets implemented by all workloads that support application launch
* instrumentation.
*/
public interface ApplaunchInterface {
/**
* Sets the launchEndObject of a workload, which is a UiObject that marks
* the end of the application launch.
*/
public UiObject getLaunchEndObject();
/**
* Runs the Uiautomation methods for clearing the initial run
* dialogues on the first time installation of an application package.
*/
public void runApplicationInitialization() throws Exception;
/**
* Provides the application launch command of the application which is
* constructed as a string from the workload.
*/
public String getLaunchCommand();
/** Passes the workload parameters. */
public void setWorkloadParameters(Bundle parameters);
/** Initialize the instrumentation for the workload */
public void initialize_instrumentation();
}


@@ -1,4 +1,4 @@
/* Copyright 2013-2015 ARM Limited
/* Copyright 2013-2016 ARM Limited
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -13,38 +13,40 @@
* limitations under the License.
*/
package com.arm.wlauto.uiauto;
import java.io.File;
import android.app.Instrumentation;
import android.content.Context;
import android.support.test.InstrumentationRegistry;
import android.support.test.uiautomator.UiDevice;
import android.support.test.uiautomator.UiObject;
import android.support.test.uiautomator.UiSelector;
import android.graphics.Point;
import android.graphics.Rect;
import android.os.Bundle;
import android.os.SystemClock;
import android.support.test.uiautomator.UiObjectNotFoundException;
import android.support.test.uiautomator.UiScrollable;
import android.support.test.uiautomator.UiWatcher;
import android.util.Log;
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.TimeUnit;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import android.app.Activity;
import android.os.Bundle;
import android.os.SystemClock;
import android.graphics.Point;
import android.graphics.Rect;
import android.util.Log;
import static android.support.test.InstrumentationRegistry.getArguments;
// Import the uiautomator libraries
import com.android.uiautomator.core.UiObject;
import com.android.uiautomator.core.UiObjectNotFoundException;
import com.android.uiautomator.core.UiScrollable;
import com.android.uiautomator.core.UiSelector;
import com.android.uiautomator.core.UiDevice;
import com.android.uiautomator.core.UiWatcher;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;
public class BaseUiAutomation {
public class BaseUiAutomation extends UiAutomatorTestCase {
public long uiAutoTimeout = TimeUnit.SECONDS.toMillis(4);
// Time in milliseconds
public long uiAutoTimeout = 4000;
public enum ScreenOrientation { RIGHT, NATURAL, LEFT };
public enum Direction { UP, DOWN, LEFT, RIGHT, NULL };
@@ -53,6 +55,18 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
public static final int CLICK_REPEAT_INTERVAL_MINIMUM = 5;
public static final int CLICK_REPEAT_INTERVAL_DEFAULT = 50;
public Bundle parameters;
public Instrumentation mInstrumentation;
public Context mContext;
public UiDevice mDevice;
public void initialize_instrumentation(){
mInstrumentation = InstrumentationRegistry.getInstrumentation();
mDevice = UiDevice.getInstance(mInstrumentation);
mContext = mInstrumentation.getTargetContext();
}
/*
* Used by clickUiObject() methods in order to provide a consistent API
*/
@@ -84,7 +98,7 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
public ActionLogger(String testTag, Bundle parameters) {
this.testTag = testTag;
this.enabled = Boolean.parseBoolean(parameters.getString("markers_enabled"));
this.enabled = parameters.getBoolean("markers_enabled");
}
public void start() {
@@ -101,7 +115,7 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void sleep(int second) {
super.sleep(second * 1000);
SystemClock.sleep(second * 1000);
}
public boolean takeScreenshot(String name) {
@@ -109,7 +123,7 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
String pngDir = params.getString("workdir");
try {
return getUiDevice().takeScreenshot(new File(pngDir, name + ".png"));
return mDevice.takeScreenshot(new File(pngDir, name + ".png"));
} catch (NoSuchMethodError e) {
return true;
}
@@ -121,8 +135,8 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
public void waitText(String text, int second) throws UiObjectNotFoundException {
UiSelector selector = new UiSelector();
UiObject textObj = new UiObject(selector.text(text)
.className("android.widget.TextView"));
UiObject textObj = mDevice.findObject(selector.text(text)
.className("android.widget.TextView"));
waitObject(textObj, second);
}
@@ -213,47 +227,51 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void registerWatcher(String name, UiWatcher watcher) {
UiDevice.getInstance().registerWatcher(name, watcher);
mDevice.registerWatcher(name, watcher);
}
public void runWatchers() {
UiDevice.getInstance().runWatchers();
mDevice.runWatchers();
}
public void removeWatcher(String name) {
UiDevice.getInstance().removeWatcher(name);
mDevice.removeWatcher(name);
}
public void pressEnter() {
UiDevice.getInstance().pressEnter();
mDevice.pressEnter();
}
public void pressHome() {
mDevice.pressHome();
}
public void pressBack() {
UiDevice.getInstance().pressBack();
mDevice.pressBack();
}
public void pressDPadUp() {
UiDevice.getInstance().pressDPadUp();
mDevice.pressDPadUp();
}
public void pressDPadDown() {
UiDevice.getInstance().pressDPadDown();
mDevice.pressDPadDown();
}
public void pressDPadLeft() {
UiDevice.getInstance().pressDPadLeft();
mDevice.pressDPadLeft();
}
public void pressDPadRight() {
UiDevice.getInstance().pressDPadRight();
mDevice.pressDPadRight();
}
public int getDisplayHeight() {
return UiDevice.getInstance().getDisplayHeight();
return mDevice.getDisplayHeight();
}
public int getDisplayWidth() {
return UiDevice.getInstance().getDisplayWidth();
return mDevice.getDisplayWidth();
}
public int getDisplayCentreWidth() {
@@ -269,11 +287,11 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void tapDisplay(int x, int y) {
UiDevice.getInstance().click(x, y);
mDevice.click(x, y);
}
public void uiDeviceSwipeUp(int steps) {
UiDevice.getInstance().swipe(
mDevice.swipe(
getDisplayCentreWidth(),
(getDisplayCentreHeight() + (getDisplayCentreHeight() / 2)),
getDisplayCentreWidth(),
@@ -282,7 +300,7 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void uiDeviceSwipeDown(int steps) {
UiDevice.getInstance().swipe(
mDevice.swipe(
getDisplayCentreWidth(),
(getDisplayCentreHeight() / 2),
getDisplayCentreWidth(),
@@ -291,7 +309,7 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void uiDeviceSwipeLeft(int steps) {
UiDevice.getInstance().swipe(
mDevice.swipe(
(getDisplayCentreWidth() + (getDisplayCentreWidth() / 2)),
getDisplayCentreHeight(),
(getDisplayCentreWidth() / 2),
@@ -300,7 +318,7 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void uiDeviceSwipeRight(int steps) {
UiDevice.getInstance().swipe(
mDevice.swipe(
(getDisplayCentreWidth() / 2),
getDisplayCentreHeight(),
(getDisplayCentreWidth() + (getDisplayCentreWidth() / 2)),
@@ -405,13 +423,13 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
public void setScreenOrientation(ScreenOrientation orientation) throws Exception {
switch (orientation) {
case RIGHT:
getUiDevice().setOrientationRight();
mDevice.setOrientationRight();
break;
case NATURAL:
getUiDevice().setOrientationNatural();
mDevice.setOrientationNatural();
break;
case LEFT:
getUiDevice().setOrientationLeft();
mDevice.setOrientationLeft();
break;
default:
throw new Exception("No orientation specified");
@@ -419,25 +437,25 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void unsetScreenOrientation() throws Exception {
getUiDevice().unfreezeRotation();
mDevice.unfreezeRotation();
}
public void uiObjectPerformLongClick(UiObject view, int steps) throws Exception {
public void uiObjectPerformLongClick(UiObject view, int steps) throws Exception {
Rect rect = view.getBounds();
UiDevice.getInstance().swipe(rect.centerX(), rect.centerY(),
rect.centerX(), rect.centerY(), steps);
mDevice.swipe(rect.centerX(), rect.centerY(),
rect.centerX(), rect.centerY(), steps);
}
public void uiDeviceSwipeVertical(int startY, int endY, int xCoordinate, int steps) {
getUiDevice().swipe(startY, xCoordinate, endY, xCoordinate, steps);
mDevice.swipe(startY, xCoordinate, endY, xCoordinate, steps);
}
public void uiDeviceSwipeHorizontal(int startX, int endX, int yCoordinate, int steps) {
getUiDevice().swipe(startX, yCoordinate, endX, yCoordinate, steps);
mDevice.swipe(startX, yCoordinate, endX, yCoordinate, steps);
}
public void uiObjectPinch(UiObject view, PinchType direction, int steps,
int percent) throws Exception {
int percent) throws Exception {
if (direction.equals(PinchType.IN)) {
view.pinchIn(percent, steps);
} else if (direction.equals(PinchType.OUT)) {
@@ -446,7 +464,7 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public void uiObjectVertPinch(UiObject view, PinchType direction,
int steps, int percent) throws Exception {
int steps, int percent) throws Exception {
if (direction.equals(PinchType.IN)) {
uiObjectVertPinchIn(view, steps, percent);
} else if (direction.equals(PinchType.OUT)) {
@@ -511,20 +529,20 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public UiObject getUiObjectByResourceId(String resourceId, String className, long timeout) throws Exception {
UiObject object = new UiObject(new UiSelector().resourceId(resourceId)
.className(className));
UiObject object = mDevice.findObject(new UiSelector().resourceId(resourceId)
.className(className));
if (!object.waitForExists(timeout)) {
throw new UiObjectNotFoundException(String.format("Could not find \"%s\" \"%s\"",
resourceId, className));
throw new UiObjectNotFoundException(String.format("Could not find \"%s\" \"%s\"",
resourceId, className));
}
return object;
}
public UiObject getUiObjectByResourceId(String id) throws Exception {
UiObject object = new UiObject(new UiSelector().resourceId(id));
UiObject object = mDevice.findObject(new UiSelector().resourceId(id));
if (!object.waitForExists(uiAutoTimeout)) {
throw new UiObjectNotFoundException("Could not find view with resource ID: " + id);
throw new UiObjectNotFoundException("Could not find view with resource ID: " + id);
}
return object;
}
@@ -534,20 +552,20 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public UiObject getUiObjectByDescription(String description, String className, long timeout) throws Exception {
UiObject object = new UiObject(new UiSelector().descriptionContains(description)
.className(className));
UiObject object = mDevice.findObject(new UiSelector().descriptionContains(description)
.className(className));
if (!object.waitForExists(timeout)) {
throw new UiObjectNotFoundException(String.format("Could not find \"%s\" \"%s\"",
description, className));
description, className));
}
return object;
}
public UiObject getUiObjectByDescription(String desc) throws Exception {
UiObject object = new UiObject(new UiSelector().descriptionContains(desc));
UiObject object = mDevice.findObject(new UiSelector().descriptionContains(desc));
if (!object.waitForExists(uiAutoTimeout)) {
throw new UiObjectNotFoundException("Could not find view with description: " + desc);
throw new UiObjectNotFoundException("Could not find view with description: " + desc);
}
return object;
}
@@ -557,8 +575,8 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public UiObject getUiObjectByText(String text, String className, long timeout) throws Exception {
UiObject object = new UiObject(new UiSelector().textContains(text)
.className(className));
UiObject object = mDevice.findObject(new UiSelector().textContains(text)
.className(className));
if (!object.waitForExists(timeout)) {
throw new UiObjectNotFoundException(String.format("Could not find \"%s\" \"%s\"",
text, className));
@@ -567,10 +585,10 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
}
public UiObject getUiObjectByText(String text) throws Exception {
UiObject object = new UiObject(new UiSelector().textContains(text));
UiObject object = mDevice.findObject(new UiSelector().textContains(text));
if (!object.waitForExists(uiAutoTimeout)) {
throw new UiObjectNotFoundException("Could not find view with text: " + text);
}
return object;
}
@@ -578,10 +596,10 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
// Helper to select a folder in the gallery
public void selectGalleryFolder(String directory) throws Exception {
UiObject workdir =
new UiObject(new UiSelector().text(directory)
.className("android.widget.TextView"));
mDevice.findObject(new UiSelector().text(directory)
.className("android.widget.TextView"));
UiScrollable scrollView =
new UiScrollable(new UiSelector().scrollable(true));
// If the folder is not present wait for a short time for
// the media server to refresh its index.
@@ -615,4 +633,96 @@ public class BaseUiAutomation extends UiAutomatorTestCase {
throw new UiObjectNotFoundException("Could not find folder : " + directory);
}
}
// Override getParams function to decode a url encoded parameter bundle before
// passing it to workloads.
public Bundle getParams() {
// Get the original parameter bundle
parameters = getArguments();
// Decode each parameter in the bundle, except null values and "class", as the
// latter is used to control instrumentation and is therefore not encoded.
for (String key : parameters.keySet()) {
String param = parameters.getString(key);
if (param != null && !key.equals("class")) {
param = android.net.Uri.decode(param);
parameters = decode(parameters, key, param);
}
}
return parameters;
}
// Helper function to decode a string and insert it as an appropriate type
// into a provided bundle with its key.
// Each bundle parameter is a url-encoded string with two characters prefixed
// to the value that store the original type information, e.g. 'fl' -> list of floats.
private Bundle decode(Bundle parameters, String key, String value) {
char value_type = value.charAt(0);
char value_dimension = value.charAt(1);
String param = value.substring(2);
if (value_dimension == 's') {
if (value_type == 's') {
parameters.putString(key, param);
} else if (value_type == 'f') {
parameters.putFloat(key, Float.parseFloat(param));
} else if (value_type == 'd') {
parameters.putDouble(key, Double.parseDouble(param));
} else if (value_type == 'b') {
parameters.putBoolean(key, Boolean.parseBoolean(param));
} else if (value_type == 'i') {
parameters.putInt(key, Integer.parseInt(param));
} else if (value_type == 'n') {
parameters.putString(key, "None");
} else {
throw new IllegalArgumentException("Error decoding:" + key + value
+ " - unknown format");
}
} else if (value_dimension == 'l') {
return decodeArray(parameters, key, value_type, param);
} else {
throw new IllegalArgumentException("Error decoding:" + key + value
+ " - unknown format");
}
return parameters;
}
// Helper function to deal with decoding arrays and update the bundle with
// an appropriate array type. The string "0newelement0" is used to separate
// the elements of the array in the encoded form.
private Bundle decodeArray(Bundle parameters, String key, char type, String value) {
String[] string_list = value.split("0newelement0");
if (type == 's') {
parameters.putStringArray(key, string_list);
}
else if (type == 'i') {
int[] int_list = new int[string_list.length];
for (int i = 0; i < string_list.length; i++){
int_list[i] = Integer.parseInt(string_list[i]);
}
parameters.putIntArray(key, int_list);
} else if (type == 'f') {
float[] float_list = new float[string_list.length];
for (int i = 0; i < string_list.length; i++){
float_list[i] = Float.parseFloat(string_list[i]);
}
parameters.putFloatArray(key, float_list);
} else if (type == 'd') {
double[] double_list = new double[string_list.length];
for (int i = 0; i < string_list.length; i++){
double_list[i] = Double.parseDouble(string_list[i]);
}
parameters.putDoubleArray(key, double_list);
} else if (type == 'b') {
boolean[] boolean_list = new boolean[string_list.length];
for (int i = 0; i < string_list.length; i++){
boolean_list[i] = Boolean.parseBoolean(string_list[i]);
}
parameters.putBooleanArray(key, boolean_list);
} else {
throw new IllegalArgumentException("Error decoding array: " +
value + " - unknown format");
}
return parameters;
}
}
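The `decode`/`decodeArray` pair above implies a matching encoder on the WA (Python) host side. As an illustration only (the real encoder lives in WA's Python codebase and may differ), here is a minimal sketch that produces strings this decoder would accept: a type character, a dimension character, then the payload, url-encoded as a whole (`getParams()` calls `Uri.decode()` before stripping the prefix), with list elements joined by `"0newelement0"`:

```python
# Hypothetical host-side encoder matching BaseUiAutomation.decode() above.
# This is a sketch of the wire format, not WA's actual implementation.
from urllib.parse import quote

TYPE_CHARS = {bool: 'b', int: 'i', float: 'f', str: 's', type(None): 'n'}

def encode_param(value):
    if isinstance(value, list):
        # 'l' dimension: elements joined with the "0newelement0" separator
        payload = '0newelement0'.join(str(v) for v in value)
        return quote(TYPE_CHARS[type(value[0])] + 'l' + payload)
    if value is None:
        return quote('ns')  # the decoder stores the literal string "None"
    # 's' dimension marks a scalar value
    return quote(TYPE_CHARS[type(value)] + 's' + str(value))
```

For example, `encode_param([1.0, 2.5])` yields `'fl1.00newelement02.5'`, which `decodeArray` would turn back into a float array under the given key.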


@@ -0,0 +1,35 @@
/* Copyright 2013-2016 ARM Limited
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.arm.wlauto.uiauto;
import android.os.Bundle;
public final class UiAutoUtils {
/** Construct launch command of an application. */
public static String createLaunchCommand(Bundle parameters) {
String launchCommand;
String activityName = parameters.getString("launch_activity");
String packageName = parameters.getString("package_name");
if (activityName.equals("None")) {
launchCommand = String.format("am start --user -3 %s", packageName);
}
else {
launchCommand = String.format("am start --user -3 -n %s/%s", packageName, activityName);
}
return launchCommand;
}
}
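The equivalent launch-command construction on the Python side might look like the following (a sketch, not WA's actual implementation; the `--user -3` flag and the `"None"` sentinel mirror the Java code above):

```python
def create_launch_command(package_name, launch_activity=None):
    # Mirrors UiAutoUtils.createLaunchCommand: use 'am start -n pkg/activity'
    # when an activity is given, otherwise let Android pick the default one.
    if launch_activity in (None, 'None'):
        return 'am start --user -3 {}'.format(package_name)
    return 'am start --user -3 -n {}/{}'.format(package_name, launch_activity)
```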


@@ -15,10 +15,31 @@
package com.arm.wlauto.uiauto;
import android.os.Bundle;
import java.util.logging.Logger;
public class UxPerfUiAutomation extends BaseUiAutomation {
protected String packageName;
protected String packageID;
// Get application package parameters and create the package ID
public void getPackageParameters() {
packageName = parameters.getString("package_name");
packageID = packageName + ":id/";
}
public void setWorkloadParameters(Bundle parameters, String packageName, String packageID){
this.parameters = parameters;
this.packageName = packageName;
this.packageID = packageID;
}
public String getPackageID(){
return packageID;
}
private Logger logger = Logger.getLogger(UxPerfUiAutomation.class.getName());
public enum GestureType { UIDEVICE_SWIPE, UIOBJECT_SWIPE, PINCH };

wlauto/external/uiauto/build.gradle vendored Normal file

@@ -0,0 +1,24 @@
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.2'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}


@@ -14,14 +14,18 @@
# limitations under the License.
#
# Ensure gradlew exists before starting
if [[ ! -f gradlew ]]; then
echo 'gradlew file not found! Check that you are in the right directory.'
exit 9
fi
# Build and return appropriate exit code if failed
ant build
./gradlew clean :app:assembleDebug
exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "ERROR: 'ant build' exited with code $exit_code"
echo "ERROR: 'gradle build' exited with code $exit_code"
exit $exit_code
fi
cp bin/classes/com/arm/wlauto/uiauto/*.class ../../common/android
cp app/build/outputs/aar/app-debug.aar ../../common/android/uiauto.aar


@@ -1,92 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project name="com.arm.wlauto.uiauto" default="help">
<!-- The local.properties file is created and updated by the 'android' tool.
It contains the path to the SDK. It should *NOT* be checked into
Version Control Systems. -->
<property file="local.properties" />
<!-- The ant.properties file can be created by you. It is only edited by the
'android' tool to add properties to it.
This is the place to change some Ant specific build properties.
Here are some properties you may want to change/update:
source.dir
The name of the source directory. Default is 'src'.
out.dir
The name of the output directory. Default is 'bin'.
For other overridable properties, look at the beginning of the rules
files in the SDK, at tools/ant/build.xml
Properties related to the SDK location or the project target should
be updated using the 'android' tool with the 'update' action.
This file is an integral part of the build system for your
application and should be checked into Version Control Systems.
-->
<property file="ant.properties" />
<!-- if sdk.dir was not set from one of the property file, then
get it from the ANDROID_HOME env var.
This must be done before we load project.properties since
the proguard config can use sdk.dir -->
<property environment="env" />
<condition property="sdk.dir" value="${env.ANDROID_HOME}">
<isset property="env.ANDROID_HOME" />
</condition>
<!-- The project.properties file is created and updated by the 'android'
tool, as well as ADT.
This contains project specific properties such as project target, and library
dependencies. Lower level build properties are stored in ant.properties
(or in .classpath for Eclipse projects).
This file is an integral part of the build system for your
application and should be checked into Version Control Systems. -->
<loadproperties srcFile="project.properties" />
<!-- quick check on sdk.dir -->
<fail
message="sdk.dir is missing. Make sure to generate local.properties using 'android update project' or to inject it through the ANDROID_HOME environment variable."
unless="sdk.dir"
/>
<!--
Import per project custom build rules if present at the root of the project.
This is the place to put custom intermediary targets such as:
-pre-build
-pre-compile
-post-compile (This is typically used for code obfuscation.
Compiled code location: ${out.classes.absolute.dir}
If this is not done in place, override ${out.dex.input.absolute.dir})
-post-package
-post-build
-pre-clean
-->
<import file="custom_rules.xml" optional="true" />
<!-- Import the actual build file.
To customize existing targets, there are two options:
- Customize only one target:
- copy/paste the target into this file, *before* the
<import> task.
- customize it to your needs.
- Customize the whole content of build.xml
- copy/paste the content of the rules files (minus the top node)
into this file, replacing the <import> task.
- customize to your needs.
***********************
****** IMPORTANT ******
***********************
In all cases you must update the value of version-tag below to read 'custom' instead of an integer,
in order to avoid having your file be overridden by tools such as "android update project"
-->
<!-- version-tag: VERSION_TAG -->
<import file="${sdk.dir}/tools/ant/uibuild.xml" />
</project>

Binary file not shown.


@@ -0,0 +1,6 @@
#Wed May 03 15:42:44 BST 2017
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip

wlauto/external/uiauto/gradlew vendored Executable file

@@ -0,0 +1,160 @@
#!/usr/bin/env bash
##############################################################################
##
## Gradle start up script for UN*X
##
##############################################################################
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS=""
APP_NAME="Gradle"
APP_BASE_NAME=`basename "$0"`
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"
warn ( ) {
echo "$*"
}
die ( ) {
echo
echo "$*"
echo
exit 1
}
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
case "`uname`" in
CYGWIN* )
cygwin=true
;;
Darwin* )
darwin=true
;;
MINGW* )
msys=true
;;
esac
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`"/$link"
fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/" >/dev/null
APP_HOME="`pwd -P`"
cd "$SAVED" >/dev/null
CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD="java"
which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" ] ; then
MAX_FD_LIMIT=`ulimit -H -n`
if [ $? -eq 0 ] ; then
if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
MAX_FD="$MAX_FD_LIMIT"
fi
ulimit -n $MAX_FD
if [ $? -ne 0 ] ; then
warn "Could not set maximum file descriptor limit: $MAX_FD"
fi
else
warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
fi
fi
# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi
# For Cygwin, switch paths to Windows format before running java
if $cygwin ; then
APP_HOME=`cygpath --path --mixed "$APP_HOME"`
CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
JAVACMD=`cygpath --unix "$JAVACMD"`
# We build the pattern for arguments to be converted via cygpath
ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
SEP=""
for dir in $ROOTDIRSRAW ; do
ROOTDIRS="$ROOTDIRS$SEP$dir"
SEP="|"
done
OURCYGPATTERN="(^($ROOTDIRS))"
# Add a user-defined pattern to the cygpath arguments
if [ "$GRADLE_CYGPATTERN" != "" ] ; then
OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
fi
# Now convert the arguments - kludge to limit ourselves to /bin/sh
i=0
for arg in "$@" ; do
CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option
if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition
eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
else
eval `echo args$i`="\"$arg\""
fi
i=$((i+1))
done
case $i in
(0) set -- ;;
(1) set -- "$args0" ;;
(2) set -- "$args0" "$args1" ;;
(3) set -- "$args0" "$args1" "$args2" ;;
(4) set -- "$args0" "$args1" "$args2" "$args3" ;;
(5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
(6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
(7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
(8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
(9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
esac
fi
# Split up the JVM_OPTS And GRADLE_OPTS values into an array, following the shell quoting and substitution rules
function splitJvmOpts() {
JVM_OPTS=("$@")
}
eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS
JVM_OPTS[${#JVM_OPTS[*]}]="-Dorg.gradle.appname=$APP_BASE_NAME"
exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" org.gradle.wrapper.GradleWrapperMain "$@"

wlauto/external/uiauto/gradlew.bat vendored Normal file

@@ -0,0 +1,90 @@
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS=
set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto init
echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto init
echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:init
@rem Get command-line arguments, handling Windowz variants
if not "%OS%" == "Windows_NT" goto win9xME_args
if "%@eval[2+2]" == "4" goto 4NT_args
:win9xME_args
@rem Slurp the command line arguments.
set CMD_LINE_ARGS=
set _SKIP=2
:win9xME_args_slurp
if "x%~1" == "x" goto execute
set CMD_LINE_ARGS=%*
goto execute
:4NT_args
@rem Get arguments from the 4NT Shell from JP Software
set CMD_LINE_ARGS=%$
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%
:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega


@@ -1,14 +0,0 @@
# This file is automatically generated by Android Tools.
# Do not modify this file -- YOUR CHANGES WILL BE ERASED!
#
# This file must be checked in Version Control Systems.
#
# To customize properties used by the Ant build system edit
# "ant.properties", and override values to adapt the script to your
# project structure.
#
# To enable ProGuard to shrink and obfuscate your code, uncomment this (available properties: sdk.dir, user.home):
#proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt
# Project target.
target=android-18


@@ -0,0 +1 @@
include ':app'


@@ -0,0 +1,131 @@
#pylint: disable=attribute-defined-outside-init
from __future__ import division
import csv
import os
import signal
import time
from fcntl import fcntl, F_GETFL, F_SETFL
from string import Template
from subprocess import Popen, PIPE, STDOUT
from wlauto import Instrument, Parameter
from wlauto.exceptions import HostError
from wlauto.utils.misc import which
IIOCAP_CMD_TEMPLATE = Template("""
${iio_capture} -n ${host} -b ${buffer_size} -c -f ${outfile} ${iio_device}
""")
def _read_nonblock(pipe, size=1024):
fd = pipe.fileno()
flags = fcntl(fd, F_GETFL)
flags |= os.O_NONBLOCK
fcntl(fd, F_SETFL, flags)
output = ''
try:
while True:
output += pipe.read(size)
except IOError:
pass
return output
class AcmeCapeInstrument(Instrument):
name = 'acmecape'
description = """
Instrumentation for the BayLibre ACME cape for power/energy measurement.
"""
parameters = [
Parameter('iio-capture', default=which('iio-capture'),
description="""
Path to the iio-capture binary will be taken from the
environment, if not specified.
"""),
Parameter('host', default='baylibre-acme.local',
description="""
Host name (or IP address) of the ACME cape board.
"""),
Parameter('iio-device', default='iio:device0',
description="""
"""),
Parameter('buffer-size', kind=int, default=256,
description="""
Size of the capture buffer (in KB).
"""),
]
def initialize(self, context):
if self.iio_capture is None:
raise HostError('Missing iio-capture binary')
self.command = None
self.subprocess = None
def setup(self, context):
self.outfile = os.path.join(context.output_directory, 'acme-capture.csv')
params = dict(
iio_capture=self.iio_capture,
host=self.host,
buffer_size=self.buffer_size,
iio_device=self.iio_device,
outfile=self.outfile,
)
self.command = IIOCAP_CMD_TEMPLATE.substitute(**params)
self.logger.debug('ACME cape command: {}'.format(self.command))
def very_fast_start(self, context): # pylint: disable=unused-argument
self.subprocess = Popen(self.command.split(), stdout=PIPE, stderr=STDOUT)
def very_fast_stop(self, context): # pylint: disable=unused-argument
self.subprocess.terminate()
def update_result(self, context):
timeout_secs = 10
for _ in xrange(timeout_secs):
if self.subprocess.poll() is not None:
break
time.sleep(1)
else:
output = _read_nonblock(self.subprocess.stdout)
self.subprocess.kill()
self.logger.error('iio-capture did not terminate gracefully')
if self.subprocess.poll() is None:
msg = 'Could not terminate iio-capture:\n{}'
raise HostError(msg.format(output))
if not os.path.isfile(self.outfile):
raise HostError('Output CSV not generated.')
context.add_iteration_artifact('iio-capture', self.outfile, 'data')
if os.stat(self.outfile).st_size == 0:
self.logger.warning('"{}" appears to be empty'.format(self.outfile))
return
self._compute_stats(context)
def _compute_stats(self, context):
with open(self.outfile, 'rb') as fh:
reader = csv.reader(fh, skipinitialspace=True)
header = reader.next()
power_index = header.index('power mW')
ts_index = header.index('timestamp ms')
last_ts = 0.0
energy_uj = 0
ave_power_mw = 0.0
for i, row in enumerate(reader):
row_power_mw = float(row[power_index])
row_ts = float(row[ts_index])
if i == 0:
ave_power_mw = row_power_mw
else:
ave_power_mw = ave_power_mw + (row_power_mw - ave_power_mw) / i
energy_uj += row_power_mw * (row_ts - last_ts)
last_ts = row_ts
context.add_metric('power', ave_power_mw, 'milliwatts')
context.add_metric('energy', energy_uj / 1000000, 'joules')
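`_compute_stats` above maintains a running mean of the power samples and integrates mW x ms into microjoules. Isolated as a standalone helper, the same statistics look like this (a sketch using the textbook incremental-mean update, which divides by `i + 1`; the instrument's own loop divides by `i`):

```python
def power_stats(samples):
    """samples: (timestamp_ms, power_mw) pairs, as in the ACME capture CSV.
    Returns (average_power_mw, energy_joules)."""
    last_ts = 0.0
    avg_mw = 0.0
    energy_uj = 0.0  # mW * ms = microjoules
    for i, (ts, mw) in enumerate(samples):
        avg_mw += (mw - avg_mw) / (i + 1)  # incremental (Welford-style) mean
        energy_uj += mw * (ts - last_ts)   # rectangle-rule integration
        last_ts = ts
    return avg_mw, energy_uj / 1e6
```

For two samples of 2 mW at t=0 and 4 mW at t=1000 ms, this gives an average of 3 mW and 0.004 J of energy.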


@@ -266,7 +266,7 @@ class Daq(Instrument):
metric_name = '{}_{}'.format(port, metric)
context.result.add_metric(metric_name, round(value, 3), UNITS[metric])
self._results[key][metric_name] = round(value, 3)
energy = sum(data[metrics.index('power')]) * (self.sampling_rate / 1000000)
energy = sum(data[metrics.index('power')]) / self.sampling_rate
context.result.add_metric('{}_energy'.format(port), round(energy, 3), UNITS['energy'])
def teardown(self, context):
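The corrected energy line in the Daq hunk above follows from equally spaced power samples: each sample contributes P_i * dt with dt = 1/sampling_rate, so the sum of powers is simply divided by the rate. A minimal sketch (units assumed consistent, e.g. watts and hertz giving joules):

```python
def energy_joules(power_samples_w, sampling_rate_hz):
    # With equally spaced samples, dt = 1 / sampling_rate, so
    # E = sum(P_i * dt) = sum(P_i) / sampling_rate.
    return sum(power_samples_w) / sampling_rate_hz
```

Ten samples of 2 W captured at 10 Hz cover one second and therefore yield 2 J.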


@@ -650,7 +650,7 @@ class EnergyModelInstrument(Instrument):
def enable_all_cores(self):
counter = Counter(self.device.core_names)
for core, number in counter.iteritems():
self.device.set_number_of_online_cpus(core, number)
self.device.set_number_of_online_cores(core, number)
self.big_cpus = self.device.get_online_cpus(self.big_core)
self.little_cpus = self.device.get_online_cpus(self.little_core)


@@ -39,10 +39,9 @@ from wlauto.instrumentation import instrument_is_installed
from wlauto.exceptions import (InstrumentError, WorkerThreadError, ConfigError,
DeviceNotRespondingError, TimeoutError)
from wlauto.utils.types import boolean, numeric
from wlauto.utils.fps import FpsProcessor, SurfaceFlingerFrame, GfxInfoFrame, GFXINFO_EXEMPT
from wlauto.utils.fps import (FpsProcessor, SurfaceFlingerFrame, GfxInfoFrame, GFXINFO_EXEMPT,
VSYNC_INTERVAL)
VSYNC_INTERVAL = 16666667
PAUSE_LATENCY = 20
EPSYLON = 0.0001
@@ -133,6 +132,12 @@ class FpsInstrument(Instrument):
android/services/surfaceflinger/FrameTracker.h (as of the time of writing
currently 128) and a frame rate of 60 fps that is applicable to most devices.
"""),
Parameter('force_surfaceflinger', kind=boolean, default=False,
description="""
By default, the method to capture fps data is based on Android version.
If this is set to true, force the instrument to use the SurfaceFlinger method
regardless of its Android version.
"""),
]
def __init__(self, device, **kwargs):
@@ -153,37 +158,43 @@ class FpsInstrument(Instrument):
def setup(self, context):
workload = context.workload
if hasattr(workload, 'view'):
self.fps_outfile = os.path.join(context.output_directory, 'fps.csv')
self.outfile = os.path.join(context.output_directory, 'frames.csv')
# Android M brings a new method of collecting FPS data
if self.device.get_sdk_version() >= 23:
# gfxinfo takes in the package name rather than a single view/activity
# so there is no 'list_command' to run and compare against a list of
# views/activities. Additionally, clearing the stats requires the package
# so we need to clear for every package in the workload.
# Usually there is only one package, but some workloads may run multiple
# packages so each one must be reset before continuing
self.fps_method = 'gfxinfo'
runcmd = 'dumpsys gfxinfo {} framestats'
lstcmd = None
params = workload.package
params = [params] if isinstance(params, basestring) else params
for pkg in params:
self.device.execute('dumpsys gfxinfo {} reset'.format(pkg))
else:
self.fps_method = 'surfaceflinger'
runcmd = 'dumpsys SurfaceFlinger --latency {}'
lstcmd = 'dumpsys SurfaceFlinger --list'
params = workload.view
self.device.execute('dumpsys SurfaceFlinger --latency-clear ')
self.collector = LatencyCollector(self.outfile, self.device, params or '',
self.keep_raw, self.logger, self.dumpsys_period,
runcmd, lstcmd, self.fps_method)
else:
use_gfxinfo = not self.force_surfaceflinger and (self.device.get_sdk_version() >= 23)
if use_gfxinfo and not hasattr(workload, 'package'):
self.logger.debug('Workload does not contain a package; falling back to SurfaceFlinger...')
use_gfxinfo = False
if not use_gfxinfo and not hasattr(workload, 'view'):
self.logger.debug('Workload does not contain a view; disabling...')
self.is_enabled = False
return
self.fps_outfile = os.path.join(context.output_directory, 'fps.csv')
self.outfile = os.path.join(context.output_directory, 'frames.csv')
# Android M brings a new method of collecting FPS data
if use_gfxinfo:
# gfxinfo takes in the package name rather than a single view/activity
# so there is no 'list_command' to run and compare against a list of
# views/activities. Additionally, clearing the stats requires the package
# so we need to clear for every package in the workload.
# Usually there is only one package, but some workloads may run multiple
# packages so each one must be reset before continuing
self.fps_method = 'gfxinfo'
runcmd = 'dumpsys gfxinfo {} framestats'
lstcmd = None
params = workload.package
params = [params] if isinstance(params, basestring) else params
for pkg in params:
self.device.execute('dumpsys gfxinfo {} reset'.format(pkg))
else:
self.fps_method = 'surfaceflinger'
runcmd = 'dumpsys SurfaceFlinger --latency "{}"'
lstcmd = 'dumpsys SurfaceFlinger --list'
params = workload.view
self.device.execute('dumpsys SurfaceFlinger --latency-clear ')
self.collector = LatencyCollector(self.outfile, self.device, params or '',
self.keep_raw, self.logger, self.dumpsys_period,
runcmd, lstcmd, self.fps_method)
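The method selection introduced in this hunk can be summarised as: prefer gfxinfo on Android M and later (SDK >= 23) when the workload exposes a package and SurfaceFlinger is not forced; otherwise fall back to SurfaceFlinger, which requires a view, or disable the instrument. A condensed sketch of that decision (hypothetical helper, not part of the instrument):

```python
def choose_fps_method(sdk_version, force_surfaceflinger, has_package, has_view):
    # gfxinfo needs SDK >= 23, a workload package, and no explicit override
    use_gfxinfo = (not force_surfaceflinger) and sdk_version >= 23 and has_package
    if use_gfxinfo:
        return 'gfxinfo'
    # SurfaceFlinger fallback needs a view; with neither, fps is disabled
    return 'surfaceflinger' if has_view else None
```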
def start(self, context):
if self.is_enabled:
@@ -306,7 +317,7 @@ class LatencyCollector(threading.Thread):
# Then check for each activity in this list and if there is a match,
# process the output. If no command is provided, then always process.
if self.list_command:
view_list = self.device.execute(self.list_command).split()
view_list = self.device.execute(self.list_command).replace('\r\n', '\n').replace('\r', '\n').split('\n')
for activity in self.activities:
if activity in view_list:
wfh.write(self.device.execute(self.command_template.format(activity)))
@@ -387,6 +398,8 @@ class LatencyCollector(threading.Thread):
if match:
data = match.group(0)[:-1]
data = map(int, data.split(','))
# Ignore additional fields
data = data[:len(self.header)]
frame = GfxInfoFrame(*data)
if frame not in self.frames:
if frame.Flags & GFXINFO_EXEMPT:


@@ -147,6 +147,8 @@ class FreqSweep(Instrument):
spec.id = '{}_{}_{}'.format(spec.id, sweep_spec['label'], freq)
spec.classifiers['core'] = sweep_spec['cluster']
spec.classifiers['freq'] = freq
for k, v in sweep_spec.get('classifiers', {}).iteritems():
spec.classifiers[k] = v
spec.load(self.device, context.config.ext_loader)
spec.workload.init_resources(context)
spec.workload.validate()


@@ -46,14 +46,7 @@ from wlauto.utils.types import list_of_strings
logger = logging.getLogger(__name__)
class SysfsExtractor(Instrument):
name = 'sysfs_extractor'
description = """
Collects the content of a set of directories before and after workload execution
and diffs the result.
"""
class FsExtractor(Instrument):
mount_command = 'mount -t tmpfs -o size={} tmpfs {}'
extract_timeout = 30
@@ -81,12 +74,11 @@ class SysfsExtractor(Instrument):
description="""Size of the tempfs partition."""),
]
def initialize(self, context):
def initialize_tmpfs(self, context):
if not self.device.is_rooted and self.use_tmpfs: # pylint: disable=access-member-before-definition
raise ConfigError('use_tempfs must be False for an unrooted device.')
elif self.use_tmpfs is None: # pylint: disable=access-member-before-definition
self.use_tmpfs = self.device.is_rooted
if self.use_tmpfs:
self.on_device_before = self.device.path.join(self.tmpfs_mount_point, 'before')
self.on_device_after = self.device.path.join(self.tmpfs_mount_point, 'after')
@@ -197,6 +189,19 @@ class SysfsExtractor(Instrument):
return os.path.dirname(as_relative(directory).replace(self.device.path.sep, os.sep))
class SysfsExtractor(FsExtractor):
name = 'sysfs_extractor'
description = """
Collects the content of a set of directories before and after workload execution
and diffs the result.
"""
def initialize(self, context):
self.initialize_tmpfs(context)
class ExecutionTimeInstrument(Instrument):
name = 'execution_time'
@@ -261,7 +266,7 @@ class InterruptStatsInstrument(Instrument):
_diff_interrupt_files(self.before_file, self.after_file, _f(self.diff_file))
class DynamicFrequencyInstrument(SysfsExtractor):
class DynamicFrequencyInstrument(FsExtractor):
name = 'cpufreq'
description = """
@@ -275,6 +280,9 @@ class DynamicFrequencyInstrument(SysfsExtractor):
Parameter('paths', mandatory=False, override=True),
]
def initialize(self, context):
self.initialize_tmpfs(context)
def setup(self, context):
self.paths = ['/sys/devices/system/cpu']
if self.use_tmpfs:


@@ -106,8 +106,7 @@ class PerfInstrument(Instrument):
self.device.kick_off(command)
def stop(self, context):
as_root = self.device.platform == 'android'
self.device.killall('sleep', as_root=as_root)
self.device.killall('sleep')
def update_result(self, context):
for label in self.labels:


@@ -35,13 +35,13 @@ class TraceCmdInstrument(Instrument):
name = 'trace-cmd'
description = """
trace-cmd is an instrument which interacts with Ftrace Linux kernel internal
trace-cmd is an instrument which interacts with ftrace Linux kernel internal
tracer
From trace-cmd man page:
trace-cmd command interacts with the Ftrace tracer that is built inside the
Linux kernel. It interfaces with the Ftrace specific files found in the
trace-cmd command interacts with the ftrace tracer that is built inside the
Linux kernel. It interfaces with the ftrace specific files found in the
debugfs file system under the tracing directory.
trace-cmd reads a list of events it will trace, which can be specified in
@@ -49,12 +49,8 @@ class TraceCmdInstrument(Instrument):
trace_events = ['irq*', 'power*']
If no event is specified in the config file, trace-cmd traces the following events:
- sched*
- irq*
- power*
- cpufreq_interactive*
If no event is specified, a default set of events that are generally considered useful
for debugging/profiling purposes will be enabled.
The list of available events can be obtained by rooting and running the following
command line on the device ::
@@ -67,7 +63,7 @@ class TraceCmdInstrument(Instrument):
trace_cmd_buffer_size = 8000
The maximum buffer size varies from device to device, but there is a maximum and trying
to set buffer size beyound that will fail. If you plan on collecting a lot of trace over
to set buffer size beyond that will fail. If you plan on collecting a lot of trace over
long periods of time, the buffer size will not be enough and you will only get trace for
the last portion of your run. To deal with this you can set the ``trace_mode`` setting to
``'record'`` (the default is ``'start'``)::
@@ -76,23 +72,28 @@ class TraceCmdInstrument(Instrument):
This will cause trace-cmd to trace into file(s) on disk, rather than the buffer, and so the
limit for the max size of the trace is set by the storage available on device. Bear in mind
that ``'record'`` mode *is* more instrusive than the default, so if you do not plan on
that ``'record'`` mode *is* more intrusive than the default, so if you do not plan on
generating a lot of trace, it is best to use the default ``'start'`` mode.
.. note:: Mode names correspend to the underlying trace-cmd exectuable's command used to
.. note:: Mode names correspond to the underlying trace-cmd executable's command used to
implement them. You can find out more about what is happening in each case from
trace-cmd documentation: https://lwn.net/Articles/341902/.
This instrument comes with an Android trace-cmd binary that will be copied and used on the
device, however post-processing will be done on-host and you must have trace-cmd installed and
in your path. On Ubuntu systems, this may be done with::
This instrument comes with a trace-cmd binary that will be copied and used
on the device; however, post-processing will, by default, be done on-host and you must
have trace-cmd installed and in your path. On Ubuntu systems, this may be
done with::
sudo apt-get install trace-cmd
Alternatively, you may set ``report_on_target`` parameter to ``True`` to enable on-target
processing (this is useful when running on non-Linux hosts, but is likely to take longer
and may fail on particularly resource-constrained targets).
"""
parameters = [
Parameter('events', kind=list, default=['sched*', 'irq*', 'power*', 'cpufreq_interactive*'],
Parameter('events', kind=list, default=['sched*', 'irq*', 'power*'],
global_alias='trace_events',
description="""
Specifies the list of events to be traced. Each event in the list will be passed to
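The event names in ``events`` are shell-style globs matched against the ftrace events available on the target; the expansion can be sketched like this (illustrative only — the real event list is read from the device's tracing directory, and the ``available`` list below is a made-up sample):

```python
import fnmatch

# Hypothetical sample of events as found under the kernel's
# tracing/events directory; the real set comes from the target.
available = ['sched_switch', 'sched_wakeup', 'irq_handler_entry', 'power_cpu_idle']

def expand_event_globs(patterns, available_events):
    # Mirrors how glob patterns such as 'sched*' select ftrace events.
    selected = []
    for pattern in patterns:
        selected.extend(e for e in available_events if fnmatch.fnmatch(e, pattern))
    return selected
```

For example, ``expand_event_globs(['sched*', 'irq*'], available)`` picks up every scheduler and IRQ event while leaving the power events untraced.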

View File

@@ -90,12 +90,14 @@ class ReventGetter(ResourceGetter):
self.resolver.register(self, 'revent', GetterPriority.package)
def get(self, resource, **kwargs):
# name format: [model/device_name.stage.revent]
device_model = resource.owner.device.get_device_model()
wa_device_name = resource.owner.device.name
for name in [device_model, wa_device_name]:
if not name:
continue
filename = '.'.join([name, resource.stage, 'revent']).lower()
self.logger.debug('Trying to get {0}.'.format(str(filename)))
location = _d(os.path.join(self.get_base_location(resource), 'revent_files'))
for candidate in os.listdir(location):
if candidate.lower() == filename.lower():
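The filename convention noted in the comment above (``<name>.<stage>.revent``, lower-cased, where the name is either the device model or the WA device name) can be sketched as:

```python
def revent_filename(name, stage):
    # Builds the candidate filename the getter searches for, e.g. a device
    # model plus a stage such as 'setup' or 'run'.
    return '.'.join([name, stage, 'revent']).lower()
```

So a recording for a hypothetical ``Nexus10`` setup stage would be looked up as ``nexus10.setup.revent``.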
@@ -227,7 +229,8 @@ class DependencyFileGetter(ResourceGetter):
def get(self, resource, **kwargs):
force = kwargs.get('force')
remote_path = os.path.join(self.mount_point, self.relative_path, resource.path)
local_path = os.path.join(resource.owner.dependencies_directory, os.path.basename(resource.path))
local_path = _f(os.path.join(settings.dependencies_directory, '__remote',
resource.owner.name, os.path.basename(resource.path)))
if not os.path.exists(local_path) or force:
if not os.path.exists(remote_path):
@@ -288,7 +291,7 @@ class EnvironmentDependencyGetter(ResourceGetter):
return path
class ExtensionAssetGetter(DependencyFileGetter):
class ExtensionAssetGetter(EnvironmentDependencyGetter):
name = 'extension_asset'
resource_type = 'extension_asset'
@@ -552,7 +555,7 @@ def get_from_location_by_extension(resource, location, extension, version=None,
def get_from_list_by_extension(resource, filelist, extension, version=None, variant=None):
filelist = [ff for ff in filelist if os.path.splitext(ff)[1].lower().endswith(extension)]
filelist = [ff for ff in filelist if os.path.splitext(ff)[1].lower().endswith('.' + extension)]
if variant:
filelist = [ff for ff in filelist if variant.lower() in os.path.basename(ff).lower()]
if version:
@@ -562,6 +565,8 @@ def get_from_list_by_extension(resource, filelist, extension, version=None, vari
filelist = [ff for ff in filelist if version.lower() in os.path.basename(ff).lower()]
if extension == 'apk':
filelist = [ff for ff in filelist if not ApkInfo(ff).native_code or resource.platform in ApkInfo(ff).native_code]
filelist = [ff for ff in filelist if not resource.package or resource.package == ApkInfo(ff).package]
filelist = [ff for ff in filelist if resource.uiauto == ('com.arm.wlauto.uiauto' in ApkInfo(ff).package)]
if len(filelist) == 1:
return filelist[0]
elif not filelist:
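The one-character change above (``extension`` to ``'.' + extension``) prevents false matches where a file's suffix merely *ends with* the extension string; a sketch of the corrected check:

```python
import os

def matches_extension(path, extension):
    # Corrected check: require the full '.<extension>' suffix, so that
    # e.g. a hypothetical 'foo.appk' no longer matches extension 'apk'
    # (with the old check, '.appk'.endswith('apk') was True).
    return os.path.splitext(path)[1].lower().endswith('.' + extension)
```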

View File

@@ -121,6 +121,20 @@ class CpuStatesProcessor(ResultProcessor):
:`ignore`: The start marker will be ignored. All events in the trace will be used.
:`error`: An error will be raised if the start marker is not found in the trace.
:`try`: If the start marker is not found, all events in the trace will be used.
"""),
Parameter('no_idle', kind=bool, default=False,
description="""
Indicate that there will be no idle transitions in the trace. By default, a core
will be reported as being in an "unknown" state until the first idle transition for
that core. Normally, this is not an issue, as cores are "nudged" as part of the setup
to ensure that there is an idle transition before the measured region. However, if all
idle states for the core have been disabled, or if the kernel does not have cpuidle,
the nudge will not result in an idle transition, which would cause the cores to be
reported to be in "unknown" state for the entire execution.
If this parameter is set to ``True``, the processor will assume that cores are
running prior to the beginning of the run, and they will leave the unknown state on
the first frequency transition.
""")
]
@@ -215,6 +229,7 @@ class CpuStatesProcessor(ResultProcessor):
cpu_utilisation=cpu_utilisation,
max_freq_list=self.max_freq_list,
start_marker_handling=self.start_marker_handling,
no_idle=self.no_idle,
)
parallel_report = reports.pop(0)
powerstate_report = reports.pop(0)

View File

@@ -29,6 +29,7 @@ from wlauto import File, Parameter, ResultProcessor
from wlauto.exceptions import ConfigError, ResultProcessorError
import wlauto.utils.ipython as ipython
from wlauto.utils.misc import open_file
from wlauto.utils.types import file_path
DEFAULT_NOTEBOOK_TEMPLATE = 'template.ipynb'
@@ -59,6 +60,7 @@ class IPythonNotebookExporter(ResultProcessor):
parameters = [
Parameter('notebook_template', default=DEFAULT_NOTEBOOK_TEMPLATE,
kind=file_path,
description='''Filename of the ipython notebook template. If
no `notebook_template` is specified, the example template
above is used.'''),
@@ -72,7 +74,7 @@ class IPythonNotebookExporter(ResultProcessor):
ending in ``.pdf``.'''),
Parameter('show_notebook', kind=bool,
description='Open a web browser with the resulting notebook.'),
Parameter('notebook_directory',
Parameter('notebook_directory', kind=file_path,
description='''Path to the notebooks directory served by the
ipython notebook server. You must set it if
``show_notebook`` is selected. The ipython notebook

View File

@@ -14,17 +14,13 @@
#
import os
import re
import logging
from collections import defaultdict
from distutils.version import LooseVersion
from wlauto import ResultProcessor, Parameter
from wlauto.instrumentation import instrument_is_enabled
from wlauto.instrumentation.fps import VSYNC_INTERVAL
from wlauto.exceptions import ResultProcessorError, ConfigError
from wlauto.utils.fps import FpsProcessor, SurfaceFlingerFrame, GfxInfoFrame
from wlauto.utils.types import numeric, boolean
from wlauto.utils.uxperf import UxPerfParser
try:
import pandas as pd
@@ -77,6 +73,7 @@ class UxPerfResultProcessor(ResultProcessor):
]
def initialize(self, context):
# needed for uxperf parser
if not pd or LooseVersion(pd.__version__) < LooseVersion('0.13.1'):
message = ('uxperf result processor requires pandas Python package '
'(version 0.13.1 or higher) to be installed.\n'
@@ -101,161 +98,3 @@ class UxPerfResultProcessor(ResultProcessor):
if self.add_frames:
self.logger.debug('Adding per-action frame metrics')
parser.add_action_frames(framelog, self.drop_threshold, self.generate_csv)
class UxPerfParser(object):
'''
Parses logcat messages for UX Performance markers.
UX Performance markers are output from logcat under a debug priority. The
logcat tag for the marker messages is UX_PERF. The messages associated with
this tag consist of a name for the action to be recorded and a timestamp.
These fields are delimited by a single space. e.g.
<TAG> : <MESSAGE>
UX_PERF : gestures_swipe_left_start 861975087367
...
...
UX_PERF : gestures_swipe_left_end 862132085804
Timestamps are produced using the running Java Virtual Machine's
high-resolution time source, in nanoseconds.
'''
def __init__(self, context):
self.context = context
self.actions = defaultdict(list)
self.logger = logging.getLogger('UxPerfParser')
# regex for matching logcat message format:
self.regex = re.compile(r'UX_PERF.*?:\s*(?P<message>.*\d+$)')
def parse(self, log):
'''
Opens log file and parses UX_PERF markers.
Actions delimited by markers are captured in a dictionary with
actions mapped to timestamps.
'''
loglines = self._read(log)
self._gen_action_timestamps(loglines)
def add_action_frames(self, frames, drop_threshold, generate_csv): # pylint: disable=too-many-locals
'''
Uses FpsProcessor to parse frame.csv extracting fps, frame count, jank
and vsync metrics on a per action basis. Adds results to metrics.
'''
refresh_period = self._parse_refresh_peroid()
for action in self.actions:
# default values
fps, frame_count, janks, not_at_vsync = float('nan'), 0, 0, 0
p90, p95, p99 = [float('nan')] * 3
metrics = (fps, frame_count, janks, not_at_vsync)
df = self._create_sub_df(self.actions[action], frames)
if not df.empty: # pylint: disable=maybe-no-member
fp = FpsProcessor(df, action=action)
try:
per_frame_fps, metrics = fp.process(refresh_period, drop_threshold)
fps, frame_count, janks, not_at_vsync = metrics
if generate_csv:
name = action + '_fps'
filename = name + '.csv'
fps_outfile = os.path.join(self.context.output_directory, filename)
per_frame_fps.to_csv(fps_outfile, index=False, header=True)
self.context.add_artifact(name, path=filename, kind='data')
p90, p95, p99 = fp.percentiles()
except AttributeError:
self.logger.warning('Non-matched timestamps in dumpsys output: action={}'
.format(action))
self.context.result.add_metric(action + '_FPS', fps)
self.context.result.add_metric(action + '_frame_count', frame_count)
self.context.result.add_metric(action + '_janks', janks, lower_is_better=True)
self.context.result.add_metric(action + '_not_at_vsync', not_at_vsync, lower_is_better=True)
self.context.result.add_metric(action + '_frame_time_90percentile', p90, 'ms', lower_is_better=True)
self.context.result.add_metric(action + '_frame_time_95percentile', p95, 'ms', lower_is_better=True)
self.context.result.add_metric(action + '_frame_time_99percentile', p99, 'ms', lower_is_better=True)
def add_action_timings(self):
'''
Add simple action timings in millisecond resolution to metrics
'''
for action, timestamps in self.actions.iteritems():
# nanosecond precision, but not necessarily nanosecond resolution
# truncate to guarantee millisecond precision
ts_ms = tuple(int(ts[:-6]) for ts in timestamps)
if len(ts_ms) == 2:
start, finish = ts_ms
duration = finish - start
result = self.context.result
result.add_metric(action + "_start", start, units='ms')
result.add_metric(action + "_finish", finish, units='ms')
result.add_metric(action + "_duration", duration, units='ms', lower_is_better=True)
else:
self.logger.warning('Expected two timestamps. Received {}'.format(ts_ms))
def _gen_action_timestamps(self, lines):
'''
Parses lines and matches against logcat tag.
Groups timestamps by action name.
Creates a dictionary of lists with actions mapped to timestamps.
'''
for line in lines:
match = self.regex.search(line)
if match:
message = match.group('message')
action_with_suffix, timestamp = message.rsplit(' ', 1)
action, _ = action_with_suffix.rsplit('_', 1)
self.actions[action].append(timestamp)
def _parse_refresh_peroid(self):
'''
Reads the first line of the raw dumpsys output for the refresh period.
'''
raw_path = os.path.join(self.context.output_directory, 'surfaceflinger.raw')
if os.path.isfile(raw_path):
raw_lines = self._read(raw_path)
refresh_period = int(raw_lines.next())
else:
refresh_period = VSYNC_INTERVAL
return refresh_period
def _create_sub_df(self, action, frames):
'''
Creates a data frame containing fps metrics for a captured action.
'''
if len(action) == 2:
start, end = map(int, action)
df = pd.read_csv(frames)
# SurfaceFlinger Algorithm
if df.columns.tolist() == list(SurfaceFlingerFrame._fields): # pylint: disable=maybe-no-member
field = 'actual_present_time'
# GfxInfo Algorithm
elif df.columns.tolist() == list(GfxInfoFrame._fields): # pylint: disable=maybe-no-member
field = 'FrameCompleted'
else:
field = ''
self.logger.error('frames.csv not in a recognised format. Cannot parse.')
if field:
df = df[start < df[field]]
df = df[df[field] <= end]
else:
self.logger.warning('Discarding action. Expected 2 timestamps, got {}!'.format(len(action)))
df = pd.DataFrame()
return df
def _read(self, log):
'''
Opens a file and yields its lines with whitespace stripped.
'''
try:
with open(log, 'r') as rfh:
for line in rfh:
yield line.strip()
except IOError:
self.logger.error('Could not open {}'.format(log))
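Putting the pieces above together — the ``UX_PERF`` regex, the ``rsplit`` on the message, and the truncation of nanosecond timestamps to milliseconds — a single marker line can be parsed like this (the sample logcat line is illustrative):

```python
import re

# Regex copied from UxPerfParser above.
UX_PERF_RE = re.compile(r'UX_PERF.*?:\s*(?P<message>.*\d+$)')

def parse_marker(logcat_line):
    match = UX_PERF_RE.search(logcat_line)
    if not match:
        return None
    action_with_suffix, timestamp = match.group('message').rsplit(' ', 1)
    action, suffix = action_with_suffix.rsplit('_', 1)
    # Timestamps are nanoseconds; truncating the last six digits
    # yields millisecond precision, as in add_action_timings above.
    return action, suffix, int(timestamp[:-6])
```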

View File

@@ -17,11 +17,12 @@
# pylint: disable=R0201
from unittest import TestCase
from nose.tools import raises, assert_equal, assert_not_equal # pylint: disable=E0611
from nose.tools import raises, assert_equal, assert_not_equal, assert_true # pylint: disable=E0611
from wlauto.utils.android import check_output
from wlauto.utils.misc import merge_dicts, merge_lists, TimeoutError
from wlauto.utils.types import list_or_integer, list_or_bool, caseless_string, arguments
from wlauto.utils.types import (list_or_integer, list_or_bool, caseless_string, arguments,
ParameterDict)
class TestCheckOutput(TestCase):
@@ -88,3 +89,126 @@ class TestTypes(TestCase):
assert_equal(arguments('--foo 7 --bar "fizz buzz"'),
['--foo', '7', '--bar', 'fizz buzz'])
assert_equal(arguments(['test', 42]), ['test', '42'])
class TestParameterDict(TestCase):
# Define test parameters
orig_params = {
'string' : 'A Test String',
'string_list' : ['A Test', 'List', 'With', '\n in.'],
'bool_list' : [False, True, True],
'int' : 42,
'float' : 1.23,
'long' : long(987),
'none' : None,
}
def setUp(self):
self.params = ParameterDict()
self.params['string'] = self.orig_params['string']
self.params['string_list'] = self.orig_params['string_list']
self.params['bool_list'] = self.orig_params['bool_list']
self.params['int'] = self.orig_params['int']
self.params['float'] = self.orig_params['float']
self.params['long'] = self.orig_params['long']
self.params['none'] = self.orig_params['none']
# Test values are encoded correctly
def test_getEncodedItems(self):
encoded = {
'string' : 'ssA%20Test%20String',
'string_list' : 'slA%20Test0newelement0List0newelement0With0newelement0%0A%20in.',
'bool_list' : 'blFalse0newelement0True0newelement0True',
'int' : 'is42',
'float' : 'fs1.23',
'long' : 'ds987',
'none' : 'nsNone',
}
# Test iter_encoded_items
for k, v in self.params.iter_encoded_items():
assert_equal(v, encoded[k])
# Test get single encoded value
assert_equal(self.params.get_encoded_value('string'), encoded['string'])
assert_equal(self.params.get_encoded_value('string_list'), encoded['string_list'])
assert_equal(self.params.get_encoded_value('bool_list'), encoded['bool_list'])
assert_equal(self.params.get_encoded_value('int'), encoded['int'])
assert_equal(self.params.get_encoded_value('float'), encoded['float'])
assert_equal(self.params.get_encoded_value('long'), encoded['long'])
assert_equal(self.params.get_encoded_value('none'), encoded['none'])
# Test it behaves like a normal dict
def test_getitem(self):
assert_equal(self.params['string'], self.orig_params['string'])
assert_equal(self.params['string_list'], self.orig_params['string_list'])
assert_equal(self.params['bool_list'], self.orig_params['bool_list'])
assert_equal(self.params['int'], self.orig_params['int'])
assert_equal(self.params['float'], self.orig_params['float'])
assert_equal(self.params['long'], self.orig_params['long'])
assert_equal(self.params['none'], self.orig_params['none'])
def test_get(self):
assert_equal(self.params.get('string'), self.orig_params['string'])
assert_equal(self.params.get('string_list'), self.orig_params['string_list'])
assert_equal(self.params.get('bool_list'), self.orig_params['bool_list'])
assert_equal(self.params.get('int'), self.orig_params['int'])
assert_equal(self.params.get('float'), self.orig_params['float'])
assert_equal(self.params.get('long'), self.orig_params['long'])
assert_equal(self.params.get('none'), self.orig_params['none'])
def test_contains(self):
assert_true(self.orig_params['string'] in self.params.values())
assert_true(self.orig_params['string_list'] in self.params.values())
assert_true(self.orig_params['bool_list'] in self.params.values())
assert_true(self.orig_params['int'] in self.params.values())
assert_true(self.orig_params['float'] in self.params.values())
assert_true(self.orig_params['long'] in self.params.values())
assert_true(self.orig_params['none'] in self.params.values())
def test_pop(self):
assert_equal(self.params.pop('string'), self.orig_params['string'])
assert_equal(self.params.pop('string_list'), self.orig_params['string_list'])
assert_equal(self.params.pop('bool_list'), self.orig_params['bool_list'])
assert_equal(self.params.pop('int'), self.orig_params['int'])
assert_equal(self.params.pop('float'), self.orig_params['float'])
assert_equal(self.params.pop('long'), self.orig_params['long'])
assert_equal(self.params.pop('none'), self.orig_params['none'])
self.params['string'] = self.orig_params['string']
assert_equal(self.params.popitem(), ('string', self.orig_params['string']))
def test_iteritems(self):
for k, v in self.params.iteritems():
assert_equal(v, self.orig_params[k])
def test_parameter_dict_update(self):
params_1 = ParameterDict()
params_2 = ParameterDict()
# Test two ParameterDicts
params_1['string'] = self.orig_params['string']
params_1['string_list'] = self.orig_params['string_list']
params_1['bool_list'] = self.orig_params['bool_list']
params_2['int'] = self.orig_params['int']
params_2['float'] = self.orig_params['float']
params_2['long'] = self.orig_params['long']
params_2['none'] = self.orig_params['none']
params_1.update(params_2)
assert_equal(params_1, self.params)
# Test update with normal dict
params_3 = ParameterDict()
std_dict = dict()
params_3['string'] = self.orig_params['string']
std_dict['string_list'] = self.orig_params['string_list']
std_dict['bool_list'] = self.orig_params['bool_list']
std_dict['int'] = self.orig_params['int']
std_dict['float'] = self.orig_params['float']
std_dict['long'] = self.orig_params['long']
std_dict['none'] = self.orig_params['none']
params_3.update(std_dict)
for key in params_3.keys():
assert_equal(params_3[key], self.params[key])
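Judging from the expected values in the test above, the encoding scheme can be sketched as follows. This is an assumed reconstruction, not the actual ``ParameterDict`` implementation: a two-character prefix (type code plus ``s`` for scalar or ``l`` for list) followed by the URL-quoted value, with list elements joined by the literal separator ``0newelement0``. Python 2 ``long`` values, which map to the ``d`` code, are omitted here.

```python
from urllib.parse import quote

# Type codes inferred from the expected encoded values in the test.
TYPE_CODES = {str: 's', int: 'i', float: 'f', bool: 'b', type(None): 'n'}

def encode_value(value):
    if isinstance(value, list):
        prefix = TYPE_CODES[type(value[0])] + 'l'
        return prefix + '0newelement0'.join(quote(str(v)) for v in value)
    return TYPE_CODES[type(value)] + 's' + quote(str(value))
```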

View File

@@ -27,7 +27,7 @@ import re
from wlauto.exceptions import DeviceError, ConfigError, HostError, WAError
from wlauto.utils.misc import (check_output, escape_single_quotes,
escape_double_quotes, get_null,
escape_double_quotes, get_null, which,
CalledProcessErrorWithStderr, ABI_MAP)
@@ -38,6 +38,9 @@ logger = logging.getLogger('android')
# See:
# http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels
ANDROID_VERSION_MAP = {
25: 'NOUGAT_MR1',
24: 'NOUGAT',
23: 'MARSHMALLOW',
22: 'LOLLIPOP_MR1',
21: 'LOLLIPOP',
20: 'KITKAT_WATCH',
@@ -100,6 +103,24 @@ ANDROID_NORMAL_PERMISSIONS = [
'UNINSTALL_SHORTCUT',
]
ANDROID_UNCHANGEABLE_PERMISSIONS = [
'USE_CREDENTIALS',
'MANAGE_ACCOUNTS',
'DOWNLOAD_WITHOUT_NOTIFICATION',
'AUTHENTICATE_ACCOUNTS',
'WRITE_SETTINGS',
'WRITE_SYNC_STATS',
'SUBSCRIBED_FEEDS_WRITE',
'SUBSCRIBED_FEEDS_READ',
'READ_PROFILE',
'WRITE_MEDIA_STORAGE',
'RESTART_PACKAGES',
'MOUNT_UNMOUNT_FILESYSTEMS',
'CLEAR_APP_CACHE',
'GET_TASKS',
]
# Package versions that are known to have problems with AndroidUiAutoBenchmark workloads.
# NOTE: ABI versions are not included.
UNSUPPORTED_PACKAGES = {
@@ -346,7 +367,10 @@ def adb_shell(device, command, timeout=None, check_exit_code=False, as_root=Fals
message += '\n{}'.format(am_start_error.findall(output)[0])
raise DeviceError(message)
else:
raise DeviceError('adb has returned early; did not get an exit code. Was kill-server invoked?')
message = 'adb has returned early; did not get an exit code. '\
'Was kill-server invoked?\nOUTPUT:\n-----\n{}\n'\
'-----ERROR:\n-----\n{}\n-----'
raise DeviceError(message.format(raw_output, error))
else: # do not check exit code
try:
output, error = check_output(full_command, timeout, shell=True)
@@ -434,19 +458,26 @@ def _initialize_without_android_home(env):
def _init_common(env):
logger.debug('ANDROID_HOME: {}'.format(env.android_home))
build_tools_directory = os.path.join(env.android_home, 'build-tools')
if not os.path.isdir(build_tools_directory):
msg = 'ANDROID_HOME ({}) does not appear to have valid Android SDK install (cannot find build-tools)'
raise HostError(msg.format(env.android_home))
versions = os.listdir(build_tools_directory)
for version in reversed(sorted(versions)):
aapt_path = os.path.join(build_tools_directory, version, 'aapt')
if os.path.isfile(aapt_path):
logger.debug('Using aapt for version {}'.format(version))
env.aapt = aapt_path
break
if os.path.isdir(build_tools_directory):
versions = os.listdir(build_tools_directory)
for version in reversed(sorted(versions)):
aapt_path = os.path.join(build_tools_directory, version, 'aapt')
if os.path.isfile(aapt_path):
logger.debug('Using aapt for version {}'.format(version))
env.aapt = aapt_path
break
else:
raise HostError('aapt not found. Please make sure at least one Android platform is installed.')
else:
raise HostError('aapt not found. Please make sure at least one Android platform is installed.')
# User may already installed 'aapt' from package provided by distro's
# Try finding in $PATH
aapt_path = which('aapt')
if aapt_path:
logger.debug('Using aapt from {}'.format(aapt_path))
env.aapt = aapt_path
else:
msg = 'ANDROID_HOME ({}) does not appear to have valid Android SDK install (cannot find build-tools)'
raise HostError(msg.format(env.android_home))
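The resolution order introduced above — newest ``build-tools`` version first, then ``aapt`` from ``$PATH`` — can be sketched as follows (``find_aapt`` is a hypothetical, simplified helper; the real code logs and raises ``HostError`` instead of returning ``None``):

```python
import os
import shutil

def find_aapt(android_home):
    # Prefer aapt from the newest build-tools version under ANDROID_HOME...
    build_tools = os.path.join(android_home, 'build-tools')
    if os.path.isdir(build_tools):
        for version in sorted(os.listdir(build_tools), reverse=True):
            candidate = os.path.join(build_tools, version, 'aapt')
            if os.path.isfile(candidate):
                return candidate
    # ...then fall back to a distro-provided aapt on $PATH.
    return shutil.which('aapt')  # None if not found anywhere
```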
def _check_env():
global android_home, platform_tools, adb, aapt # pylint: disable=W0603

View File

@@ -29,6 +29,8 @@ GfxInfoFrame = collections.namedtuple('GfxInfoFrame', 'Flags IntendedVsync Vsync
# Android M: WindowLayoutChanged | SurfaceCanvas
GFXINFO_EXEMPT = 1 | 4
VSYNC_INTERVAL = 16666667
class FpsProcessor(object):
"""

View File

@@ -36,20 +36,23 @@ NBFORMAT_VERSION = 3
if IPython:
if LooseVersion('5.0.0') > LooseVersion(IPython.__version__) >= LooseVersion('4.0.0'):
import nbformat
from jupyter_client.manager import KernelManager
if LooseVersion('6.0.0') > LooseVersion(IPython.__version__) >= LooseVersion('4.0.0'):
try:
import nbformat
from jupyter_client.manager import KernelManager
def read_notebook(notebook_in): # pylint: disable=function-redefined
return nbformat.reads(notebook_in, NBFORMAT_VERSION) # pylint: disable=E1101
def read_notebook(notebook_in): # pylint: disable=function-redefined
return nbformat.reads(notebook_in, NBFORMAT_VERSION) # pylint: disable=E1101
def write_notebook(notebook, fout): # pylint: disable=function-redefined
nbformat.write(notebook, fout) # pylint: disable=E1101
def write_notebook(notebook, fout): # pylint: disable=function-redefined
nbformat.write(notebook, fout) # pylint: disable=E1101
NotebookNode = nbformat.NotebookNode # pylint: disable=E1101
NotebookNode = nbformat.NotebookNode # pylint: disable=E1101
IPYTHON_NBCONVERT_HTML = ['jupyter', 'nbconvert', '--to html']
IPYTHON_NBCONVERT_PDF = ['jupyter', 'nbconvert', '--to pdf']
IPYTHON_NBCONVERT_HTML = ['jupyter', 'nbconvert', '--to html']
IPYTHON_NBCONVERT_PDF = ['jupyter', 'nbconvert', '--to pdf']
except ImportError:
import_error_str = 'Please install "jupyter" as well as "ipython" packages.'
elif LooseVersion('4.0.0') > LooseVersion(IPython.__version__) >= LooseVersion('3.0.0'):
from IPython.kernel import KernelManager

View File

@@ -848,3 +848,16 @@ def memoized(func):
return __memo_cache[id_string]
return memoize_wrapper
def commonprefix(file_list, sep=os.sep):
"""
Find the lowest common base folder of a passed list of files.
"""
common_path = os.path.commonprefix(file_list)
cp_split = common_path.split(sep)
other_split = file_list[0].split(sep)
last = len(cp_split) - 1
if cp_split[last] != other_split[last]:
cp_split = cp_split[:-1]
return sep.join(cp_split)
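``os.path.commonprefix`` compares strings character-wise, so it can stop in the middle of a filename; the trailing-component trim above repairs that. A self-contained copy of the helper, for illustration:

```python
import os

def commonprefix(file_list, sep='/'):
    # Same logic as the helper above: trim the character-wise common
    # prefix back to the last complete path component.
    common_path = os.path.commonprefix(file_list)
    cp_split = common_path.split(sep)
    other_split = file_list[0].split(sep)
    last = len(cp_split) - 1
    if cp_split[last] != other_split[last]:
        cp_split = cp_split[:-1]
    return sep.join(cp_split)
```

For example, for ``/a/b/cat.txt`` and ``/a/b/car.txt`` the raw character-wise prefix is ``/a/b/ca``, which the trim reduces to the real common folder ``/a/b``.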

View File

@@ -113,11 +113,12 @@ class SystemPowerState(object):
def num_cores(self):
return len(self.cpus)
def __init__(self, num_cores):
def __init__(self, num_cores, no_idle=False):
self.timestamp = None
self.cpus = []
idle_state = -1 if no_idle else None
for _ in xrange(num_cores):
self.cpus.append(CpuPowerState())
self.cpus.append(CpuPowerState(idle_state=idle_state))
def copy(self):
new = SystemPowerState(self.num_cores)
@@ -154,9 +155,9 @@ class PowerStateProcessor(object):
def __init__(self, core_clusters, num_idle_states,
first_cluster_state=sys.maxint, first_system_state=sys.maxint,
wait_for_start_marker=False):
self.power_state = SystemPowerState(len(core_clusters))
self.requested_states = defaultdict(lambda: -1)  # cpu_id -> requested state
wait_for_start_marker=False, no_idle=False):
self.power_state = SystemPowerState(len(core_clusters), no_idle=no_idle)
self.requested_states = {} # cpu_id -> requeseted state
self.wait_for_start_marker = wait_for_start_marker
self._saw_start_marker = False
self._saw_stop_marker = False
@@ -230,6 +231,7 @@ class PowerStateProcessor(object):
def _process_idle_entry(self, event):
if self.cpu_states[event.cpu_id].is_idling:
raise ValueError('Got idle state entry event for an idling core: {}'.format(event))
self.requested_states[event.cpu_id] = event.idle_state
self._try_transition_to_idle_state(event.cpu_id, event.idle_state)
def _process_idle_exit(self, event):
@@ -250,18 +252,11 @@ class PowerStateProcessor(object):
def _try_transition_to_idle_state(self, cpu_id, idle_state):
related_ids = self.idle_related_cpus[(cpu_id, idle_state)]
idle_state = idle_state
# Tristate: True - can transition, False - can't transition,
# None - unknown idle state on at least one related cpu
transition_check = self._can_enter_state(related_ids, idle_state)
if not transition_check:
# If we can't enter an idle state right now, record that we've
# requested it, so that we may enter it later (once all related
# cpus also want a state at least as deep).
self.requested_states[cpu_id] = idle_state
if transition_check is None:
# Unknown state on a related cpu means we're not sure whether we're
# entering requested state or a shallower one
@@ -276,8 +271,6 @@ class PowerStateProcessor(object):
self.cpu_states[cpu_id].idle_state = idle_state
for rid in related_ids:
self.cpu_states[rid].idle_state = idle_state
if self.requested_states[rid] == idle_state:
del self.requested_states[rid] # request satisfied, so remove
def _can_enter_state(self, related_ids, state):
"""
@@ -288,12 +281,13 @@ class PowerStateProcessor(object):
"""
for rid in related_ids:
rid_requested_state = self.requested_states[rid]
rid_requested_state = self.requested_states.get(rid, None)
rid_current_state = self.cpu_states[rid].idle_state
if rid_current_state is None:
return None
if rid_current_state < state and rid_requested_state < state:
return False
if rid_current_state < state:
if rid_requested_state is None or rid_requested_state < state:
return False
return True
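The tristate logic above can be exercised in isolation (a standalone sketch of ``_can_enter_state``, with the CPU state bookkeeping replaced by plain dicts):

```python
def can_enter_state(current_states, requested_states, related_ids, state):
    # Tristate result, mirroring _can_enter_state above:
    #   True  - every related cpu is in, or has requested, a state at
    #           least as deep as the one being entered
    #   False - some related cpu is shallower and has not requested a
    #           deep enough state
    #   None  - a related cpu's idle state is still unknown
    for rid in related_ids:
        rid_requested = requested_states.get(rid)
        rid_current = current_states[rid]
        if rid_current is None:
            return None
        if rid_current < state:
            if rid_requested is None or rid_requested < state:
                return False
    return True
```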
@@ -340,6 +334,36 @@ def gather_core_states(system_state_stream, freq_dependent_idle_states=None): #
yield (system_state.timestamp, core_states)
def record_state_transitions(reporter, stream):
for event in stream:
if event.kind == 'transition':
reporter.record_transition(event)
yield event
class PowerStateTransitions(object):
def __init__(self, filepath):
self.filepath = filepath
self._wfh = open(filepath, 'w')
self.writer = csv.writer(self._wfh)
headers = ['timestamp', 'cpu_id', 'frequency', 'idle_state']
self.writer.writerow(headers)
def update(self, timestamp, core_states): # NOQA
# Just recording transitions, not doing anything
# with states.
pass
def record_transition(self, transition):
row = [transition.timestamp, transition.cpu_id,
transition.frequency, transition.idle_state]
self.writer.writerow(row)
def report(self):
self._wfh.close()
class PowerStateTimeline(object):
def __init__(self, filepath, core_names, idle_state_names):
@@ -626,7 +650,8 @@ def report_power_stats(trace_file, idle_state_names, core_names, core_clusters,
num_idle_states, first_cluster_state=sys.maxint,
first_system_state=sys.maxint, use_ratios=False,
timeline_csv_file=None, cpu_utilisation=None,
max_freq_list=None, start_marker_handling='error'):
max_freq_list=None, start_marker_handling='error',
transitions_csv_file=None, no_idle=False):
# pylint: disable=too-many-locals,too-many-branches
trace = TraceCmdTrace(trace_file,
filter_markers=False,
@@ -647,7 +672,8 @@ def report_power_stats(trace_file, idle_state_names, core_names, core_clusters,
num_idle_states=num_idle_states,
first_cluster_state=first_cluster_state,
first_system_state=first_system_state,
wait_for_start_marker=wait_for_start_marker)
wait_for_start_marker=wait_for_start_marker,
no_idle=no_idle)
reporters = [
ParallelStats(core_clusters, use_ratios),
PowerStateStats(core_names, idle_state_names, use_ratios)
@@ -663,7 +689,13 @@ def report_power_stats(trace_file, idle_state_names, core_names, core_clusters,
event_stream = trace.parse()
transition_stream = stream_cpu_power_transitions(event_stream)
power_state_stream = ps_processor.process(transition_stream)
if transitions_csv_file:
trans_reporter = PowerStateTransitions(transitions_csv_file)
reporters.append(trans_reporter)
recorded_trans_stream = record_state_transitions(trans_reporter, transition_stream)
power_state_stream = ps_processor.process(recorded_trans_stream)
else:
power_state_stream = ps_processor.process(transition_stream)
core_state_stream = gather_core_states(power_state_stream)
for timestamp, states in core_state_stream:
@@ -700,6 +732,8 @@ def main():
cpu_utilisation=args.cpu_utilisation,
max_freq_list=args.max_freq_list,
start_marker_handling=args.start_marker_handling,
transitions_csv_file=args.transitions_file,
no_idle=args.no_idle,
)
parallel_report = reports.pop(0)
@@ -773,6 +807,11 @@ def parse_arguments(): # NOQA
A timeline of core power states will be written to the specified file in
CSV format.
''')
parser.add_argument('-T', '--transitions-file', metavar='FILE',
help='''
A timeline of core power state transitions will be
written to the specified file in CSV format.
''')
parser.add_argument('-u', '--cpu-utilisation', metavar='FILE',
help='''
A timeline of cpu(s) utilisation will be written to the specified file in
@@ -797,6 +836,13 @@ def parse_arguments(): # NOQA
error: An error will be raised if the start marker is not found in the trace.
try: If the start marker is not found, all events in the trace will be used.
''')
parser.add_argument('-N', '--no-idle', action='store_true',
help='''
Assume that cpuidle is not present or disabled on the system, and therefore that the
initial state of the cores is that they are running. This flag is necessary because
the processor assumes the cores are in an unknown state until it sees the first idle
transition, which will never come if cpuidle is absent.
''')
args = parser.parse_args()

View File

@@ -142,9 +142,10 @@ class ReventRecording(object):
first = last = events.next()
except StopIteration:
self._duration = 0
for last in events:
pass
self._duration = (last.time - first.time).total_seconds()
else:
for last in events:
pass
self._duration = (last.time - first.time).total_seconds()
else: # not streaming
if not self._events:
self._duration = 0

View File

@@ -22,6 +22,7 @@ import re
import threading
import tempfile
import shutil
import time
from pexpect import EOF, TIMEOUT, spawn, pxssh
@@ -36,21 +37,45 @@ sshpass = None
logger = logging.getLogger('ssh')
def ssh_get_shell(host, username, password=None, keyfile=None, port=None, timeout=10, telnet=False):
def ssh_get_shell(host, username, password=None, keyfile=None, port=None, timeout=10, telnet=False, original_prompt=None):
_check_env()
if telnet:
if keyfile:
raise ConfigError('keyfile may not be used with a telnet connection.')
conn = TelnetConnection()
else: # ssh
conn = pxssh.pxssh() # pylint: disable=redefined-variable-type
try:
if keyfile:
conn.login(host, username, ssh_key=keyfile, port=port, login_timeout=timeout)
else:
conn.login(host, username, password, port=port, login_timeout=timeout)
except EOF:
raise DeviceError('Could not connect to {}; is the host name correct?'.format(host))
start_time = time.time()
extra_login_args = {}
while True:
if telnet:
if keyfile:
raise ValueError('keyfile may not be used with a telnet connection.')
conn = TelnetConnection()
if original_prompt:
extra_login_args['original_prompt'] = original_prompt
if port is None:
port = 23
else: # ssh
conn = pxssh.pxssh()
try:
if keyfile:
conn.login(host, username, ssh_key=keyfile, port=port, login_timeout=timeout, **extra_login_args)
else:
conn.login(host, username, password, port=port, login_timeout=timeout, **extra_login_args)
break
except EOF:
timeout -= time.time() - start_time
if timeout <= 0:
message = 'Could not connect to {}; is the host name correct?'
raise DeviceError(message.format(host))
time.sleep(5)
conn.sendline('shopt -s checkwinsize')
conn.prompt()
conn.setwinsize(500,200)
conn.sendline('')
conn.prompt()
conn.sendline('stty rows 500')
conn.prompt()
conn.sendline('stty cols 200')
conn.prompt()
conn.setecho(False)
return conn
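The login loop above retries failed connection attempts until the timeout budget is exhausted. The same pattern can be sketched as a generic, deadline-based helper (an illustration only, not WA code; `ConnectionError` stands in for the pexpect `EOF` the real loop catches):

```python
import time

def retry_until_deadline(connect, timeout=10, delay=5):
    # Keep calling connect() until it succeeds or the deadline passes;
    # a fixed delay separates attempts, mirroring the login loop above.
    deadline = time.time() + timeout
    while True:
        try:
            return connect()
        except ConnectionError:
            if time.time() >= deadline:
                raise
            time.sleep(delay)
```

Using a fixed deadline avoids re-subtracting the cumulative elapsed time on every failed attempt.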

View File

@@ -136,9 +136,6 @@ def verify_state(screenshot_file, state_defs_path, workload_phase):
with open(statedefs_file) as fh:
state_definitions = yaml.load(fh)
# run a match on the screenshot
matched_state = match_state(screenshot_file, state_defs_path, state_definitions)
# find what the expected state is for the given workload phase
expected_state = None
for phase in state_definitions["workload_phases"]:
@@ -148,4 +145,7 @@ def verify_state(screenshot_file, state_defs_path, workload_phase):
if expected_state is None:
raise StateDefinitionError("Phase not defined")
# run a match on the screenshot
matched_state = match_state(screenshot_file, state_defs_path, state_definitions)
return expected_state == matched_state

View File

@@ -41,7 +41,22 @@ class TraceCmdEvent(object):
"""
__slots__ = ['thread', 'reporting_cpu_id', 'timestamp', 'name', 'text', 'fields']
__slots__ = ['thread', 'reporting_cpu_id', 'timestamp', 'name', 'text', '_fields', '_parser']
@property
def fields(self):
if self._fields is not None:
return self._fields
self._fields = {}
if self._parser:
try:
self._parser(self, self.text)
except Exception: # pylint: disable=broad-except
# unknown format assume user does not care or know how to
# parse self.text
pass
return self._fields
def __init__(self, thread, cpu_id, ts, name, body, parser=None):
"""
@@ -70,15 +85,8 @@ class TraceCmdEvent(object):
self.timestamp = numeric(ts)
self.name = name
self.text = body
self.fields = {}
if parser:
try:
parser(self, self.text)
except Exception: # pylint: disable=broad-except
# unknown format assume user does not care or know how to
# parse self.text
pass
self._fields = None
self._parser = parser
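The change above makes field parsing lazy: the body text is only parsed on first access to `fields`, and a failed parse leaves the dict empty. A minimal standalone sketch of the pattern (hypothetical class name, not the real `TraceCmdEvent`):

```python
class LazyParsed(object):
    """Minimal sketch of the lazy body-parsing pattern above."""

    def __init__(self, text, parser=None):
        self.text = text
        self._parser = parser
        self._fields = None  # populated on first access to .fields

    @property
    def fields(self):
        if self._fields is not None:
            return self._fields
        self._fields = {}
        if self._parser:
            try:
                self._parser(self, self.text)
            except Exception:
                # unknown format: assume the caller does not care
                pass
        return self._fields
```

Subsequent accesses return the cached dict, so the (potentially expensive) parse runs at most once per event.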
def __getattr__(self, name):
try:
@@ -143,7 +151,7 @@ def default_body_parser(event, text):
v = int(v)
except ValueError:
pass
event.fields[k] = v
event._fields[k] = v
def regex_body_parser(regex, flags=0):
@@ -166,9 +174,9 @@ def regex_body_parser(regex, flags=0):
if match:
for k, v in match.groupdict().iteritems():
try:
event.fields[k] = int(v)
event._fields[k] = int(v)
except ValueError:
event.fields[k] = v
event._fields[k] = v
return regex_parser_func
@@ -191,6 +199,20 @@ def sched_switch_parser(event, text):
return default_body_parser(event, text.replace('==>', ''))
def sched_stat_parser(event, text):
"""
sched_stat_* events include the units, "[ns]", in an otherwise
regular key=value sequence, so the units need to be stripped out first.
"""
return default_body_parser(event, text.replace(' [ns]', ''))
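The effect of stripping the ' [ns]' suffix before the regular key=value parse can be shown with a self-contained re-implementation of the parsing loop (an illustrative sketch; the real `default_body_parser` writes into the event object instead of returning a dict):

```python
def parse_body(text):
    # key=value parsing with the sched_stat_* "[ns]" unit suffix removed,
    # converting values to int where possible
    fields = {}
    for pair in text.replace(' [ns]', '').split():
        key, _, value = pair.partition('=')
        try:
            value = int(value)
        except ValueError:
            pass
        fields[key] = value
    return fields
```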
def sched_wakeup_parser(event, text):
regex = re.compile(r'(?P<comm>\S+):(?P<pid>\d+) \[(?P<prio>\d+)\] success=(?P<success>\d) CPU:(?P<cpu>\d+)')
parse_func = regex_body_parser(regex)
return parse_func(event, text)
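The wakeup regex can be exercised directly on a sample body to show the named groups it extracts (the sample line is fabricated for illustration):

```python
import re

SCHED_WAKEUP_REGEX = re.compile(
    r'(?P<comm>\S+):(?P<pid>\d+) \[(?P<prio>\d+)\] success=(?P<success>\d) CPU:(?P<cpu>\d+)')

match = SCHED_WAKEUP_REGEX.search('sshd:217 [120] success=1 CPU:002')
fields = match.groupdict()
# fields == {'comm': 'sshd', 'pid': '217', 'prio': '120', 'success': '1', 'cpu': '002'}
```

`regex_body_parser` then converts each captured value to `int` where possible before storing it on the event.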
# Maps event onto the corresponding parser for its body text. A parser may be
# a callable with signature
#
@@ -200,12 +222,16 @@ def sched_switch_parser(event, text):
# regex). In case of a string/regex, its named groups will be used to populate
# the event's attributes.
EVENT_PARSER_MAP = {
'sched_stat_blocked': sched_stat_parser,
'sched_stat_iowait': sched_stat_parser,
'sched_stat_runtime': sched_stat_parser,
'sched_stat_sleep': sched_stat_parser,
'sched_stat_wait': sched_stat_parser,
'sched_switch': sched_switch_parser,
'sched_wakeup': sched_wakeup_parser,
'sched_wakeup_new': sched_wakeup_parser,
}
TRACE_EVENT_REGEX = re.compile(r'^\s+(?P<thread>\S+.*?\S+)\s+\[(?P<cpu_id>\d+)\]\s+(?P<ts>[\d.]+):\s+'
r'(?P<name>[^:]+):\s+(?P<body>.*?)\s*$')
HEADER_REGEX = re.compile(r'^\s*(?:version|cpus)\s*=\s*([\d.]+)\s*$')
DROPPED_EVENTS_REGEX = re.compile(r'CPU:(?P<cpu_id>\d+) \[\d*\s*EVENTS DROPPED\]')
@@ -213,6 +239,28 @@ DROPPED_EVENTS_REGEX = re.compile(r'CPU:(?P<cpu_id>\d+) \[\d*\s*EVENTS DROPPED\]
EMPTY_CPU_REGEX = re.compile(r'CPU \d+ is empty')
def split_trace_event_line(line):
"""
Split a trace-cmd event line into the preamble (containing the task, cpu id
and timestamp), the event name, and the event body. Each of these is
delimited by a ': ' (optionally followed by more whitespace); however, ': '
may also appear in the body of the event and in the thread name. This
attempts to identify the correct split by ensuring that there is a '['
(used to mark the cpu id, and not a valid character for a task name) in the
preamble.
parts = line.split(': ')
if len(parts) <= 3:
return parts
preamble = parts.pop(0)
while '[' not in preamble:
preamble += ': ' + parts.pop(0)
event_name = parts.pop(0)
return (preamble, event_name, ': '.join(parts))
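The heuristic can be checked against a line whose thread name itself contains ': ' (the function is reproduced standalone here; the sample line is fabricated):

```python
def split_trace_event_line(line):
    # split on ': ', then re-join leading parts until the preamble
    # contains the '[' that introduces the cpu id
    parts = line.split(': ')
    if len(parts) <= 3:
        return parts
    preamble = parts.pop(0)
    while '[' not in preamble:
        preamble += ': ' + parts.pop(0)
    event_name = parts.pop(0)
    return (preamble, event_name, ': '.join(parts))

line = 'irq/33: handler-42 [001] 100.5: my_event: key=value: extra'
result = split_trace_event_line(line)
# result == ('irq/33: handler-42 [001] 100.5', 'my_event', 'key=value: extra')
```

Both the thread name and the body contain ': ', yet the split lands on the correct boundaries because only the preamble can contain '['.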
class TraceCmdTrace(object):
@property
@@ -248,27 +296,30 @@ class TraceCmdTrace(object):
elif TRACE_MARKER_STOP in line:
break
match = DROPPED_EVENTS_REGEX.search(line)
if match:
yield DroppedEventsEvent(match.group('cpu_id'))
continue
matched = False
for rx in [HEADER_REGEX, EMPTY_CPU_REGEX]:
match = rx.search(line)
if 'EVENTS DROPPED' in line:
match = DROPPED_EVENTS_REGEX.search(line)
if match:
logger.debug(line.strip())
matched = True
break
if matched:
yield DroppedEventsEvent(match.group('cpu_id'))
continue
if line.startswith('version') or line.startswith('cpus') or\
line.startswith('CPU:'):
matched = False
for rx in [HEADER_REGEX, EMPTY_CPU_REGEX]:
match = rx.search(line)
if match:
logger.debug(line.strip())
matched = True
break
if matched:
continue
# <thread/cpu/timestamp>: <event name>: <event body>
parts = split_trace_event_line(line)
if len(parts) != 3:
continue
match = TRACE_EVENT_REGEX.search(line)
if not match:
logger.warning('Invalid trace event: "{}"'.format(line))
continue
event_name = match.group('name')
event_name = parts[1].strip()
if filters:
found = False
@@ -279,10 +330,22 @@ class TraceCmdTrace(object):
if not found:
continue
thread_string, rest = parts[0].rsplit(' [', 1)
cpu_id, ts_string = rest.split('] ')
body = parts[2].strip()
body_parser = EVENT_PARSER_MAP.get(event_name, default_body_parser)
if isinstance(body_parser, basestring) or isinstance(body_parser, re._pattern_type): # pylint: disable=protected-access
body_parser = regex_body_parser(body_parser)
yield TraceCmdEvent(parser=body_parser, **match.groupdict())
yield TraceCmdEvent(
thread=thread_string.strip(),
cpu_id=cpu_id,
ts=ts_string.strip(),
name=event_name,
body=body,
parser=body_parser,
)
else:
if self.filter_markers and inside_marked_region:
logger.warning('Did not encounter a stop marker in trace')

View File

@@ -30,6 +30,7 @@ import re
import math
import shlex
from collections import defaultdict
from urllib import quote, unquote
from wlauto.utils.misc import isiterable, to_identifier
@@ -82,6 +83,9 @@ def numeric(value):
return ivalue
return fvalue
def file_path(value):
"""Handles expansion of paths containing '~'"""
return os.path.expanduser(value)
def list_of_strs(value):
"""
@@ -328,3 +332,119 @@ class range_dict(dict):
def __setitem__(self, i, v):
i = int(i)
super(range_dict, self).__setitem__(i, v)
class ParameterDict(dict):
"""
A dict-like object that automatically encodes various types into a url safe string,
and enforces a single type for the contents in a list.
Each value is first prefixed with 2 letters to preserve type when encoding to a string.
The format used is "&lt;value_type&gt;&lt;value_dimension&gt;", e.g. a 'list of floats' would become 'fl'.
"""
# Function to determine the appropriate prefix based on the parameters type
@staticmethod
def _get_prefix(obj):
if isinstance(obj, basestring):
prefix = 's'
elif isinstance(obj, float):
prefix = 'f'
elif isinstance(obj, long):
prefix = 'd'
elif isinstance(obj, bool):
prefix = 'b'
elif isinstance(obj, int):
prefix = 'i'
elif obj is None:
prefix = 'n'
else:
raise ValueError('Unable to encode {} {}'.format(obj, type(obj)))
return prefix
# Function to add prefix and urlencode a provided parameter.
@staticmethod
def _encode(obj):
if isinstance(obj, list):
t = type(obj[0])
prefix = ParameterDict._get_prefix(obj[0]) + 'l'
for item in obj:
if not isinstance(item, t):
msg = 'Lists must only contain a single type, contains {} and {}'
raise ValueError(msg.format(t, type(item)))
obj = '0newelement0'.join(str(x) for x in obj)
else:
prefix = ParameterDict._get_prefix(obj) + 's'
return quote(prefix + str(obj))
# Function to decode a string and return a value of the original parameter type.
# pylint: disable=too-many-return-statements
@staticmethod
def _decode(string):
value_type = string[:1]
value_dimension = string[1:2]
value = unquote(string[2:])
if value_dimension == 's':
if value_type == 's':
return str(value)
elif value_type == 'b':
return boolean(value)
elif value_type == 'd':
return long(value)
elif value_type == 'f':
return float(value)
elif value_type == 'i':
return int(value)
elif value_type == 'n':
return None
elif value_dimension == 'l':
return [ParameterDict._decode(value_type + 's' + x)
for x in value.split('0newelement0')]
else:
raise ValueError('Unknown {} {}'.format(type(string), string))
def __init__(self, *args, **kwargs):
for k, v in kwargs.iteritems():
self.__setitem__(k, v)
dict.__init__(self, *args)
def __setitem__(self, name, value):
dict.__setitem__(self, name, self._encode(value))
def __getitem__(self, name):
return self._decode(dict.__getitem__(self, name))
def __contains__(self, item):
return dict.__contains__(self, self._encode(item))
def __iter__(self):
return iter((k, self._decode(v)) for (k, v) in self.items())
def iteritems(self):
return self.__iter__()
def get(self, name):
return self._decode(dict.get(self, name))
def pop(self, key):
return self._decode(dict.pop(self, key))
def popitem(self):
key, value = dict.popitem(self)
return (key, self._decode(value))
def iter_encoded_items(self):
return dict.iteritems(self)
def get_encoded_value(self, name):
return dict.__getitem__(self, name)
def values(self):
return [self[k] for k in dict.keys(self)]
def update(self, *args, **kwargs):
for d in list(args) + [kwargs]:
if isinstance(d, ParameterDict):
dict.update(self, d)
else:
for k, v in d.iteritems():
self[k] = v
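The two-letter type prefix plus URL-encoding scheme can be sketched as a standalone round trip (written in Python 3 for illustration, covering only a subset of the types `ParameterDict` supports):

```python
from urllib.parse import quote, unquote

TYPE_PREFIX = {str: 's', bool: 'b', int: 'i', float: 'f'}
CASTERS = {'s': str, 'b': lambda v: v == 'True', 'i': int, 'f': float}

def encode(obj):
    # first letter: value type; second letter: 's' (scalar) or 'l' (list)
    if isinstance(obj, list):
        prefix = TYPE_PREFIX[type(obj[0])] + 'l'
        payload = '0newelement0'.join(str(x) for x in obj)
    else:
        prefix = TYPE_PREFIX[type(obj)] + 's'
        payload = str(obj)
    return quote(prefix + payload)

def decode(string):
    # quote() never escapes letters, so the two prefix chars slice cleanly
    vtype, vdim, payload = string[0], string[1], unquote(string[2:])
    cast = CASTERS[vtype]
    if vdim == 'l':
        return [cast(x) for x in payload.split('0newelement0')]
    return cast(payload)
```

For example, `'hello world'` encodes to `'sshello%20world'` and decodes back unchanged.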

wlauto/utils/uxperf.py Executable file
View File

@@ -0,0 +1,170 @@
import os
import re
import logging
from collections import defaultdict
from wlauto.utils.fps import FpsProcessor, SurfaceFlingerFrame, GfxInfoFrame, VSYNC_INTERVAL
try:
import pandas as pd
except ImportError:
pd = None
class UxPerfParser(object):
'''
Parses logcat messages for UX Performance markers.
UX Performance markers are output from logcat under a debug priority. The
logcat tag for the marker messages is UX_PERF. The messages associated with
this tag consist of a name for the action to be recorded and a timestamp.
These fields are delimited by a single space. e.g.
<TAG> : <MESSAGE>
UX_PERF : gestures_swipe_left_start 861975087367
...
...
UX_PERF : gestures_swipe_left_end 862132085804
Timestamps are produced using the running Java Virtual Machine's
high-resolution time source, in nanoseconds.
'''
def __init__(self, context, prefix=''):
self.context = context
self.prefix = prefix
self.actions = defaultdict(list)
self.logger = logging.getLogger('UxPerfParser')
# regex for matching logcat message format:
self.regex = re.compile(r'UX_PERF.*?:\s*(?P<message>.*\d+$)')
def parse(self, log):
'''
Opens log file and parses UX_PERF markers.
Actions delimited by markers are captured in a dictionary with
actions mapped to timestamps.
'''
loglines = self._read(log)
self._gen_action_timestamps(loglines)
def add_action_frames(self, frames, drop_threshold, generate_csv): # pylint: disable=too-many-locals
'''
Uses FpsProcessor to parse frame.csv extracting fps, frame count, jank
and vsync metrics on a per action basis. Adds results to metrics.
'''
refresh_period = self._parse_refresh_peroid()
for action in self.actions:
# default values
fps, frame_count, janks, not_at_vsync = float('nan'), 0, 0, 0
p90, p95, p99 = [float('nan')] * 3
metrics = (fps, frame_count, janks, not_at_vsync)
df = self._create_sub_df(self.actions[action], frames)
if not df.empty: # pylint: disable=maybe-no-member
fp = FpsProcessor(df, action=action)
try:
per_frame_fps, metrics = fp.process(refresh_period, drop_threshold)
fps, frame_count, janks, not_at_vsync = metrics
if generate_csv:
name = action + '_fps'
filename = name + '.csv'
fps_outfile = os.path.join(self.context.output_directory, filename)
per_frame_fps.to_csv(fps_outfile, index=False, header=True)
self.context.add_artifact(name, path=filename, kind='data')
p90, p95, p99 = fp.percentiles()
except AttributeError:
self.logger.warning('Non-matched timestamps in dumpsys output: action={}'
.format(action))
self.context.result.add_metric(self.prefix + action + '_FPS', fps)
self.context.result.add_metric(self.prefix + action + '_frame_count', frame_count)
self.context.result.add_metric(self.prefix + action + '_janks', janks, lower_is_better=True)
self.context.result.add_metric(self.prefix + action + '_not_at_vsync', not_at_vsync, lower_is_better=True)
self.context.result.add_metric(self.prefix + action + '_frame_time_90percentile', p90, 'ms', lower_is_better=True)
self.context.result.add_metric(self.prefix + action + '_frame_time_95percentile', p95, 'ms', lower_is_better=True)
self.context.result.add_metric(self.prefix + action + '_frame_time_99percentile', p99, 'ms', lower_is_better=True)
def add_action_timings(self):
'''
Add simple action timings in millisecond resolution to metrics
'''
for action, timestamps in self.actions.iteritems():
# nanosecond precision, but not necessarily nanosecond resolution
# truncate to guarantee millisecond precision
ts_ms = tuple(int(int(ts) / 1e6) for ts in timestamps)
if len(ts_ms) == 2:
start, finish = ts_ms
duration = finish - start
result = self.context.result
result.add_metric(self.prefix + action + "_start", start, units='ms')
result.add_metric(self.prefix + action + "_finish", finish, units='ms')
result.add_metric(self.prefix + action + "_duration", duration, units='ms', lower_is_better=True)
else:
self.logger.warning('Expected two timestamps. Received {}'.format(ts_ms))
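The nanosecond-to-millisecond truncation above can be checked against the timestamps from the class docstring:

```python
# nanosecond timestamps from the UX_PERF docstring example
timestamps = ('861975087367', '862132085804')
start_ms, finish_ms = (int(int(ts) / 1e6) for ts in timestamps)
duration_ms = finish_ms - start_ms
# start_ms == 861975, finish_ms == 862132, duration_ms == 157
```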
def _gen_action_timestamps(self, lines):
'''
Parses lines and matches against logcat tag.
Groups timestamps by action name.
Creates a dictionary of lists with actions mapped to timestamps.
'''
for line in lines:
match = self.regex.search(line)
if match:
message = match.group('message')
action_with_suffix, timestamp = message.rsplit(' ', 1)
action, _ = action_with_suffix.rsplit('_', 1)
self.actions[action].append(timestamp)
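The marker-matching logic above can be demonstrated end to end on the sample lines from the docstring (the `D/` logcat priority prefix is assumed for illustration):

```python
import re
from collections import defaultdict

UX_PERF_REGEX = re.compile(r'UX_PERF.*?:\s*(?P<message>.*\d+$)')
actions = defaultdict(list)
loglines = [
    'D/UX_PERF : gestures_swipe_left_start 861975087367',
    'D/UX_PERF : gestures_swipe_left_end 862132085804',
]
for line in loglines:
    match = UX_PERF_REGEX.search(line)
    if match:
        action_with_suffix, timestamp = match.group('message').rsplit(' ', 1)
        action, _ = action_with_suffix.rsplit('_', 1)  # strip _start/_end
        actions[action].append(timestamp)
# actions == {'gestures_swipe_left': ['861975087367', '862132085804']}
```

The `_start`/`_end` suffix is stripped so that both markers of an action group under the same key, giving the timestamp pair that `add_action_timings` later turns into start/finish/duration metrics.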
def _parse_refresh_peroid(self):
'''
Reads the first line of the raw dumpsys output for the refresh period.
'''
raw_path = os.path.join(self.context.output_directory, 'surfaceflinger.raw')
if os.path.isfile(raw_path):
raw_lines = self._read(raw_path)
refresh_period = int(raw_lines.next())
else:
refresh_period = VSYNC_INTERVAL
return refresh_period
def _create_sub_df(self, action, frames):
'''
Creates a data frame containing fps metrics for a captured action.
'''
if len(action) == 2:
start, end = map(int, action)
df = pd.read_csv(frames)
# SurfaceFlinger Algorithm
if df.columns.tolist() == list(SurfaceFlingerFrame._fields): # pylint: disable=maybe-no-member
field = 'actual_present_time'
# GfxInfo Algorithm
elif df.columns.tolist() == list(GfxInfoFrame._fields): # pylint: disable=maybe-no-member
field = 'FrameCompleted'
else:
field = ''
self.logger.error('frames.csv not in a recognised format. Cannot parse.')
if field:
df = df[start < df[field]]
df = df[df[field] <= end]
else:
self.logger.warning('Discarding action. Expected 2 timestamps, got {}!'.format(len(action)))
df = pd.DataFrame()
return df
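The half-open interval filter (`start < t <= end`) used above can be shown on a toy frame table (requires pandas; the column name is the one from the SurfaceFlinger path):

```python
import pandas as pd

df = pd.DataFrame({'actual_present_time': [100, 200, 300, 400]})
start, end = 150, 350
# keep frames strictly after the action start and up to (and including) its end
window = df[(start < df['actual_present_time']) & (df['actual_present_time'] <= end)]
# window['actual_present_time'] contains 200 and 300
```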
def _read(self, log):
'''
Opens a file and yields its lines with whitespace stripped.
'''
try:
with open(log, 'r') as rfh:
for line in rfh:
yield line.strip()
except IOError:
self.logger.error('Could not open {}'.format(log))

View File

@@ -24,7 +24,6 @@ class AdobeReader(AndroidUxPerfWorkload):
name = 'adobereader'
package = 'com.adobe.reader'
min_apk_version = '16.1'
activity = 'com.adobe.reader.AdobeReader'
view = [package + '/com.adobe.reader.help.AROnboardingHelpActivity',
package + '/com.adobe.reader.viewer.ARSplitPaneActivity',
@@ -45,6 +44,8 @@ class AdobeReader(AndroidUxPerfWorkload):
3. Search test:
Search ``document_name`` for each string in the ``search_string_list``
4. Close the document
Known working APK version: 16.1
'''
default_search_strings = [
@@ -75,8 +76,8 @@ class AdobeReader(AndroidUxPerfWorkload):
def validate(self):
super(AdobeReader, self).validate()
self.uiauto_params['filename'] = self.document_name.replace(' ', '0space0')
self.uiauto_params['search_string_list'] = '0newline0'.join([x.replace(' ', '0space0') for x in self.search_string_list])
self.uiauto_params['filename'] = self.document_name
self.uiauto_params['search_string_list'] = self.search_string_list
# Only accept certain file formats
if os.path.splitext(self.document_name.lower())[1] not in ['.pdf']:
raise ValidationError('{} must be a PDF file'.format(self.document_name))

Some files were not shown because too many files have changed in this diff.