mirror of https://github.com/ARM-software/workload-automation.git synced 2025-07-15 03:23:47 +01:00

147 Commits

Author SHA1 Message Date
e312efc113 fw/version: Version bump for minor fixes 2019-01-10 13:21:16 +00:00
0ea9e2fb63 setup: Update devlib dependency to the release version 2019-01-10 13:21:16 +00:00
78090bd94e doc/changes: Add change log for v3.1.1 2019-01-10 13:21:16 +00:00
ef45b6b615 MANIFEST: Fix including all of the wa subdirectory
Ensure that all subfolders are included in the MANIFEST; otherwise,
files can be missing when packaging WA.
2019-01-10 13:21:16 +00:00
22c237ebe9 extras/Docker: Update to use latest release version.
Update the dockerfile to use the latest released versions of WA and Devlib.
2019-01-10 13:21:16 +00:00
ed95755af5 fw/output: better classifiers format for metrics
Use a dict-like string representation for classifiers, rather than the
default OrderedDict one, which is a lot more verbose and difficult to
read.
2019-01-10 13:03:29 +00:00
4c6636eb72 tools/revent: update binaries to latest version
- cross-compiled revent binaries to match latest version (with recording timestamp fix f64aaf6 on 12 Oct 2018)
toolchains used:
gcc-linaro-7.3.1-2018.05-x86_64_arm-linux-gnueabi
gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu

- fixes error in utils/revent.py when reading timestamps from recordings made with previous wa revent binaries
2019-01-07 13:31:07 +00:00
60fe412548 wa/version: Update to development version
Update WA and devlib versions to development tags.
2019-01-04 11:29:10 +00:00
e187e7efd6 fw/version: Version bump to v3.1.0 2018-12-21 14:31:07 +00:00
d9e16bfebd setup.py: Update devlib release versions
Update WA to use the new release version of devlib on PyPI instead of
the github repo.
2018-12-21 14:31:06 +00:00
d21258e24d docs/changes: Update changelog for V3.1 2018-12-21 14:26:55 +00:00
8770888685 workloads/googleslides: Misc Fixes:
- Move the slide editing test into the main runWorkload instead of
setup.
- On some devices the folder picker has changed layout, so add support
for navigating it.
- Add support for differently capitalized splash buttons.
- Add workaround for adding a new slide if clicking the button doesn't work
the first time.
2018-12-21 14:26:55 +00:00
755417f139 workloads/speedometer: Misc Fixes
- Fix formatting
- Skip teardown automation if elements are not present on some devices
instead of failing the workload.
- Give extra time for start button to appear as some devices can be slow
to load.
2018-12-21 14:26:55 +00:00
ba4004db5f workloads/googlemaps: Fix for alternative layouts.
Add an additional check for the text-based directions button, as its id
can be missing on some devices, and allow skipping the view-steps stage
on large-screen devices which do not require it.
2018-12-21 14:26:55 +00:00
87ac9c6ab3 workloads/androbench: Fix extracting benchmark results
On some devices the entire results page fits on one screen and does not
present a scrollable element, therefore only attempt to scroll if
available.
2018-12-21 14:26:55 +00:00
ea5ea90bb6 docs/dev_info/processing_output: Fix formatting 2018-12-21 14:26:55 +00:00
b93beb3f1f commands/show: Revert quoting method switch
In commit bb282eb19c devlib's
`escape_double_quotes` method was retired in favour of `pipes.quote`;
however, this does not format correctly for this purpose, therefore
revert to the original escaping method.
2018-12-21 14:05:14 +00:00
ca0d2eaaf5 setup.py: Add missing classifier for Python3 2018-12-14 07:44:44 +00:00
06961d6adb docs/how_tos: Fix incorrect spacing 2018-12-14 07:44:44 +00:00
7d8cd85951 doc/rt_params: Fix incorrect documentation of parameter names 2018-12-14 07:44:44 +00:00
6b03653227 fw/rt_config: Update tunables parameter to match other formats
Update RT param `governor_tunables` to `gov_tunables` to match the style
of the other parameters, e.g. `big_gov_tunables`.
2018-12-14 07:44:44 +00:00
a9e254742a fw/rt_param_manager: Add support for aliased parameters
Additionally check for aliases when matching runtime parameters to their
corresponding cfg points.
2018-12-14 07:44:44 +00:00
f2d6f351cb output_processors/postgres: Fix incorrect parameter
When verifying the database schema, the connection should be passed
instead of a cursor.
2018-12-07 10:51:18 +00:00
916f7cbb17 docs: Update documentation about database output API and create command 2018-12-07 09:55:17 +00:00
72046f5f0b fw/output: Convert Status enums to/from POD during (de)serialization
Previously the `Status` Enum was converted to a string as part of
serialization; now use the Enum's `to_pod` method and make the
corresponding changes for de-serialization.
2018-12-07 09:55:17 +00:00
4f67cda89f utils/types: When creating an enum also try to deserialize from POD
Allows for recreating an Enum from a full string representation of the Enum
rather than just the name of the Enum.
2018-12-07 09:55:17 +00:00
0113940c85 fw/execution: Fix status being assigned as strings 2018-12-07 09:55:17 +00:00
0fb8d261fa fw/output: Add check for schema versions 2018-12-07 09:55:17 +00:00
0426a966da utils/postgres: Relocate functions to retrieve schema information
Move the functions to retrieve schema information to general utilities to
be used in other classes.
2018-12-07 09:55:17 +00:00
eabe15750c commands/create: Allow for upgrading database schema
Provide a method of upgrading existing postgres databases to a new
schema version.
2018-12-07 09:55:17 +00:00
250bf61c4b postgres: Update schema to v1.2
Update the postgres database schema:
    - Rename "resourcegetters" schema to "resource_getters" for
      consistency
    - Rename the "retreies" column to "retry" to better reflect its purpose
    - Store additional information including:
        - POD serialization data
        - Missing target information
        - JSON formatted runstate
2018-12-07 09:55:17 +00:00
64f7c2431e utils/postgres: Rename postgres_convert to house more general methods
Rename the postgres_convert file to allow it to house more general
postgres utility functions.
2018-12-07 09:55:17 +00:00
0fee3debea fw/output: Implement the Output API for using a database backend
Add a RunDatabaseOutput so that the WA output API can be used with run
data stored in a postgres database.
2018-12-07 09:55:17 +00:00
423882a8e6 output_processors/postgres: Update target info to use POD representation
Instead of taking values directly when storing target information use
the POD representation to allow for restoring the state.
2018-12-07 09:55:17 +00:00
86287831b3 utils/serializer: Update exception method to support Python3 2018-12-07 09:55:17 +00:00
e81aaf3421 framework/output: Split out common Output functionality
In preparation for the creation of a DatabaseRunOut split out
functionality that can be shared.
2018-12-07 09:55:17 +00:00
2d7dc61686 output_processors/postgresql: Serialize parameters in json
To make it easier to deserialize the data again, ensure that it is
converted to JSON rather than using the built-in string representation.
2018-12-07 09:55:17 +00:00
88a4677434 utils/serializer: Fix attempting to deserialize a single value. 2018-12-07 08:46:12 +00:00
dcf0418379 fw/config/execution: Implement CombinedConfig as Podable
Ensure that the various Configuration structures now have serialization
versions.
2018-12-07 08:46:12 +00:00
1723ac8132 fw/output: Implement Output structures as Podable
Ensure that the various Output structures now have serialization
versions.
2018-12-07 08:46:12 +00:00
1462f26b2e fw/run: Implement Run Structures as Podable
Ensure that Run structures now have serialization versions.
Also fix serialization/de-serialization of the `Status` type, as
previously this was formatted as a String instead of as a POD.
2018-12-07 08:46:12 +00:00
8ee924b896 fw/config/core: Implement Configuration structures as Podable
Ensure that the various Configuration structures now have serialization versions.
2018-12-07 08:46:12 +00:00
92cf132cf2 fw/target/info: Implement TargetInfo structures as Podable
Ensure that the various data structures used to store target information
now have serialization versions.
2018-12-07 08:46:12 +00:00
4ff7e4aab0 utils/serializer: Add Podable Mix-in class
Add a new mix-in class for classes that are serialized to PODs. The aim
of this class is to ensure that both the original data version and the
current serialization version are known. When attempting to de-serialize
a POD, the serialization version will be compared to the latest version
in WA; if they do not match, the appropriate method will be called to
upgrade the POD to a known structure, populating any missing fields with
a sensible default or converting the existing data to the new format.
2018-12-07 08:46:12 +00:00
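A minimal sketch of the versioning scheme described above (method and attribute names are illustrative, not WA's exact API):

.. code-block:: python

    class Podable(object):

        _pod_serialization_version = 2

        @classmethod
        def from_pod(cls, pod):
            # Upgrade step by step until the POD matches the current version.
            version = pod.get('_pod_version', 1)
            while version < cls._pod_serialization_version:
                upgrade = getattr(cls, '_pod_upgrade_v{}'.format(version + 1))
                pod = upgrade(pod)
                version += 1
            return pod

        def to_pod(self):
            return {'_pod_version': self._pod_serialization_version}

        @staticmethod
        def _pod_upgrade_v2(pod):
            # Populate fields added in v2 with a sensible default.
            pod.setdefault('new_field', None)
            pod['_pod_version'] = 2
            return pod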
e0ffd84239 fw/output: Ensure that Event message is converted to a string
Explicitly convert the passed message into a string, as this is expected
when generating an event summary; otherwise splitting can fail.
2018-12-04 15:15:47 +00:00
d3d5ca9154 workloads/glbench: Port workload to WA3 2018-11-23 17:24:41 +00:00
88f708abf5 target/descriptor: Update default sudo command format
Due to changes introduced in devlib https://github.com/ARM-software/devlib/pull/339
the command placeholder should no longer be in quotes, so remove them from
the default value.
2018-11-21 15:07:25 +00:00
bb282eb19c wa: Remove reference to devlibs escaping methods
As part of https://github.com/ARM-software/devlib/pull/339 the escaping
methods are being removed in favour of `quote` from `pipes`, so make the
corresponding changes here.
2018-11-21 15:07:25 +00:00
285bc2cd0b workloads/gfxbench: Move test selection into setup phase
Previously the configuration of the tests was performed in the run stage
instead of the setup.
2018-11-20 10:10:19 +00:00
0d9dbe8845 workloads/gfxbench: Fix clicking on select tests
The X coordinate was miscalculated when attempting to load the test
selection menu.
2018-11-20 10:10:19 +00:00
c89ea9875e workloads/gfxbench: Fix parameter description 2018-11-20 10:10:19 +00:00
e4283729c1 workloads/gfxbenchmark: Fix score matching
On some devices the score string obtained can contain extra characters.
Only use the numerical value from the score when converting; if no
value is found, set the result to 'NaN'.
2018-11-20 10:10:19 +00:00
a2eb6e96e2 commands/process: Fix initialization of ProcessContext ordering
Ensure that the ProcessContext is initialized before attempting to
initialize any of the output processors.
2018-11-19 10:17:53 +00:00
3bd8f033d5 workloads: Updating geekbench to support v4.3.1
v4.3.1 has made a minor change to the run cpu benchmark element.
Refactoring to support both the new and previous elements.
2018-11-15 15:56:09 +00:00
ea1d4e9071 workloads/gfxbench: Do not clear package data on launch
By clearing the application data each time the workload is run this
forces the required assets to be re-installed each time. As the
workload is not affected by persistent state do not perform the
clearing.
2018-11-15 07:54:43 +00:00
cc0cfaafe3 fw/workload: Add attribute to control if package data should be cleared.
Allow specifying that the package data should not be cleared
before starting the workload.
2018-11-15 07:54:43 +00:00
1b4fc68542 workloads/gfxbench: Fix formatting 2018-11-13 13:06:54 +00:00
e40517ab95 workloads/gfxbench: Fix not detecting missing asset popup
Add check for a differently worded popup informing that assets are
missing.
2018-11-13 13:06:54 +00:00
ce94638436 fw/target: record page size as part of TargetInfo
Record target.page_size_kb as part of target info.
2018-11-02 12:11:00 +00:00
d1fba957b3 fw/target: add versioning to TargetInfo
Add format_version class attribute to TargetInfo to track format
changes. This is checked when deserializing from POD to catch format
changes between cached and obtained TargetInfo instances.
2018-11-02 12:11:00 +00:00
17bb0083e5 doc/installation: Update installation instructions
Update the instructions for installing WA from git not to use
pip as this method does not process dependency_links correctly and
results in an incompatible version of devlib being installed.
2018-10-25 10:32:28 +01:00
c4ad7467e0 doc: Fix formatting and typo 2018-10-25 10:32:28 +01:00
2f75261567 doc: Add WA icon to documentation 2018-10-25 10:32:28 +01:00
281eb6adf9 output_processors/postgresql: Refactor and fix uploading duplication
Previously run level artifacts would be added with a particular job_id,
and updated artifacts would be stored as new objects each time. Refactor
to remove unnecessary instance variables, only provide a job_id when
required, and add an update capability for large objects to ensure this
does not happen.
2018-10-24 10:42:28 +01:00
576df80379 output_processors/postgres: Move logging message
Print the debug message warning about writing a large object to the
database before writing the object.
2018-10-24 10:42:28 +01:00
f2f210c37f utils/postgres_convert: PEP8 fix
Remove unused local variable.
2018-10-24 10:34:44 +01:00
6b04cbffd2 workloads: Fix whitespace 2018-10-24 10:34:44 +01:00
dead312ff7 workloads/uiauto: Update workloads to dismiss android version warning
Update workloads that use uiautomator and can display a warning about
using an old version of the app to dismiss the popup if present.
2018-10-24 10:34:44 +01:00
7632ee8288 fw/uiauto: Add method to baseclass to dismiss android version popup
In Android Q a popup will be displayed warning if the application has
not been designed for the latest version of android. This has so far been
dealt with on a per-workload basis; however, as this is a common popup,
add a method to the base class to dismiss it if present.
2018-10-24 10:34:44 +01:00
8f6b1a7fae workloads/vellamo: Close warning popup message to run on Android Q
While attempting to run vellamo on Android Q, a popup warning with
the message, "This app was built for an older version of Android and may not
work properly. Try checking for updates, or contact the developer." would
appear, causing the workload to halt.

Close the popup warning before dismissing EULA and executing the remaining
steps to run vellamo.

Tested with vellamo apk version 3.2.4.

Signed-off-by: Lisa Nguyen <lisa.nguyen@linaro.org>
2018-10-23 10:17:45 +01:00
f64aaf64a0 tools/revent: recording timestamp fix
- force cast start/end timestamps to uint64_t to correct recording format issue on 32bit devices (i.e. 4 bytes timespec tv_sec written on 8 bytes memory slot)
2018-10-15 09:48:02 +01:00
7dce0fb208 workloads/jankbench: Ensure logcat monitor thread is terminated
Previously the LogcatRunMonitor left the logcat process running in the
background causing issues with concurrent accesses. Now ensure the thread
terminates correctly.
2018-10-12 13:41:21 +01:00
375a36c155 utils/log: Ensure to convert all arguments to strings
Ensure that all arguments provided for an Exception are converted to
strings before attempting to join them for debug information.
2018-10-09 15:26:53 +01:00
7c3054b54b commands/run: Update run output with final run config
The RunInfo object in the run output is initially created before the
config has been fully parsed, therefore attributes for the project and
run name are never updated; once the config has been finalized, make
sure to update the relevant information.
2018-10-09 15:26:53 +01:00
98727bce30 utils/revent: recording parser fixes
- change magic string literal to a b'' string so that the comparison
  works in python 3
- expand timestamp tuples (struct.unpack always returns a tuple) before
  attempting to cast to float.
2018-10-08 17:46:35 +01:00
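Both fixes can be sketched in isolation (the packed value and magic string below are illustrative):

.. code-block:: python

    import struct

    # A b'' literal is needed so magic-string comparisons also work
    # under Python 3, where file contents are read as bytes.
    MAGIC = b'REVENT'

    data = struct.pack('<Q', 1539000000)

    # struct.unpack always returns a tuple, even for a single field, so
    # expand it before casting; float() on the tuple itself would raise
    # a TypeError.
    (tv_sec,) = struct.unpack('<Q', data)
    timestamp = float(tv_sec)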
93ffe0434c workloads/meabo: Support python 3
Ensure output is encoded correctly if running with python 3
2018-10-05 10:10:43 +01:00
75f3080c9b workloads: Use uninstall method instead of uninstall_executable
For workloads that support Linux targets do not use
`uninstall_executable` as this is not available, instead use `uninstall` as
other targets should be able to determine the appropriate uninstallation
method.
2018-10-05 10:10:43 +01:00
75c0e40bb0 workloads/androbench: Fix extracting results with small resolutions
Previously the workload assumed that all the scores were visible on a
single screen; however, for devices with smaller displays the results
need to be scrolled.
2018-10-03 14:33:09 +01:00
e73b299fbe pcmark: update uiautomation to fix Android-Q breakage
A new popup appears when running pcmark on android Q that complains
about the app being built for an older version of android.

Since this popup will be temporary, the fix has to make sure not to
break in the future when this popup disappears or when the test is run
on a compatible version of android.

To achieve this, we attempt to dismiss the popup and if we timeout we
silently carry on with the test assuming no popup will appear.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
2018-10-01 16:12:05 +01:00
9a4a90e0a6 GFXBench: New workload
Creating a new workload to execute the following tests on GFXBench.

* Car Chase
* Car Chase Offscreen
* Manhattan 3.1
* 1080p Manhattan 3.1 Offscreen
* 1440p Manhattan 3.1 Offscreen
* Tessellation
* Tessellation Offscreen
2018-09-25 17:27:58 +01:00
8ba602cf83 Googlephotos: Updating to work with latest version
Updating the googlephotos workload to work with app version 4.0.0.212659618
2018-09-25 10:50:03 +01:00
891ef60f4d configuration: Add support for section groups
Now allows for specifying a `group` value for each section which will
cross product the sections within that group with the sections in each
other group. Additionally classifiers will automatically be added to
each job spec with the relevant group information.
2018-09-24 10:17:26 +01:00
6632223ac5 output_processors: Move variable initialization to __init__
In the case of a failure in the initialization of one output_processor the
remaining `initialize` methods may not get called causing variables to
not be initialized correctly.
2018-09-21 15:06:30 +01:00
5dcac0c8ef output_processors/postgres: Do not process output if not connected
Only try to process the run output if the processor is connected to a
database.
2018-09-21 15:06:30 +01:00
9a9a2c0742 commands/create: Add version check for Postgres Server
The 'jsonB' datatype was only added in v9.4, so ensure that the Postgres
server is running this version or later, and inform the user if this is
not the case.
2018-09-21 15:06:30 +01:00
57aa5ca588 fw/version: Add development tag to version number 2018-09-21 15:06:30 +00:00
fb42d11a83 setup: Update devlib dependency version number
Ensure that a sufficiently up-to-date version of devlib is installed,
and enable installing directly from GitHub to satisfy the requirements.
2018-09-21 15:06:30 +01:00
fce506eb02 instruments/misc: Fix typo 2018-09-21 15:06:30 +01:00
a1213cf84e commands/create: Use class name rather than user supplied name
Use the actual name of the plugin instead of the user-supplied value,
for consistency and to ensure that duplicate entries cannot be specified.
2018-09-21 15:06:30 +01:00
a7d0b6fdbd target/descriptor: Do not convert the module list to strings
Change the type of the `modules` to `list` so that additional
configuration can be supplied to individual modules as a dict of values.
2018-09-21 15:06:30 +01:00
f5fed26758 doc/device_setup: Fix typo 2018-09-21 15:06:30 +01:00
7d01258bce fw/target/manager: Do not finalize target if not instantiated
In the case of an error occurring during target initialization, do not
attempt to disconnect when finalizing.
2018-09-21 15:06:30 +01:00
ccaca3d6d8 Speedometer: Extending teardown function
Some devices throw errors if too many browser tabs are open, so add a
method to close the tabs in the teardown function.
2018-09-20 10:26:34 +01:00
6d654157b2 Add Postgres Output Processor
Add an output processor which is used to upload the results found in the
wa_output folder to a Postgres database, whose schema is defined by the
WA Create Database command.
2018-09-12 10:13:34 +01:00
bb255de9ad Add WA Create Database Command
Add a command to create a PostgreSQL database with supplied parameters
which draws its structure from the supplied schema (Version 1.1). This
database is of a format intended to be used with the forthcoming WA
Postgres output processor.
2018-09-12 10:13:34 +01:00
ca03f21f46 workloads/jankbench: Update to clear logcat using devlib
Leftover code from WA2 meant that logcat was cleared on the device by
the workload directly instead of using devlib, this caused issues if logcat was
still being cleared from other areas of the code.
2018-09-10 13:30:59 +01:00
59e29de285 workloads/jankbench: Replace errors during decoding
When running jankbench invalid bytes can be read from the device causing
decoding in the monitor to fail, now replace any invalid sequences.
2018-09-10 13:30:59 +01:00
0440c41266 Googleslides: Updating the workload to support the new range of Huawei devices 2018-09-06 18:01:24 +01:00
b20b9f9cad instruments/perf: Port the perf instrument to WA3 2018-09-06 08:39:09 +01:00
8cd79f2ac4 fw/instrument: Fix compatibility with Python2
The Python2 inspect module does not contain the `getfullargspec` method so call
the appropriate method depending on Python version.
2018-09-05 15:44:48 +01:00
6239f6ab2f Update documentation
Update documentation with new API for Output Processors which includes
the context in the initialize and finalize methods.
2018-09-05 14:40:42 +01:00
718f2c1c90 Expose context in OP initialize and finalize
Expose the context to the initialize and finalize functions for Output
Processors. This was found to be necessary for the upcoming PostgreSQL
Output Processor.
2018-09-05 14:40:42 +01:00
4c4fd2a267 Gmail: Minor change to allow workload to run correctly on Huawei devices 2018-09-05 14:01:01 +01:00
6366a2c264 framework/version: Specify default encoding when parsing commit id 2018-08-22 14:41:12 +01:00
42b3f4cf9f commands/create: Add special case for EnergyInstrumentBackends
Previously when using the create command for adding
EnergyInstrumentBackends they were treated like any other plugin and
generated incorrect configuration. Now automatically add the
`energy_measurement` instrument and populate its configuration with the
relevant defaults for the specified Backend.
2018-08-14 13:41:39 +01:00
1eaffb6744 commands/create: Only add instruments/output processors once
Ensure that instruments and output processors are only added to the
generated agenda once.
2018-08-14 13:41:39 +01:00
4a9b24a9a8 instruments/energy_measurement: Improve instrument description
Add note to users that all configuration for the backends should be
added through this instrument rather than directly.
2018-08-14 13:41:39 +01:00
5afc96dc4d workloads/pcmark: Fix reading results in python3
Ensure that the results file is decoded when using python3.
2018-08-14 13:39:04 +01:00
d858435c3d utils/version: Fix check to only decode bytes
When using Python3 the returned value of the commit is a byte string and
therefore needs to be decoded.
2018-07-27 10:11:32 +01:00
caf805e851 setup.py: Update minimum version of devlib to be v1.0.0 2018-07-27 10:11:32 +01:00
973be6326a Travis: Update to exercise wa commands
Add tests for additional wa commands.
2018-07-26 12:07:17 +01:00
778bc46217 commands/process: Add dummy method to ProcessContext
In commit d2ece augmentations which are used during a run are now
tracked in the run_config via the context when they are installed.
Update the Process command and its ProcessContext with a dummy method
to reflect this change.
2018-07-26 12:07:17 +01:00
fc226fbb6e fw/execution: Ensure that identifiers are used when retrieving plugins.
Make sure that when retrieving plugin information from the plugin
cache the name is converted to an identifier first.
2018-07-24 11:34:19 +01:00
d007b283df instruments/trace_cmd: Fix reporting on target
When reporting on the target, the extracted trace data file was not
defined; now locate the file correctly.
2018-07-24 11:34:00 +01:00
8464c32808 tests: add unit test for includes 2018-07-23 16:47:10 +01:00
e4a856ad03 fw/config: preserve included config files
Save included config files, along with the explicitly-specified config
that included them, under run output's __meta/raw_config/.
2018-07-23 16:47:10 +01:00
7d833ec112 fw/config: add includes
Add the ability to include other YAML files inside agendas and config
files using "include#:" entries.
2018-07-23 16:47:10 +01:00
b729f7c9e4 fw/parsers: Ensure plug-in names are converted to an identifier
Ensure that a plug-in's config entry is converted to an identifier before being
stored in the PluginCache so that the relevant configuration can be
retrieved appropriately. For example this allows for both 'trace-cmd' and
'trace_cmd' to be used as config entries to provide configuration for the
'trace-cmd' plugin.
2018-07-19 17:15:26 +01:00
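A sketch of the kind of normalization this relies on (WA's real helper may differ in detail):

.. code-block:: python

    import re

    def identifier(text):
        # Convert a plugin name into a canonical identifier form.
        return re.sub(r'[^A-Za-z0-9_]', '_', text)

    # Both spellings resolve to the same key in the plugin cache:
    assert identifier('trace-cmd') == identifier('trace_cmd') == 'trace_cmd'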
fbfd81caeb commands/revent: Fix missing target initialization
In commit 8da911 the initialization of the target was split into a
separate method of the TargetManager. Ensure that we call the relevant
method after creating the manager.
2018-07-19 12:13:14 +01:00
0e69a9808d commands/record: Fix argument validation
When ensuring that at least one stage for a workload recording was
present, there was a missing check for whether recording for a workload
had been specified.
2018-07-19 12:13:14 +01:00
4478bd4574 travis: Force version 1.9.2 of pylint to be used
Force version 1.9.2 of pylint to be used rather than 2.0.0. Currently
there appear to be issues with it raising errors that are explicitly ignored.
2018-07-19 08:13:55 +01:00
0e84cf6d64 dev_scripts/pylint: fix default path
Fix the default scan path used if one has not been specified.
2018-07-18 11:20:48 +01:00
6d9ec3138c In lint we trust
Quieten pylint with regard to import order.
2018-07-18 11:20:48 +01:00
e8f545861d fw: cache target info
Cache target info after pulling it from the device. Attempt to retrieve
from cache before querying target.
2018-07-13 15:53:01 +01:00
770d2b2f0e fw: add cache subdir under $WA_USER_DIRECTORY
Add a sub-directory for caching stuff.
2018-07-13 15:53:01 +01:00
8a2c660fdd Travis: Improve testing organisation
Split the tests into their own jobs so it is easier to see which test is
failing; also, only run pep8 and pylint tests with Python 3.6, as it is
unnecessary to run them with both Python versions.
2018-07-13 13:32:41 +01:00
dacb350992 fw/target: add system_id to TargetInfo
Add target's system_id to TargetInfo. This ID is intended to be unique
to the combination of hardware and software on the target.
2018-07-13 13:28:50 +01:00
039758948e workloads/androbench: Update uiauto apk with fix 2018-07-11 17:32:08 +01:00
ce93823967 fw/execution: write config after installing augs
Add Context.write_config() to write the combined config into run output
__meta. Use it after instruments and result processors get installed to
make sure their configuration gets serialized in the output.
2018-07-11 13:28:04 +01:00
7755363efd fw/config: add get_config() to ConfigManager
Add a method to allow obtaining the combined config after config
manager has been finalized.
2018-07-11 13:28:04 +01:00
bcea1bd0af fw/config: add resource getter to run config
Track resource getter configuration as part of the run config.
2018-07-11 13:28:04 +01:00
a062a39f78 fw/config: add installed aug configs to run config
Track configuration used for installed augmentations inside RunConfig.
2018-07-11 13:28:04 +01:00
b1a01f777f fw/execution: rename things for clarity
- Rename "instrument_name" to "instrument" inside do_execute(), as
  ConfigManger.get_instrument() returns a list of Instrument objects,
  not names.
- To avoid name clash, rename the imported instrument module to
  "instrumentation".
2018-07-11 13:28:04 +01:00
96dd100b70 utils/toggle_set: fix merge behavior
- Change how "source" and "dest" are handled inside merge() to be more
  sane and less confusing, ensuring that disabling toggles are merged
  correctly.
- Do not drop disabling values during merge, to ensure that merging
  is a transitive operation.
- Add unit tests for the above fixes.
2018-07-11 10:48:00 +01:00
e485b9ed39 utils/version: do not decode bytes
Check that the resulting output inside get_commit() is a str before
attempting to decode it when running on Python 3.
2018-07-11 10:39:38 +01:00
86dcfbf595 workloads/androbench: Fix Formatting 2018-07-10 15:57:18 +01:00
1b58390ff5 workloads/androbench: Fix for devices running Android 8.1
On some devices running Android 8.1 the start benchmark button was
failing to be clicked; this is a workaround to click on the coordinates
of the button instead of the UiObject itself.
2018-07-10 15:57:18 +01:00
ae4fdb9e77 dev_scripts: pylint: check both Python Versions
Check both "python" and "python3" for the pylint package, as it is
possible that pylint will be installed via Python 3 on Python 2 systems.
2018-07-10 12:56:51 +01:00
5714c8e6a1 wa: Additional pylint fixes 2018-07-10 12:56:51 +01:00
791d9496a7 wa: Pylint Fixes for Travis
Pylint has trouble using imports from the distutils module in
virtualenvs so we need to explicitly ignore these imports.
2018-07-10 12:56:51 +01:00
e8b0d42758 wa: PEP8 Fixes 2018-07-10 12:56:51 +01:00
97cf0ac059 travis: Enable pylint and pep8 checkers 2018-07-10 12:56:51 +01:00
f3dc94b95e travis: Remove tests for Python 3.4
Testing with Python3.4 takes significantly longer than with 2.7 or 3.6
due to having to compile some dependencies from source each time.
2018-07-10 12:56:51 +01:00
ad87a40e06 Travis: Run the idle workload as part of the tests. 2018-07-10 12:56:51 +01:00
fd1dd789bf fw/output: update internal state on write_config()
Update the internal _combined_config object with the one that
has been written to ensure that the serialized and run time states are
the same.
2018-07-09 16:00:07 +01:00
c410d2e1a1 I lint, therefore I am
Implement fixes for the most recent pylint version.
2018-07-09 15:59:40 +01:00
0e0d4e0ff0 dev_scripts: port pylint plugins to Python 3 2018-07-09 15:59:40 +01:00
138 changed files with 4961 additions and 422 deletions
@@ -16,16 +16,39 @@
language: python
python:
- "2.7"
- "3.4"
- "3.6"
- "2.7"
install:
- pip install nose
- pip install nose2
script:
- pip install flake8
- pip install pylint==1.9.2
- git clone -v https://github.com/ARM-software/devlib.git /tmp/devlib && cd /tmp/devlib && python setup.py install
- cd $TRAVIS_BUILD_DIR && python setup.py install
- nose2 -s $TRAVIS_BUILD_DIR/tests
env:
global:
- PYLINT="cd $TRAVIS_BUILD_DIR && ./dev_scripts/pylint wa"
- PEP8="cd $TRAVIS_BUILD_DIR && ./dev_scripts/pep8 wa"
- NOSETESTS="nose2 -s $TRAVIS_BUILD_DIR/tests"
- WORKLOAD="cd /tmp && wa run $TRAVIS_BUILD_DIR/tests/travis/idle_agenda.yaml -v -d idle_workload"
- PROCESS_CMD="$WORKLOAD && wa process -f -p csv idle_workload"
- SHOW_CMD="wa show dhrystone && wa show generic_android && wa show trace-cmd && wa show csv"
- LIST_CMD="wa list all"
- CREATE_CMD="wa create agenda dhrystone generic_android csv trace_cmd && wa create package test && wa create workload test"
matrix:
- TEST=$PYLINT
- TEST=$PEP8
- TEST=$NOSETESTS
- TEST=$WORKLOAD
- TEST="$PROCESS_CMD && $SHOW_CMD && $LIST_CMD && $CREATE_CMD"
script:
- echo $TEST && eval $TEST
matrix:
exclude:
- python: "2.7"
env: TEST=$PYLINT
- python: "2.7"
env: TEST=$PEP8

@@ -1,2 +1,3 @@
recursive-include scripts *
recursive-include doc *
recursive-include wa *

@@ -1,6 +1,4 @@
#!/bin/bash
set -e
DEFAULT_DIRS=(
wa
)
@@ -34,7 +32,15 @@ compare_versions() {
return 0
}
pylint_version=$(python3 -c 'from pylint.__pkginfo__ import version; print(version)')
pylint_version=$(python -c 'from pylint.__pkginfo__ import version; print(version)' 2>/dev/null)
if [ "x$pylint_version" == "x" ]; then
pylint_version=$(python3 -c 'from pylint.__pkginfo__ import version; print(version)' 2>/dev/null)
fi
if [ "x$pylint_version" == "x" ]; then
echo "ERROR: no pylint version found; is it installed?"
exit 1
fi
compare_versions $pylint_version "1.9.2"
result=$?
if [ "$result" == "2" ]; then
@@ -42,12 +48,13 @@ if [ "$result" == "2" ]; then
exit 1
fi
set -e
THIS_DIR="`dirname \"$0\"`"
CWD=$PWD
pushd $THIS_DIR > /dev/null
if [[ "$target" == "" ]]; then
for dir in "${DEFAULT_DIRS[@]}"; do
PYTHONPATH=. pylint --rcfile ../extras/pylintrc --load-plugins pylint_plugins $THIS_DIR/../$dir
PYTHONPATH=. pylint --rcfile ../extras/pylintrc --load-plugins pylint_plugins ../$dir
done
else
PYTHONPATH=. pylint --rcfile ../extras/pylintrc --load-plugins pylint_plugins $CWD/$target

@@ -1,3 +1,5 @@
import sys
from astroid import MANAGER
from astroid import scoped_nodes
@@ -23,18 +25,25 @@ def transform(mod):
if not text.strip():
return
text = text.split('\n')
text = text.split(b'\n')
# NOTE: doing it this way because the "correct" approach below does not
# work. We can get away with this, because in well-formatted WA files,
# the initial line is the copyright header's blank line.
if 'pylint:' in text[0]:
if b'pylint:' in text[0]:
msg = 'pylint directive found on the first line of {}; please move to below copyright header'
raise RuntimeError(msg.format(mod.name))
if text[0].strip() and text[0][0] != '#':
if sys.version_info[0] == 3:
char = chr(text[0][0])
else:
char = text[0][0]
if text[0].strip() and char != '#':
msg = 'first line of {} is not a comment; is the copyright header missing?'
raise RuntimeError(msg.format(mod.name))
text[0] = '# pylint: disable={}'.format(','.join(errors))
mod.file_bytes = '\n'.join(text)
if sys.version_info[0] == 3:
text[0] = '# pylint: disable={}'.format(','.join(errors)).encode('utf-8')
else:
text[0] = '# pylint: disable={}'.format(','.join(errors))
mod.file_bytes = b'\n'.join(text)
# This is what *should* happen, but doesn't work.
# text.insert(0, '# pylint: disable=attribute-defined-outside-init')

@@ -0,0 +1,78 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="231.99989"
height="128.625"
id="svg4921"
version="1.1"
inkscape:version="0.48.4 r9939"
sodipodi:docname="WA-logo-black.svg">
<defs
id="defs4923" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.70000001"
inkscape:cx="80.419359"
inkscape:cy="149.66406"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1676"
inkscape:window-height="1027"
inkscape:window-x="0"
inkscape:window-y="19"
inkscape:window-maximized="0"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0" />
<metadata
id="metadata4926">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-135.03125,-342.375)">
<path
style="fill:#ffffff;fill-opacity:1;stroke:none"
d="m 239,342.375 0,11.21875 c -5.57308,1.24469 -10.80508,3.40589 -15.5,6.34375 l -8.34375,-8.34375 -15.5625,15.5625 8.28125,8.28125 c -3.25948,5.08895 -5.62899,10.81069 -6.875,16.9375 l -11,0 0,22 11.46875,0 c 1.38373,5.61408 3.71348,10.8741 6.8125,15.5625 l -8.15625,8.1875 15.5625,15.53125 8.46875,-8.46875 c 4.526,2.73972 9.527,4.77468 14.84375,5.96875 l 0,11.21875 14.59375,0 c -4.57581,-6.7196 -7.25,-14.81979 -7.25,-23.5625 0,-5.85191 1.21031,-11.43988 3.375,-16.5 -10.88114,-0.15024 -19.65625,-9.02067 -19.65625,-19.9375 0,-10.66647 8.37245,-19.40354 18.90625,-19.9375 0.3398,-0.0172 0.68717,0 1.03125,0 10.5808,0 19.2466,8.24179 19.90625,18.65625 5.54962,-2.70912 11.78365,-4.25 18.375,-4.25 7.94803,0 15.06896,2.72769 21.71875,6.0625 l 0,-10.53125 -11.03125,0 c -1.13608,-5.58713 -3.20107,-10.85298 -6.03125,-15.59375 l 8.1875,-8.21875 -15.5625,-15.53125 -7.78125,7.78125 C 272.7607,357.45113 267.0827,354.99261 261,353.625 l 0,-11.25 z m 11,46 c -7.73198,0 -14,6.26802 -14,14 0,7.732 6.26802,14 14,14 1.05628,0 2.07311,-0.12204 3.0625,-0.34375 2.84163,-4.38574 6.48859,-8.19762 10.71875,-11.25 C 263.91776,403.99646 264,403.19884 264,402.375 c 0,-7.73198 -6.26801,-14 -14,-14 z m -87.46875,13.25 -11.78125,4.78125 2.4375,6 c -2.7134,1.87299 -5.02951,4.16091 -6.90625,6.75 L 140,416.5 l -4.96875,11.6875 6.21875,2.65625 c -0.64264,3.42961 -0.65982,6.98214 0,10.53125 l -5.875,2.40625 4.75,11.78125 6.15625,-2.5 c 1.95629,2.70525 4.32606,5.00539 7,6.84375 l -2.59375,6.15625 11.6875,4.9375 2.71875,-6.34375 c 3.01575,0.48636 6.11446,0.48088 9.21875,-0.0312 l 2.4375,6 11.78125,-4.75 -2.4375,-6.03125 c 2.70845,-1.87526 5.03044,-4.16169 6.90625,-6.75 l 6.21875,2.625 4.96875,-11.6875 -6.15625,-2.625 c 0.56936,-3.04746 0.64105,-6.22008 0.1875,-9.375 l 6.125,-2.46875 -4.75,-11.78125 -5.90625,2.40625 c -1.8179,-2.74443 -4.05238,-5.13791 -6.59375,-7.0625 L 189.6875,406.9688 178,402.0313 l -2.5,5.84375 c -3.41506,-0.712 -6.97941,-0.8039 -10.53125,-0.21875 z m 165.28125,7.125 -7.09375,19.125 -9.59375,23 -1.875,-42.0625 -14.1875,0 -18.1875,42.0625 -1.78125,-42.0625 -13.8125,0 2.5,57.875 17.28125,0 18.71875,-43.96875 1.9375,43.96875 16.90625,0 0.0312,-0.0625 2.71875,0 1.78125,-5.0625 7.90625,-22.90625 0.0312,0 1.59375,-4.65625 4.46875,-10.40625 7.46875,21.75 -11.125,0 -3.71875,10.75 18.625,0 3.625,10.53125 15,0 -21.4375,-57.875 z m -158,15.875 c 4.48547,0.0706 8.71186,2.76756 10.5,7.1875 2.38422,5.89328 -0.45047,12.61577 -6.34375,15 -5.89327,2.38421 -12.61578,-0.48172 -15,-6.375 -2.3097,-5.70909 0.29002,-12.18323 5.8125,-14.75 0.17811,-0.0828 0.34709,-0.14426 0.53125,-0.21875 1.47332,-0.59605 3.00484,-0.86727 4.5,-0.84375 z m -0.1875,3.40625 c -0.2136,5.4e-4 -0.44162,0.0134 -0.65625,0.0312 -0.79249,0.0658 -1.56779,0.24857 -2.34375,0.5625 -4.13846,1.67427 -6.14302,6.3928 -4.46875,10.53125 1.67428,4.13847 6.3928,6.14301 10.53125,4.46875 4.13847,-1.67428 6.11177,-6.3928 4.4375,-10.53125 -1.27532,-3.15234 -4.29605,-5.07059 -7.5,-5.0625 z"
id="rect4081-3-8"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cccccccccccccccccscscscsccccccccccsssccsscccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccssssccssssssss" />
<g
id="g3117"
transform="translate(-244.99999,-214.64287)">
<g
transform="translate(83.928571,134.28571)"
id="text4037-4-7"
style="font-size:79.3801651px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:DejaVu Sans;-inkscape-font-specification:DejaVu Sans Bold" />
<g
transform="translate(83.928571,134.28571)"
id="text4041-5-8"
style="font-size:79.3801651px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:DejaVu Sans;-inkscape-font-specification:DejaVu Sans" />
</g>
</g>
</svg>


@@ -23,6 +23,23 @@ iterating over all WA output directories found.
:param path: must be the path to the top-level output directory (the one
containing ``__meta`` subdirectory and ``run.log``).
WA output stored in a Postgres database by the ``Postgres`` output processor
can be accessed via a :class:`RunDatabaseOutput` which can be initialized as follows:
.. class:: RunDatabaseOutput(password, host='localhost', user='postgres', port='5432', dbname='wa', run_uuid=None, list_runs=False)
The main interface into a Postgres database containing WA results.
:param password: The password used to authenticate with
:param host: The database host address. Defaults to ``'localhost'``
:param user: The user name used to authenticate with. Defaults to ``'postgres'``
:param port: The database connection port number. Defaults to ``'5432'``
:param dbname: The database name. Defaults to ``'wa'``
:param run_uuid: The ``run_uuid`` to identify the selected run
:param list_runs: Will connect to the database and will print out the available runs
with their corresponding run_uuids. Defaults to ``False``
Example
-------
@@ -39,6 +56,32 @@ called ``wa_output`` in the current working directory we can initialize a
...: output_directory = 'wa_output'
...: run_output = RunOutput(output_directory)
Alternatively if the results have been stored in a Postgres database we can
initialize a ``RunDatabaseOutput`` as follows:
.. code-block:: python
In [1]: from wa import RunDatabaseOutput
...:
...: db_settings = {
...: 'host': 'localhost',
...: 'port': '5432',
...: 'dbname': 'wa',
...: 'user': 'postgres',
...: 'password': 'wa'
...: }
...:
...: RunDatabaseOutput(list_runs=True, **db_settings)
Available runs are:
========= ============ ============= =================== =================== ====================================
Run Name Project Project Stage Start Time End Time run_uuid
========= ============ ============= =================== =================== ====================================
Test Run my_project None 2018-11-29 14:53:08 2018-11-29 14:53:24 aa3077eb-241a-41d3-9610-245fd4e552a9
run_1 my_project None 2018-11-29 14:53:34 2018-11-29 14:53:37 4c2885c9-2f4a-49a1-bbc5-b010f8d6b12a
========= ============ ============= =================== =================== ====================================
In [2]: run_uuid = '4c2885c9-2f4a-49a1-bbc5-b010f8d6b12a'
...: run_output = RunDatabaseOutput(run_uuid=run_uuid, **db_settings)
From here we can retrieve various information about the run. For example if we
@@ -65,7 +108,7 @@ parameters and the metrics recorded from the first job was we can do the followi
Out[5]: u'dhrystone'
# Print out all the runtime parameters and their values for this job
In [6]: for k, v in job_1.spec.runtime_parameters.iteritems():
In [6]: for k, v in job_1.spec.runtime_parameters.items():
...: print (k, v)
(u'airplane_mode', False)
(u'brightness', 100)
@@ -73,7 +116,7 @@ parameters and the metrics recorded from the first job was we can do the followi
(u'big_frequency', 1700000)
(u'little_frequency', 1400000)
# Print out all the metrics avalible for this job
# Print out all the metrics available for this job
In [7]: job_1.metrics
Out[7]:
[<thread 0 score: 14423105 (+)>,
@@ -92,6 +135,15 @@ parameters and the metrics recorded from the first job was we can do the followi
<total DMIPS: 52793 (+)>,
<total score: 92758402 (+)>]
# Load the run results csv file into pandas
In [7]: pd.read_csv(run_output.get_artifact_path('run_result_csv'))
Out[7]:
id workload iteration metric value units
0 450000-wk1 dhrystone 1 thread 0 score 1.442310e+07 NaN
1 450000-wk1 dhrystone 1 thread 0 DMIPS 8.209700e+04 NaN
2 450000-wk1 dhrystone 1 thread 1 score 1.442310e+07 NaN
3 450000-wk1 dhrystone 1 thread 1 DMIPS 8.720900e+04 NaN
...
We can also retrieve information about the target that the run was performed on
@@ -214,7 +266,7 @@ methods
Return the :class:`Metric` associated with the run (not the individual jobs)
with the specified `name`.
:return: The :class`Metric` object for the metric with the specified name.
:return: The :class:`Metric` object for the metric with the specified name.
.. method:: RunOutput.get_job_spec(spec_id)
@@ -232,6 +284,46 @@ methods
:return: A list of `str` labels of workloads that were part of this run.
:class:`RunDatabaseOutput`
---------------------------
:class:`RunDatabaseOutput` provides access to the output of a WA :term:`run`,
including metrics, artifacts, metadata, and configuration stored in a postgres database.
The majority of attributes and methods are the same as :class:`RunOutput`; however, the
notable differences are:
``jobs``
A list of :class:`JobDatabaseOutput` objects for each job that was executed
during the run.
``basepath``
A representation of the current database and host information backing this object.
methods
~~~~~~~
.. method:: RunDatabaseOutput.get_artifact(name)
Return the :class:`Artifact` specified by ``name``. This will only look
at the run artifacts; this will not search the artifacts of the individual
jobs. The `path` attribute of the :class:`Artifact` will be set to the Database OID of the object.
:param name: The name of the artifact to retrieve.
:return: The :class:`Artifact` with that name
:raises HostError: If the artifact with the specified name does not exist.
.. method:: RunDatabaseOutput.get_artifact_path(name)
Returns a `StringIO` object containing the contents of the artifact
specified by ``name``. This will only look at the run artifacts; this will
not search the artifacts of the individual jobs.
:param name: The name of the artifact whose contents to retrieve.
:return: A `StringIO` object with the contents of the artifact
:raises HostError: If the artifact with the specified name does not exist.
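The returned ``StringIO`` can therefore be consumed like a file. For example, continuing the earlier session (and assuming the run produced a ``run_result_csv`` artifact, as in the ``RunOutput`` example above):

.. code-block:: python

    # Read the contents of the artifact directly ...
    In [3]: contents = run_output.get_artifact_path('run_result_csv').read()

    # ... or pass it to anything that accepts a file-like object:
    In [4]: df = pd.read_csv(run_output.get_artifact_path('run_result_csv'))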
:class:`JobOutput`
------------------
@@ -311,11 +403,10 @@ methods
Return the :class:`Artifact` specified by ``name`` associated with this job.
:param name: The name of the artifact who's path to retrieve.
:param name: The name of the artifact to retrieve.
:return: The :class:`Artifact` with that name
:raises HostError: If the artifact with the specified name does not exist.
.. method:: JobOutput.get_artifact_path(name)
Return the path to the file backing the artifact specified by ``name``,
@@ -325,13 +416,48 @@ methods
:return: The path to the artifact
:raises HostError: If the artifact with the specified name does not exist.
.. method:: JobOutput.get_metric(name)
Return the :class:`Metric` associated with this job with the specified
`name`.
:return: The :class`Metric` object for the metric with the specified name.
:return: The :class:`Metric` object for the metric with the specified name.
:class:`JobDatabaseOutput`
---------------------------
:class:`JobDatabaseOutput` provides access to the output of a single :term:`job`
executed during a WA :term:`run`, including metrics, artifacts, metadata, and
configuration stored in a postgres database.
The majority of attributes and methods are the same as :class:`JobOutput`; however, the
notable differences are:
``basepath``
A representation of the current database and host information backing this object.
methods
~~~~~~~
.. method:: JobDatabaseOutput.get_artifact(name)
Return the :class:`Artifact` specified by ``name`` associated with this job.
The `path` attribute of the :class:`Artifact` will be set to the Database
OID of the object.
:param name: The name of the artifact to retrieve.
:return: The :class:`Artifact` with that name
:raises HostError: If the artifact with the specified name does not exist.
.. method:: JobDatabaseOutput.get_artifact_path(name)
Returns a ``StringIO`` object containing the contents of the artifact
specified by ``name`` associated with this job.
:param name: The name of the artifact whose contents to retrieve.
:return: A `StringIO` object with the contents of the artifact
:raises HostError: If the artifact with the specified name does not exist.
:class:`Metric`
@@ -420,7 +546,7 @@ An :class:`Artifact` has the following attributes:
it is the opposite of ``export``, but in general may also be
discarded.
.. note:: whether a file is marked as ``log``/``data`` or ``raw``
.. note:: Whether a file is marked as ``log``/``data`` or ``raw``
depends on how important it is to preserve this file,
e.g. when archiving, vs how much space it takes up.
Unlike ``export`` artifacts which are (almost) always

@@ -2,9 +2,109 @@
What's New in Workload Automation
=================================
-------------
*************
Version 3.1.1
*************
Fixes/Improvements
==================
Other
-----
- Improve formatting when displaying metrics
- Update revent binaries to include latest fixes
- Update DockerImage to use new released version of WA and Devlib
- Fix broken package on PyPi
*************
Version 3.1.0
*************
New Features:
==============
Commands
---------
- ``create database``: Added a :ref:`create subcommand <create-command>`
in order to initialize a PostgreSQL database to allow for storing
WA output with the Postgres Output Processor.
Output Processors:
------------------
- ``Postgres``: Added output processor which can be used to populate a
Postgres database with the output generated from a WA run.
- ``logcat-regex``: Add new output processor to extract arbitrary "key"
"value" pairs from logcat.
Configuration:
--------------
- :ref:`Configuration Includes <config-include>`: Add support for including
other YAML files inside agendas and config files using ``"include#:"``
entries.
- :ref:`Section groups <section-groups>`: This allows for a ``group`` entry
to be specified for each section and will automatically cross product the
relevant sections with sections from other groups adding the relevant
classifiers.
Framework:
----------
- Added support for using the :ref:`OutputAPI <output_processing_api>` with a
Postgres Database backend. Used to retrieve and
:ref:`process <processing_output>` run data uploaded by the ``Postgres``
output processor.
Workloads:
----------
- ``gfxbench-corporate``: Execute a set of on and offscreen graphical benchmarks from
GFXBench including Car Chase and Manhattan.
- ``glbench``: Measures the graphics performance of Android devices by
testing the underlying OpenGL (ES) implementation.
Fixes/Improvements
==================
Framework:
----------
- Remove quotes from ``sudo_cmd`` parameter default value due to changes in
devlib.
- Various Python 3 related fixes.
- Ensure plugin names are converted to identifiers internally to act more
consistently when dealing with names containing ``-``'s etc.
- Now correctly updates RunInfo with project and run name information.
- Add versioning support for POD structures with the ability to
automatically update data structures / formats to new versions.
Commands:
---------
- Fix revent target initialization.
- Fix revent argument validation.
Workloads:
----------
- ``Speedometer``: Close open tabs upon workload completion.
- ``jankbench``: Ensure that the logcat monitor thread is terminated
correctly to prevent left over adb processes.
- UiAutomator workloads are now able to dismiss the android warning that a
workload has not been designed for the latest version of android.
Other:
------
- Report additional metadata about target, including: system_id,
page_size_kb.
- Uses cache directory to reduce target calls, e.g. will now use cached
version of TargetInfo if local copy is found.
- Update recommended :ref:`installation <github>` commands when installing from
github due to pip not following dependency links correctly.
- Fix incorrect parameter names in runtime parameter documentation.
--------------------------------------------------
*************
Version 3.0.0
-------------
*************
WA3 is a more or less from-scratch re-write of WA2. We have attempted to
maintain configuration-level compatibility wherever possible (so WA2 agendas
@@ -29,7 +129,7 @@ believe to be no longer useful.
do the port yourselves :-) ).
New Features
~~~~~~~~~~~~
============
- Python 3 support. WA now runs on both Python 2 and Python 3.
@@ -75,7 +175,7 @@ New Features
.. _devlib: https://github.com/ARM-software/devlib
Changes
~~~~~~~
=======
- Configuration files ``config.py`` are now specified in YAML format in
``config.yaml``. WA3 has support for automatic conversion of the default

@@ -135,7 +135,9 @@ html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
html_theme_options = {
'logo_only': True
}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
@@ -149,7 +151,7 @@ html_theme = 'sphinx_rtd_theme'
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
html_logo = 'WA-logo-white.svg'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32

@@ -343,7 +343,7 @@ see the
A list of additional :class:`Parameters` the output processor can take.
:initialize():
:initialize(context):
This method will only be called once during the workload run
therefore operations that only need to be performed initially should
@@ -373,7 +373,7 @@ see the
existing data collected/generated for the run as a whole. E.g.
uploading them to a database etc.
:finalize():
:finalize(context):
This method is the complement to the initialize method and will also
only be called once.

@@ -26,7 +26,8 @@ CPU frequency fixed to max, and once with CPU frequency fixed to min.
Classifiers are used to indicate the configuration in the output.
First, create the :class:`RunOutput` object, which is the main interface for
interacting with WA outputs.
interacting with WA outputs. Or alternatively a :class:`RunDatabaseOutput`
if storing your results in a postgres database.
.. code-block:: python
@@ -151,10 +152,6 @@ For the purposes of this report, they will be used to augment the metric's name.
scores[workload][name][freq] = metric
rows = []
for workload in sorted(scores.keys()):
wldata = scores[workload]
Once the metrics have been sorted, generate the report showing the delta
between the two configurations (indicated by the "frequency" classifier) and
highlight any unexpected deltas (based on the ``lower_is_better`` attribute of
@@ -164,23 +161,27 @@ statically significant deltas.)
.. code-block:: python
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
rows = []
for workload in sorted(scores.keys()):
wldata = scores[workload]
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
write_table(rows, sys.stdout, align='<<>>><<',
@@ -275,23 +276,23 @@ Below is the complete example code, and a report it generated for a sample run.
for workload in sorted(scores.keys()):
wldata = scores[workload]
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
for name in sorted(wldata.keys()):
min_score = wldata[name]['min'].value
max_score = wldata[name]['max'].value
delta = max_score - min_score
units = wldata[name]['min'].units or ''
lib = wldata[name]['min'].lower_is_better
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
warn = ''
if (lib and delta > 0) or (not lib and delta < 0):
warn = '!!!'
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
rows.append([workload, name,
'{:.3f}'.format(min_score), '{:.3f}'.format(max_score),
'{:.3f}'.format(delta), units, warn])
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
# separate workloads with a blank row
rows.append(['', '', '', '', '', '', ''])
write_table(rows, sys.stdout, align='<<>>><<',

@ -489,6 +489,75 @@ Note that the ``config`` section still applies to every spec in the agenda. So
the precedence order is -- spec settings override section settings, which in
turn override global settings.
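As a small illustration of that precedence (a sketch, not taken from the
document; ``iterations`` is used because it can be set at all three levels):

.. code-block:: yaml

    config:
        iterations: 1          # global: applies to everything
    sections:
        - id: sec1
          iterations: 2        # section: overrides the global value
    workloads:
        - name: dhrystone
          iterations: 3        # spec: overrides both section and global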
.. _section-groups:
Section Groups
---------------
Section groups are a way of grouping sections together; WA will produce a
cross product of the sections from each of the different groups. This can be
useful when you want to run a set of experiments with all the available
combinations without having to specify each combination manually.
For example, if we want to investigate the differences between running at the
maximum and minimum frequency with both the maximum and minimum number of CPUs
online, we can create an agenda as follows:
.. code-block:: yaml
sections:
- id: min_freq
runtime_parameters:
freq: min
group: frequency
- id: max_freq
runtime_parameters:
freq: max
group: frequency
- id: min_cpus
runtime_parameters:
cpus: 1
group: cpus
- id: max_cpus
runtime_parameters:
cpus: 8
group: cpus
workloads:
- dhrystone
This will result in 4 jobs being generated, one for each of the possible combinations.
::
min_freq-min_cpus-wk1 (dhrystone)
min_freq-max_cpus-wk1 (dhrystone)
max_freq-min_cpus-wk1 (dhrystone)
max_freq-max_cpus-wk1 (dhrystone)
Each of the generated jobs will automatically have :ref:`classifiers
<classifiers>` added for each group, along with the associated section id.
.. code-block:: python
# ...
print('Job ID: {}'.format(job.id))
print('Classifiers:')
for k, v in job.classifiers.items():
print(' {}: {}'.format(k, v))
Which might produce the following output::

Job ID: min_freq-min_cpus-no_idle-wk1
Classifiers:
frequency: min_freq
cpus: min_cpus
.. _augmentations:
Augmentations

@ -76,7 +76,7 @@ A typical ``device_config`` inside ``config.yaml`` may look something like
# ...
or a more specific config could be be
or a more specific config could be:
.. code-block:: yaml

@ -196,7 +196,8 @@ Alternatively, you can also install the latest development version from GitHub
(you will need git installed for this to work)::
git clone git@github.com:ARM-software/workload-automation.git workload-automation
sudo -H pip install ./workload-automation
cd workload-automation
sudo -H python setup.py install

@ -18,6 +18,3 @@ User Reference
-------------------
.. include:: user_information/user_reference/output_directory.rst

@ -102,6 +102,91 @@ remove the high level configuration.
Depending on their specificity, configuration parameters from different sources
will have different inherent priorities. Within an agenda, the configuration in
"workload" entries wil be more specific than "sections" entries, which in turn
"workload" entries will be more specific than "sections" entries, which in turn
are more specific than parameters in the "config" entry.
.. _config-include:
Configuration Includes
----------------------
It is possible to include other files in your config files and agendas. This is
done by specifying ``include#`` (note the trailing hash) as a key in one of the
mappings, with the value being the path to the file to be included. The path
must be either absolute, or relative to the location of the file it is being
included from (*not* to the current working directory). The path may also
include ``~`` to indicate the current user's home directory.
The include is performed by removing the ``include#`` entry and loading the
contents of the specified file into the mapping that contained it. In cases
where the mapping already contains the key to be loaded, values will be merged
using the usual merge method (for overwrites, values in the mapping take
precedence over those from the included files); a sketch of these semantics
follows the list of limitations below.
Below is an example of an agenda that includes other files. The assumption is
that all of those files are in one directory:
.. code-block:: yaml
# agenda.yaml
config:
augmentations: [trace-cmd]
include#: ./my-config.yaml
sections:
- include#: ./section1.yaml
- include#: ./section2.yaml
include#: ./workloads.yaml
.. code-block:: yaml
# my-config.yaml
augmentations: [cpufreq]
.. code-block:: yaml
# section1.yaml
runtime_parameters:
frequency: max
.. code-block:: yaml
# section2.yaml
runtime_parameters:
frequency: min
.. code-block:: yaml
# workloads.yaml
workloads:
- dhrystone
- memcpy
The above is equivalent to having a single file like this:
.. code-block:: yaml
# agenda.yaml
config:
augmentations: [cpufreq, trace-cmd]
sections:
- runtime_parameters:
frequency: max
- runtime_parameters:
frequency: min
workloads:
- dhrystone
- memcpy
Some additional details about the implementation and its limitations:
- The ``include#`` *must* be a key in a mapping, and the contents of the
included file *must* be a mapping as well; it is not possible to include a
list (e.g. in the examples above, the ``workloads:`` part *must* be in the
included file).
- Being a key in a mapping, there can only be one ``include#`` entry per block.
- The included file *must* have a ``.yaml`` extension.
- Nested inclusions *are* allowed. I.e. included files may themselves include
files; in such cases the included paths must be relative to *that* file, and
not the "main" file.

@ -238,6 +238,33 @@ Which will produce something like::
This will be populated with default values which can then be customised for the
particular use case.
Additionally, the create command can be used to initialize (and update) a
Postgres database which can be used by the ``postgres`` output processor.
Most of the database connection parameters have default values, however they
can be overridden via command line arguments. When initializing the database,
WA will also save the supplied parameters into the default user config file so
that they do not need to be specified each time the output processor is used.
As an example, if we had a database server running at 10.0.0.2 using the
standard port, we could use the following command to initialize a database for
use with WA::
wa create database -a 10.0.0.2 -u my_username -p Pa55w0rd
This will log into the database server with the supplied credentials, create
a database (named 'wa' by default), and save the configuration to the
``~/.workload_automation/config.yaml`` file.
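The saved entry would look something like the following sketch (key names
taken from the parameters WA records; values from the example command):

.. code-block:: yaml

    postgres:
        host: 10.0.0.2
        port: '5432'
        dbname: wa
        username: my_username
        password: Pa55w0rd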
With updates to WA there may be changes to the database schema used. In this
case the create command can also be used with the ``-U`` flag to update the
database to use the new schema as follows::
wa create database -a 10.0.0.2 -u my_username -p Pa55w0rd -U
This will apply the schema updates sequentially until the database is using
the latest schema version.
.. _process-command:
Process

@ -87,6 +87,7 @@ __failed
the failed attempts.
.. _job_execution_subd:
job execution output subdirectory
Each subdirectory will be named ``<job id>_<workload label>_<iteration
number>``, and will, at minimum, contain a ``result.json`` (see above).
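For example, the output for the first iteration of a job with id ``wk1``
running the ``dhrystone`` workload would be stored in a subdirectory named::

    wk1_dhrystone_1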

@ -98,7 +98,7 @@ CPUFreq
:governor: A ``string`` that can be used to specify the governor for all cores if there are common governors available.
:governor_tunable: A ``dict`` that can be used to specify governor
:gov_tunables: A ``dict`` that can be used to specify governor
tunables for all cores. Unlike the other common parameters, these are not
validated at the beginning of the run, therefore incorrect values will cause
an error during runtime.
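For illustration, a hedged agenda snippet using these parameters (the
governor name and tunable key below are examples only, not taken from this
document):

.. code-block:: yaml

    runtime_parameters:
        governor: ondemand
        gov_tunables:
            sampling_rate: 20000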
@ -113,7 +113,7 @@ CPUFreq
:<core_name>_governor: A ``string`` that can be used to specify the governor for cores of a particular type e.g. 'A72'.
:<core_name>_governor_tunable: A ``dict`` that can be used to specify governor
:<core_name>_gov_tunables: A ``dict`` that can be used to specify governor
tunables for cores of a particular type, e.g. 'A72'. These are not
validated at the beginning of the run, therefore incorrect values will cause
an error during runtime.
@ -129,7 +129,7 @@ CPUFreq
:cpu<no>_governor: A ``string`` that can be used to specify the governor for a particular core e.g. 'cpu0'.
:cpu<no>_governor_tunable: A ``dict`` that can be used to specify governor
:cpu<no>_gov_tunables: A ``dict`` that can be used to specify governor
tunables for a particular core, e.g. 'cpu0'. These are not
validated at the beginning of the run, therefore incorrect values will cause
an error during runtime.
@ -147,7 +147,7 @@ If big.LITTLE is detected for the device an additional set of parameters are ava
:big_governor: A ``string`` that can be used to specify the governor for the big cores.
:big_governor_tunable: A ``dict`` that can be used to specify governor
:big_gov_tunables: A ``dict`` that can be used to specify governor
tunables for the big cores. These are not
validated at the beginning of the run, therefore incorrect values will cause
an error during runtime.
@ -162,7 +162,7 @@ If big.LITTLE is detected for the device an additional set of parameters are ava
:little_governor: A ``string`` that can be used to specify the governor for the little cores.
:little_governor_tunable: A ``dict`` that can be used to specify governor
:little_gov_tunables: A ``dict`` that can be used to specify governor
tunables for the little cores. These are not
validated at the beginning of the run, therefore incorrect values will cause
an error during runtime.

@ -42,8 +42,8 @@ FROM ubuntu:17.10
# Please update the references below to use different versions of
# devlib, WA or the Android SDK
ARG DEVLIB_REF=v1.0.0
ARG WA_REF=v3.0.0
ARG DEVLIB_REF=v1.1.0
ARG WA_REF=v3.1.1
ARG ANDROID_SDK_URL=https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
RUN apt-get update
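Rather than editing the file, the references can also be overridden at build
time via Docker's ``--build-arg`` (a hedged usage example; the image tag is
arbitrary)::

    docker build -t wa --build-arg WA_REF=v3.1.1 --build-arg DEVLIB_REF=v1.1.0 .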

10
setup.py Normal file → Executable file

@ -31,7 +31,7 @@ wa_dir = os.path.join(os.path.dirname(__file__), 'wa')
sys.path.insert(0, os.path.join(wa_dir, 'framework'))
from version import get_wa_version, get_wa_version_with_commit
# happends if falling back to distutils
# happens if falling back to distutils
warnings.filterwarnings('ignore', "Unknown distribution option: 'install_requires'")
warnings.filterwarnings('ignore', "Unknown distribution option: 'extras_require'")
@ -41,7 +41,7 @@ except OSError:
pass
packages = []
data_files = {}
data_files = {'': [os.path.join(wa_dir, 'commands', 'postgres_schema.sql')]}
source_dir = os.path.dirname(__file__)
for root, dirs, files in os.walk(wa_dir):
rel_dir = os.path.relpath(root, source_dir)
@ -67,6 +67,7 @@ params = dict(
version=get_wa_version_with_commit(),
packages=packages,
package_data=data_files,
include_package_data=True,
scripts=scripts,
url='https://github.com/ARM-software/workload-automation',
license='Apache v2',
@ -82,14 +83,12 @@ params = dict(
'colorama', # Printing with colors
'pyYAML', # YAML-formatted agenda parsing
'requests', # Fetch assets over HTTP
'devlib>=0.0.4', # Interacting with devices
'devlib>=1.1.0', # Interacting with devices
'louie-latest', # callbacks dispatch
'wrapt', # better decorators
'pandas>=0.23.0', # Data analysis and manipulation
'future', # Python 2-3 compatibility
],
dependency_links=['https://github.com/ARM-software/devlib/tarball/master#egg=devlib-0.0.4'],
extras_require={
'other': ['jinja2'],
'test': ['nose', 'mock'],
@ -104,6 +103,7 @@ params = dict(
'License :: OSI Approved :: Apache Software License',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
],
)

@ -0,0 +1,7 @@
config:
augmentations: [~execution_time]
include#: configs/test.yaml
sections:
- include#: sections/section1.yaml
- include#: sections/section2.yaml
include#: workloads.yaml

@ -0,0 +1 @@
augmentations: [cpufreq, trace-cmd]

@ -0,0 +1,2 @@
classifiers:
included: true

@ -0,0 +1 @@
classifiers: {'section': 'one'}

@ -0,0 +1,2 @@
classifiers: {'section': 'two'}
include#: ../section-include.yaml

@ -0,0 +1,2 @@
augmentations: [execution_time]

@ -0,0 +1,5 @@
workloads:
- dhrystone
- name: memcpy
classifiers:
memcpy: True

@ -17,19 +17,26 @@
# pylint: disable=E0611
# pylint: disable=R0201
import os
import sys
import yaml
from collections import defaultdict
from unittest import TestCase
from nose.tools import assert_equal, assert_in, raises, assert_true
DATA_DIR = os.path.join(os.path.dirname(__file__), 'data')
os.environ['WA_USER_DIRECTORY'] = os.path.join(DATA_DIR, 'includes')
from wa.framework.configuration.execution import ConfigManager
from wa.framework.configuration.parsers import AgendaParser
from wa.framework.exception import ConfigError
from wa.utils.types import reset_all_counters
YAML_TEST_FILE = os.path.join(os.path.dirname(__file__), 'data', 'test-agenda.yaml')
YAML_BAD_SYNTAX_FILE = os.path.join(os.path.dirname(__file__), 'data', 'bad-syntax-agenda.yaml')
YAML_TEST_FILE = os.path.join(DATA_DIR, 'test-agenda.yaml')
YAML_BAD_SYNTAX_FILE = os.path.join(DATA_DIR, 'bad-syntax-agenda.yaml')
INCLUDES_TEST_FILE = os.path.join(DATA_DIR, 'includes', 'agenda.yaml')
invalid_agenda_text = """
workloads:
@ -171,3 +178,37 @@ class AgendaTest(TestCase):
@raises(ConfigError)
def test_bad_syntax(self):
self.parser.load_from_path(self.config, YAML_BAD_SYNTAX_FILE)
class FakeTargetManager:
def merge_runtime_parameters(self, params):
return params
def validate_runtime_parameters(self, params):
pass
class IncludesTest(TestCase):
def test_includes(self):
from pprint import pprint
parser = AgendaParser()
cm = ConfigManager()
tm = FakeTargetManager()
includes = parser.load_from_path(cm, INCLUDES_TEST_FILE)
include_set = set([os.path.basename(i) for i in includes])
assert_equal(include_set,
set(['test.yaml', 'section1.yaml', 'section2.yaml',
'section-include.yaml', 'workloads.yaml']))
job_classifiers = {j.id: j.classifiers
for j in cm.jobs_config.generate_job_specs(tm)}
assert_equal(job_classifiers,
{
's1-wk1': {'section': 'one'},
's2-wk1': {'section': 'two', 'included': True},
's1-wk2': {'section': 'one', 'memcpy': True},
's2-wk2': {'section': 'two', 'included': True, 'memcpy': True},
})

@ -21,7 +21,7 @@ from nose.tools import raises, assert_equal, assert_not_equal, assert_in, assert
from nose.tools import assert_true, assert_false, assert_raises, assert_is, assert_list_equal
from wa.utils.types import (list_or_integer, list_or_bool, caseless_string,
arguments, prioritylist, enum, level)
arguments, prioritylist, enum, level, toggle_set)
@ -149,3 +149,44 @@ class TestEnumLevel(TestCase):
s = e.one.to_pod()
l = e.from_pod(s)
assert_equal(l, e.one)
class TestToggleSet(TestCase):
def test_equality(self):
ts1 = toggle_set(['one', 'two',])
ts2 = toggle_set(['one', 'two', '~three'])
assert_not_equal(ts1, ts2)
assert_equal(ts1.values(), ts2.values())
assert_equal(ts2, toggle_set(['two', '~three', 'one']))
def test_merge(self):
ts1 = toggle_set(['one', 'two', 'three', '~four', '~five'])
ts2 = toggle_set(['two', '~three', 'four', '~five'])
ts3 = ts1.merge_with(ts2)
assert_equal(ts1, toggle_set(['one', 'two', 'three', '~four', '~five']))
assert_equal(ts2, toggle_set(['two', '~three', 'four', '~five']))
assert_equal(ts3, toggle_set(['one', 'two', '~three', 'four', '~five']))
assert_equal(ts3.values(), set(['one', 'two','four']))
ts4 = ts1.merge_into(ts2)
assert_equal(ts1, toggle_set(['one', 'two', 'three', '~four', '~five']))
assert_equal(ts2, toggle_set(['two', '~three', 'four', '~five']))
assert_equal(ts4, toggle_set(['one', 'two', 'three', '~four', '~five']))
assert_equal(ts4.values(), set(['one', 'two', 'three']))
def test_drop_all_previous(self):
ts1 = toggle_set(['one', 'two', 'three'])
ts2 = toggle_set(['four', '~~', 'five'])
ts3 = toggle_set(['six', 'seven', '~three'])
ts4 = ts1.merge_with(ts2).merge_with(ts3)
assert_equal(ts4, toggle_set(['four', 'five', 'six', 'seven', '~three', '~~']))
ts5 = ts2.merge_into(ts3).merge_into(ts1)
assert_equal(ts5, toggle_set(['four', 'five', '~~']))
ts6 = ts2.merge_into(ts3).merge_with(ts1)
assert_equal(ts6, toggle_set(['one', 'two', 'three', 'four', 'five', '~~']))

@ -0,0 +1,23 @@
config:
iterations: 1
augmentations:
- ~~
- status
device: generic_local
device_config:
big_core: null
core_clusters: null
core_names: null
executables_directory: null
keep_password: true
load_default_modules: false
model: null
modules: null
password: null
shell_prompt: !<tag:wa:regex> '40:^.*(shell|root|juno)@?.*:[/~]\S* *[#$] '
unrooted: True
working_directory: null
workloads:
- name: idle
params:
duration: 1

@ -17,7 +17,7 @@ from wa.framework import pluginloader, signal
from wa.framework.command import Command, ComplexCommand, SubCommand
from wa.framework.configuration import settings
from wa.framework.configuration.core import Status
from wa.framework.exception import (CommandError, ConfigError, HostError, InstrumentError,
from wa.framework.exception import (CommandError, ConfigError, HostError, InstrumentError, # pylint: disable=redefined-builtin
JobError, NotFoundError, OutputProcessorError,
PluginLoaderError, ResourceError, TargetError,
TargetNotRespondingError, TimeoutError, ToolError,

Binary file not shown.

Binary file not shown.

@ -13,28 +13,314 @@
# limitations under the License.
#
import os
import sys
import stat
import shutil
import string
import re
import uuid
import getpass
from collections import OrderedDict
from distutils.dir_util import copy_tree
from distutils.dir_util import copy_tree # pylint: disable=no-name-in-module, import-error
from devlib.utils.types import identifier
try:
import psycopg2
from psycopg2 import connect, OperationalError, extras
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
except ImportError as e:
psycopg2 = None
import_error_msg = e.args[0] if e.args else str(e)
from wa import ComplexCommand, SubCommand, pluginloader, settings
from wa.framework.target.descriptor import list_target_descriptions
from wa.framework.exception import ConfigError, CommandError
from wa.instruments.energy_measurement import EnergyInstrumentBackend
from wa.utils.misc import (ensure_directory_exists as _d, capitalize,
ensure_file_directory_exists as _f)
from wa.utils.postgres import get_schema, POSTGRES_SCHEMA_DIR
from wa.utils.serializer import yaml
from devlib.utils.types import identifier
TEMPLATES_DIR = os.path.join(os.path.dirname(__file__), 'templates')
class CreateDatabaseSubcommand(SubCommand):
name = 'database'
description = """
Create a PostgreSQL database which is compatible with the WA Postgres
output processor.
"""
schemafilepath = os.path.join(POSTGRES_SCHEMA_DIR, 'postgres_schema.sql')
schemaupdatefilepath = os.path.join(POSTGRES_SCHEMA_DIR, 'postgres_schema_update_v{}.{}.sql')
def __init__(self, *args, **kwargs):
super(CreateDatabaseSubcommand, self).__init__(*args, **kwargs)
self.sql_commands = None
self.schema_major = None
self.schema_minor = None
self.postgres_host = None
self.postgres_port = None
self.username = None
self.password = None
self.dbname = None
self.config_file = None
self.force = None
def initialize(self, context):
self.parser.add_argument(
'-a', '--postgres-host', default='localhost',
help='The host on which to create the database.')
self.parser.add_argument(
'-k', '--postgres-port', default='5432',
help='The port on which the PostgreSQL server is running.')
self.parser.add_argument(
'-u', '--username', default='postgres',
help='The username with which to connect to the server.')
self.parser.add_argument(
'-p', '--password',
help='The password for the user account.')
self.parser.add_argument(
'-d', '--dbname', default='wa',
help='The name of the database to create.')
self.parser.add_argument(
'-f', '--force', action='store_true',
help='Force overwrite the existing database if one exists.')
self.parser.add_argument(
'-F', '--force-update-config', action='store_true',
help='Force update the config file if an entry exists.')
self.parser.add_argument(
'-r', '--config-file', default=settings.user_config_file,
help='Path to the config file to be updated.')
self.parser.add_argument(
'-x', '--schema-version', action='store_true',
help='Display the current schema version.')
self.parser.add_argument(
'-U', '--upgrade', action='store_true',
help='Upgrade the database to use the latest schema version.')
def execute(self, state, args): # pylint: disable=too-many-branches
if not psycopg2:
raise CommandError(
'The module psycopg2 is required for the wa ' +
'create database command.')
if args.dbname == 'postgres':
raise ValueError('Database name to create cannot be postgres.')
self._parse_args(args)
self.schema_major, self.schema_minor, self.sql_commands = get_schema(self.schemafilepath)
# Display the version if needed and exit
if args.schema_version:
self.logger.info(
'The current schema version is {}.{}'.format(self.schema_major,
self.schema_minor))
return
if args.upgrade:
self.update_schema()
return
# Open user configuration
with open(self.config_file, 'r') as config_file:
config = yaml.load(config_file)
if 'postgres' in config and not args.force_update_config:
raise CommandError(
"The entry 'postgres' already exists in the config file. " +
"Please specify the -F flag to force an update.")
possible_connection_errors = [
(
re.compile('FATAL: role ".*" does not exist'),
'Username does not exist or password is incorrect'
),
(
re.compile('FATAL: password authentication failed for user'),
'Password was incorrect'
),
(
re.compile('fe_sendauth: no password supplied'),
'Passwordless connection is not enabled. '
'Please enable trust in pg_hba for this host '
'or use a password'
),
(
re.compile('FATAL: no pg_hba.conf entry for'),
'Host is not allowed to connect to the specified database '
'using this user according to pg_hba.conf. Please change the '
'rules in pg_hba or your connection method'
),
(
re.compile('FATAL: pg_hba.conf rejects connection'),
'Connection was rejected by pg_hba.conf'
),
]
def predicate(error, handle):
if handle[0].match(str(error)):
raise CommandError(handle[1] + ': \n' + str(error))
# Attempt to create database
try:
self.create_database()
except OperationalError as e:
for handle in possible_connection_errors:
predicate(e, handle)
raise e
# Update the configuration file
self._update_configuration_file(config)
def create_database(self):
self._validate_version()
self._check_database_existence()
self._create_database_postgres()
self._apply_database_schema(self.sql_commands, self.schema_major, self.schema_minor)
self.logger.info(
"Successfully created the database {}".format(self.dbname))
def update_schema(self):
self._validate_version()
schema_major, schema_minor, _ = get_schema(self.schemafilepath)
meta_oid, current_major, current_minor = self._get_database_schema_version()
while not (schema_major == current_major and schema_minor == current_minor):
current_minor = self._update_schema_minors(current_major, current_minor, meta_oid)
current_major, current_minor = self._update_schema_major(current_major, current_minor, meta_oid)
msg = "Database schema update of '{}' to v{}.{} complete"
self.logger.info(msg.format(self.dbname, schema_major, schema_minor))
def _update_schema_minors(self, major, minor, meta_oid):
# Upgrade all available minor versions
while True:
minor += 1
schema_update = os.path.join(POSTGRES_SCHEMA_DIR,
self.schemaupdatefilepath.format(major, minor))
if not os.path.exists(schema_update):
break
_, _, sql_commands = get_schema(schema_update)
self._apply_database_schema(sql_commands, major, minor, meta_oid)
msg = "Updated the database schema to v{}.{}"
self.logger.debug(msg.format(major, minor))
# Return last existing update file version
return minor - 1
def _update_schema_major(self, current_major, current_minor, meta_oid):
current_major += 1
schema_update = os.path.join(POSTGRES_SCHEMA_DIR,
self.schemaupdatefilepath.format(current_major, 0))
if not os.path.exists(schema_update):
return (current_major - 1, current_minor)
# Reset minor to 0 with major version bump
current_minor = 0
_, _, sql_commands = get_schema(schema_update)
self._apply_database_schema(sql_commands, current_major, current_minor, meta_oid)
msg = "Updated the database schema to v{}.{}"
self.logger.debug(msg.format(current_major, current_minor))
return (current_major, current_minor)
def _validate_version(self):
conn = connect(user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
if conn.server_version < 90400:
msg = 'Postgres version too low. Please ensure that you are using at least v9.4'
raise CommandError(msg)
def _get_database_schema_version(self):
conn = connect(dbname=self.dbname, user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
cursor = conn.cursor()
cursor.execute('''SELECT
DatabaseMeta.oid,
DatabaseMeta.schema_major,
DatabaseMeta.schema_minor
FROM
DatabaseMeta;''')
return cursor.fetchone()
def _check_database_existence(self):
try:
connect(dbname=self.dbname, user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
except OperationalError as e:
# Expect an operational error (database's non-existence)
if not re.compile('FATAL: database ".*" does not exist').match(str(e)):
raise e
else:
if not self.force:
raise CommandError(
"Database {} already exists. ".format(self.dbname) +
"Please specify the -f flag to create it from afresh."
)
def _create_database_postgres(self):
conn = connect(dbname='postgres', user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
cursor = conn.cursor()
cursor.execute('DROP DATABASE IF EXISTS ' + self.dbname)
cursor.execute('CREATE DATABASE ' + self.dbname)
conn.commit()
cursor.close()
conn.close()
def _apply_database_schema(self, sql_commands, schema_major, schema_minor, meta_uuid=None):
conn = connect(dbname=self.dbname, user=self.username,
password=self.password, host=self.postgres_host, port=self.postgres_port)
cursor = conn.cursor()
cursor.execute(sql_commands)
if not meta_uuid:
extras.register_uuid()
meta_uuid = uuid.uuid4()
cursor.execute("INSERT INTO DatabaseMeta VALUES (%s, %s, %s)",
(meta_uuid,
schema_major,
schema_minor
))
else:
cursor.execute("UPDATE DatabaseMeta SET schema_major = %s, schema_minor = %s WHERE oid = %s;",
(schema_major,
schema_minor,
meta_uuid
))
conn.commit()
cursor.close()
conn.close()
def _update_configuration_file(self, config):
''' Update the user configuration file with the newly created database's
configuration.
'''
config['postgres'] = OrderedDict(
[('host', self.postgres_host), ('port', self.postgres_port),
('dbname', self.dbname), ('username', self.username), ('password', self.password)])
with open(self.config_file, 'w+') as config_file:
yaml.dump(config, config_file)
def _parse_args(self, args):
self.postgres_host = args.postgres_host
self.postgres_port = args.postgres_port
self.username = args.username
self.password = args.password
self.dbname = args.dbname
self.config_file = args.config_file
self.force = args.force
class CreateAgendaSubcommand(SubCommand):
name = 'agenda'
@ -51,6 +337,7 @@ class CreateAgendaSubcommand(SubCommand):
self.parser.add_argument('-o', '--output', metavar='FILE',
help='Output file. If not specified, STDOUT will be used instead.')
# pylint: disable=too-many-branches
def execute(self, state, args):
agenda = OrderedDict()
agenda['config'] = OrderedDict(augmentations=[], iterations=args.iterations)
@ -71,7 +358,15 @@ class CreateAgendaSubcommand(SubCommand):
extcls = pluginloader.get_plugin_class(name)
config = pluginloader.get_default_config(name)
if extcls.kind == 'workload':
# Handle special case for EnergyInstrumentBackends
if issubclass(extcls, EnergyInstrumentBackend):
if 'energy_measurement' not in agenda['config']['augmentations']:
energy_config = pluginloader.get_default_config('energy_measurement')
agenda['config']['augmentations'].append('energy_measurement')
agenda['config']['energy_measurement'] = energy_config
agenda['config']['energy_measurement']['instrument'] = extcls.name
agenda['config']['energy_measurement']['instrument_parameters'] = config
elif extcls.kind == 'workload':
entry = OrderedDict()
entry['name'] = extcls.name
if name != extcls.name:
@ -79,11 +374,12 @@ class CreateAgendaSubcommand(SubCommand):
entry['params'] = config
agenda['workloads'].append(entry)
else:
if extcls.kind == 'instrument':
agenda['config']['augmentations'].append(name)
if extcls.kind == 'output_processor':
agenda['config']['augmentations'].append(name)
agenda['config'][name] = config
if extcls.kind in ('instrument', 'output_processor'):
if extcls.name not in agenda['config']['augmentations']:
agenda['config']['augmentations'].append(extcls.name)
if extcls.name not in agenda['config']:
agenda['config'][extcls.name] = config
if args.output:
wfh = open(args.output, 'w')
@ -170,6 +466,7 @@ class CreateCommand(ComplexCommand):
object-specific arguments.
'''
subcmd_classes = [
CreateDatabaseSubcommand,
CreateWorkloadSubcommand,
CreateAgendaSubcommand,
CreatePackageSubcommand,
@ -240,6 +537,7 @@ def create_uiauto_project(path, name):
wfh.write(render_template(os.path.join('uiauto', 'UiAutomation.java'),
{'name': name, 'package_name': package_name}))
# Mapping of workload types to their corresponding creation method
create_funcs = {
'basic': create_template_workload,
@ -266,5 +564,5 @@ def get_class_name(name, postfix=''):
def touch(path):
with open(path, 'w') as _:
with open(path, 'w') as _: # NOQA
pass

@ -0,0 +1,192 @@
--!VERSION!1.2!ENDVERSION!
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "lo";
-- In future, it may be useful to implement rules on which Parameter oid fields can be null, dependent on the value in the type column;
DROP TABLE IF EXISTS DatabaseMeta;
DROP TABLE IF EXISTS Parameters;
DROP TABLE IF EXISTS Classifiers;
DROP TABLE IF EXISTS LargeObjects;
DROP TABLE IF EXISTS Artifacts;
DROP TABLE IF EXISTS Metrics;
DROP TABLE IF EXISTS Augmentations;
DROP TABLE IF EXISTS Jobs_Augs;
DROP TABLE IF EXISTS ResourceGetters;
DROP TABLE IF EXISTS Resource_Getters;
DROP TABLE IF EXISTS Events;
DROP TABLE IF EXISTS Targets;
DROP TABLE IF EXISTS Jobs;
DROP TABLE IF EXISTS Runs;
DROP TYPE IF EXISTS status_enum;
DROP TYPE IF EXISTS param_enum;
CREATE TYPE status_enum AS ENUM ('UNKNOWN(0)','NEW(1)','PENDING(2)','STARTED(3)','CONNECTED(4)', 'INITIALIZED(5)', 'RUNNING(6)', 'OK(7)', 'PARTIAL(8)', 'FAILED(9)', 'ABORTED(10)', 'SKIPPED(11)');
CREATE TYPE param_enum AS ENUM ('workload', 'resource_getter', 'augmentation', 'device', 'runtime', 'boot');
-- In future, it might be useful to create an ENUM type for the artifact kind, or simply a generic enum type;
CREATE TABLE DatabaseMeta (
oid uuid NOT NULL,
schema_major int,
schema_minor int,
PRIMARY KEY (oid)
);
CREATE TABLE Runs (
oid uuid NOT NULL,
event_summary text,
basepath text,
status status_enum,
timestamp timestamp,
run_name text,
project text,
project_stage text,
retry_on_status status_enum[],
max_retries int,
bail_on_init_failure boolean,
allow_phone_home boolean,
run_uuid uuid,
start_time timestamp,
end_time timestamp,
duration float,
metadata jsonb,
_pod_version int,
_pod_serialization_version int,
state jsonb,
PRIMARY KEY (oid)
);
CREATE TABLE Jobs (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
status status_enum,
retry int,
label text,
job_id text,
iterations int,
workload_name text,
metadata jsonb,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE Targets (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
target text,
cpus text[],
os text,
os_version jsonb,
hostid int,
hostname text,
abi text,
is_rooted boolean,
kernel_version text,
kernel_release text,
kernel_sha1 text,
kernel_config text[],
sched_features text[],
page_size_kb int,
screen_resolution int[],
prop json,
android_id text,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE Events (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
job_oid uuid references Jobs(oid),
timestamp timestamp,
message text,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE Resource_Getters (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
name text,
PRIMARY KEY (oid)
);
CREATE TABLE Augmentations (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
name text,
PRIMARY KEY (oid)
);
CREATE TABLE Jobs_Augs (
oid uuid NOT NULL,
job_oid uuid NOT NULL references Jobs(oid),
augmentation_oid uuid NOT NULL references Augmentations(oid),
PRIMARY KEY (oid)
);
CREATE TABLE Metrics (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
job_oid uuid references Jobs(oid),
name text,
value double precision,
units text,
lower_is_better boolean,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE LargeObjects (
oid uuid NOT NULL,
lo_oid lo NOT NULL,
PRIMARY KEY (oid)
);
-- Trigger that allows you to manage large objects from the LO table directly;
CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON LargeObjects
FOR EACH ROW EXECUTE PROCEDURE lo_manage(lo_oid);
CREATE TABLE Artifacts (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
job_oid uuid references Jobs(oid),
name text,
large_object_uuid uuid NOT NULL references LargeObjects(oid),
description text,
kind text,
_pod_version int,
_pod_serialization_version int,
PRIMARY KEY (oid)
);
CREATE TABLE Classifiers (
oid uuid NOT NULL,
artifact_oid uuid references Artifacts(oid),
metric_oid uuid references Metrics(oid),
job_oid uuid references Jobs(oid),
run_oid uuid references Runs(oid),
key text,
value text,
PRIMARY KEY (oid)
);
CREATE TABLE Parameters (
oid uuid NOT NULL,
run_oid uuid NOT NULL references Runs(oid),
job_oid uuid references Jobs(oid),
augmentation_oid uuid references Augmentations(oid),
resource_getter_oid uuid references Resource_Getters(oid),
name text,
value text,
value_type text,
type param_enum,
PRIMARY KEY (oid)
);

@ -0,0 +1,30 @@
ALTER TABLE resourcegetters RENAME TO resource_getters;
ALTER TABLE classifiers ADD COLUMN job_oid uuid references Jobs(oid);
ALTER TABLE classifiers ADD COLUMN run_oid uuid references Runs(oid);
ALTER TABLE targets ADD COLUMN page_size_kb int;
ALTER TABLE targets ADD COLUMN screen_resolution int[];
ALTER TABLE targets ADD COLUMN prop text;
ALTER TABLE targets ADD COLUMN android_id text;
ALTER TABLE targets ADD COLUMN _pod_version int;
ALTER TABLE targets ADD COLUMN _pod_serialization_version int;
ALTER TABLE jobs RENAME COLUMN retries TO retry;
ALTER TABLE jobs ADD COLUMN _pod_version int;
ALTER TABLE jobs ADD COLUMN _pod_serialization_version int;
ALTER TABLE runs ADD COLUMN project_stage text;
ALTER TABLE runs ADD COLUMN state jsonb;
ALTER TABLE runs ADD COLUMN duration float;
ALTER TABLE runs ADD COLUMN _pod_version int;
ALTER TABLE runs ADD COLUMN _pod_serialization_version int;
ALTER TABLE artifacts ADD COLUMN _pod_version int;
ALTER TABLE artifacts ADD COLUMN _pod_serialization_version int;
ALTER TABLE events ADD COLUMN _pod_version int;
ALTER TABLE events ADD COLUMN _pod_serialization_version int;
ALTER TABLE metrics ADD COLUMN _pod_version int;
ALTER TABLE metrics ADD COLUMN _pod_serialization_version int;

@ -30,6 +30,9 @@ class ProcessContext(object):
self.target_info = None
self.job_output = None
def add_augmentation(self, aug):
pass
class ProcessCommand(Command):
@ -64,7 +67,7 @@ class ProcessCommand(Command):
instead of just processing the root.
""")
def execute(self, config, args):
def execute(self, config, args): # pylint: disable=arguments-differ,too-many-branches,too-many-statements
process_directory = os.path.expandvars(args.directory)
self.logger.debug('Using process directory: {}'.format(process_directory))
if not os.path.exists(process_directory):
@ -77,6 +80,9 @@ class ProcessCommand(Command):
pc = ProcessContext()
for run_output in output_list:
pc.run_output = run_output
pc.target_info = run_output.target_info
if not args.recursive:
self.logger.info('Installing output processors')
else:
@ -92,7 +98,7 @@ class ProcessCommand(Command):
pm = ProcessorManager(loader=config.plugin_cache)
for proc in config.get_processors():
pm.install(proc, None)
pm.install(proc, pc)
if args.additional_processors:
for proc in args.additional_processors:
# Do not add any processors that are already present since
@ -100,13 +106,11 @@ class ProcessCommand(Command):
try:
pm.get_output_processor(proc)
except ValueError:
pm.install(proc, None)
pm.install(proc, pc)
pm.validate()
pm.initialize()
pm.initialize(pc)
pc.run_output = run_output
pc.target_info = run_output.target_info
for job_output in run_output.jobs:
pc.job_output = job_output
pm.enable_all()
@ -136,7 +140,7 @@ class ProcessCommand(Command):
self.logger.info('Processing run')
pm.process_run_output(pc)
pm.export_run_output(pc)
pm.finalize()
pm.finalize(pc)
run_output.write_result()
self.logger.info('Done.')

@ -100,7 +100,7 @@ class RecordCommand(Command):
args.teardown or args.all):
self.logger.error("Cannot specify a recording stage without a Workload")
sys.exit()
if not (args.all or args.teardown or args.extract_results or args.run or args.setup):
if args.workload and not any([args.all, args.teardown, args.extract_results, args.run, args.setup]):
self.logger.error("Please specify which workload stages you wish to record")
sys.exit()
@ -120,6 +120,7 @@ class RecordCommand(Command):
outdir = os.getcwd()
self.tm = TargetManager(device, device_config, outdir)
self.tm.initialize()
self.target = self.tm.target
self.revent_recorder = ReventRecorder(self.target)
self.revent_recorder.deploy()
@ -261,6 +262,7 @@ class ReplayCommand(Command):
device_config = state.run_config.device_config or {}
target_manager = TargetManager(device, device_config, None)
target_manager.initialize()
self.target = target_manager.target
revent_file = self.target.path.join(self.target.working_directory,
os.path.split(args.recording)[1])

@ -84,7 +84,7 @@ class RunCommand(Command):
be specified multiple times.
""")
def execute(self, config, args):
def execute(self, config, args): # pylint: disable=arguments-differ
output = self.set_up_output_directory(config, args)
log.add_file(output.logfile)
output.add_artifact('runlog', output.logfile, kind='log',
@ -97,8 +97,10 @@ class RunCommand(Command):
parser = AgendaParser()
if os.path.isfile(args.agenda):
parser.load_from_path(config, args.agenda)
includes = parser.load_from_path(config, args.agenda)
shutil.copy(args.agenda, output.raw_config_dir)
for inc in includes:
shutil.copy(inc, output.raw_config_dir)
else:
try:
pluginloader.get_plugin_class(args.agenda, kind='workload')
@ -110,6 +112,11 @@ class RunCommand(Command):
'by running "wa list workloads".'
raise ConfigError(msg.format(args.agenda))
# Update run info with newly parsed config values
output.info.project = config.run_config.project
output.info.project_stage = config.run_config.project_stage
output.info.run_name = config.run_config.run_name
executor = Executor()
executor.execute(config, output)

@ -0,0 +1,9 @@
# 1
## 1.0
- First version
## 1.1
- LargeObjects table added as a substitute for the previous plan to
use the filesystem and a path reference to store artifacts. This
was done following an extended discussion and tests that verified
that the savings in processing power were not enough to warrant
the creation of a dedicated server or file handler.

@ -21,6 +21,8 @@
import sys
from subprocess import call, Popen, PIPE
from devlib.utils.misc import escape_double_quotes
from wa import Command
from wa.framework import pluginloader
from wa.framework.configuration.core import MetaConfiguration, RunConfiguration
@ -31,8 +33,6 @@ from wa.utils.doc import (strip_inlined_text, get_rst_from_plugin,
get_params_rst, underline)
from wa.utils.misc import which
from devlib.utils.misc import escape_double_quotes
class ShowCommand(Command):

@ -22,7 +22,7 @@ from wa.utils import log
from wa.utils.misc import (get_article, merge_config_values)
from wa.utils.types import (identifier, integer, boolean, list_of_strings,
list_of, toggle_set, obj_dict, enum)
from wa.utils.serializer import is_pod
from wa.utils.serializer import is_pod, Podable
# Mapping for kind conversion; see docs for convert_types below
@ -110,7 +110,9 @@ class status_list(list):
list.append(self, str(item).upper())
class LoggingConfig(dict):
class LoggingConfig(Podable, dict):
_pod_serialization_version = 1
defaults = {
'file_format': '%(asctime)s %(levelname)-8s %(name)s: %(message)s',
@ -121,9 +123,14 @@ class LoggingConfig(dict):
@staticmethod
def from_pod(pod):
return LoggingConfig(pod)
pod = LoggingConfig._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
instance = LoggingConfig(pod)
instance._pod_version = pod_version # pylint: disable=protected-access
return instance
def __init__(self, config=None):
super(LoggingConfig, self).__init__()
dict.__init__(self)
if isinstance(config, dict):
config = {identifier(k.lower()): v for k, v in config.items()}
@ -142,7 +149,14 @@ class LoggingConfig(dict):
raise ValueError(config)
def to_pod(self):
return self
pod = super(LoggingConfig, self).to_pod()
pod.update(self)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def expanded_path(path):
@ -347,8 +361,9 @@ def _to_pod(cfg_point, value):
raise ValueError(msg.format(cfg_point.name, value))
class Configuration(object):
class Configuration(Podable):
_pod_serialization_version = 1
config_points = []
name = ''
@ -357,7 +372,7 @@ class Configuration(object):
@classmethod
def from_pod(cls, pod):
instance = cls()
instance = super(Configuration, cls).from_pod(pod)
for cfg_point in cls.config_points:
if cfg_point.name in pod:
value = pod.pop(cfg_point.name)
@ -370,6 +385,7 @@ class Configuration(object):
return instance
def __init__(self):
super(Configuration, self).__init__()
for confpoint in self.config_points:
confpoint.set_value(self, check_mandatory=False)
@ -393,12 +409,17 @@ class Configuration(object):
cfg_point.validate(self)
def to_pod(self):
pod = {}
pod = super(Configuration, self).to_pod()
for cfg_point in self.config_points:
value = getattr(self, cfg_point.name, None)
pod[cfg_point.name] = _to_pod(cfg_point, value)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
# This configuration for the core WA framework
class MetaConfiguration(Configuration):
@ -482,6 +503,10 @@ class MetaConfiguration(Configuration):
def plugins_directory(self):
return os.path.join(self.user_directory, 'plugins')
@property
def cache_directory(self):
return os.path.join(self.user_directory, 'cache')
@property
def plugin_paths(self):
return [self.plugins_directory] + (self.extra_plugin_paths or [])
@ -494,6 +519,10 @@ class MetaConfiguration(Configuration):
def additional_packages_file(self):
return os.path.join(self.user_directory, 'packages')
@property
def target_info_cache_file(self):
return os.path.join(self.cache_directory, 'targets.json')
def __init__(self, environ=None):
super(MetaConfiguration, self).__init__()
if environ is None:
@ -700,8 +729,12 @@ class RunConfiguration(Configuration):
meta_pod[cfg_point.name] = pod.pop(cfg_point.name, None)
device_config = pod.pop('device_config', None)
augmentations = pod.pop('augmentations', {})
getters = pod.pop('resource_getters', {})
instance = super(RunConfiguration, cls).from_pod(pod)
instance.device_config = device_config
instance.augmentations = augmentations
instance.resource_getters = getters
for cfg_point in cls.meta_data:
cfg_point.set_value(instance, meta_pod[cfg_point.name])
@ -712,6 +745,8 @@ class RunConfiguration(Configuration):
for confpoint in self.meta_data:
confpoint.set_value(self, check_mandatory=False)
self.device_config = None
self.augmentations = {}
self.resource_getters = {}
def merge_device_config(self, plugin_cache):
"""
@ -725,9 +760,21 @@ class RunConfiguration(Configuration):
self.device_config = plugin_cache.get_plugin_config(self.device,
generic_name="device_config")
def add_augmentation(self, aug):
if aug.name in self.augmentations:
raise ValueError('Augmentation "{}" already added.'.format(aug.name))
self.augmentations[aug.name] = aug.get_config()
def add_resource_getter(self, getter):
if getter.name in self.resource_getters:
raise ValueError('Resource getter "{}" already added.'.format(getter.name))
self.resource_getters[getter.name] = getter.get_config()
def to_pod(self):
pod = super(RunConfiguration, self).to_pod()
pod['device_config'] = dict(self.device_config or {})
pod['augmentations'] = self.augmentations
pod['resource_getters'] = self.resource_getters
return pod
@ -952,8 +999,8 @@ class JobGenerator(object):
if name == "augmentations":
self.update_augmentations(value)
def add_section(self, section, workloads):
new_node = self.root_node.add_section(section)
def add_section(self, section, workloads, group):
new_node = self.root_node.add_section(section, group)
with log.indentcontext():
for workload in workloads:
new_node.add_workload(workload)
@ -1015,6 +1062,12 @@ def create_job_spec(workload_entry, sections, target_manager, plugin_cache,
# PHASE 2.1: Merge general job spec configuration
for section in sections:
job_spec.update_config(section, check_mandatory=False)
# Add classifiers for any present groups
if section.id == 'global' or section.group is None:
# Ignore global config and default group
continue
job_spec.classifiers[section.group] = section.id
job_spec.update_config(workload_entry, check_mandatory=False)
# PHASE 2.2: Merge global, section and workload entry "workload_parameters"

@ -18,6 +18,8 @@ from itertools import groupby, chain
from future.moves.itertools import zip_longest
from devlib.utils.types import identifier
from wa.framework.configuration.core import (MetaConfiguration, RunConfiguration,
JobGenerator, settings)
from wa.framework.configuration.parsers import ConfigParser
@ -25,24 +27,35 @@ from wa.framework.configuration.plugin_cache import PluginCache
from wa.framework.exception import NotFoundError
from wa.framework.job import Job
from wa.utils import log
from wa.utils.serializer import Podable
class CombinedConfig(object):
class CombinedConfig(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = CombinedConfig()
instance = super(CombinedConfig, CombinedConfig).from_pod(pod)
instance.settings = MetaConfiguration.from_pod(pod.get('settings', {}))
instance.run_config = RunConfiguration.from_pod(pod.get('run_config', {}))
return instance
def __init__(self, settings=None, run_config=None): # pylint: disable=redefined-outer-name
super(CombinedConfig, self).__init__()
self.settings = settings
self.run_config = run_config
def to_pod(self):
return {'settings': self.settings.to_pod(),
'run_config': self.run_config.to_pod()}
pod = super(CombinedConfig, self).to_pod()
pod['settings'] = self.settings.to_pod()
pod['run_config'] = self.run_config.to_pod()
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
class ConfigManager(object):
@ -90,15 +103,16 @@ class ConfigManager(object):
self.agenda = None
def load_config_file(self, filepath):
self._config_parser.load_from_path(self, filepath)
includes = self._config_parser.load_from_path(self, filepath)
self.loaded_config_sources.append(filepath)
self.loaded_config_sources.extend(includes)
def load_config(self, values, source):
self._config_parser.load(self, values, source)
self.loaded_config_sources.append(source)
def get_plugin(self, name=None, kind=None, *args, **kwargs):
return self.plugin_cache.get_plugin(name, kind, *args, **kwargs)
return self.plugin_cache.get_plugin(identifier(name), kind, *args, **kwargs)
def get_instruments(self, target):
instruments = []
@ -122,12 +136,15 @@ class ConfigManager(object):
processors.append(proc)
return processors
def get_config(self):
return CombinedConfig(self.settings, self.run_config)
def finalize(self):
if not self.agenda:
msg = 'Attempting to finalize config before agenda has been set'
raise RuntimeError(msg)
self.run_config.merge_device_config(self.plugin_cache)
return CombinedConfig(self.settings, self.run_config)
return self.get_config()
def generate_jobs(self, context):
job_specs = self.jobs_config.generate_job_specs(context.tm)

@ -18,11 +18,14 @@ import os
import logging
from functools import reduce # pylint: disable=redefined-builtin
from devlib.utils.types import identifier
from wa.framework.configuration.core import JobSpec
from wa.framework.exception import ConfigError
from wa.utils import log
from wa.utils.serializer import json, read_pod, SerializerSyntaxError
from wa.utils.types import toggle_set, counter
from wa.utils.misc import merge_config_values, isiterable
logger = logging.getLogger('config')
@ -31,7 +34,9 @@ logger = logging.getLogger('config')
class ConfigParser(object):
def load_from_path(self, state, filepath):
self.load(state, _load_file(filepath, "Config"), filepath)
raw, includes = _load_file(filepath, "Config")
self.load(state, raw, filepath)
return includes
def load(self, state, raw, source, wrap_exceptions=True): # pylint: disable=too-many-branches
logger.debug('Parsing config from "{}"'.format(source))
@ -72,8 +77,8 @@ class ConfigParser(object):
for name, values in raw.items():
# Assume that all leftover config is for a plug-in or a global
# alias it is up to PluginCache to assert this assumption
logger.debug('Caching "{}" with "{}"'.format(name, values))
state.plugin_cache.add_configs(name, values, source)
logger.debug('Caching "{}" with "{}"'.format(identifier(name), values))
state.plugin_cache.add_configs(identifier(name), values, source)
except ConfigError as e:
if wrap_exceptions:
@ -87,8 +92,9 @@ class ConfigParser(object):
class AgendaParser(object):
def load_from_path(self, state, filepath):
raw = _load_file(filepath, 'Agenda')
raw, includes = _load_file(filepath, 'Agenda')
self.load(state, raw, filepath)
return includes
def load(self, state, raw, source):
logger.debug('Parsing agenda from "{}"'.format(source))
@ -190,9 +196,10 @@ class AgendaParser(object):
raise ConfigError(msg.format(json.dumps(section, indent=None)))
section['runtime_params'] = section.pop('params')
group = section.pop('group', None)
section = _construct_valid_entry(section, seen_sect_ids,
"s", state.jobs_config)
state.jobs_config.add_section(section, workloads)
state.jobs_config.add_section(section, workloads, group)
########################
@ -222,12 +229,45 @@ def _load_file(filepath, error_name):
raise ValueError("{} does not exist".format(filepath))
try:
raw = read_pod(filepath)
includes = _process_includes(raw, filepath, error_name)
except SerializerSyntaxError as e:
raise ConfigError('Error parsing {} {}: {}'.format(error_name, filepath, e))
if not isinstance(raw, dict):
message = '{} does not contain a valid {} structure; top level must be a dict.'
raise ConfigError(message.format(filepath, error_name))
return raw
return raw, includes
def _process_includes(raw, filepath, error_name):
if not raw:
return []
source_dir = os.path.dirname(filepath)
included_files = []
replace_value = None
if hasattr(raw, 'items'):
for key, value in raw.items():
if key == 'include#':
include_path = os.path.expanduser(os.path.join(source_dir, value))
included_files.append(include_path)
replace_value, includes = _load_file(include_path, error_name)
included_files.extend(includes)
elif hasattr(value, 'items') or isiterable(value):
includes = _process_includes(value, filepath, error_name)
included_files.extend(includes)
elif isiterable(raw):
for element in raw:
if hasattr(element, 'items') or isiterable(element):
includes = _process_includes(element, filepath, error_name)
included_files.extend(includes)
if replace_value is not None:
del raw['include#']
for key, value in replace_value.items():
raw[key] = merge_config_values(value, raw.get(key, None))
return included_files
def merge_augmentations(raw):

@ -69,14 +69,20 @@ class SectionNode(JobSpecSource):
def is_leaf(self):
return not bool(self.children)
def __init__(self, config, parent=None):
def __init__(self, config, parent=None, group=None):
super(SectionNode, self).__init__(config, parent=parent)
self.workload_entries = []
self.children = []
self.group = group
def add_section(self, section):
new_node = SectionNode(section, parent=self)
self.children.append(new_node)
def add_section(self, section, group=None):
# Each level is the same group, only need to check first
if not self.children or group == self.children[0].group:
new_node = SectionNode(section, parent=self, group=group)
self.children.append(new_node)
else:
for child in self.children:
new_node = child.add_section(section, group)
return new_node
def add_workload(self, workload_config):

@ -13,7 +13,7 @@
# limitations under the License.
#
# pylint: disable=unused-import
from devlib.exception import (DevlibError, HostError, TimeoutError,
from devlib.exception import (DevlibError, HostError, TimeoutError, # pylint: disable=redefined-builtin
TargetError, TargetNotRespondingError)
from wa.utils.misc import get_traceback

@ -23,10 +23,10 @@ from copy import copy
from datetime import datetime
import wa.framework.signal as signal
from wa.framework import instrument
from wa.framework import instrument as instrumentation
from wa.framework.configuration.core import Status
from wa.framework.exception import TargetError, HostError, WorkloadError
from wa.framework.exception import TargetNotRespondingError, TimeoutError
from wa.framework.exception import TargetNotRespondingError, TimeoutError # pylint: disable=redefined-builtin
from wa.framework.job import Job
from wa.framework.output import init_job_output
from wa.framework.output_processor import ProcessorManager
@ -100,15 +100,13 @@ class ExecutionContext(object):
self.tm = tm
self.run_output = output
self.run_state = output.state
self.logger.debug('Loading resource discoverers')
self.resolver = ResourceResolver(cm.plugin_cache)
self.resolver.load()
self.job_queue = None
self.completed_jobs = None
self.current_job = None
self.successful_jobs = 0
self.failed_jobs = 0
self.run_interrupted = False
self._load_resource_getters()
def start_run(self):
self.output.info.start_time = datetime.utcnow()
@ -180,6 +178,9 @@ class ExecutionContext(object):
self.skip_job(job)
self.write_state()
def write_config(self):
self.run_output.write_config(self.cm.get_config())
def write_state(self):
self.run_output.write_state()
@ -191,6 +192,9 @@ class ExecutionContext(object):
def write_job_specs(self):
self.run_output.write_job_specs(self.cm.job_specs)
def add_augmentation(self, aug):
self.cm.run_config.add_augmentation(aug)
def get_resource(self, resource, strict=True):
result = self.resolver.get(resource, strict)
if result is None:
@ -295,6 +299,13 @@ class ExecutionContext(object):
self.job_queue = new_queue
def _load_resource_getters(self):
self.logger.debug('Loading resource discoverers')
self.resolver = ResourceResolver(self.cm.plugin_cache)
self.resolver.load()
for getter in self.resolver.getters:
self.cm.run_config.add_resource_getter(getter)
def _get_unique_filepath(self, filename):
filepath = os.path.join(self.output_directory, filename)
rest, ext = os.path.splitext(filepath)
@ -365,12 +376,12 @@ class Executor(object):
try:
self.do_execute(context)
except KeyboardInterrupt as e:
context.run_output.status = 'ABORTED'
context.run_output.status = Status.ABORTED
log.log_error(e, self.logger)
context.write_output()
raise
except Exception as e:
context.run_output.status = 'FAILED'
context.run_output.status = Status.FAILED
log.log_error(e, self.logger)
context.write_output()
raise
@ -405,9 +416,9 @@ class Executor(object):
context.output.write_state()
self.logger.info('Installing instruments')
for instrument_name in context.cm.get_instruments(self.target_manager.target):
instrument.install(instrument_name, context)
instrument.validate()
for instrument in context.cm.get_instruments(self.target_manager.target):
instrumentation.install(instrument, context)
instrumentation.validate()
self.logger.info('Installing output processors')
pm = ProcessorManager()
@ -415,6 +426,8 @@ class Executor(object):
pm.install(proc, context)
pm.validate()
context.write_config()
self.logger.info('Starting run')
runner = Runner(context, pm)
signal.send(signal.RUN_STARTED, self, context)
@ -437,11 +450,11 @@ class Executor(object):
self.logger.info('Results can be found in {}'.format(output.basepath))
if self.error_logged:
self.logger.warn('There were errors during execution.')
self.logger.warn('Please see {}'.format(output.logfile))
self.logger.warning('There were errors during execution.')
self.logger.warning('Please see {}'.format(output.logfile))
elif self.warning_logged:
self.logger.warn('There were warnings during execution.')
self.logger.warn('Please see {}'.format(output.logfile))
self.logger.warning('There were warnings during execution.')
self.logger.warning('Please see {}'.format(output.logfile))
def _error_signalled_callback(self, _):
self.error_logged = True
@ -503,7 +516,7 @@ class Runner(object):
signal.connect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.connect(self._warning_signalled_callback, signal.WARNING_LOGGED)
self.context.start_run()
self.pm.initialize()
self.pm.initialize(self.context)
with log.indentcontext():
self.context.initialize_jobs()
self.context.write_state()
@ -519,7 +532,7 @@ class Runner(object):
with signal.wrap('RUN_OUTPUT_PROCESSED', self):
self.pm.process_run_output(self.context)
self.pm.export_run_output(self.context)
self.pm.finalize()
self.pm.finalize(self.context)
signal.disconnect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.disconnect(self._warning_signalled_callback, signal.WARNING_LOGGED)

@ -42,6 +42,7 @@ def init_user_directory(overwrite_existing=False): # pylint: disable=R0914
os.makedirs(settings.user_directory)
os.makedirs(settings.dependencies_directory)
os.makedirs(settings.plugins_directory)
os.makedirs(settings.cache_directory)
generate_default_config(os.path.join(settings.user_directory, 'config.yaml'))

@ -98,13 +98,14 @@ and the code to clear these files goes in the teardown method. ::
"""
import sys
import logging
import inspect
from collections import OrderedDict
from wa.framework import signal
from wa.framework.plugin import Plugin
from wa.framework.exception import (TargetNotRespondingError, TimeoutError,
from wa.framework.exception import (TargetNotRespondingError, TimeoutError, # pylint: disable=redefined-builtin
WorkloadError, TargetError)
from wa.utils.log import log_error
from wa.utils.misc import isiterable
@ -324,7 +325,10 @@ def install(instrument, context):
if not callable(attr):
msg = 'Attribute {} not callable in {}.'
raise ValueError(msg.format(attr_name, instrument))
argspec = inspect.getargspec(attr)
if sys.version_info[0] == 3:
argspec = inspect.getfullargspec(attr)
else:
argspec = inspect.getargspec(attr) # pylint: disable=deprecated-method
arg_num = len(argspec.args)
# Instrument callbacks will be passed exactly two arguments: self
# (the instrument instance to which the callback is bound) and
@ -345,6 +349,7 @@ def install(instrument, context):
instrument.logger.context = context
installed.append(instrument)
context.add_augmentation(instrument)
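The interpreter check above keeps the callback arity test working on both Pythons: `inspect.getargspec` is deprecated on Python 3 in favour of `getfullargspec`, but both return an object with an `args` list, so the count is taken the same way. A minimal sketch, using a hypothetical callback rather than wa code ::

import inspect
import sys

def setup(self, context):   # hypothetical instrument callback
    pass

if sys.version_info[0] == 3:
    argspec = inspect.getfullargspec(setup)
else:
    argspec = inspect.getargspec(setup)

# Both objects expose .args, so the arity check is unchanged:
assert len(argspec.args) == 2   # self and context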
def uninstall(instrument):

@ -13,23 +13,33 @@
# limitations under the License.
#
try:
import psycopg2
from psycopg2 import Error as Psycopg2Error
except ImportError:
psycopg2 = None
Psycopg2Error = None
import logging
import os
import shutil
from collections import OrderedDict
from collections import OrderedDict, defaultdict
from copy import copy, deepcopy
from datetime import datetime
from io import StringIO
import devlib
from wa.framework.configuration.core import JobSpec, Status
from wa.framework.configuration.execution import CombinedConfig
from wa.framework.exception import HostError
from wa.framework.exception import HostError, SerializerSyntaxError, ConfigError
from wa.framework.run import RunState, RunInfo
from wa.framework.target.info import TargetInfo
from wa.framework.version import get_wa_version_with_commit
from wa.utils.misc import touch, ensure_directory_exists, isiterable
from wa.utils.serializer import write_pod, read_pod
from wa.utils.doc import format_simple_table
from wa.utils.misc import touch, ensure_directory_exists, isiterable, format_ordered_dict
from wa.utils.postgres import get_schema_versions
from wa.utils.serializer import write_pod, read_pod, Podable, json
from wa.utils.types import enum, numeric
@ -166,7 +176,35 @@ class Output(object):
return os.path.basename(self.basepath)
class RunOutput(Output):
class RunOutputCommon(object):
''' Common functionality split out to form a second base for
the RunOutput classes.
'''
@property
def run_config(self):
if self._combined_config:
return self._combined_config.run_config
@property
def settings(self):
if self._combined_config:
return self._combined_config.settings
def get_job_spec(self, spec_id):
for spec in self.job_specs:
if spec.id == spec_id:
return spec
return None
def list_workloads(self):
workloads = []
for job in self.jobs:
if job.label not in workloads:
workloads.append(job.label)
return workloads
class RunOutput(Output, RunOutputCommon):
kind = 'run'
@ -207,16 +245,6 @@ class RunOutput(Output):
path = os.path.join(self.basepath, '__failed')
return ensure_directory_exists(path)
@property
def run_config(self):
if self._combined_config:
return self._combined_config.run_config
@property
def settings(self):
if self._combined_config:
return self._combined_config.settings
@property
def augmentations(self):
run_augs = set([])
@ -269,6 +297,7 @@ class RunOutput(Output):
write_pod(self.state.to_pod(), self.statefile)
def write_config(self, config):
self._combined_config = config
write_pod(config.to_pod(), self.configfile)
def read_config(self):
@ -301,19 +330,6 @@ class RunOutput(Output):
shutil.move(job_output.basepath, failed_path)
job_output.basepath = failed_path
def get_job_spec(self, spec_id):
for spec in self.job_specs:
if spec.id == spec_id:
return spec
return None
def list_workloads(self):
workloads = []
for job in self.jobs:
if job.label not in workloads:
workloads.append(job.label)
return workloads
class JobOutput(Output):
@ -331,12 +347,14 @@ class JobOutput(Output):
self.reload()
class Result(object):
class Result(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = Result()
instance.status = Status(pod['status'])
instance = super(Result, Result).from_pod(pod)
instance.status = Status.from_pod(pod['status'])
instance.metrics = [Metric.from_pod(m) for m in pod['metrics']]
instance.artifacts = [Artifact.from_pod(a) for a in pod['artifacts']]
instance.events = [Event.from_pod(e) for e in pod['events']]
@ -346,6 +364,7 @@ class Result(object):
def __init__(self):
# pylint: disable=no-member
super(Result, self).__init__()
self.status = Status.NEW
self.metrics = []
self.artifacts = []
@ -429,21 +448,27 @@ class Result(object):
self.metadata[key] = args[0]
def to_pod(self):
return dict(
status=str(self.status),
metrics=[m.to_pod() for m in self.metrics],
artifacts=[a.to_pod() for a in self.artifacts],
events=[e.to_pod() for e in self.events],
classifiers=copy(self.classifiers),
metadata=deepcopy(self.metadata),
)
pod = super(Result, self).to_pod()
pod['status'] = self.status.to_pod()
pod['metrics'] = [m.to_pod() for m in self.metrics]
pod['artifacts'] = [a.to_pod() for a in self.artifacts]
pod['events'] = [e.to_pod() for e in self.events]
pod['classifiers'] = copy(self.classifiers)
pod['metadata'] = deepcopy(self.metadata)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
pod['status'] = Status(pod['status']).to_pod()
return pod
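The `_pod_serialization_version` / `_pod_upgrade_vN` pairing introduced throughout this diff follows a single pattern: `from_pod()` first routes the pod through `_upgrade_pod()`, which replays the numbered upgrade steps until the pod reaches the class's current version. A simplified sketch of that chain (an assumed reading of the mechanism, not the actual `wa.utils.serializer.Podable` implementation) ::

class Podable(object):   # simplified stand-in for wa.utils.serializer.Podable
    _pod_serialization_version = 0

    @classmethod
    def _upgrade_pod(cls, pod):
        # Replay _pod_upgrade_v1 .. _pod_upgrade_vN until the pod is current.
        version = pod.get('_pod_version', 0)
        while version < cls._pod_serialization_version:
            version += 1
            pod = getattr(cls, '_pod_upgrade_v{}'.format(version))(pod)
        return pod

class Sample(Podable):
    _pod_serialization_version = 1

    @staticmethod
    def _pod_upgrade_v1(pod):
        pod['_pod_version'] = pod.get('_pod_version', 1)
        return pod

print(Sample._upgrade_pod({'value': 42}))   # {'value': 42, '_pod_version': 1}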
ARTIFACT_TYPES = ['log', 'meta', 'data', 'export', 'raw']
ArtifactType = enum(ARTIFACT_TYPES)
class Artifact(object):
class Artifact(Podable):
"""
This is an artifact generated during execution/post-processing of a
workload. Unlike metrics, this represents an actual artifact, such as a
@ -491,10 +516,16 @@ class Artifact(object):
"""
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
pod = Artifact._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
pod['kind'] = ArtifactType(pod['kind'])
return Artifact(**pod)
instance = Artifact(**pod)
instance._pod_version = pod_version # pylint: disable =protected-access
return instance
def __init__(self, name, path, kind, description=None, classifiers=None):
""""
@ -514,6 +545,7 @@ class Artifact(object):
used to identify sub-tests).
"""
super(Artifact, self).__init__()
self.name = name
self.path = path.replace('/', os.sep) if path is not None else path
try:
@ -525,10 +557,16 @@ class Artifact(object):
self.classifiers = classifiers or {}
def to_pod(self):
pod = copy(self.__dict__)
pod = super(Artifact, self).to_pod()
pod.update(self.__dict__)
pod['kind'] = str(self.kind)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __str__(self):
return self.path
@ -536,7 +574,7 @@ class Artifact(object):
return '{} ({}): {}'.format(self.name, self.kind, self.path)
class Metric(object):
class Metric(Podable):
"""
This is a single metric collected from executing a workload.
@ -553,15 +591,20 @@ class Metric(object):
to identify sub-tests).
"""
__slots__ = ['name', 'value', 'units', 'lower_is_better', 'classifiers']
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
return Metric(**pod)
pod = Metric._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
instance = Metric(**pod)
instance._pod_version = pod_version # pylint: disable =protected-access
return instance
def __init__(self, name, value, units=None, lower_is_better=False,
classifiers=None):
super(Metric, self).__init__()
self.name = name
self.value = numeric(value)
self.units = units
@ -569,13 +612,18 @@ class Metric(object):
self.classifiers = classifiers or {}
def to_pod(self):
return dict(
name=self.name,
value=self.value,
units=self.units,
lower_is_better=self.lower_is_better,
classifiers=self.classifiers,
)
pod = super(Metric, self).to_pod()
pod['name'] = self.name
pod['value'] = self.value
pod['units'] = self.units
pod['lower_is_better'] = self.lower_is_better
pod['classifiers'] = self.classifiers
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __str__(self):
result = '{}: {}'.format(self.name, self.value)
@ -587,23 +635,27 @@ class Metric(object):
def __repr__(self):
text = self.__str__()
if self.classifiers:
return '<{} {}>'.format(text, self.classifiers)
return '<{} {}>'.format(text, format_ordered_dict(self.classifiers))
else:
return '<{}>'.format(text)
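Assuming `format_ordered_dict` renders an `OrderedDict` with plain `{key=value}` syntax, the repr becomes much more compact than the default `OrderedDict([...])` form. A self-contained sketch of such a helper ::

from collections import OrderedDict

def format_ordered_dict(od):
    # Assumed behaviour: dict-like rendering instead of OrderedDict([...]).
    return '{{{}}}'.format(', '.join('{}={}'.format(k, v)
                                     for k, v in od.items()))

classifiers = OrderedDict([('core', 'big'), ('iteration', 2)])
print(format_ordered_dict(classifiers))   # {core=big, iteration=2}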
class Event(object):
class Event(Podable):
"""
An event that occurred during a run.
"""
__slots__ = ['timestamp', 'message']
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
pod = Event._upgrade_pod(pod)
pod_version = pod.pop('_pod_version')
instance = Event(pod['message'])
instance.timestamp = pod['timestamp']
instance._pod_version = pod_version # pylint: disable =protected-access
return instance
@property
@ -615,14 +667,20 @@ class Event(object):
return result
def __init__(self, message):
super(Event, self).__init__()
self.timestamp = datetime.utcnow()
self.message = message
self.message = str(message)
def to_pod(self):
return dict(
timestamp=self.timestamp,
message=self.message,
)
pod = super(Event, self).to_pod()
pod['timestamp'] = self.timestamp
pod['message'] = self.message
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __str__(self):
return '[{}] {}'.format(self.timestamp, self.message)
@ -689,3 +747,467 @@ def _save_raw_config(meta_dir, state):
basename = os.path.basename(source)
dest_path = os.path.join(raw_config_dir, 'cfg{}-{}'.format(i, basename))
shutil.copy(source, dest_path)
class DatabaseOutput(Output):
kind = None
@property
def resultfile(self):
if self.conn is None or self.oid is None:
return {}
pod = self._get_pod_version()
pod['metrics'] = self._get_metrics()
pod['status'] = self._get_status()
pod['classifiers'] = self._get_classifiers(self.oid, 'run')
pod['events'] = self._get_events()
pod['artifacts'] = self._get_artifacts()
return pod
@staticmethod
def _build_command(columns, tables, conditions=None, joins=None):
cmd = '''SELECT\n\t{}\nFROM\n\t{}'''.format(',\n\t'.join(columns), ',\n\t'.join(tables))
if joins:
for join in joins:
cmd += '''\nLEFT JOIN {} ON {}'''.format(join[0], join[1])
if conditions:
cmd += '''\nWHERE\n\t{}'''.format('\nAND\n\t'.join(conditions))
return cmd + ';'
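For reference, a call such as the following (with a hypothetical oid) produces the SQL shown in the comment ::

cmd = DatabaseOutput._build_command(
    columns=['metrics.name', 'metrics.value'],
    tables=['metrics'],
    conditions=["metrics.run_oid = '42'"],   # hypothetical oid
    joins=[('classifiers', 'classifiers.metric_oid = metrics.oid')])

# SELECT
#     metrics.name,
#     metrics.value
# FROM
#     metrics
# LEFT JOIN classifiers ON classifiers.metric_oid = metrics.oid
# WHERE
#     metrics.run_oid = '42';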
def __init__(self, conn, oid=None, reload=True): # pylint: disable=super-init-not-called
self.conn = conn
self.oid = oid
self.result = None
if reload:
self.reload()
def __repr__(self):
return '<{} {}>'.format(self.__class__.__name__, self.oid)
def __str__(self):
return self.oid
def reload(self):
try:
self.result = Result.from_pod(self.resultfile)
except Exception as e: # pylint: disable=broad-except
self.result = Result()
self.result.status = Status.UNKNOWN
self.add_event(str(e))
def get_artifact_path(self, name):
artifact = self.get_artifact(name)
artifact = StringIO(self.conn.lobject(int(artifact.path)).read())
self.conn.commit()
return artifact
# pylint: disable=too-many-locals
def _read_db(self, columns, tables, conditions=None, join=None, as_dict=True):
# Strip the table name from each column when column names are used as
# keys, or allow a column to be given as a (db_column_name, alias)
# tuple so that it is aliased when the data is retrieved.
db_columns = []
aliased_columns = []
for column in columns:
if isinstance(column, tuple):
db_columns.append(column[0])
aliased_columns.append(column[1])
else:
db_columns.append(column)
aliased_columns.append(column.rsplit('.', 1)[-1])
cmd = self._build_command(db_columns, tables, conditions, join)
logger.debug(cmd)
with self.conn.cursor() as cursor:
cursor.execute(cmd)
results = cursor.fetchall()
self.conn.commit()
if not as_dict:
return results
# Format the output dict using column names as keys
output = []
for result in results:
entry = {}
for k, v in zip(aliased_columns, result):
entry[k] = v
output.append(entry)
return output
def _get_pod_version(self):
columns = ['_pod_version', '_pod_serialization_version']
tables = ['{}s'.format(self.kind)]
conditions = ['{}s.oid = \'{}\''.format(self.kind, self.oid)]
results = self._read_db(columns, tables, conditions)
if results:
return results[0]
else:
return None
def _populate_classifiers(self, pod, kind):
for entry in pod:
oid = entry.pop('oid')
entry['classifiers'] = self._get_classifiers(oid, kind)
return pod
def _get_classifiers(self, oid, kind):
columns = ['classifiers.key', 'classifiers.value']
tables = ['classifiers']
conditions = ['{}_oid = \'{}\''.format(kind, oid)]
results = self._read_db(columns, tables, conditions, as_dict=False)
classifiers = {}
for (k, v) in results:
classifiers[k] = v
return classifiers
def _get_metrics(self):
columns = ['metrics.name', 'metrics.value', 'metrics.units',
'metrics.lower_is_better',
'metrics.oid', 'metrics._pod_version',
'metrics._pod_serialization_version']
tables = ['metrics']
joins = [('classifiers', 'classifiers.metric_oid = metrics.oid')]
conditions = ['metrics.{}_oid = \'{}\''.format(self.kind, self.oid)]
pod = self._read_db(columns, tables, conditions, joins)
return self._populate_classifiers(pod, 'metric')
def _get_status(self):
columns = ['{}s.status'.format(self.kind)]
tables = ['{}s'.format(self.kind)]
conditions = ['{}s.oid = \'{}\''.format(self.kind, self.oid)]
results = self._read_db(columns, tables, conditions, as_dict=False)
if results:
return results[0][0]
else:
return None
def _get_artifacts(self):
columns = ['artifacts.name', 'artifacts.description', 'artifacts.kind',
('largeobjects.lo_oid', 'path'), 'artifacts.oid',
'artifacts._pod_version', 'artifacts._pod_serialization_version']
tables = ['largeobjects', 'artifacts']
joins = [('classifiers', 'classifiers.artifact_oid = artifacts.oid')]
conditions = ['artifacts.{}_oid = \'{}\''.format(self.kind, self.oid),
'artifacts.large_object_uuid = largeobjects.oid',
'artifacts.job_oid IS NULL']
pod = self._read_db(columns, tables, conditions, joins)
for artifact in pod:
artifact['path'] = str(artifact['path'])
return self._populate_classifiers(pod, 'artifact')
def _get_events(self):
columns = ['events.message', 'events.timestamp']
tables = ['events']
conditions = ['events.{}_oid = \'{}\''.format(self.kind, self.oid)]
return self._read_db(columns, tables, conditions)
def kernel_config_from_db(raw):
kernel_config = {}
for k, v in zip(raw[0], raw[1]):
kernel_config[k] = v
return kernel_config
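`kernel_config_from_db` implies the kernel config is stored as two parallel arrays of names and values; a hypothetical example ::

raw = [['CONFIG_SMP', 'CONFIG_PREEMPT'], ['y', 'n']]   # hypothetical stored value
kernel_config_from_db(raw)   # {'CONFIG_SMP': 'y', 'CONFIG_PREEMPT': 'n'}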
class RunDatabaseOutput(DatabaseOutput, RunOutputCommon):
kind = 'run'
@property
def basepath(self):
return 'db:({})-{}@{}:{}'.format(self.dbname, self.user,
self.host, self.port)
@property
def augmentations(self):
columns = ['augmentations.name']
tables = ['augmentations']
conditions = ['augmentations.run_oid = \'{}\''.format(self.oid)]
results = self._read_db(columns, tables, conditions, as_dict=False)
return [a for augs in results for a in augs]
@property
def _db_infofile(self):
columns = ['start_time', 'project', ('run_uuid', 'uuid'), 'end_time',
'run_name', 'duration', '_pod_version', '_pod_serialization_version']
tables = ['runs']
conditions = ['runs.run_uuid = \'{}\''.format(self.run_uuid)]
pod = self._read_db(columns, tables, conditions)
if not pod:
return {}
return pod[0]
@property
def _db_targetfile(self):
columns = ['os', 'is_rooted', 'target', 'abi', 'cpus', 'os_version',
'hostid', 'hostname', 'kernel_version', 'kernel_release',
'kernel_sha1', 'kernel_config', 'sched_features',
'_pod_version', '_pod_serialization_version']
tables = ['targets']
conditions = ['targets.run_oid = \'{}\''.format(self.oid)]
pod = self._read_db(columns, tables, conditions)
if not pod:
return {}
pod = pod[0]
try:
pod['cpus'] = [json.loads(cpu) for cpu in pod.pop('cpus')]
except SerializerSyntaxError:
pod['cpus'] = []
logger.debug('Failed to deserialize target cpu information')
pod['kernel_config'] = kernel_config_from_db(pod['kernel_config'])
return pod
@property
def _db_statefile(self):
# Read overall run information
columns = ['runs.state']
tables = ['runs']
conditions = ['runs.run_uuid = \'{}\''.format(self.run_uuid)]
pod = self._read_db(columns, tables, conditions)
pod = pod[0].get('state')
if not pod:
return {}
# Read job information
columns = ['jobs.job_id', 'jobs.oid']
tables = ['jobs']
conditions = ['jobs.run_oid = \'{}\''.format(self.oid)]
job_oids = self._read_db(columns, tables, conditions)
# Match job oid with jobs from state file
for job in pod.get('jobs', []):
for job_oid in job_oids:
if job['id'] == job_oid['job_id']:
job['oid'] = job_oid['oid']
break
return pod
@property
def _db_jobsfile(self):
workload_params = self._get_parameters('workload')
runtime_params = self._get_parameters('runtime')
columns = [('jobs.job_id', 'id'), 'jobs.label', 'jobs.workload_name',
'jobs.oid', 'jobs._pod_version', 'jobs._pod_serialization_version']
tables = ['jobs']
conditions = ['jobs.run_oid = \'{}\''.format(self.oid)]
jobs = self._read_db(columns, tables, conditions)
for job in jobs:
job['workload_parameters'] = workload_params.pop(job['oid'], {})
job['runtime_parameters'] = runtime_params.pop(job['oid'], {})
job.pop('oid')
return jobs
@property
def _db_run_config(self):
pod = defaultdict(dict)
parameter_types = ['augmentation', 'resource_getter']
for parameter_type in parameter_types:
columns = ['parameters.name', 'parameters.value',
'parameters.value_type',
('{}s.name'.format(parameter_type), '{}'.format(parameter_type))]
tables = ['parameters', '{}s'.format(parameter_type)]
conditions = ['parameters.run_oid = \'{}\''.format(self.oid),
'parameters.type = \'{}\''.format(parameter_type),
'parameters.{0}_oid = {0}s.oid'.format(parameter_type)]
configs = self._read_db(columns, tables, conditions)
for config in configs:
entry = {config['name']: json.loads(config['value'])}
pod['{}s'.format(parameter_type)][config.pop(parameter_type)] = entry
# run config
columns = ['runs.max_retries', 'runs.allow_phone_home',
'runs.bail_on_init_failure', 'runs.retry_on_status']
tables = ['runs']
conditions = ['runs.oid = \'{}\''.format(self.oid)]
config = self._read_db(columns, tables, conditions)
if not config:
return {}
config = config[0]
# Convert back into a string representation of an enum list
config['retry_on_status'] = config['retry_on_status'][1:-1].split(',')
pod.update(config)
return pod
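The `retry_on_status` conversion above undoes Postgres's array-literal text form; with a hypothetical stored value ::

stored = '{FAILED,PARTIAL}'    # hypothetical enum-array string from the db
stored[1:-1].split(',')        # ['FAILED', 'PARTIAL']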
def __init__(self,
password=None,
dbname='wa',
host='localhost',
port='5432',
user='postgres',
run_uuid=None,
list_runs=False):
if psycopg2 is None:
msg = 'Please install the psycopg2 module in order to connect to Postgres databases'
raise HostError(msg)
self.dbname = dbname
self.host = host
self.port = port
self.user = user
self.password = password
self.run_uuid = run_uuid
self.conn = None
self.info = None
self.state = None
self.result = None
self.target_info = None
self._combined_config = None
self.jobs = []
self.job_specs = []
self.connect()
super(RunDatabaseOutput, self).__init__(conn=self.conn, reload=False)
local_schema_version, db_schema_version = get_schema_versions(self.conn)
if local_schema_version != db_schema_version:
self.disconnect()
msg = 'The current database schema is v{}, however the local ' \
'schema version is v{}. Please update your database ' \
'with the create command.'
raise HostError(msg.format(db_schema_version, local_schema_version))
if list_runs:
print('Available runs are:')
self._list_runs()
self.disconnect()
return
if not self.run_uuid:
print('Please specify a run_uuid')
self._list_runs()
self.disconnect()
return
if not self.oid:
self.oid = self._get_oid()
self.reload()
def read_job_specs(self):
job_specs = []
for job in self._db_jobsfile:
job_specs.append(JobSpec.from_pod(job))
return job_specs
def connect(self):
if self.conn and not self.conn.closed:
return
try:
self.conn = psycopg2.connect(dbname=self.dbname,
user=self.user,
host=self.host,
password=self.password,
port=self.port)
except Psycopg2Error as e:
raise HostError('Unable to connect to the database: "{}"'.format(e.args[0]))
def disconnect(self):
self.conn.commit()
self.conn.close()
def reload(self):
super(RunDatabaseOutput, self).reload()
info_pod = self._db_infofile
state_pod = self._db_statefile
if not info_pod or not state_pod:
msg = '"{}" does not appear to be a valid WA Database Output.'
raise ValueError(msg.format(self.oid))
self.info = RunInfo.from_pod(info_pod)
self.state = RunState.from_pod(state_pod)
self._combined_config = CombinedConfig.from_pod({'run_config': self._db_run_config})
self.target_info = TargetInfo.from_pod(self._db_targetfile)
self.job_specs = self.read_job_specs()
for job_state in self._db_statefile['jobs']:
job = JobDatabaseOutput(self.conn, job_state.get('oid'), job_state['id'],
job_state['label'], job_state['iteration'],
job_state['retries'])
job.status = job_state['status']
job.spec = self.get_job_spec(job.id)
if job.spec is None:
logger.warning('Could not find spec for job {}'.format(job.id))
self.jobs.append(job)
def _get_oid(self):
columns = ['{}s.oid'.format(self.kind)]
tables = ['{}s'.format(self.kind)]
conditions = ['runs.run_uuid = \'{}\''.format(self.run_uuid)]
oid = self._read_db(columns, tables, conditions, as_dict=False)
if not oid:
raise ConfigError('No matching run entries found for run_uuid {}'.format(self.run_uuid))
if len(oid) > 1:
raise ConfigError('Multiple entries found for run_uuid: {}'.format(self.run_uuid))
return oid[0][0]
def _get_parameters(self, param_type):
columns = ['parameters.job_oid', 'parameters.name', 'parameters.value']
tables = ['parameters']
conditions = ['parameters.type = \'{}\''.format(param_type),
'parameters.run_oid = \'{}\''.format(self.oid)]
params = self._read_db(columns, tables, conditions, as_dict=False)
parm_dict = defaultdict(dict)
for (job_oid, k, v) in params:
try:
parm_dict[job_oid][k] = json.loads(v)
except SerializerSyntaxError:
logger.debug('Failed to deserialize job_oid:{}-"{}":"{}"'.format(job_oid, k, v))
return parm_dict
def _list_runs(self):
columns = ['runs.run_uuid', 'runs.run_name', 'runs.project',
'runs.project_stage', 'runs.status', 'runs.start_time', 'runs.end_time']
tables = ['runs']
pod = self._read_db(columns, tables)
if pod:
headers = ['Run Name', 'Project', 'Project Stage', 'Start Time', 'End Time',
'run_uuid']
run_list = []
for entry in pod:
# Format times to display better
start_time = entry['start_time']
end_time = entry['end_time']
if start_time:
start_time = start_time.strftime("%Y-%m-%d %H:%M:%S")
if end_time:
end_time = end_time.strftime("%Y-%m-%d %H:%M:%S")
run_list.append([
entry['run_name'],
entry['project'],
entry['project_stage'],
start_time,
end_time,
entry['run_uuid']])
print(format_simple_table(run_list, headers))
else:
print('No Runs Found')
class JobDatabaseOutput(DatabaseOutput):
kind = 'job'
def __init__(self, conn, oid, job_id, label, iteration, retry):
super(JobDatabaseOutput, self).__init__(conn, oid=oid)
self.id = job_id
self.label = label
self.iteration = iteration
self.retry = retry
self.result = None
self.spec = None
self.reload()
def __repr__(self):
return '<{} {}-{}-{}>'.format(self.__class__.__name__,
self.id, self.label, self.iteration)
def __str__(self):
return '{}-{}-{}'.format(self.id, self.label, self.iteration)
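Taken together, reading a run back out of the database might look like the following sketch (all connection details and the uuid are hypothetical) ::

ro = RunDatabaseOutput(password='secret',
                       dbname='wa',
                       host='localhost',
                       port='5432',
                       user='postgres',
                       run_uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee')
for job in ro.jobs:
    print(job, job.status)
for metric in ro.result.metrics:
    print(metric)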

@ -40,10 +40,10 @@ class OutputProcessor(Plugin):
msg = 'Instrument "{}" is required by {}, but is not installed.'
raise ConfigError(msg.format(instrument, self.name))
def initialize(self):
def initialize(self, context):
pass
def finalize(self):
def finalize(self, context):
pass
@ -60,6 +60,7 @@ class ProcessorManager(object):
self.logger.debug('Installing {}'.format(processor.name))
processor.logger.context = context
self.processors.append(processor)
context.add_augmentation(processor)
def disable_all(self):
for output_processor in self.processors:
@ -103,13 +104,13 @@ class ProcessorManager(object):
for proc in self.processors:
proc.validate()
def initialize(self):
def initialize(self, context):
for proc in self.processors:
proc.initialize()
proc.initialize(context)
def finalize(self):
def finalize(self, context):
for proc in self.processors:
proc.finalize()
proc.finalize(context)
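Output processors overriding these hooks need the updated signatures; a minimal sketch (hypothetical processor, not part of wa) ::

from wa import OutputProcessor

class SketchProcessor(OutputProcessor):   # hypothetical
    name = 'sketch'

    def initialize(self, context):        # now receives the context
        pass

    def finalize(self, context):
        pass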
def process_job_output(self, context):
self.do_for_each_proc('process_job_output', 'Processing using "{}"',

@ -22,27 +22,34 @@ from copy import copy
from datetime import datetime, timedelta
from wa.framework.configuration.core import Status
from wa.utils.serializer import Podable
class RunInfo(object):
class RunInfo(Podable):
"""
Information about the current run, such as its unique ID, run
time, etc.
"""
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
pod = RunInfo._upgrade_pod(pod)
uid = pod.pop('uuid')
_pod_version = pod.pop('_pod_version')
duration = pod.pop('duration')
if uid is not None:
uid = uuid.UUID(uid)
instance = RunInfo(**pod)
instance._pod_version = _pod_version # pylint: disable=protected-access
instance.uuid = uid
instance.duration = duration if duration is None else timedelta(seconds=duration)
return instance
def __init__(self, run_name=None, project=None, project_stage=None,
start_time=None, end_time=None, duration=None):
super(RunInfo, self).__init__()
self.uuid = uuid.uuid4()
self.run_name = run_name
self.project = project
@ -52,7 +59,8 @@ class RunInfo(object):
self.duration = duration
def to_pod(self):
d = copy(self.__dict__)
d = super(RunInfo, self).to_pod()
d.update(copy(self.__dict__))
d['uuid'] = str(self.uuid)
if self.duration is None:
d['duration'] = self.duration
@ -60,16 +68,23 @@ class RunInfo(object):
d['duration'] = self.duration.total_seconds()
return d
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
class RunState(object):
class RunState(Podable):
"""
Represents the state of a WA run.
"""
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = RunState()
instance.status = Status(pod['status'])
instance = super(RunState, RunState).from_pod(pod)
instance.status = Status.from_pod(pod['status'])
instance.timestamp = pod['timestamp']
jss = [JobState.from_pod(j) for j in pod['jobs']]
instance.jobs = OrderedDict(((js.id, js.iteration), js) for js in jss)
@ -81,6 +96,7 @@ class RunState(object):
if js.status > Status.RUNNING)
def __init__(self):
super(RunState, self).__init__()
self.jobs = OrderedDict()
self.status = Status.NEW
self.timestamp = datetime.utcnow()
@ -101,18 +117,28 @@ class RunState(object):
return counter
def to_pod(self):
return OrderedDict(
status=str(self.status),
timestamp=self.timestamp,
jobs=[j.to_pod() for j in self.jobs.values()],
)
pod = super(RunState, self).to_pod()
pod['status'] = self.status.to_pod()
pod['timestamp'] = self.timestamp
pod['jobs'] = [j.to_pod() for j in self.jobs.values()]
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
pod['status'] = Status(pod['status']).to_pod()
return pod
class JobState(object):
class JobState(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = JobState(pod['id'], pod['label'], pod['iteration'], Status(pod['status']))
pod = JobState._upgrade_pod(pod)
instance = JobState(pod['id'], pod['label'], pod['iteration'],
Status.from_pod(pod['status']))
instance.retries = pod['retries']
instance.timestamp = pod['timestamp']
return instance
@ -123,6 +149,7 @@ class JobState(object):
def __init__(self, id, label, iteration, status):
# pylint: disable=redefined-builtin
super(JobState, self).__init__()
self.id = id
self.label = label
self.iteration = iteration
@ -131,11 +158,17 @@ class JobState(object):
self.timestamp = datetime.utcnow()
def to_pod(self):
return OrderedDict(
id=self.id,
label=self.label,
iteration=self.iteration,
status=str(self.status),
retries=0,
timestamp=self.timestamp,
)
pod = super(JobState, self).to_pod()
pod['id'] = self.id
pod['label'] = self.label
pod['iteration'] = self.iteration
pod['status'] = self.status.to_pod()
pod['retries'] = self.retries
pod['timestamp'] = self.timestamp
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
pod['status'] = Status(pod['status']).to_pod()
return pod

@ -166,7 +166,7 @@ COMMON_TARGET_PARAMS = [
*not* need to be writable by unprivileged users or rooted devices
(WA will install with elevated privileges as necessary).
'''),
Parameter('modules', kind=list_of_strings,
Parameter('modules', kind=list,
description='''
A list of additional modules to be installed for the target.
@ -222,7 +222,7 @@ COMMON_PLATFORM_PARAMS = [
Hardware model of the platform. If not specified, an attempt will
be made to read it from target.
'''),
Parameter('modules', kind=list_of_strings,
Parameter('modules', kind=list,
description='''
An additional list of modules to be loaded into the target.
'''),
@ -345,9 +345,9 @@ CONNECTION_PARAMS = {
"""),
Parameter(
'sudo_cmd', kind=str,
default="sudo -- sh -c '{}'",
default="sudo -- sh -c {}",
description="""
Sudo command to use. Must have ``"{}"`` specified
Sudo command to use. Must have ``{}`` specified
somewhere in the string to indicate where the command
to be run via sudo should go.
"""),

@ -14,12 +14,16 @@
#
# pylint: disable=protected-access
from copy import copy
import os
from devlib import AndroidTarget, TargetError
from devlib.target import KernelConfig, KernelVersion, Cpuinfo
from devlib.utils.android import AndroidProperties
from wa.framework.configuration.core import settings
from wa.framework.exception import ConfigError
from wa.utils.serializer import read_pod, write_pod, Podable
def cpuinfo_from_pod(pod):
cpuinfo = Cpuinfo('')
@ -60,20 +64,32 @@ def kernel_config_from_pod(pod):
return config
class CpufreqInfo(object):
class CpufreqInfo(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
pod = CpufreqInfo._upgrade_pod(pod)
return CpufreqInfo(**pod)
def __init__(self, **kwargs):
super(CpufreqInfo, self).__init__()
self.available_frequencies = kwargs.pop('available_frequencies', [])
self.available_governors = kwargs.pop('available_governors', [])
self.related_cpus = kwargs.pop('related_cpus', [])
self.driver = kwargs.pop('driver', None)
self._pod_version = kwargs.pop('_pod_version', self._pod_serialization_version)
def to_pod(self):
return copy(self.__dict__)
pod = super(CpufreqInfo, self).to_pod()
pod.update(self.__dict__)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __repr__(self):
return 'Cpufreq({} {})'.format(self.driver, self.related_cpus)
@ -81,20 +97,32 @@ class CpufreqInfo(object):
__str__ = __repr__
class IdleStateInfo(object):
class IdleStateInfo(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
pod = IdleStateInfo._upgrade_pod(pod)
return IdleStateInfo(**pod)
def __init__(self, **kwargs):
super(IdleStateInfo, self).__init__()
self.name = kwargs.pop('name', None)
self.desc = kwargs.pop('desc', None)
self.power = kwargs.pop('power', None)
self.latency = kwargs.pop('latency', None)
self._pod_version = kwargs.pop('_pod_version', self._pod_serialization_version)
def to_pod(self):
return copy(self.__dict__)
pod = super(IdleStateInfo, self).to_pod()
pod.update(self.__dict__)
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __repr__(self):
return 'IdleState({}/{})'.format(self.name, self.desc)
@ -102,11 +130,15 @@ class IdleStateInfo(object):
__str__ = __repr__
class CpuidleInfo(object):
class CpuidleInfo(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
pod = CpuidleInfo._upgrade_pod(pod)
instance = CpuidleInfo()
instance._pod_version = pod['_pod_version']
instance.governor = pod['governor']
instance.driver = pod['driver']
instance.states = [IdleStateInfo.from_pod(s) for s in pod['states']]
@ -117,17 +149,23 @@ class CpuidleInfo(object):
return len(self.states)
def __init__(self):
super(CpuidleInfo, self).__init__()
self.governor = None
self.driver = None
self.states = []
def to_pod(self):
pod = {}
pod = super(CpuidleInfo, self).to_pod()
pod['governor'] = self.governor
pod['driver'] = self.driver
pod['states'] = [s.to_pod() for s in self.states]
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __repr__(self):
return 'Cpuidle({}/{} {} states)'.format(
self.governor, self.driver, self.num_states)
@ -135,11 +173,13 @@ class CpuidleInfo(object):
__str__ = __repr__
class CpuInfo(object):
class CpuInfo(Podable):
_pod_serialization_version = 1
@staticmethod
def from_pod(pod):
instance = CpuInfo()
instance = super(CpuInfo, CpuInfo).from_pod(pod)
instance.id = pod['id']
instance.name = pod['name']
instance.architecture = pod['architecture']
@ -149,6 +189,7 @@ class CpuInfo(object):
return instance
def __init__(self):
super(CpuInfo, self).__init__()
self.id = None
self.name = None
self.architecture = None
@ -157,7 +198,7 @@ class CpuInfo(object):
self.cpuidle = CpuidleInfo()
def to_pod(self):
pod = {}
pod = super(CpuInfo, self).to_pod()
pod['id'] = self.id
pod['name'] = self.name
pod['architecture'] = self.architecture
@ -166,6 +207,11 @@ class CpuInfo(object):
pod['cpuidle'] = self.cpuidle.to_pod()
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
return pod
def __repr__(self):
return 'Cpu({} {})'.format(self.id, self.name)
@ -177,6 +223,7 @@ def get_target_info(target):
info.target = target.__class__.__name__
info.os = target.os
info.os_version = target.os_version
info.system_id = target.system_id
info.abi = target.abi
info.is_rooted = target.is_rooted
info.kernel_version = target.kernel_version
@ -217,6 +264,8 @@ def get_target_info(target):
info.cpus.append(cpu)
info.page_size_kb = target.page_size_kb
if isinstance(target, AndroidTarget):
info.screen_resolution = target.screen_resolution
info.prop = target.getprop()
@ -225,16 +274,56 @@ def get_target_info(target):
return info
class TargetInfo(object):
def read_target_info_cache():
if not os.path.exists(settings.cache_directory):
os.makedirs(settings.cache_directory)
if not os.path.isfile(settings.target_info_cache_file):
return {}
return read_pod(settings.target_info_cache_file)
def write_target_info_cache(cache):
if not os.path.exists(settings.cache_directory):
os.makedirs(settings.cache_directory)
write_pod(cache, settings.target_info_cache_file)
def get_target_info_from_cache(system_id):
cache = read_target_info_cache()
pod = cache.get(system_id, None)
if not pod:
return None
_pod_version = pod.get('_pod_version', 0)
if _pod_version != TargetInfo._pod_serialization_version:
msg = 'Target info version mismatch. Expected {}, but found {}.\nTry deleting {}'
raise ConfigError(msg.format(TargetInfo._pod_serialization_version, _pod_version,
settings.target_info_cache_file))
return TargetInfo.from_pod(pod)
def cache_target_info(target_info, overwrite=False):
cache = read_target_info_cache()
if target_info.system_id in cache and not overwrite:
raise ValueError('TargetInfo for {} is already in cache.'.format(target_info.system_id))
cache[target_info.system_id] = target_info.to_pod()
write_target_info_cache(cache)
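These helpers are consumed further down in `TargetManager.get_target_info`; the intended flow, in sketch form (``target`` is assumed to be a connected devlib target) ::

info = get_target_info_from_cache(target.system_id)
if info is None:
    info = get_target_info(target)
    cache_target_info(info)   # raises ValueError if the id is already cached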
class TargetInfo(Podable):
_pod_serialization_version = 2
@staticmethod
def from_pod(pod):
instance = TargetInfo()
instance = super(TargetInfo, TargetInfo).from_pod(pod)
instance.target = pod['target']
instance.abi = pod['abi']
instance.cpus = [CpuInfo.from_pod(c) for c in pod['cpus']]
instance.os = pod['os']
instance.os_version = pod['os_version']
instance.system_id = pod['system_id']
instance.hostid = pod['hostid']
instance.hostname = pod['hostname']
instance.abi = pod['abi']
@ -242,6 +331,7 @@ class TargetInfo(object):
instance.kernel_version = kernel_version_from_pod(pod)
instance.kernel_config = kernel_config_from_pod(pod)
instance.sched_features = pod['sched_features']
instance.page_size_kb = pod.get('page_size_kb')
if instance.os == 'android':
instance.screen_resolution = pod['screen_resolution']
instance.prop = AndroidProperties('')
@ -251,10 +341,12 @@ class TargetInfo(object):
return instance
def __init__(self):
super(TargetInfo, self).__init__()
self.target = None
self.cpus = []
self.os = None
self.os_version = None
self.system_id = None
self.hostid = None
self.hostname = None
self.abi = None
@ -265,14 +357,16 @@ class TargetInfo(object):
self.screen_resolution = None
self.prop = None
self.android_id = None
self.page_size_kb = None
def to_pod(self):
pod = {}
pod = super(TargetInfo, self).to_pod()
pod['target'] = self.target
pod['abi'] = self.abi
pod['cpus'] = [c.to_pod() for c in self.cpus]
pod['os'] = self.os
pod['os_version'] = self.os_version
pod['system_id'] = self.system_id
pod['hostid'] = self.hostid
pod['hostname'] = self.hostname
pod['abi'] = self.abi
@ -281,9 +375,29 @@ class TargetInfo(object):
pod['kernel_version'] = self.kernel_version.version
pod['kernel_config'] = dict(self.kernel_config.iteritems())
pod['sched_features'] = self.sched_features
pod['page_size_kb'] = self.page_size_kb
if self.os == 'android':
pod['screen_resolution'] = self.screen_resolution
pod['prop'] = self.prop._properties
pod['android_id'] = self.android_id
return pod
@staticmethod
def _pod_upgrade_v1(pod):
pod['_pod_version'] = pod.get('_pod_version', 1)
pod['cpus'] = pod.get('cpus', [])
pod['system_id'] = pod.get('system_id')
pod['hostid'] = pod.get('hostid')
pod['hostname'] = pod.get('hostname')
pod['sched_features'] = pod.get('sched_features')
pod['screen_resolution'] = pod.get('screen_resolution', (0, 0))
pod['prop'] = pod.get('prop')
pod['android_id'] = pod.get('android_id')
return pod
@staticmethod
def _pod_upgrade_v2(pod):
pod['page_size_kb'] = pod.get('page_size_kb')
pod['_pod_version'] = pod.get('format_version', 0)
return pod

@ -15,18 +15,18 @@
import logging
from devlib import Gem5SimulationPlatform
from devlib.utils.misc import memoized
from wa.framework import signal
from wa.framework.exception import ExecutionError, TargetError, TargetNotRespondingError
from wa.framework.plugin import Parameter
from wa.framework.target.descriptor import (get_target_description,
instantiate_target,
instantiate_assistant)
from wa.framework.target.info import get_target_info
from wa.framework.target.info import get_target_info, get_target_info_from_cache, cache_target_info
from wa.framework.target.runtime_parameter_manager import RuntimeParameterManager
from devlib import Gem5SimulationPlatform
from devlib.utils.misc import memoized
class TargetManager(object):
"""
@ -73,6 +73,8 @@ class TargetManager(object):
self.rpm = RuntimeParameterManager(self.target)
def finalize(self):
if not self.target:
return
if self.disconnect or isinstance(self.target.platform, Gem5SimulationPlatform):
self.logger.info('Disconnecting from the device')
with signal.wrap('TARGET_DISCONNECT'):
@ -89,7 +91,11 @@ class TargetManager(object):
@memoized
def get_target_info(self):
return get_target_info(self.target)
info = get_target_info_from_cache(self.target.system_id)
if info is None:
info = get_target_info(self.target)
cache_target_info(info)
return info
def reboot(self, context, hard=False):
with signal.wrap('REBOOT', self, context):

@ -18,14 +18,15 @@ import time
from collections import defaultdict, OrderedDict
from copy import copy
from devlib.exception import TargetError
from devlib.utils.misc import unique
from devlib.utils.types import integer
from wa.framework.exception import ConfigError
from wa.framework.plugin import Plugin, Parameter
from wa.utils.misc import resolve_cpus, resolve_unique_domain_cpus
from wa.utils.types import caseless_string, enum
from devlib.exception import TargetError
from devlib.utils.misc import unique
from devlib.utils.types import integer
logger = logging.getLogger('RuntimeConfig')
@ -369,13 +370,14 @@ class CpufreqRuntimeConfig(RuntimeConfig):
The governor to be set for all cores
""")
param_name = 'governor_tunables'
param_name = 'gov_tunables'
self._runtime_params[param_name] = \
RuntimeParameter(
param_name, kind=dict,
merge=True,
setter=self.set_governor_tunables,
setter_params={'core': None},
aliases=['governor_tunables'],
description="""
The governor tunables to be set for all cores
""")

@ -97,6 +97,6 @@ class RuntimeParameterManager(object):
def get_cfg_point(self, name):
name = caseless_string(name)
for k, v in self.runtime_params.items():
if name == k:
if name == k or name in v.cfg_point.aliases:
return v.cfg_point
raise ConfigError('Unknown runtime parameter: {}'.format(name))
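The alias-aware lookup above is what lets the renamed `gov_tunables` parameter keep answering to `governor_tunables`; a self-contained sketch of the matching logic, using stand-in classes rather than wa's ::

class CfgPoint(object):                      # minimal stand-in
    def __init__(self, name, aliases=()):
        self.name = name
        self.aliases = list(aliases)

class Param(object):                         # minimal stand-in
    def __init__(self, cfg_point):
        self.cfg_point = cfg_point

runtime_params = {
    'gov_tunables': Param(CfgPoint('gov_tunables',
                                   aliases=['governor_tunables'])),
}

def get_cfg_point(name):
    for k, v in runtime_params.items():
        if name == k or name in v.cfg_point.aliases:
            return v.cfg_point
    raise KeyError(name)

# Both the new name and the old alias resolve to the same point:
assert get_cfg_point('gov_tunables') is get_cfg_point('governor_tunables')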

@ -547,6 +547,21 @@ public class BaseUiAutomation {
}
}
// If an app is not designed for running on the latest version of Android
// (currently Q), dismiss the warning popup if present.
public void dismissAndroidVersionPopup() throws Exception {
UiObject warningText =
mDevice.findObject(new UiSelector().textContains(
"This app was built for an older version of Android"));
UiObject acceptButton =
mDevice.findObject(new UiSelector().resourceId("android:id/button1")
.className("android.widget.Button"));
if (warningText.exists() && acceptButton.exists()) {
acceptButton.click();
}
}
// Override getParams function to decode a url encoded parameter bundle before
// passing it to workloads.
public Bundle getParams() {

Binary file not shown.

@ -21,7 +21,7 @@ from subprocess import Popen, PIPE
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision'])
version = VersionTuple(3, 0, 0)
version = VersionTuple(3, 1, 1)
def get_wa_version():
@ -46,7 +46,7 @@ def get_commit():
p.wait()
if p.returncode:
return None
if sys.version_info[0] == 3:
return std[:8].decode(sys.stdout.encoding)
if sys.version_info[0] == 3 and isinstance(std, bytes):
return std[:8].decode(sys.stdout.encoding or 'utf-8')
else:
return std[:8]

@ -16,8 +16,9 @@ import logging
import os
import time
from wa import Parameter
from wa.framework.plugin import TargetedPlugin
from devlib.utils.android import ApkInfo
from wa.framework.plugin import TargetedPlugin, Parameter
from wa.framework.resource import (ApkFile, ReventFile,
File, loose_version_matching)
from wa.framework.exception import WorkloadError, ConfigError
@ -25,8 +26,6 @@ from wa.utils.types import ParameterDict
from wa.utils.revent import ReventRecorder
from wa.utils.exec_control import once_per_instance
from devlib.utils.android import ApkInfo
class Workload(TargetedPlugin):
"""
@ -78,7 +77,7 @@ class Workload(TargetedPlugin):
raise WorkloadError(msg.format(self.name, ' '.join(self.supported_platforms),
self.target.os))
def init_resources(self, resolver):
def init_resources(self, context):
"""
This method may be used to perform early resource discovery and
initialization. This is invoked during the initial loading stage and
@ -88,7 +87,7 @@ class Workload(TargetedPlugin):
"""
for asset in self.deployable_assets:
self.asset_files.append(resolver.get(File(self, asset)))
self.asset_files.append(context.get(File(self, asset)))
@once_per_instance
def initialize(self, context):
@ -176,6 +175,7 @@ class ApkWorkload(Workload):
loading_time = 10
package_names = []
view = None
clear_data_on_reset = True
# Set this to True to mark that this workload requires the target apk to be run
# for initialisation purposes before the main run is performed.
@ -258,7 +258,8 @@ class ApkWorkload(Workload):
install_timeout=self.install_timeout,
uninstall=self.uninstall,
exact_abi=self.exact_abi,
prefer_host_package=self.prefer_host_package)
prefer_host_package=self.prefer_host_package,
clear_data_on_reset=self.clear_data_on_reset)
@once_per_instance
def initialize(self, context):
@ -298,9 +299,9 @@ class ApkUIWorkload(ApkWorkload):
super(ApkUIWorkload, self).__init__(target, **kwargs)
self.gui = None
def init_resources(self, resolver):
super(ApkUIWorkload, self).init_resources(resolver)
self.gui.init_resources(resolver)
def init_resources(self, context):
super(ApkUIWorkload, self).init_resources(context)
self.gui.init_resources(context)
@once_per_instance
def initialize(self, context):
@ -378,9 +379,9 @@ class UIWorkload(Workload):
super(UIWorkload, self).__init__(target, **kwargs)
self.gui = None
def init_resources(self, resolver):
super(UIWorkload, self).init_resources(resolver)
self.gui.init_resources(resolver)
def init_resources(self, context):
super(UIWorkload, self).init_resources(context)
self.gui.init_resources(context)
@once_per_instance
def initialize(self, context):
@ -642,7 +643,7 @@ class PackageHandler(object):
def __init__(self, owner, install_timeout=300, version=None, variant=None,
package_name=None, strict=False, force_install=False, uninstall=False,
exact_abi=False, prefer_host_package=True):
exact_abi=False, prefer_host_package=True, clear_data_on_reset=True):
self.logger = logging.getLogger('apk')
self.owner = owner
self.target = self.owner.target
@ -655,6 +656,7 @@ class PackageHandler(object):
self.uninstall = uninstall
self.exact_abi = exact_abi
self.prefer_host_package = prefer_host_package
self.clear_data_on_reset = clear_data_on_reset
self.supported_abi = self.target.supported_abi
self.apk_file = None
self.apk_info = None
@ -810,7 +812,8 @@ class PackageHandler(object):
def reset(self, context): # pylint: disable=W0613
self.target.execute('am force-stop {}'.format(self.apk_info.package))
self.target.execute('pm clear {}'.format(self.apk_info.package))
if self.clear_data_on_reset:
self.target.execute('pm clear {}'.format(self.apk_info.package))
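A workload that needs its app data preserved between runs can now opt out via the new class attribute; a hypothetical example ::

from wa.framework.workload import ApkWorkload

class SketchWorkload(ApkWorkload):     # hypothetical workload
    name = 'sketch'
    package_names = ['com.example.sketch']
    clear_data_on_reset = False        # skip 'pm clear' on reset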
def install_apk(self, context):
# pylint: disable=unused-argument
@ -819,7 +822,7 @@ class PackageHandler(object):
if 'Failure' in output:
if 'ALREADY_EXISTS' in output:
msg = 'Using already installed APK (did not uninstall properly?)'
self.logger.warn(msg)
self.logger.warning(msg)
else:
raise WorkloadError(output)
else:

@ -366,6 +366,9 @@ class EnergyMeasurement(Instrument):
description = """
This instrument is designed to be used as an interface to the various
energy measurement instruments located in devlib.
This instrument should be used to provide configuration for any of the
Energy Instrument Backends rather than specifying configuration directly.
"""
parameters = [

@ -119,7 +119,6 @@ class FpsInstrument(Instrument):
return
self._is_enabled = True
# pylint: disable=redefined-variable-type
if use_gfxinfo:
self.collector = GfxInfoFramesInstrument(self.target, collector_target, self.period)
self.processor = DerivedGfxInfoStats(self.drop_threshold, filename='fps.csv')

@ -73,7 +73,7 @@ class SysfsExtractor(Instrument):
description="""
Specifies whether tmpfs should be used to cache sysfile trees and then pull them down
as a tarball. This is significantly faster then just copying the directory trees from
the device directly, bur requres root and may not work on all devices. Defaults to
the device directly, but requires root and may not work on all devices. Defaults to
``True`` if the device is rooted and ``False`` if it is not.
"""),
Parameter('tmpfs_mount_point', default=None,

wa/instruments/perf.py Normal file

@ -0,0 +1,138 @@
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=unused-argument
import os
import re
from devlib.trace.perf import PerfCollector
from wa import Instrument, Parameter
from wa.utils.types import list_or_string, list_of_strs
PERF_COUNT_REGEX = re.compile(r'^(CPU\d+)?\s*(\d+)\s*(.*?)\s*(\[\s*\d+\.\d+%\s*\])?\s*$')
class PerfInstrument(Instrument):
name = 'perf'
description = """
Perf is a Linux profiling tool based on performance counters.
Performance counters are CPU hardware registers that count hardware events
such as instructions executed, cache-misses suffered, or branches
mispredicted. They form a basis for profiling applications to trace dynamic
control flow and identify hotspots.
perf accepts options and events. If no option is given, the default '-a' is
used. The default events are migrations and cs. Both can be
specified in the config file.
Events must be provided as a list, for example ::
perf_events = ['migrations', 'cs']
The available events can be listed by running the following on the
device ::
perf list
Options, by contrast, are provided as a single string, for example ::
perf_options = '-a -i'
The available options can be found by running the following on the command line ::
man perf-stat
"""
parameters = [
Parameter('events', kind=list_of_strs, default=['migrations', 'cs'],
global_alias='perf_events',
constraint=(lambda x: x, 'must not be empty.'),
description="""Specifies the events to be counted."""),
Parameter('optionstring', kind=list_or_string, default='-a',
global_alias='perf_options',
description="""Specifies options to be used for the perf command. This
may be a list of option strings, in which case, multiple instances of perf
will be kicked off -- one for each option string. This may be used to e.g.
collected different events from different big.LITTLE clusters.
"""),
Parameter('labels', kind=list_of_strs, default=None,
global_alias='perf_labels',
description="""Provides labels for pref output. If specified, the number of
labels must match the number of ``optionstring``\ s.
"""),
Parameter('force_install', kind=bool, default=False,
description="""
Always install the perf binary, even if perf is already present on the device.
"""),
]
def __init__(self, target, **kwargs):
super(PerfInstrument, self).__init__(target, **kwargs)
self.collector = None
def initialize(self, context):
self.collector = PerfCollector(self.target,
self.events,
self.optionstring,
self.labels,
self.force_install)
def setup(self, context):
self.collector.reset()
def start(self, context):
self.collector.start()
def stop(self, context):
self.collector.stop()
def update_output(self, context):
self.logger.info('Extracting reports from target...')
outdir = os.path.join(context.output_directory, 'perf')
self.collector.get_trace(outdir)
for host_file in os.listdir(outdir):
label = host_file.split('.out')[0]
host_file_path = os.path.join(outdir, host_file)
context.add_artifact(label, host_file_path, 'raw')
with open(host_file_path) as fh:
in_results_section = False
for line in fh:
if 'Performance counter stats' in line:
in_results_section = True
next(fh) # skip the following blank line
if in_results_section:
if not line.strip(): # blank line
in_results_section = False
break
else:
line = line.split('#')[0] # comment
match = PERF_COUNT_REGEX.search(line)
if match:
classifiers = {}
cpu = match.group(1)
if cpu is not None:
classifiers['cpu'] = int(cpu.replace('CPU', ''))
count = int(match.group(2))
metric = '{}_{}'.format(label, match.group(3))
context.add_metric(metric, count, classifiers=classifiers)
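`PERF_COUNT_REGEX` captures an optional CPU prefix, the count, the event name, and an optional coverage percentage; against a hypothetical per-CPU perf-stat line ::

import re

PERF_COUNT_REGEX = re.compile(r'^(CPU\d+)?\s*(\d+)\s*(.*?)\s*(\[\s*\d+\.\d+%\s*\])?\s*$')

line = 'CPU0             1047      migrations'   # hypothetical perf-stat line
match = PERF_COUNT_REGEX.search(line)
print(match.group(1), int(match.group(2)), match.group(3))
# CPU0 1047 migrations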
def teardown(self, context):
self.collector.reset()

@ -206,8 +206,8 @@ class TraceCmdInstrument(Instrument):
self.collector.get_trace(outfile)
context.add_artifact('trace-cmd-bin', outfile, 'data')
if self.report:
textfile = os.path.join(context.output_directory, 'trace.txt')
if not self.report_on_target:
textfile = os.path.join(context.output_directory, 'trace.txt')
self.collector.report(outfile, textfile)
context.add_artifact('trace-cmd-txt', textfile, 'export')

@ -85,7 +85,8 @@ class CpuStatesProcessor(OutputProcessor):
"""),
]
def initialize(self):
def __init__(self, *args, **kwargs):
super(CpuStatesProcessor, self).__init__(*args, **kwargs)
self.iteration_reports = OrderedDict()
def process_job_output(self, output, target_info, run_output): # pylint: disable=unused-argument

@ -49,6 +49,11 @@ class CsvReportProcessor(OutputProcessor):
"""),
]
def __init__(self, *args, **kwargs):
super(CsvReportProcessor, self).__init__(*args, **kwargs)
self.outputs_so_far = []
self.artifact_added = False
def validate(self):
super(CsvReportProcessor, self).validate()
if self.use_all_classifiers and self.extra_columns:
@ -56,24 +61,20 @@ class CsvReportProcessor(OutputProcessor):
'use_all_classifiers is True'
raise ConfigError(msg)
def initialize(self):
self.outputs_so_far = [] # pylint: disable=attribute-defined-outside-init
self.artifact_added = False
# pylint: disable=unused-argument
def process_job_output(self, output, target_info, run_output):
self.outputs_so_far.append(output)
self._write_outputs(self.outputs_so_far, run_output)
if not self.artifact_added:
run_output.add_artifact('run_result_csv', 'results.csv', 'export')
self.artifact_added = True
self.artifact_added = True # pylint: disable=attribute-defined-outside-init
def process_run_output(self, output, target_info): # pylint: disable=unused-argument
self.outputs_so_far.append(output)
self._write_outputs(self.outputs_so_far, output)
if not self.artifact_added:
output.add_artifact('run_result_csv', 'results.csv', 'export')
self.artifact_added = True
self.artifact_added = True # pylint: disable=attribute-defined-outside-init
def _write_outputs(self, outputs, output):
if self.use_all_classifiers:

@ -0,0 +1,593 @@
# Copyright 2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import uuid
import collections
try:
import psycopg2
from psycopg2 import (connect, extras)
from psycopg2 import Error as Psycopg2Error
except ImportError as e:
psycopg2 = None
import_error_msg = e.args[0] if e.args else str(e)
from devlib.target import KernelVersion, KernelConfig
from wa import OutputProcessor, Parameter, OutputProcessorError
from wa.framework.target.info import CpuInfo
from wa.utils.postgres import (POSTGRES_SCHEMA_DIR, cast_level, cast_vanilla,
adapt_vanilla, return_as_is, adapt_level,
ListOfLevel, adapt_ListOfX, create_iterable_adapter,
get_schema_versions)
from wa.utils.serializer import json
from wa.utils.types import level
class PostgresqlResultProcessor(OutputProcessor):
name = 'postgres'
description = """
Stores results in a Postgresql database.
The structure of this database can easily be understood by examining
the postgres_schema.sql file (the schema used to generate it):
{}
""".format(os.path.join(POSTGRES_SCHEMA_DIR, 'postgres_schema.sql'))
parameters = [
Parameter('username', default='postgres',
description="""
This is the username that will be used to connect to the
Postgresql database. Note that depending on whether the user
has privileges to modify the database (normally only possible
on localhost), the user may only be able to append entries.
"""),
Parameter('password', default=None,
description="""
The password to be used to connect to the specified database
with the specified username.
"""),
Parameter('dbname', default='wa',
description="""
Name of the database that will be created or added to. Note,
to override this, you can specify a value in your user
wa configuration file.
"""),
Parameter('host', kind=str, default='localhost',
description="""
The host where the Postgresql server is running. The default
is localhost (i.e. the machine that wa is running on).
This is useful for complex systems where multiple machines
may be executing workloads and uploading their results to
a remote, centralised database.
"""),
Parameter('port', kind=str, default='5432',
description="""
The port the Postgresql server is running on, on the host.
The default is Postgresql's default, so do not change this
unless you have modified the default port for Postgresql.
"""),
]
# Commands
sql_command = {
"create_run": "INSERT INTO Runs (oid, event_summary, basepath, status, timestamp, run_name, project, project_stage, retry_on_status, max_retries, bail_on_init_failure, allow_phone_home, run_uuid, start_time, metadata, state, _pod_version, _pod_serialization_version) "
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
"update_run": "UPDATE Runs SET event_summary=%s, status=%s, timestamp=%s, end_time=%s, duration=%s, state=%s WHERE oid=%s;",
"create_job": "INSERT INTO Jobs (oid, run_oid, status, retry, label, job_id, iterations, workload_name, metadata, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s);",
"create_target": "INSERT INTO Targets (oid, run_oid, target, cpus, os, os_version, hostid, hostname, abi, is_rooted, kernel_version, kernel_release, kernel_sha1, kernel_config, sched_features, page_size_kb, screen_resolution, prop, android_id, _pod_version, _pod_serialization_version) "
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_event": "INSERT INTO Events (oid, run_oid, job_oid, timestamp, message, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s",
"create_artifact": "INSERT INTO Artifacts (oid, run_oid, job_oid, name, large_object_uuid, description, kind, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_metric": "INSERT INTO Metrics (oid, run_oid, job_oid, name, value, units, lower_is_better, _pod_version, _pod_serialization_version) VALUES (%s, %s, %s, %s, %s, %s , %s, %s, %s)",
"create_augmentation": "INSERT INTO Augmentations (oid, run_oid, name) VALUES (%s, %s, %s)",
"create_classifier": "INSERT INTO Classifiers (oid, artifact_oid, metric_oid, job_oid, run_oid, key, value) VALUES (%s, %s, %s, %s, %s, %s, %s)",
"create_parameter": "INSERT INTO Parameters (oid, run_oid, job_oid, augmentation_oid, resource_getter_oid, name, value, value_type, type) "
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)",
"create_resource_getter": "INSERT INTO Resource_Getters (oid, run_oid, name) VALUES (%s, %s, %s)",
"create_job_aug": "INSERT INTO Jobs_Augs (oid, job_oid, augmentation_oid) VALUES (%s, %s, %s)",
"create_large_object": "INSERT INTO LargeObjects (oid, lo_oid) VALUES (%s, %s)"
}
# Lists to track which run-related items have already been added
metrics_already_added = []
# Dicts needed so that jobs can look up ids
artifacts_already_added = {}
augmentations_already_added = {}
# Status bits (flags)
first_job_run = True
def __init__(self, *args, **kwargs):
super(PostgresqlResultProcessor, self).__init__(*args, **kwargs)
self.conn = None
self.cursor = None
self.run_uuid = None
self.target_uuid = None
def initialize(self, context):
if not psycopg2:
raise ImportError(
'The psycopg2 module is required for the ' +
'Postgresql Output Processor: {}'.format(import_error_msg))
# N.B. Typecasters are for postgres->python and adapters the opposite
self.connect_to_database()
self.cursor = self.conn.cursor()
self.verify_schema_versions()
# Register the adapters and typecasters for enum types
self.cursor.execute("SELECT NULL::status_enum")
status_oid = self.cursor.description[0][1]
self.cursor.execute("SELECT NULL::param_enum")
param_oid = self.cursor.description[0][1]
LEVEL = psycopg2.extensions.new_type(
(status_oid,), "LEVEL", cast_level)
psycopg2.extensions.register_type(LEVEL)
PARAM = psycopg2.extensions.new_type(
(param_oid,), "PARAM", cast_vanilla)
psycopg2.extensions.register_type(PARAM)
psycopg2.extensions.register_adapter(level, return_as_is(adapt_level))
psycopg2.extensions.register_adapter(
ListOfLevel, adapt_ListOfX(adapt_level))
psycopg2.extensions.register_adapter(KernelVersion, adapt_vanilla)
psycopg2.extensions.register_adapter(
CpuInfo, adapt_vanilla)
psycopg2.extensions.register_adapter(
collections.OrderedDict, extras.Json)
psycopg2.extensions.register_adapter(dict, extras.Json)
psycopg2.extensions.register_adapter(
KernelConfig, create_iterable_adapter(2, explicit_iterate=True))
# Register ready-made UUID type adapter
extras.register_uuid()
# Insert a run_uuid which will be globally accessible during the run
self.run_uuid = uuid.UUID(str(uuid.uuid4()))
run_output = context.run_output
retry_on_status = ListOfLevel(run_output.run_config.retry_on_status)
self.cursor.execute(
self.sql_command['create_run'],
(
self.run_uuid,
run_output.event_summary,
run_output.basepath,
run_output.status,
run_output.state.timestamp,
run_output.info.run_name,
run_output.info.project,
run_output.info.project_stage,
retry_on_status,
run_output.run_config.max_retries,
run_output.run_config.bail_on_init_failure,
run_output.run_config.allow_phone_home,
run_output.info.uuid,
run_output.info.start_time,
run_output.metadata,
json.dumps(run_output.state.to_pod()),
run_output.result._pod_version, # pylint: disable=protected-access
run_output.result._pod_serialization_version, # pylint: disable=protected-access
)
)
self.target_uuid = uuid.uuid4()
target_info = context.target_info
target_pod = target_info.to_pod()
self.cursor.execute(
self.sql_command['create_target'],
(
self.target_uuid,
self.run_uuid,
target_pod['target'],
target_pod['cpus'],
target_pod['os'],
target_pod['os_version'],
target_pod['hostid'],
target_pod['hostname'],
target_pod['abi'],
target_pod['is_rooted'],
# Important caveat: kernel_version is the name of the column in the Targets table.
# However, this refers to kernel_version.version, not to kernel_version as a whole.
target_pod['kernel_version'],
target_pod['kernel_release'],
target_info.kernel_version.sha1,
target_info.kernel_config,
target_pod['sched_features'],
target_pod['page_size_kb'],
# Android Specific
target_pod.get('screen_resolution'),
target_pod.get('prop'),
target_pod.get('android_id'),
target_pod.get('pod_version'),
target_pod.get('pod_serialization_version'),
)
)
# Commit cursor commands
self.conn.commit()
def export_job_output(self, job_output, target_info, run_output): # pylint: disable=too-many-branches, too-many-statements, too-many-locals, unused-argument
''' Run once for each job to upload information that is
updated on a job-by-job basis.
'''
job_uuid = uuid.uuid4()
# Create a new job
self.cursor.execute(
self.sql_command['create_job'],
(
job_uuid,
self.run_uuid,
job_output.status,
job_output.retry,
job_output.label,
job_output.id,
job_output.iteration,
job_output.spec.workload_name,
job_output.metadata,
job_output.spec._pod_version, # pylint: disable=protected-access
job_output.spec._pod_serialization_version, # pylint: disable=protected-access
)
)
for classifier in job_output.classifiers:
classifier_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_classifier'],
(
classifier_uuid,
None,
None,
job_uuid,
None,
classifier,
job_output.classifiers[classifier]
)
)
# Update the run table and run-level parameters
self.cursor.execute(
self.sql_command['update_run'],
(
run_output.event_summary,
run_output.status,
run_output.state.timestamp,
run_output.info.end_time,
None,
json.dumps(run_output.state.to_pod()),
self.run_uuid))
for classifier in run_output.classifiers:
classifier_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_classifier'],
(
classifier_uuid,
None,
None,
None,
self.run_uuid,
classifier,
run_output.classifiers[classifier]
)
)
self.sql_upload_artifacts(run_output, record_in_added=True)
self.sql_upload_metrics(run_output, record_in_added=True)
self.sql_upload_augmentations(run_output)
self.sql_upload_resource_getters(run_output)
self.sql_upload_events(job_output, job_uuid=job_uuid)
self.sql_upload_artifacts(job_output, job_uuid=job_uuid)
self.sql_upload_metrics(job_output, job_uuid=job_uuid)
self.sql_upload_job_augmentations(job_output, job_uuid=job_uuid)
self.sql_upload_parameters(
"workload",
job_output.spec.workload_parameters,
job_uuid=job_uuid)
self.sql_upload_parameters(
"runtime",
job_output.spec.runtime_parameters,
job_uuid=job_uuid)
self.conn.commit()
def export_run_output(self, run_output, target_info): # pylint: disable=unused-argument, too-many-locals
''' A final export of the RunOutput that updates existing parameters
and uploads ones which are only generated after jobs have run.
'''
if not self.cursor: # Database did not connect correctly.
return
# Update the job statuses following completion of the run
for job in run_output.jobs:
job_id = job.id
job_status = job.status
self.cursor.execute(
"UPDATE Jobs SET status=%s WHERE job_id=%s and run_oid=%s",
(
job_status,
job_id,
self.run_uuid
)
)
run_uuid = self.run_uuid
# Update the run entry after jobs have completed
run_info_pod = run_output.info.to_pod()
run_state_pod = run_output.state.to_pod()
sql_command_update_run = self.sql_command['update_run']
self.cursor.execute(
sql_command_update_run,
(
run_output.event_summary,
run_output.status,
run_info_pod['start_time'],
run_info_pod['end_time'],
run_info_pod['duration'],
json.dumps(run_state_pod),
run_uuid,
)
)
self.sql_upload_events(run_output)
self.sql_upload_artifacts(run_output, check_uniqueness=True)
self.sql_upload_metrics(run_output, check_uniqueness=True)
self.sql_upload_augmentations(run_output)
self.conn.commit()
# Upload functions for use with both jobs and runs
def sql_upload_resource_getters(self, output_object):
for resource_getter in output_object.run_config.resource_getters:
resource_getter_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_resource_getter'],
(
resource_getter_uuid,
self.run_uuid,
resource_getter,
)
)
self.sql_upload_parameters(
'resource_getter',
output_object.run_config.resource_getters[resource_getter],
owner_id=resource_getter_uuid,
)
def sql_upload_events(self, output_object, job_uuid=None):
for event in output_object.events:
event_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_event'],
(
event_uuid,
self.run_uuid,
job_uuid,
event.timestamp,
event.message,
event._pod_version, # pylint: disable=protected-access
event._pod_serialization_version, # pylint: disable=protected-access
)
)
def sql_upload_job_augmentations(self, output_object, job_uuid=None):
''' This is a table which links the uuids of augmentations to jobs.
Note that the augmentations table is prepopulated, hence the need for
the augmentations_already_added dictionary, which gives us the
corresponding uuids.
Augmentations prefixed with '~' are toggled off and are not part of the
job, so they are not added.
'''
for augmentation in output_object.spec.augmentations:
if augmentation.startswith('~'):
continue
augmentation_uuid = self.augmentations_already_added[augmentation]
job_aug_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_job_aug'],
(
job_aug_uuid,
job_uuid,
augmentation_uuid,
)
)
def sql_upload_augmentations(self, output_object):
for augmentation in output_object.augmentations:
if augmentation.startswith('~') or augmentation in self.augmentations_already_added:
continue
augmentation_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_augmentation'],
(
augmentation_uuid,
self.run_uuid,
augmentation,
)
)
self.sql_upload_parameters(
'augmentation',
output_object.run_config.augmentations[augmentation],
owner_id=augmentation_uuid,
)
self.augmentations_already_added[augmentation] = augmentation_uuid
def sql_upload_metrics(self, output_object, record_in_added=False, check_uniqueness=False, job_uuid=None):
for metric in output_object.metrics:
if metric in self.metrics_already_added and check_uniqueness:
continue
metric_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_metric'],
(
metric_uuid,
self.run_uuid,
job_uuid,
metric.name,
metric.value,
metric.units,
metric.lower_is_better,
metric._pod_version, # pylint: disable=protected-access
metric._pod_serialization_version, # pylint: disable=protected-access
)
)
for classifier in metric.classifiers:
classifier_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_classifier'],
(
classifier_uuid,
None,
metric_uuid,
None,
None,
classifier,
metric.classifiers[classifier],
)
)
if record_in_added:
self.metrics_already_added.append(metric)
def sql_upload_artifacts(self, output_object, record_in_added=False, check_uniqueness=False, job_uuid=None):
''' Uploads artifacts to the database.
record_in_added will record the artifacts added in artifacts_already_added
check_uniqueness will ensure artifacts in artifacts_already_added do not get added again
'''
for artifact in output_object.artifacts:
if artifact in self.artifacts_already_added and check_uniqueness:
self.logger.debug('Skipping uploading {} as already added'.format(artifact))
continue
if artifact in self.artifacts_already_added:
self._sql_update_artifact(artifact, output_object)
else:
self._sql_create_artifact(artifact, output_object, record_in_added, job_uuid)
def sql_upload_parameters(self, parameter_type, parameter_dict, owner_id=None, job_uuid=None):
# Note, currently no augmentation parameters are workload specific, but in the future
# this may change
augmentation_id = None
resource_getter_id = None
if parameter_type not in ['workload', 'resource_getter', 'augmentation', 'runtime']:
# boot parameters are not yet implemented
# device parameters are redundant due to the targets table
raise NotImplementedError("{} is not a valid parameter type.".format(parameter_type))
if parameter_type == "resource_getter":
resource_getter_id = owner_id
elif parameter_type == "augmentation":
augmentation_id = owner_id
for parameter in parameter_dict:
parameter_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_parameter'],
(
parameter_uuid,
self.run_uuid,
job_uuid,
augmentation_id,
resource_getter_id,
parameter,
json.dumps(parameter_dict[parameter]),
str(type(parameter_dict[parameter])),
parameter_type,
)
)
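# Illustrative sketch (hypothetical values, not part of the original diff):
# sql_upload_parameters('workload', {'iterations': 10}, job_uuid=job_uuid)
# stores one Parameters row with name='iterations', value='10' (JSON-encoded),
# value_type "<class 'int'>" (under Python 3) and type='workload', linked to
# the current run and job.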
def connect_to_database(self):
dsn = "dbname={} user={} password={} host={} port={}".format(
self.dbname, self.username, self.password, self.host, self.port)
try:
self.conn = connect(dsn=dsn)
except Psycopg2Error as e:
raise OutputProcessorError(
"Database error, if the database doesn't exist, " +
"please use 'wa create database' to create the database: {}".format(e))
def execute_sql_line_by_line(self, sql):
cursor = self.conn.cursor()
for line in sql.replace('\n', "").replace(";", ";\n").split("\n"):
if line and not line.startswith('--'):
cursor.execute(line)
cursor.close()
self.conn.commit()
self.conn.reset()
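# Sketch: given "CREATE TABLE a (x int);CREATE TABLE b (y int);", the
# replace/split above yields one ';'-terminated statement per line, each
# executed individually; lines starting with '--' are skipped.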
def verify_schema_versions(self):
local_schema_version, db_schema_version = get_schema_versions(self.conn)
if local_schema_version != db_schema_version:
self.cursor.close()
self.cursor = None
self.conn.commit()
self.conn.reset()
msg = 'The current database schema is v{} but the local ' \
'schema version is v{}. Please update your database ' \
'using the create command'
raise OutputProcessorError(msg.format(db_schema_version, local_schema_version))
def _sql_write_lobject(self, source, lobject):
with open(source) as lobj_file:
lobj_data = lobj_file.read()
if len(lobj_data) > 50000000: # Notify if LO inserts larger than 50MB
self.logger.debug("Inserting large object of size {}".format(len(lobj_data)))
lobject.write(lobj_data)
self.conn.commit()
def _sql_update_artifact(self, artifact, output_object):
self.logger.debug('Updating artifact: {}'.format(artifact))
lobj = self.conn.lobject(oid=self.artifacts_already_added[artifact], mode='w')
self._sql_write_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
def _sql_create_artifact(self, artifact, output_object, record_in_added=False, job_uuid=None):
self.logger.debug('Uploading artifact: {}'.format(artifact))
artifact_uuid = uuid.uuid4()
lobj = self.conn.lobject()
loid = lobj.oid
large_object_uuid = uuid.uuid4()
self._sql_write_lobject(os.path.join(output_object.basepath, artifact.path), lobj)
self.cursor.execute(
self.sql_command['create_large_object'],
(
large_object_uuid,
loid,
)
)
self.cursor.execute(
self.sql_command['create_artifact'],
(
artifact_uuid,
self.run_uuid,
job_uuid,
artifact.name,
large_object_uuid,
artifact.description,
str(artifact.kind),
artifact._pod_version, # pylint: disable=protected-access
artifact._pod_serialization_version, # pylint: disable=protected-access
)
)
for classifier in artifact.classifiers:
classifier_uuid = uuid.uuid4()
self.cursor.execute(
self.sql_command['create_classifier'],
(
classifier_uuid,
artifact_uuid,
None,
None,
None,
classifier,
artifact.classifiers[classifier],
)
)
if record_in_added:
self.artifacts_already_added[artifact] = loid

@ -22,7 +22,6 @@ from datetime import datetime, timedelta
from contextlib import contextmanager
from wa import OutputProcessor, Parameter, OutputProcessorError
from wa.framework.exception import OutputProcessorError
from wa.utils.serializer import json
from wa.utils.types import boolean
@ -109,7 +108,8 @@ class SqliteResultProcessor(OutputProcessor):
]
def initialize(self):
def __init__(self, *args, **kwargs):
super(SqliteResultProcessor, self).__init__(*args, **kwargs)
self._last_spec = None
self._run_oid = None
self._spec_oid = None

@ -18,7 +18,8 @@
import time
from collections import Counter
from wa import OutputProcessor, Status
from wa.framework.output import Status
from wa.framework.output_processor import OutputProcessor
from wa.utils.misc import write_table

@ -56,7 +56,7 @@ class TargzProcessor(OutputProcessor):
'''),
]
def initialize(self):
def initialize(self, context):
if self.delete_output:
self.logger.debug('Registering RUN_FINALIZED handler.')
signal.connect(self.delete_output_directory, signal.RUN_FINALIZED, priority=-100)

@ -1234,11 +1234,13 @@ void record(const char *filepath, int delay, recording_mode_t mode)
if (ret < 1)
die("Could not write event count: %s", strerror(errno));
dprintf("Writing recording timestamps...\n");
uint64_t usecs;
fwrite(&start_time.tv_sec, sizeof(uint64_t), 1, fout);
uint64_t secs, usecs;
secs = start_time.tv_sec;
fwrite(&secs, sizeof(uint64_t), 1, fout);
usecs = start_time.tv_nsec / 1000;
fwrite(&usecs, sizeof(uint64_t), 1, fout);
fwrite(&end_time.tv_sec, sizeof(uint64_t), 1, fout);
secs = end_time.tv_sec;
fwrite(&secs, sizeof(uint64_t), 1, fout);
usecs = end_time.tv_nsec / 1000;
ret = fwrite(&usecs, sizeof(uint64_t), 1, fout);
if (ret < 1)

@ -72,10 +72,9 @@ class LogcatParser(object):
tid = int(parts.pop(0))
level = LogcatLogLevel.levels[log_level_map.index(parts.pop(0))]
tag = (parts.pop(0) if parts else '').strip()
except Exception as e:
except Exception as e: # pylint: disable=broad-except
message = 'Invalid metadata for line:\n\t{}\n\tgot: "{}"'
logger.warning(message.format(line, e))
return None
return LogcatEvent(timestamp, pid, tid, level, tag, message)

@ -449,7 +449,7 @@ class ParallelStats(object):
running_time_pc *= 100
else:
running_time_pc = 0
precision = self.use_ratios and 3 or 1
precision = 3 if self.use_ratios else 1
fmt = '{{:.{}f}}'.format(precision)
report.add([cluster, n,
fmt.format(time),
@ -524,7 +524,7 @@ class PowerStateStats(object):
time_pc *= 100
state_stats[state][cpu] = time_pc
precision = self.use_ratios and 3 or 1
precision = 3 if self.use_ratios else 1
return PowerStateStatsReport(self.filepath, state_stats, self.core_names, precision)
@ -592,7 +592,7 @@ def build_idle_state_map(cpus):
return idle_state_map
def report_power_stats(trace_file, cpus, output_basedir, use_ratios=False, no_idle=None,
def report_power_stats(trace_file, cpus, output_basedir, use_ratios=False, no_idle=None, # pylint: disable=too-many-locals
split_wfi_states=False):
"""
Process trace-cmd output to generate timelines and statistics of CPU power
@ -704,4 +704,3 @@ def report_power_stats(trace_file, cpus, output_basedir, use_ratios=False, no_id
report.write()
reports[report.name] = report
return reports

@ -193,14 +193,14 @@ def log_error(e, logger, critical=False):
old_level = set_indent_level(0)
logger.info('Got CTRL-C. Aborting.')
set_indent_level(old_level)
elif isinstance(e, WAError) or isinstance(e, DevlibError):
elif isinstance(e, (WAError, DevlibError)):
log_func(str(e))
elif isinstance(e, subprocess.CalledProcessError):
tb = get_traceback()
log_func(tb)
command = e.cmd
if e.args:
command = '{} {}'.format(command, ' '.join(e.args))
command = '{} {}'.format(command, ' '.join(map(str, e.args)))
message = 'Command \'{}\' returned non-zero exit status {}\nOUTPUT:\n{}\n'
log_func(message.format(command, e.returncode, e.output))
elif isinstance(e, SyntaxError):

@ -31,22 +31,24 @@ import subprocess
import sys
import traceback
from datetime import datetime, timedelta
from functools import reduce # pylint: disable=redefined-builtin
from operator import mul
if sys.version_info[0] == 3:
from io import StringIO
else:
from io import BytesIO as StringIO
# pylint: disable=wrong-import-position,unused-import
from itertools import chain, cycle
from distutils.spawn import find_executable
from distutils.spawn import find_executable # pylint: disable=no-name-in-module, import-error
import yaml
from dateutil import tz
# pylint: disable=wrong-import-order
from devlib.exception import TargetError
from devlib.utils.misc import (ABI_MAP, check_output, walk_modules,
ensure_directory_exists, ensure_file_directory_exists,
normalize, convert_new_lines, get_cpu_mask, unique,
escape_quotes, escape_single_quotes, escape_double_quotes,
isiterable, getch, as_relative, ranges_to_list, memoized,
list_to_ranges, list_to_mask, mask_to_list, which,
to_identifier)
@ -257,13 +259,13 @@ def format_duration(seconds, sep=' ', order=['day', 'hour', 'minute', 'second'])
result = []
for item in order:
value = getattr(dt, item, None)
if item is 'day':
if item == 'day':
value -= 1
if not value:
continue
suffix = '' if value == 1 else 's'
result.append('{} {}{}'.format(value, item, suffix))
return result and sep.join(result) or 'N/A'
return sep.join(result) if result else 'N/A'
def get_article(word):
@ -624,3 +626,13 @@ def resolve_unique_domain_cpus(name, target):
if domain_cpus[0] not in unique_cpus:
unique_cpus.append(domain_cpus[0])
return unique_cpus
def format_ordered_dict(od):
"""
Provide a string representation of ordered dict that is similar to the
regular dict representation, as that is more concise and easier to read
than the default __str__ for OrderedDict.
"""
return '{{{}}}'.format(', '.join('{}={}'.format(k, v)
for k, v in od.items()))
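# For example: format_ordered_dict(OrderedDict([('a', 1), ('b', 2)]))
# returns '{a=1, b=2}' rather than "OrderedDict([('a', 1), ('b', 2)])".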

wa/utils/postgres.py (new file, 260 lines)

@ -0,0 +1,260 @@
# Copyright 2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
This module contains additional casting and adaptation functions for several
different datatypes and metadata types for use with the psycopg2 module. The
casting functions will transform Postgresql data types into Python objects, and
the adapters the reverse. They are named this way according to the psycopg2
conventions.
For more information about the available adapters and casters in the standard
psycopg2 module, please see:
http://initd.org/psycopg/docs/extensions.html#sql-adaptation-protocol-objects
"""
import re
import os
try:
from psycopg2 import InterfaceError
from psycopg2.extensions import AsIs
except ImportError:
InterfaceError = None
AsIs = None
from wa.utils.types import level
POSTGRES_SCHEMA_DIR = os.path.join(os.path.dirname(__file__),
'..',
'commands',
'postgres_schemas')
def cast_level(value, cur): # pylint: disable=unused-argument
"""Generic Level caster for psycopg2"""
if not InterfaceError:
raise ImportError('There was a problem importing psycopg2.')
if value is None:
return None
m = re.match(r"([^\()]*)\((\d*)\)", value)
if m:
name = str(m.group(1))
number = int(m.group(2))
return level(name, number)
else:
raise InterfaceError("Bad level representation: {}".format(value))
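# Illustrative sketch: given a stored status_enum value such as
# "RUNNING(2)", cast_level rebuilds the corresponding level object:
# cast_level("RUNNING(2)", None) -> level("RUNNING", 2)
# cast_level(None, None) -> None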
def cast_vanilla(value, cur): # pylint: disable=unused-argument
"""Vanilla Type caster for psycopg2
Simply returns the string representation.
"""
if value is None:
return None
else:
return str(value)
# List functions and classes for adapting
def adapt_level(a_level):
"""Generic Level Adapter for psycopg2"""
return "{}({})".format(a_level.name, a_level.value)
class ListOfLevel(object):
value = None
def __init__(self, a_level):
self.value = a_level
def return_original(self):
return self.value
def adapt_ListOfX(adapt_X):
"""This will create a multi-column adapter for a particular type.
Note that the type must itself be in array form. Therefore
this function serves to separate out individual lists into multiple
big lists.
E.g. if the X adapter produces array (a,b,c)
then this adapter will take a list of Xs and produce a master array:
((a1,a2,a3),(b1,b2,b3),(c1,c2,c3))
Takes as its argument the adapter for the type which must produce an
SQL array string.
Note that you should NOT put the AsIs in the adapt_X function.
The need for this function arises from the fact that we may want to
actually handle list-creating types differently if they themselves
are in a list, as in the example above, we cannot simply adopt a
recursive strategy.
Note that master_list is the list representing the array. Each element
in the list will represent a subarray (column). If there is only one
subarray following processing then the outer {} are stripped to give a
1 dimensional array.
"""
def adapter_function(param):
if not AsIs:
raise ImportError('There was a problem importing psycopg2.')
param = param.value
result_list = []
for element in param: # Where param will be a list of X's
result_list.append(adapt_X(element))
test_element = result_list[0]
num_items = len(test_element.split(","))
master_list = []
for x in range(num_items):
master_list.append("")
for element in result_list:
element = element.strip("{").strip("}")
element = element.split(",")
for x in range(num_items):
master_list[x] = master_list[x] + element[x] + ","
if num_items > 1:
master_sql_string = "{"
else:
master_sql_string = ""
for x in range(num_items):
# Remove trailing comma
master_list[x] = master_list[x].strip(",")
master_list[x] = "{" + master_list[x] + "}"
master_sql_string = master_sql_string + master_list[x] + ","
master_sql_string = master_sql_string.strip(",")
if num_items > 1:
master_sql_string = master_sql_string + "}"
return AsIs("'{}'".format(master_sql_string))
return adapter_function
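# Worked example (sketch, hypothetical two-column adapter): if adapt_X maps
# each X to the string "{n1,v1}", then for a two-element list the per-element
# strings "{n1,v1}" and "{n2,v2}" are regrouped column-wise into the master
# array string "{{n1,n2},{v1,v2}}" before being quoted and wrapped in AsIs.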
def return_as_is(adapt_X):
"""Returns the AsIs appended function of the function passed
This is useful for adapter functions intended to be used with the
adapt_ListOfX function, which must return strings, as it allows them
to be standalone adapters.
"""
if not AsIs:
raise ImportError('There was a problem importing psycopg2.')
def adapter_function(param):
return AsIs("'{}'".format(adapt_X(param)))
return adapter_function
def adapt_vanilla(param):
"""Vanilla adapter: simply returns the string representation"""
if not AsIs:
raise ImportError('There was a problem importing psycopg2.')
return AsIs("'{}'".format(param))
def create_iterable_adapter(array_columns, explicit_iterate=False):
"""Create an iterable adapter of a specified dimension
If explicit_iterate is True, then it is assumed that the param needs
to be iterated over via param.iteritems(). Otherwise it is iterated
over directly.
The value of array_columns will be equal to the number of indexed elements
per item in the param iterable. E.g. a list of 3-element-long lists has
3 elements per item in the iterable (the master list) and therefore
array_columns should be equal to 3.
If array_columns is 0, then this indicates that the iterable contains
single items.
"""
if not AsIs:
raise ImportError('There was a problem importing psycopg2.')
def adapt_iterable(param):
"""Adapts an iterable object into an SQL array"""
final_string = "" # String stores a string representation of the array
if param:
if array_columns > 1:
for index in range(array_columns):
array_string = ""
for item in param.iteritems():
array_string = array_string + str(item[index]) + ","
array_string = array_string.strip(",")
array_string = "{" + array_string + "}"
final_string = final_string + array_string + ","
final_string = final_string.strip(",")
final_string = "{" + final_string + "}"
else:
# Simply return each item in the array
if explicit_iterate:
for item in param.iteritems():
final_string = final_string + str(item) + ","
else:
for item in param:
final_string = final_string + str(item) + ","
final_string = "{" + final_string + "}"
return AsIs("'{}'".format(final_string))
return adapt_iterable
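# Sketch: create_iterable_adapter(2, explicit_iterate=True), as registered
# for KernelConfig by the postgres output processor, turns an iterable of
# (key, value) pairs such as [('CONFIG_A', 'y'), ('CONFIG_B', 'n')] into
# the SQL array string "{{CONFIG_A,CONFIG_B},{y,n}}".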
# For reference only and future use
def adapt_list(param):
"""Adapts a list into an array"""
if not AsIs:
raise ImportError('There was a problem importing psycopg2.')
final_string = ""
if param:
for item in param:
final_string = final_string + str(item) + ","
final_string = "{" + final_string + "}"
return AsIs("'{}'".format(final_string))
def get_schema(schemafilepath):
with open(schemafilepath, 'r') as sqlfile:
sql_commands = sqlfile.read()
schema_major = None
schema_minor = None
# Extract schema version if present
if sql_commands.startswith('--!VERSION'):
splitcommands = sql_commands.split('!ENDVERSION!\n')
schema_major, schema_minor = splitcommands[0].strip('--!VERSION!').split('.')
schema_major = int(schema_major)
schema_minor = int(schema_minor)
sql_commands = splitcommands[1]
return schema_major, schema_minor, sql_commands
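# Sketch of the version header that get_schema expects at the top of
# postgres_schema.sql (assumed format, mirroring the parsing above):
# --!VERSION!1.3!ENDVERSION!
# CREATE TABLE ...;
# which yields schema_major == 1 and schema_minor == 3.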
def get_database_schema_version(conn):
with conn.cursor() as cursor:
cursor.execute('''SELECT
DatabaseMeta.schema_major,
DatabaseMeta.schema_minor
FROM
DatabaseMeta;''')
schema_major, schema_minor = cursor.fetchone()
return (schema_major, schema_minor)
def get_schema_versions(conn):
schemafilepath = os.path.join(POSTGRES_SCHEMA_DIR, 'postgres_schema.sql')
cur_major_version, cur_minor_version, _ = get_schema(schemafilepath)
db_schema_version = get_database_schema_version(conn)
return (cur_major_version, cur_minor_version), db_schema_version

@ -20,10 +20,11 @@ import signal
from datetime import datetime
from collections import namedtuple
from devlib.utils.misc import memoized
from wa.framework.resource import Executable, NO_ONE, ResourceResolver
from wa.utils.exec_control import once_per_class
from devlib.utils.misc import memoized
GENERAL_MODE = 0
GAMEPAD_MODE = 1
@ -200,7 +201,7 @@ class ReventRecording(object):
def _parse_header_and_devices(self, fh):
magic, version = read_struct(fh, header_one_struct)
if magic != 'REVENT':
if magic != b'REVENT':
msg = '{} does not appear to be a revent recording'
raise ValueError(msg.format(self.filepath))
self.version = version
@ -215,11 +216,11 @@ class ReventRecording(object):
raise ValueError('Unexpected recording mode: {}'.format(self.mode))
self.num_events, = read_struct(fh, u64_struct)
if self.version > 2:
ts_sec = read_struct(fh, u64_struct)
ts_usec = read_struct(fh, u64_struct)
ts_sec = read_struct(fh, u64_struct)[0]
ts_usec = read_struct(fh, u64_struct)[0]
self.start_time = datetime.fromtimestamp(ts_sec + float(ts_usec) / 1000000)
ts_sec = read_struct(fh, u64_struct)
ts_usec = read_struct(fh, u64_struct)
ts_sec = read_struct(fh, u64_struct)[0]
ts_usec = read_struct(fh, u64_struct)[0]
self.end_time = datetime.fromtimestamp(ts_sec + float(ts_usec) / 1000000)
elif 2 > self.version >= 0:

@ -61,12 +61,11 @@ import re
import json as _json
from collections import OrderedDict
from datetime import datetime
import yaml as _yaml
import dateutil.parser
import yaml as _yaml # pylint: disable=wrong-import-order
# pylint: disable=redefined-builtin
from past.builtins import basestring
from past.builtins import basestring # pylint: disable=wrong-import-order
from wa.framework.exception import SerializerSyntaxError
from wa.utils.misc import isiterable
@ -104,7 +103,7 @@ POD_TYPES = [
class WAJSONEncoder(_json.JSONEncoder):
def default(self, obj): # pylint: disable=method-hidden
def default(self, obj): # pylint: disable=method-hidden,arguments-differ
if isinstance(obj, regex_type):
return 'REGEX:{}:{}'.format(obj.flags, obj.pattern)
elif isinstance(obj, datetime):
@ -119,7 +118,7 @@ class WAJSONEncoder(_json.JSONEncoder):
class WAJSONDecoder(_json.JSONDecoder):
def decode(self, s, **kwargs):
def decode(self, s, **kwargs): # pylint: disable=arguments-differ
d = _json.JSONDecoder.decode(self, s, **kwargs)
def try_parse_object(v):
@ -140,6 +139,8 @@ class WAJSONDecoder(_json.JSONDecoder):
return v
def load_objects(d):
if not hasattr(d, 'items'):
return d
pairs = []
for k, v in d.items():
if hasattr(v, 'items'):
@ -168,14 +169,14 @@ class json(object):
try:
return _json.load(fh, cls=WAJSONDecoder, object_pairs_hook=OrderedDict, *args, **kwargs)
except ValueError as e:
raise SerializerSyntaxError(e.message)
raise SerializerSyntaxError(e.args[0])
@staticmethod
def loads(s, *args, **kwargs):
try:
return _json.loads(s, cls=WAJSONDecoder, object_pairs_hook=OrderedDict, *args, **kwargs)
except ValueError as e:
raise SerializerSyntaxError(e.message)
raise SerializerSyntaxError(e.args[0])
_mapping_tag = _yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG
@ -362,3 +363,33 @@ def is_pod(obj):
if not is_pod(v):
return False
return True
class Podable(object):
_pod_serialization_version = 0
@classmethod
def from_pod(cls, pod):
pod = cls._upgrade_pod(pod)
instance = cls()
instance._pod_version = pod.pop('_pod_version') # pylint: disable=protected-access
return instance
@classmethod
def _upgrade_pod(cls, pod):
_pod_serialization_version = pod.pop('_pod_serialization_version', None) or 0
while _pod_serialization_version < cls._pod_serialization_version:
_pod_serialization_version += 1
upgrade = getattr(cls, '_pod_upgrade_v{}'.format(_pod_serialization_version))
pod = upgrade(pod)
return pod
def __init__(self):
self._pod_version = self._pod_serialization_version
def to_pod(self):
pod = {}
pod['_pod_version'] = self._pod_version
pod['_pod_serialization_version'] = self._pod_serialization_version
return pod
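# Illustrative sketch (hypothetical subclass, not part of the original diff):
# bumping _pod_serialization_version and supplying the matching upgrade hook
# lets from_pod() migrate pods written by older versions of the class.
class ExampleState(Podable):
    _pod_serialization_version = 1

    @staticmethod
    def _pod_upgrade_v1(pod):
        # Migrate a v0 pod: rename a field introduced under a new name.
        pod['new_name'] = pod.pop('old_name', None)
        return pod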

@ -61,7 +61,7 @@ def _get_terminal_size_windows():
sizex = right - left + 1
sizey = bottom - top + 1
return sizex, sizey
except:
except: # NOQA
pass
@ -72,7 +72,7 @@ def _get_terminal_size_tput():
cols = int(subprocess.check_call(shlex.split('tput cols')))
rows = int(subprocess.check_call(shlex.split('tput lines')))
return (cols, rows)
except:
except: # NOQA
pass
@ -84,7 +84,7 @@ def _get_terminal_size_linux():
cr = struct.unpack('hh',
fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
return cr
except:
except: # NOQA
pass
cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)
if not cr:
@ -92,12 +92,12 @@ def _get_terminal_size_linux():
fd = os.open(os.ctermid(), os.O_RDONLY)
cr = ioctl_GWINSZ(fd)
os.close(fd)
except:
except: # NOQA
pass
if not cr:
try:
cr = (os.environ['LINES'], os.environ['COLUMNS'])
except:
except: # NOQA
return None
return int(cr[1]), int(cr[0])

@ -323,7 +323,7 @@ class TraceCmdParser(object):
continue
body_parser = EVENT_PARSER_MAP.get(event_name, default_body_parser)
if isinstance(body_parser, str) or isinstance(body_parser, re._pattern_type): # pylint: disable=protected-access
if isinstance(body_parser, (str, re._pattern_type)): # pylint: disable=protected-access
body_parser = regex_body_parser(body_parser)
yield TraceCmdEvent(parser=body_parser, **match.groupdict())

@ -34,12 +34,11 @@ from bisect import insort
if sys.version_info[0] == 3:
from urllib.parse import quote, unquote # pylint: disable=no-name-in-module, import-error
from past.builtins import basestring # pylint: disable=redefined-builtin
long = int
long = int # pylint: disable=redefined-builtin
else:
from urllib import quote, unquote
from urllib import quote, unquote # pylint: disable=no-name-in-module
# pylint: disable=wrong-import-position
from collections import defaultdict, MutableMapping
from copy import copy
from functools import total_ordering
from future.utils import with_metaclass
@ -59,6 +58,7 @@ def list_of_strs(value):
raise ValueError(value)
return list(map(str, value))
list_of_strings = list_of_strs
@ -71,6 +71,7 @@ def list_of_ints(value):
raise ValueError(value)
return list(map(int, value))
list_of_integers = list_of_ints
@ -325,7 +326,7 @@ class prioritylist(object):
def _delete(self, priority, priority_index):
del self.elements[priority][priority_index]
self.size -= 1
if len(self.elements[priority]) == 0:
if not self.elements[priority]:
self.priorities.remove(priority)
self._cached_elements = None
@ -385,10 +386,11 @@ class toggle_set(set):
return toggle_set(pod)
@staticmethod
def merge(source, dest):
if '~~' in dest:
dest.remove('~~')
return dest
def merge(dest, source):
if '~~' in source:
return toggle_set(source)
dest = toggle_set(dest)
for item in source:
if item not in dest:
#Disable previously enabled item
@ -409,12 +411,10 @@ class toggle_set(set):
set.__init__(self, *args)
def merge_with(self, other):
other = copy(other)
return toggle_set.merge(self, toggle_set(other))
return toggle_set.merge(self, other)
def merge_into(self, other):
new_self = copy(self)
return toggle_set.merge(other, new_self)
return toggle_set.merge(other, self)
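# Sketch of the intended semantics after this change (as read from the code):
# merge_with(other) merges `other` into `self`, with `other` taking
# precedence; merge_into(other) is the reverse. A source set containing
# '~~' resets the result to exactly that source set.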
def add(self, item):
if item not in self:
@ -476,6 +476,7 @@ class obj_dict(MutableMapping):
def from_pod(pod):
return obj_dict(pod)
# pylint: disable=super-init-not-called
def __init__(self, values=None, not_in_dict=None):
self.__dict__['dict'] = dict(values or {})
self.__dict__['not_in_dict'] = not_in_dict if not_in_dict is not None else []
@ -634,7 +635,10 @@ def enum(args, start=0, step=1):
if name == attr:
return attr
raise ValueError('Invalid enum value: {}'.format(repr(name)))
try:
return Enum.from_pod(name)
except ValueError:
raise ValueError('Invalid enum value: {}'.format(repr(name)))
reserved = ['values', 'levels', 'names']
@ -652,8 +656,8 @@ def enum(args, start=0, step=1):
n += step
setattr(Enum, 'levels', levels)
setattr(Enum, 'values', [lv.value for lv in levels])
setattr(Enum, 'names', [lv.name for lv in levels])
setattr(Enum, 'values', [lvl.value for lvl in levels])
setattr(Enum, 'names', [lvl.name for lvl in levels])
return Enum

@ -18,6 +18,7 @@ package com.arm.wa.uiauto.androbench;
import android.app.Activity;
import android.os.Bundle;
import android.graphics.Rect;
import android.support.test.runner.AndroidJUnit4;
import android.support.test.uiautomator.UiObject;
import android.support.test.uiautomator.UiObjectNotFoundException;
@ -40,6 +41,11 @@ public class UiAutomation extends BaseUiAutomation {
public static String TAG = "UXPERF";
@Test
public void setup() throws Exception {
dismissAndroidVersionPopup();
}
@Test
public void runWorkload() throws Exception {
runBenchmark();
@ -59,7 +65,8 @@ public class UiAutomation extends BaseUiAutomation {
} else {
UiObject bench =
mDevice.findObject(new UiSelector().resourceIdMatches("com.andromeda.androbench2:id/btnStartingBenchmarking"));
bench.click();
Rect bounds = bench.getBounds();
mDevice.click(bounds.centerX(), bounds.centerY());
}
UiObject btn_yes= mDevice.findObject(selector.textContains("Yes")
.className("android.widget.Button"));
@ -74,26 +81,40 @@ public class UiAutomation extends BaseUiAutomation {
public void getScores() throws Exception {
UiSelector selector = new UiSelector();
UiObject seqRead =
mDevice.findObject(selector.text("Sequential Read").fromParent(selector.index(1)));
UiObject seqWrite =
mDevice.findObject(selector.text("Sequential Write").fromParent(selector.index(1)));
UiObject ranRead =
mDevice.findObject(selector.text("Random Read").fromParent(selector.index(1)));
UiObject ranWrite =
mDevice.findObject(selector.text("Random Write").fromParent(selector.index(1)));
UiObject sqlInsert =
mDevice.findObject(selector.text("SQLite Insert").fromParent(selector.index(1)));
UiObject sqlUpdate =
mDevice.findObject(selector.text("SQLite Update").fromParent(selector.index(1)));
UiObject sqlDelete =
mDevice.findObject(selector.text("SQLite Delete").fromParent(selector.index(1)));
UiScrollable scrollView = new UiScrollable(new UiSelector().scrollable(true));
Log.d(TAG, "Sequential Read Score " + seqRead.getText());
if (scrollView.exists()){scrollView.scrollIntoView(seqWrite); }
Log.d(TAG, "Sequential Write Score " + seqWrite.getText());
if (scrollView.exists()){scrollView.scrollIntoView(ranRead);}
Log.d(TAG, "Random Read Score " + ranRead.getText());
if (scrollView.exists()){scrollView.scrollIntoView(ranWrite);}
Log.d(TAG, "Random Write Score " + ranWrite.getText());
if (scrollView.exists()){scrollView.scrollIntoView(sqlInsert);}
Log.d(TAG, "SQL Insert Score " + sqlInsert.getText());
if (scrollView.exists()){scrollView.scrollIntoView(sqlUpdate);}
Log.d(TAG, "SQL Update Score " + sqlUpdate.getText());
if (scrollView.exists()){scrollView.scrollIntoView(sqlDelete);}
Log.d(TAG, "SQL Delete Score " + sqlDelete.getText());
}
}

@ -55,7 +55,7 @@ class Antutu(ApkUiautoWorkload):
try:
result = float(match.group(1))
except ValueError:
result = 'NaN' # pylint: disable=redefined-variable-type
result = float('NaN')
entry = regex.pattern.rsplit(None, 1)[0]
context.add_metric(entry, result, lower_is_better=False)
expected_results -= 1

@ -42,6 +42,11 @@ public class UiAutomation extends BaseUiAutomation {
public static String TestButton6 = "com.antutu.ABenchMark:id/start_test_text";
private static int initialTimeoutSeconds = 20;
@Test
public void setup() throws Exception {
dismissAndroidVersionPopup();
}
@Test
public void runWorkload() throws Exception{
hitTest();
@ -54,7 +59,7 @@ public class UiAutomation extends BaseUiAutomation {
}
public void hitTest() throws Exception {
UiObject testbutton =
mDevice.findObject(new UiSelector().resourceId("com.antutu.ABenchMark:id/main_test_start_title"));
testbutton.click();
sleep(1);
@ -68,14 +73,14 @@ public class UiAutomation extends BaseUiAutomation {
public void getScores() throws Exception {
//Expand, Extract and Close CPU sub scores
UiObject cpuscores =
mDevice.findObject(new UiSelector().text("CPU"));
cpuscores.click();
UiObject cpumaths =
mDevice.findObject(new UiSelector().text("CPU Mathematics Score").fromParent(new UiSelector().index(3)));
UiObject cpucommon =
mDevice.findObject(new UiSelector().text("CPU Common Use Score").fromParent(new UiSelector().index(3)));
UiObject cpumulti =
mDevice.findObject(new UiSelector().text("CPU Multi-Core Score").fromParent(new UiSelector().index(3)));
Log.d(TAG, "CPU Maths Score " + cpumaths.getText());
Log.d(TAG, "CPU Common Score " + cpucommon.getText());
@ -83,14 +88,14 @@ public class UiAutomation extends BaseUiAutomation {
cpuscores.click();
//Expand, Extract and Close GPU sub scores
UiObject gpuscores =
mDevice.findObject(new UiSelector().text("GPU"));
gpuscores.click();
UiObject gpumaroon =
mDevice.findObject(new UiSelector().text("3D [Marooned] Score").fromParent(new UiSelector().index(3)));
UiObject gpucoast =
mDevice.findObject(new UiSelector().text("3D [Coastline] Score").fromParent(new UiSelector().index(3)));
UiObject gpurefinery =
mDevice.findObject(new UiSelector().text("3D [Refinery] Score").fromParent(new UiSelector().index(3)));
Log.d(TAG, "GPU Marooned Score " + gpumaroon.getText());
Log.d(TAG, "GPU Coastline Score " + gpucoast.getText());
@ -98,16 +103,16 @@ public class UiAutomation extends BaseUiAutomation {
gpuscores.click();
//Expand, Extract and Close UX sub scores
UiObject uxscores =
mDevice.findObject(new UiSelector().text("UX"));
uxscores.click();
UiObject security =
mDevice.findObject(new UiSelector().text("Data Security Score").fromParent(new UiSelector().index(3)));
UiObject dataprocessing =
mDevice.findObject(new UiSelector().text("Data Processing Score").fromParent(new UiSelector().index(3)));
UiObject imageprocessing =
mDevice.findObject(new UiSelector().text("Image Processing Score").fromParent(new UiSelector().index(3)));
UiObject uxscore =
mDevice.findObject(new UiSelector().text("User Experience Score").fromParent(new UiSelector().index(3)));
Log.d(TAG, "Data Security Score " + security.getText());
Log.d(TAG, "Data Processing Score " + dataprocessing.getText());
@ -116,12 +121,12 @@ public class UiAutomation extends BaseUiAutomation {
uxscores.click();
//Expand, Extract and Close MEM sub scores
UiObject memscores =
mDevice.findObject(new UiSelector().text("MEM"));
memscores.click();
UiObject ramscore =
mDevice.findObject(new UiSelector().text("RAM Score").fromParent(new UiSelector().index(3)));
UiObject romscore =
mDevice.findObject(new UiSelector().text("ROM Score").fromParent(new UiSelector().index(3)));
Log.d(TAG, "RAM Score " + ramscore.getText());
Log.d(TAG, "ROM Score " + romscore.getText());

@ -36,6 +36,11 @@ public class UiAutomation extends BaseUiAutomation {
public Bundle parameters;
public String packageID;
@Test
public void setup() throws Exception {
dismissAndroidVersionPopup();
}
@Test
public void runWorkload() throws Exception {
startTest();

@ -150,10 +150,10 @@ class ExoPlayer(ApkWorkload):
return filepath
else:
if len(files) > 1:
self.logger.warn('Multiple files found for {} format. Using {}.'
.format(self.format, files[0]))
self.logger.warn('Use "filename"parameter instead of '
'"format" to specify a different file.')
self.logger.warning('Multiple files found for {} format. Using {}.'
.format(self.format, files[0]))
self.logger.warning('Use "filename" parameter instead of '
'"format" to specify a different file.')
return files[0]
def init_resources(self, context): # pylint: disable=unused-argument

@ -52,6 +52,10 @@ class Geekbench(ApkUiautoWorkload):
"""
summary_metrics = ['score', 'multicore_score']
versions = {
'4.3.1': {
'package': 'com.primatelabs.geekbench',
'activity': '.HomeActivity',
},
'4.2.0': {
'package': 'com.primatelabs.geekbench',
'activity': '.HomeActivity',

@ -62,6 +62,7 @@ public class UiAutomation extends BaseUiAutomation {
@Override
public void setup() throws Exception {
initialize_instrumentation();
dismissAndroidVersionPopup();
if (!isCorporate)
dismissEula();
@ -125,7 +126,7 @@ public class UiAutomation extends BaseUiAutomation {
public void runBenchmarks() throws Exception {
UiObject runButton =
mDevice.findObject(new UiSelector().textContains("Run Benchmarks")
mDevice.findObject(new UiSelector().textContains("Run Benchmark")
.className("android.widget.Button"));
if (!runButton.waitForExists(WAIT_TIMEOUT_5SEC)) {
throw new UiObjectNotFoundException("Could not find Run button");

@ -0,0 +1,71 @@
# Copyright 2014-2016 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
from wa import ApkUiautoWorkload, WorkloadError, Parameter
class Gfxbench(ApkUiautoWorkload):
name = 'gfxbench-corporate'
package_names = ['net.kishonti.gfxbench.gl.v50000.corporate']
clear_data_on_reset = False
regex_matches = [re.compile(r'Car Chase score (.+)'),
re.compile(r'Car Chase Offscreen score (.+)'),
re.compile(r'Manhattan 3.1 score (.+)'),
re.compile(r'1080p Manhattan 3.1 Offscreen score (.+)'),
re.compile(r'1440p Manhattan 3.1 Offscreen score (.+)'),
re.compile(r'Tessellation score (.+)'),
re.compile(r'Tessellation Offscreen score (.+)')]
score_regex = re.compile(r'.*?([\d.]+).*')
description = '''
Execute a subset of graphical performance benchmarks
Test description:
1. Open the gfxbench application
2. Execute Car Chase, Manhattan and Tessellation benchmarks
'''
parameters = [
Parameter('timeout', kind=int, default=3600,
description=('Timeout for an iteration of the benchmark.')),
]
def __init__(self, target, **kwargs):
super(Gfxbench, self).__init__(target, **kwargs)
self.gui.timeout = self.timeout
def update_output(self, context):
super(Gfxbench, self).update_output(context)
expected_results = len(self.regex_matches)
logcat_file = context.get_artifact_path('logcat')
with open(logcat_file) as fh:
for line in fh:
for regex in self.regex_matches:
match = regex.search(line)
# Check if we have matched the score string in logcat
if match:
score_match = self.score_regex.search(match.group(1))
# Check if there is valid number found for the score.
if score_match:
result = float(score_match.group(1))
else:
result = 'NaN'
entry = regex.pattern.rsplit(None, 1)[0]
context.add_metric(entry, result, 'FPS', lower_is_better=False)
expected_results -= 1
if expected_results > 0:
msg = "The GFXBench workload has failed. Expected {} scores, Detected {} scores."
raise WorkloadError(msg.format(len(self.regex_matches), expected_results))
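# Sketch: a logcat line such as "Car Chase score 23.5 fps" first matches
# regex_matches[0] with group(1) == '23.5 fps'; score_regex then extracts
# 23.5, and entry becomes 'Car Chase score' (the pattern minus its last
# token), which is reported as an FPS metric.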


@ -0,0 +1,41 @@
apply plugin: 'com.android.application'
def packageName = "com.arm.wa.uiauto.gfxbench"
android {
compileSdkVersion 25
buildToolsVersion "25.0.3"
defaultConfig {
applicationId "${packageName}"
minSdkVersion 18
targetSdkVersion 25
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("$project.buildDir/apk/${packageName}.apk")
}
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support.test:runner:0.5'
compile 'com.android.support.test:rules:0.5'
compile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
compile(name: 'uiauto', ext:'aar')
}
repositories {
flatDir {
dirs 'libs'
}
}

@ -0,0 +1,13 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.arm.wa.uiauto.gfxbench"
android:versionCode="1"
android:versionName="1.0">
<instrumentation
android:name="android.support.test.runner.AndroidJUnitRunner"
android:targetPackage="${applicationId}"/>
</manifest>

@ -0,0 +1,202 @@
/* Copyright 2014-2016 ARM Limited
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.arm.wa.uiauto.gfxbench;
import android.os.Bundle;
import android.support.test.runner.AndroidJUnit4;
import android.support.test.uiautomator.UiObject;
import android.support.test.uiautomator.UiObjectNotFoundException;
import android.support.test.uiautomator.UiSelector;
import android.support.test.uiautomator.UiScrollable;
import android.util.Log;
import android.graphics.Rect;
import com.arm.wa.uiauto.BaseUiAutomation;
import com.arm.wa.uiauto.ActionLogger;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import java.util.concurrent.TimeUnit;
@RunWith(AndroidJUnit4.class)
public class UiAutomation extends BaseUiAutomation {
private int networkTimeoutSecs = 30;
private long networkTimeout = TimeUnit.SECONDS.toMillis(networkTimeoutSecs);
public static String TAG = "UXPERF";
@Before
public void initialize(){
initialize_instrumentation();
}
@Test
public void setup() throws Exception{
setScreenOrientation(ScreenOrientation.NATURAL);
clearFirstRun();
//Calculate the location of the test selection button
UiObject circle =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/main_circleControl")
.className("android.widget.RelativeLayout"));
Rect bounds = circle.getBounds();
int selectx = bounds.width()/4;
selectx = bounds.centerX() + selectx;
int selecty = bounds.height()/4;
selecty = bounds.centerY() + selecty;
Log.d(TAG, "maxx " + selectx);
Log.d(TAG, "maxy " + selecty);
mDevice.click(selectx,selecty);
//Disable the tests
toggleTest("High-Level Tests");
toggleTest("Low-Level Tests");
toggleTest("Special Tests");
toggleTest("Fixed Time Test");
//Enable sub tests
toggleTest("Car Chase");
toggleTest("1080p Car Chase Offscreen");
toggleTest("Manhattan 3.1");
toggleTest("1080p Manhattan 3.1 Offscreen");
toggleTest("1440p Manhattan 3.1.1 Offscreen");
toggleTest("Tessellation");
toggleTest("1080p Tessellation Offscreen");
}
@Test
public void runWorkload() throws Exception {
runBenchmark();
getScores();
}
@Test
public void teardown() throws Exception{
unsetScreenOrientation();
}
public void clearFirstRun() throws Exception {
UiObject accept =
mDevice.findObject(new UiSelector().resourceId("android:id/button1")
.className("android.widget.Button"));
if (accept.exists()){
accept.click();
sleep(5);
}
UiObject sync =
mDevice.findObject(new UiSelector().text("Data synchronization")
.className("android.widget.TextView"));
if (!sync.exists()){
sync = mDevice.findObject(new UiSelector().text("Pushed data not found")
.className("android.widget.TextView"));
}
if (sync.exists()){
UiObject data =
mDevice.findObject(new UiSelector().resourceId("android:id/button1")
.className("android.widget.Button"));
data.click();
}
UiObject home =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/main_homeBack")
.className("android.widget.LinearLayout"));
home.waitForExists(300000);
}
public void runBenchmark() throws Exception {
//Start the tests
UiObject start =
mDevice.findObject(new UiSelector().text("Start"));
start.click();
//Wait for results
UiObject complete =
mDevice.findObject(new UiSelector().text("High-Level Tests")
.className("android.widget.TextView"));
complete.waitForExists(1200000);
}
public void getScores() throws Exception {
UiScrollable list = new UiScrollable(new UiSelector().scrollable(true));
UiObject results =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"));
int number_of_results = results.getChildCount();
//High Level Tests
UiObject carchase =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"))
.getChild(new UiSelector().index(1))
.getChild(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/updated_result_item_subresult"));
Log.d(TAG, "Car Chase score " + carchase.getText());
UiObject carchaseoff =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"))
.getChild(new UiSelector().index(2))
.getChild(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/updated_result_item_subresult"));
Log.d(TAG, "Car Chase Offscreen score " + carchaseoff.getText());
UiObject manhattan =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"))
.getChild(new UiSelector().index(3))
.getChild(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/updated_result_item_subresult"));
Log.d(TAG, "Manhattan 3.1 score " + manhattan.getText());
UiObject manhattan1080 =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"))
.getChild(new UiSelector().index(4))
.getChild(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/updated_result_item_subresult"));
Log.d(TAG, "1080p Manhattan 3.1 Offscreen score " + manhattan1080.getText());
UiObject manhattan1440 =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"))
.getChild(new UiSelector().index(5))
.getChild(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/updated_result_item_subresult"));
Log.d(TAG, "1440p Manhattan 3.1 Offscreen score " + manhattan1440.getText());
//Low Level Tests
UiObject tess =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"))
.getChild(new UiSelector().index(7))
.getChild(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/updated_result_item_subresult"));
if (!tess.exists() && list.waitForExists(60)) {
list.scrollIntoView(tess);
}
Log.d(TAG, "Tessellation score " + tess.getText());
UiObject tessoff =
mDevice.findObject(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/results_testList"))
.getChild(new UiSelector().index(8))
.getChild(new UiSelector().resourceId("net.kishonti.gfxbench.gl.v50000.corporate:id/updated_result_item_subresult"));
if (!tessoff.exists() && list.waitForExists(60)) {
list.scrollIntoView(tessoff);
}
Log.d(TAG, "Tessellation Offscreen score " + tessoff.getText());
}
public void toggleTest(String testname) throws Exception {
UiScrollable list = new UiScrollable(new UiSelector().scrollable(true));
UiObject test =
mDevice.findObject(new UiSelector().text(testname));
if (!test.exists() && list.waitForExists(60)) {
list.scrollIntoView(test);
}
test.click();
}
}

@ -0,0 +1,23 @@
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.2'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
