
doc: Restructure

Restructure the documentation to be split into `User Information` and
`Developer Information`, and split the how to guides into their
corresponding section.
Marc Bonnici
2018-06-20 17:43:52 +01:00
committed by setrofim
parent 3c0f1968c5
commit 6c93590062
28 changed files with 122 additions and 86 deletions


@@ -0,0 +1,28 @@
.. _developer_reference:
********************
Developer Reference
********************
.. contents::
:depth: 3
:local:
.. include:: developer_information/developer_reference/execution_model.rst
-----------------
.. include:: developer_information/developer_reference/writing_plugins.rst
-----------------
.. include:: developer_information/developer_reference/contributing.rst
-----------------
.. include:: developer_information/developer_reference/revent.rst
-----------------
.. include:: developer_information/developer_reference/serialization.rst

New image file added (42 KiB); diff suppressed because one or more lines are too long.

New image file added (63 KiB); diff suppressed because one or more lines are too long.


@@ -0,0 +1,55 @@
Contributing Code
=================
We welcome code contributions via GitHub pull requests. To help with the
maintainability of the code base we ask that the code uses a coding style
consistent with the rest of the WA code. Briefly, it is:
- `PEP8 <https://www.python.org/dev/peps/pep-0008/>`_ with line length and block
comment rules relaxed (the wrapper for PEP8 checker inside ``dev_scripts``
will run it with appropriate configuration).
- Four-space indentation (*no tabs!*).
- Title-case for class names, underscore-delimited lower case for functions,
methods, and variables.
- Use descriptive variable names. Delimit words with ``'_'`` for readability.
Avoid shortening words, skipping vowels, etc (common abbreviations such as
"stats" for "statistics", "config" for "configuration", etc are OK). Do
*not* use Hungarian notation (so prefer ``birth_date`` over ``dtBirth``).
New extensions should also follow implementation guidelines specified in the
:ref:`writing-plugins` section of the documentation.
We ask that the following checks are performed on the modified code prior to
submitting a pull request:
.. note:: You will need pylint and pep8 static checkers installed::
pip install pep8
pip install pylint
It is recommended that you install via pip rather than through your
distribution's package manager because the latter is likely to
contain out-of-date versions of these tools.
- ``./dev_scripts/pylint`` should be run without arguments and should produce no
output (any output should be addressed by making appropriate changes in the
code or adding a pylint ignore directive, if there is a good reason for
keeping the code as is).
- ``./dev_scripts/pep8`` should be run without arguments and should produce no
output (any output should be addressed by making appropriate changes in the
code).
- If the modifications touch core framework (anything under ``wa/framework``), unit
tests should be run using ``nosetests``, and they should all pass.
- If significant additions have been made to the framework, unit
tests should be added to cover the new functionality.
- If modifications have been made to documentation (this includes description
attributes for Parameters and Extensions), documentation should be built to
make sure there are no errors or warnings during the build process, and a visual
inspection of new/updated sections in the resulting HTML should be performed to
ensure everything renders as expected.
Once your contribution is ready, please follow the instructions in `GitHub
documentation <https://help.github.com/articles/creating-a-pull-request/>`_ to
create a pull request.


@@ -0,0 +1,158 @@
Framework Overview
==================
Execution Model
---------------
At the high level, the execution model looks as follows:
.. image:: developer_information/developer_reference/WA_Execution.svg
:scale: 100 %
After some initial setup, the framework initializes the device, loads and
initializes instruments and output processors, and begins executing jobs defined
by the workload specs in the agenda. Each job executes in the following basic stages:
initialize
Perform any once-per-run initialization of a workload instance, e.g.
binary resource resolution.
setup
Initial setup for the workload is performed. E.g. required assets are
deployed to the devices, required services or applications are launched,
etc. Run time configuration of the device for the workload is also
performed at this time.
setup_rerun (apk based workloads only)
For some apk based workloads the application is required to be started
twice. If the ``requires_rerun`` attribute of the workload is set to
``True`` then after the first setup method is called the application
will be killed and then restarted. This method can then be used to
perform any additional setup required.
run
This is when the workload actually runs. This is defined as the part of
the workload that is to be measured. Exactly what happens at this stage
depends entirely on the workload.
extract results
Extract any results that have been generated during the execution of the
workload from the device back to the host. Any files pulled from the
device should be added as artifacts to the run context.
update output
Perform any required parsing and processing of any collected results and
add any generated metrics to the run context.
teardown
Final clean up is performed, e.g. applications may be closed, files
generated during execution may be deleted, etc.
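To make these stages concrete, below is a minimal sketch of how they map onto
methods of a ``Workload`` subclass (the class name and the empty bodies are
placeholders; complete, working examples are given in the how-to guides):

.. code-block:: python

    from wa import Workload

    class ExampleWorkload(Workload):

        name = 'example'

        def initialize(self, context):
            pass  # once-per-run initialization, e.g. binary resource resolution

        def setup(self, context):
            pass  # deploy assets, launch applications, configure the device

        def run(self, context):
            pass  # the part of the workload that is actually measured

        def extract_results(self, context):
            pass  # pull generated result files from the device back to the host

        def update_output(self, context):
            pass  # parse the results and add metrics to the run context

        def teardown(self, context):
            pass  # close applications and delete files generated during execution
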
Signals are dispatched (see :ref:`below <signal_dispatch>`) at each stage of
workload execution, which installed instruments can hook into in order to
collect measurements, alter workload execution, etc. Instrument implementations
usually mirror those of workloads, defining initialization, setup, teardown and
output processing stages for a particular instrument. Instead of a ``run``
method, instruments usually implement ``start`` and ``stop`` methods, which are
triggered just before and just after a workload run. However, the signal
dispatch mechanism gives a high degree of flexibility to instruments allowing
them to hook into almost any stage of a WA run (apart from the very early
initialization).
Metrics and artifacts generated by workloads and instruments are accumulated by
the framework and are then passed to active output processors. This happens
after each individual workload execution and at the end of the run. An output
processor may choose to act at either or both of these points.
Control Flow
------------
This section goes into more detail explaining the relationship between the major
components of the framework and how control passes between them during a run. It
will only go through the major transitions and interactions and will not attempt
to describe every single thing that happens.
.. note:: This is the control flow for the ``wa run`` command which is the main
functionality of WA. Other commands are much simpler and most of what
is described below does not apply to them.
#. :class:`wa.framework.entrypoint` parses the command from the arguments, creates a
:class:`wa.framework.configuration.execution.ConfigManager` and executes the run
command (:class:`wa.commands.run.RunCommand`) passing it the ConfigManager.
#. Run command initializes the output directory and creates a
:class:`wa.framework.configuration.parsers.AgendaParser` and will parse the
agenda and populate the ConfigManager based on the command line arguments.
Finally it instantiates a :class:`wa.framework.execution.Executor` and
passes it the completed ConfigManager.
#. The Executor uses the ConfigManager to create a
:class:`wa.framework.configuration.core.RunConfiguration` and fully defines the
configuration for the run (which will be serialised into ``__meta`` subdirectory
under the output directory).
#. The Executor proceeds to instantiate a TargetManager, used to handle the
device connection and configuration, and a
:class:`wa.framework.execution.ExecutionContext` which is used to track the
current state of the run execution and also serves as a means of
communication between the core framework and plugins. After this any required
instruments and output processors are initialized and installed.
#. Finally, the Executor instantiates a :class:`wa.framework.execution.Runner`,
initializes its job queue with workload specs from the RunConfiguration, and
kicks it off.
#. The Runner performs the run time configuration of the device and goes
through the workload specs (in the order defined by ``execution_order``
setting), running each spec according to the execution model described in the
previous section and sending signals (see below) at appropriate points during
execution.
#. At the end of the run, the control is briefly passed back to the Executor,
which outputs a summary for the run.
.. _signal_dispatch:
Signal Dispatch
---------------
WA uses the `louie <https://github.com/11craft/louie/>`_ (formerly,
pydispatcher) library for signal dispatch. Callbacks can be registered for
signals emitted during the run. WA uses a version of louie that has been
modified to introduce priority to registered callbacks (so that callbacks that
are known to be slow can be registered with a lower priority and therefore do not
interfere with other callbacks).
This mechanism is abstracted for instruments. Methods of an
:class:`wa.framework.Instrument` subclass automatically get hooked to
appropriate signals based on their names when the instrument is "installed"
for the run. Priority can then be specified by adding ``extremely_fast``,
``very_fast``, ``fast`` , ``slow``, ``very_slow`` or ``extremely_slow``
:ref:`decorators <instruments_method_map>` to the method definitions.
The full list of method names and the signals they map to may be viewed
:ref:`here <instruments_method_map>`.
The signal dispatching mechanism may also be used directly, for example to
dynamically register callbacks at runtime or allow plugins other than
``Instruments`` to access stages of the run they are normally not aware of.
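For example, the sketch below registers a callback directly against a signal. It
assumes that the ``wa.framework.signal`` module exposes a ``connect()`` function
and named signal objects such as ``TARGET_CONNECTED``; check
``wa/framework/signal.py`` for the exact names and signatures.

.. code-block:: python

    import wa.framework.signal as signal

    def on_target_connected(*args, **kwargs):
        # Invoked whenever the TARGET_CONNECTED signal is dispatched.
        print('target connected')

    # Register the callback at runtime. The module and attribute names here are
    # assumptions for illustration, not a definitive API reference.
    signal.connect(on_target_connected, signal.TARGET_CONNECTED)
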
Signals can be either paired or non-paired. Non-paired signals are one-off
signals that are sent to indicate that a special event or transition in
execution stages has occurred, for example ``TARGET_CONNECTED``. Paired signals
are used to signify the start and end of a particular event. If the start signal
has been sent, the end signal is guaranteed to also be sent, whether the
operation was a success or not; in the case of correct operation, an additional
success signal will also be sent. For example, in the event of a successful
reboot of the device, the following signals will be sent: ``BEFORE_REBOOT``,
``SUCCESSFUL_REBOOT`` and ``AFTER_REBOOT``.
An overview of what signals are sent at which point during execution can be seen
below. Most of the paired signals have been removed from the diagram for clarity
and shown as being dispatched from a particular stage of execution; however, in
reality these signals will be sent just before and just after these stages are
executed. As mentioned above, for each of these signals there will be at least 2
and up to 3 signals sent. If the "BEFORE_X" signal (sent just before the stage
is run) is sent, then the "AFTER_X" signal (sent just after the stage is run) is
guaranteed to also be sent, and under normal operation a "SUCCESSFUL_X" signal
is also sent just after the stage has been completed. The diagram also lists the
conditional signals that can be sent at any time during execution if something
unexpected happens, for example an error occurs or the user aborts the run.
.. image:: developer_information/developer_reference/WA_Signal_Dispatch.svg
:scale: 100 %
See Also
--------
- :ref:`Instrumentation Signal-Method Mapping <instruments_method_map>`.


@@ -0,0 +1,337 @@
Revent Recordings
=================
Convention for Naming revent Files for Revent Workloads
-------------------------------------------------------------------------------
There is a convention for naming revent files which you should follow if you
want to record your own revent files. Each revent file must start with the
device name (case sensitive), followed by a dot '.', then the stage name,
then '.revent'. All your custom revent files should reside at
``'~/.workload_automation/dependencies/WORKLOAD NAME/'``. These are the current
supported stages:
:setup: This stage is where the application is loaded (if present). It is
a good place to record any setup tasks needed to get ready for the
main part of the workload to start.
:run: This stage is where the main work of the workload should be performed.
This will allow for more accurate results if the revent file for this
stage only records the main actions under test.
:extract_results: This stage is used after the workload has been completed
to retrieve any metrics from the workload e.g. a score.
:teardown: This stage is where any final actions should be performed to
clean up the workload.
Only the run stage is mandatory; the remaining stages will be replayed if a
recording is present, otherwise no actions will be performed for that particular
stage.
For instance, to add custom revent files for a device named "mydevice" and
a workload named "myworkload", you need to add the revent files to the directory
``$WA_USER_HOME/dependencies/myworkload/revent_files``, creating it if
necessary. ::
mydevice.setup.revent
mydevice.run.revent
mydevice.extract_results.revent
mydevice.teardown.revent
Any revent file in the dependencies will always overwrite the revent file in the
workload directory. So for example it is possible to just provide one revent for
setup in the dependencies and use the run.revent that is in the workload directory.
File format of revent recordings
--------------------------------
You do not need to understand the recording format in order to use revent. This
section is intended for those looking to extend revent in some way, or to
utilize revent recordings for other purposes.
Format Overview
^^^^^^^^^^^^^^^
Recordings are stored in a binary format. A recording consists of three
sections::
+-+-+-+-+-+-+-+-+-+-+-+
| Header |
+-+-+-+-+-+-+-+-+-+-+-+
| |
| Device Description |
| |
+-+-+-+-+-+-+-+-+-+-+-+
| |
| |
| Event Stream |
| |
| |
+-+-+-+-+-+-+-+-+-+-+-+
The header contains metadata describing the recording. The device description
contains information about input devices involved in this recording. Finally,
the event stream contains the recorded input events.
All fields are either fixed size or prefixed with their length or the number of
(fixed-sized) elements.
.. note:: All values below are little endian
Recording Header
^^^^^^^^^^^^^^^^
A revent recording header has the following structure
* It starts with the "magic" string ``REVENT`` to indicate that this is an
revent recording.
* The magic is followed by a 16 bit version number. This indicates the format
version of the recording that follows. Current version is ``2``.
* The next 16 bits indicate the type of the recording. This dictates the
structure of the Device Description section. Valid values are:
``0``
This is a general input event recording. The device description
contains a list of paths from which the events were recorded.
``1``
This is a gamepad recording. The device description contains the
description of the gamepad used to create the recording.
* The header is zero-padded to 128 bits.
::
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 'R' | 'E' | 'V' | 'E' |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 'N' | 'T' | Version |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Mode | PADDING |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| PADDING |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
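As an illustrative sketch, the header fields described above can be decoded with
Python's ``struct`` module (the file name below is a placeholder):

.. code:: python

    import struct

    with open('recording.revent', 'rb') as fh:
        header = fh.read(16)        # the header is zero-padded to 128 bits
        magic = header[:6]          # should be b'REVENT'
        # version and mode are little-endian 16-bit fields following the magic
        version, mode = struct.unpack('<HH', header[6:10])
    print(magic, version, mode)
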
Device Description
^^^^^^^^^^^^^^^^^^
This section describes the input devices used in the recording. Its structure is
determined by the value of the ``Mode`` field in the header.
general recording
^^^^^^^^^^^^^^^^^
.. note:: This is the only format supported prior to version ``2``.
The recording has been made from all available input devices. This section
contains the list of ``/dev/input`` paths for the devices, prefixed with the
total number of devices recorded.
::
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of devices |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| Device paths +-+-+-+-+-+-+-+-+-+-+-+-+
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Similarly, each device path is a length-prefixed string. Unlike C strings, the
path is *not* NULL-terminated.
::
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of device path |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| Device path |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
gamepad recording
^^^^^^^^^^^^^^^^^
The recording has been made from a specific gamepad. All events in the stream
will be for that device only. The section describes the device properties that
will be used to create a virtual input device using ``/dev/uinput``. Please
see ``linux/input.h`` header in the Linux kernel source for more information
about the fields in this section.
::
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| bustype | vendor |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| product | version |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| name_length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| name |
| |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ev_bits |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| |
| key_bits (96 bytes) |
| |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| |
| rel_bits (96 bytes) |
| |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| |
| abs_bits (96 bytes) |
| |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| num_absinfo |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| |
| |
| |
| absinfo entries |
| |
| |
| |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Each ``absinfo`` entry consists of six 32 bit values. The number of entries is
determined by the ``abs_bits`` field.
::
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| value |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| minimum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| maximum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| fuzz |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| flat |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| resolution |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Event stream
^^^^^^^^^^^^
The majority of a revent recording will be made up of the input events that were
recorded. The event stream is prefixed with the number of events in the stream,
and start and end times for the recording.
::
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of events |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of events (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Start Time Seconds |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Start Time Seconds (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Start Time Microseconds |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Start Time Microseconds (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| End Time Seconds |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| End Time Seconds (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| End Time Microseconds |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| End Time Microseconds (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| |
| Events |
| |
| |
| +-+-+-+-+-+-+-+-+-+-+-+-+
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Event structure
^^^^^^^^^^^^^^^
Each event entry is structured as follows:
* An unsigned short integer representing which device from the list of device paths
this event is for (zero indexed). E.g. Device ID = 3 would be the 4th
device in the list of device paths.
* An unsigned long integer representing the number of seconds since "epoch" when
the event was recorded.
* An unsigned long integer representing the microseconds part of the timestamp.
* An unsigned integer representing the event type
* An unsigned integer representing the event code
* An unsigned integer representing the event value
For more information about the event type, code and value please read:
https://www.kernel.org/doc/Documentation/input/event-codes.txt
::
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Device ID | Timestamp Seconds |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Timestamp Seconds (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Timestamp Seconds (cont.) | stamp Microseconds |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Timestamp Microseconds (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Timestamp Microseconds (cont.) | Event Type |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Event Code | Event Value |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Event Value (cont.) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
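The layout above translates directly into a ``struct`` format string. The sketch
below follows the field widths shown in the diagram (16-bit device ID, 64-bit
seconds and microseconds, 16-bit type and code, 32-bit value, all little endian);
consult the revent source for the authoritative widths and signedness.

.. code:: python

    import struct

    EVENT_FORMAT = '<HQQHHI'
    EVENT_SIZE = struct.calcsize(EVENT_FORMAT)   # 26 bytes per event entry

    def decode_event(buf, offset=0):
        # Decode a single event entry starting at 'offset' within 'buf'.
        dev_id, secs, usecs, ev_type, ev_code, ev_value = \
            struct.unpack_from(EVENT_FORMAT, buf, offset)
        return dev_id, secs + usecs / 1e6, ev_type, ev_code, ev_value
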
Parser
^^^^^^
WA has a parser for revent recordings. This can be used to work with revent
recordings in scripts. Here is an example:
.. code:: python
from wa.utils.revent import ReventRecording
with ReventRecording('/path/to/recording.revent') as recording:
print "Recording: {}".format(recording.filepath)
print "There are {} input events".format(recording.num_events)
print "Over a total of {} seconds".format(recording.duration)


@@ -0,0 +1,123 @@
.. _serialization:
Serialization
=============
Overview of Serialization
-------------------------
WA employs a serialization mechanism in order to store some of its internal
structures inside the output directory. Serialization is performed in two
stages:
1. A serializable object is converted into a POD (Plain Old Data) structure
consisting of primitive Python types, and a few additional types (see
:ref:`wa-pods` below).
2. The POD structure is serialized into a particular format by a generic
parser for that format. Currently, `yaml` and `json` are supported.
Deserialization works in reverse order -- first the serialized text is parsed
into a POD, which is then converted to the appropriate object.
Implementing Serializable Objects
---------------------------------
In order to be considered serializable, an object must either be a POD, or it
must implement the ``to_pod()`` method and ``from_pod`` static/class method,
which will perform the conversion to/from POD.
As an example, below is a (somewhat trimmed) implementation of the ``Event``
class:
.. code-block:: python
class Event(object):
@staticmethod
def from_pod(pod):
instance = Event(pod['message'])
instance.timestamp = pod['timestamp']
return instance
def __init__(self, message):
self.timestamp = datetime.utcnow()
self.message = message
def to_pod(self):
return dict(
timestamp=self.timestamp,
message=self.message,
)
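Converting an instance to a POD and back then looks like this (a brief sketch
using the trimmed class above):

.. code-block:: python

    event = Event('workload started')
    pod = event.to_pod()            # a plain dict with 'timestamp' and 'message'
    restored = Event.from_pod(pod)  # an equivalent Event instance
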
Serialization API
-----------------
.. function:: read_pod(source, fmt=None)
.. function:: write_pod(pod, dest, fmt=None)
These read and write PODs from a file. The format will be inferred, if
possible, from the extension of the file, or it may be specified explicitly
with ``fmt``. ``source`` and ``dest`` can be either strings, in which case
they will be interpreted as paths, or they can be file-like objects.
.. function:: is_pod(obj)
Returns ``True`` if ``obj`` is a POD, and ``False`` otherwise.
.. function:: dump(o, wfh, fmt='json', \*args, \*\*kwargs)
.. function:: load(s, fmt='json', \*args, \*\*kwargs)
These implement an alternative serialization interface, which matches the
interface exposed by the parsers for the supported formats.
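As a hypothetical usage sketch (assuming these functions are importable from
``wa.utils.serializer``; the file name is a placeholder):

.. code-block:: python

    from wa.utils.serializer import read_pod, write_pod

    pod = {'metric': 'execution_time', 'value': 12.3, 'units': 'seconds'}
    write_pod(pod, 'example.yaml')       # format inferred from the .yaml extension
    restored = read_pod('example.yaml')  # parsed back into a dict-like structure
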
.. _wa-pods:
WA POD Types
------------
POD types are types that can be handled by a serializer directly, without a need
for any additional information. These consist of the built-in Python types ::
list
tuple
dict
set
str
unicode
int
float
bool
...the standard library types ::
OrderedDict
datetime
...and the WA-defined types ::
regex_type
none_type
level
cpu_mask
Any structure consisting entirely of these types is a POD and can be serialized
and then deserialized without losing information. It is important to note that
only these specific types are considered POD, their subclasses are *not*.
.. note:: ``dict``\ s get deserialized as ``OrderedDict``\ s.
Serialization Formats
---------------------
WA utilizes two serialization formats: YAML and JSON. YAML is used for files
intended to be primarily written and/or read by humans; JSON is used for files
intended to be primarily written and/or read by WA and other programs.
The parsers and serializers for these formats used by WA have been modified to
handle additional types (e.g. regular expressions) that are typically not
supported by the formats. This was done in such a way that the resulting files
are still valid and can be parsed by any parser for that format.

File diff suppressed because it is too large


@@ -0,0 +1,9 @@
*******
How Tos
*******
.. contents:: Contents
:depth: 4
:local:
.. include:: developer_information/how_tos/adding_plugins.rst


@@ -0,0 +1,589 @@
.. _deploying-executables-example:
Deploying Executables Example
==============================
Installing binaries for a particular plugin should generally only be performed
once during a run. This should typically be done in the ``initialize`` method:
if the only functionality performed in the method is to install the required
binaries, then the ``initialize`` method should be decorated with the ``@once``
:ref:`decorator <decorators>`; otherwise the installation should be placed into a
dedicated method which is decorated instead. Please note that if doing this, any
installed paths should be added as class attributes rather than instance
variables. As a general rule, if binaries are installed as part of ``initialize``
then they should be uninstalled in the complementary ``finalize`` method.
Part of an example workload demonstrating this is shown below:
.. code:: python
class MyWorkload(Workload):
#..
@once
def initialize(self, context):
resource = Executable(self, self.target.abi, 'my_executable')
host_binary = context.resolver.get(resource)
MyWorkload.target_binary = self.target.install(host_binary)
#..
def setup(self, context):
self.command = "{} -a -b -c".format(self.target_binary)
self.target.execute(self.command)
#..
@once
def finalize(self, context):
self.target.uninstall('my_executable')
.. _adding-a-workload:
Adding a Workload Examples
==========================
The easiest way to create a new workload is to use the
:ref:`create <create-command>` command: ``wa create workload <args>``. This
will use predefined templates to create a workload, based on the options that
are supplied, to be used as a starting point for the workload. For more
information on using the create workload command see ``wa create workload -h``.
The first thing to decide is the type of workload you want to create, depending
on the OS you will be using and the aim of the workload. There are currently 6
available workload types to choose from, as detailed :ref:`here <workload-types>`.
Once you have decided what type of workload you wish to create, this can be
specified with ``-k <workload_kind>`` followed by the workload name. This
will automatically generate a workload in your ``WA_CONFIG_DIR/plugins``
directory. If you wish to specify a custom location this can be provided with
``-p <path>``.
Adding a Basic Workload Example
--------------------------------
To add a basic workload you can simply use the command::
wa create workload basic
This will generate a very basic workload with dummy methods for the workload
interface and it is left to the developer to add any required functionality to
the workload.
Not all the methods are required to be implemented; this example shows how a
subset might be used to implement a simple workload that times how long it takes
to compress a file of a particular size on the device.
.. note:: This is intended as an example of how to implement the Workload
:ref:`interface <workload-interface>`. The methodology used to
perform the actual measurement is not necessarily sound, and this
Workload should not be used to collect real measurements.
.. code-block:: python
import os
from wa import Workload, Parameter
class ZipTestWorkload(Workload):
name = 'ziptest'
description = '''
Times how long it takes to gzip a file of a particular size on a device.
This workload was created for illustration purposes only. It should not be
used to collect actual measurements.
'''
parameters = [
Parameter('file_size', kind=int, default=2000000,
description='Size of the file (in bytes) to be gzipped.')
]
def setup(self, context):
"""
In the setup method we do any preparation that is required before
the workload is run; this usually involves things like setting up required
files on the device and generating commands from user input. In this
case we will generate our input file on the host system and then
push it to a known location on the target for use in the 'run'
stage.
"""
super(ZipTestWorkload, self).setup(context)
# Generate a file of the specified size containing random garbage.
host_infile = os.path.join(context.output_directory, 'infile')
command = 'openssl rand -base64 {} > {}'.format(self.file_size, host_infile)
os.system(command)
# Set up on-device paths
devpath = self.target.path # os.path equivalent for the target
self.target_infile = devpath.join(self.target.working_directory, 'infile')
self.target_outfile = devpath.join(self.target.working_directory, 'outfile')
# Push the file to the target
self.target.push(host_infile, self.target_infile)
def run(self, context):
"""
The run method is where the actual 'work' of the workload takes
place and is what is measured by any instrumentation. So for this
example this is the execution of creating the zip file on the
target.
"""
cmd = 'cd {} && (time gzip {}) &>> {}'
self.target.execute(cmd.format(self.target.working_directory,
self.target_infile,
self.target_outfile))
def extract_results(self, context):
"""
This method is used to extract any results from the target. For
example, we want to pull the file containing the timing information
that we will use to generate metrics for our workload, and then we
add this file as an artifact with a 'raw' kind, which means that once
WA has finished processing it will decide whether to keep
the file or not.
"""
super(ZipTestWorkload, self).extract_results(context)
# Pull the results file to the host
self.host_outfile = os.path.join(context.output_directory, 'timing_results')
self.target.pull(self.target_outfile, self.host_outfile)
context.add_artifact('ziptest-results', self.host_outfile, kind='raw')
def update_output(self, context):
"""
In this method we can do any generation of metrics that we wish
for our workload. In this case we are going to simply convert the
times reported into seconds and add them as 'metrics' to WA which can
then be displayed to the user along with any others in a format
dependent on which output processors they have enabled for the run.
"""
super(ZipTestWorkload, self).update_output(context)
# Extract metrics from the file's contents and update the result
# with them.
content = iter(open(self.host_outfile).read().strip().split())
for metric, value in zip(content, content):
mins, secs = map(float, value[:-1].split('m'))
context.add_metric(metric, secs + 60 * mins, 'seconds')
def teardown(self, context):
"""
Here we will perform any required clean up for the workload so we
will delete the input and output files from the device.
"""
super(ZipTestWorkload, self).teardown(context)
self.target.remove(self.target_infile)
self.target.remove(self.target_outfile)
.. _apkuiautomator-example:
Adding a ApkUiAutomator Workload Example
-----------------------------------------
If we wish to create a workload to automate the testing of the Google Docs
android app, we would choose to perform the automation using UIAutomator and we
would want to automatically deploy and install the apk file to the target,
therefore we would choose the :ref:`ApkUiAutomator workload
<apkuiautomator-workload>` type with the following command::
$ wa create workload -k apkuiauto google_docs
Workload created in $WA_USER_DIRECTORY/plugins/google_docs
From here you can navigate to the displayed directory and you will find your
``__init__.py`` and a ``uiauto`` directory. The former is your python WA
workload and will look something like this
.. code-block:: python
from wa import Parameter, ApkUiautoWorkload
class GoogleDocs(ApkUiautoWorkload):
name = 'google_docs'
description = "This is an placeholder description"
# Replace with a list of supported package names in the APK file(s).
package_names = ['package_name']
parameters = [
# Workload parameters go here e.g.
Parameter('example_parameter', kind=int, allowed_values=[1,2,3],
default=1, override=True, mandatory=False,
description='This is an example parameter')
]
def __init__(self, target, **kwargs):
super(GoogleDocs, self).__init__(target, **kwargs)
# Define any additional attributes required for the workload
def init_resources(self, resolver):
super(GoogleDocs, self).init_resources(resolver)
# This method may be used to perform early resource discovery and
# initialization. This is invoked during the initial loading stage and
# before the device is ready, so cannot be used for any device-dependent
# initialization. This method is invoked before the workload instance is
# validated.
def initialize(self, context):
super(GoogleDocs, self).initialize(context)
# This method should be used to perform once-per-run initialization of a
# workload instance.
def validate(self):
super(GoogleDocs, self).validate()
# Validate inter-parameter assumptions etc
def setup(self, context):
super(GoogleDocs, self).setup(context)
# Perform any necessary setup before starting the UI automation
def extract_results(self, context):
super(GoogleDocs, self).extract_results(context)
# Extract results on the target
def update_output(self, context):
super(GoogleDocs, self).update_output(context)
# Update the output within the specified execution context with the
# metrics and artifacts from this workload iteration.
def teardown(self, context):
super(GoogleDocs, self).teardown(context)
# Perform any final clean up for the Workload.
Depending on the purpose of your workload you can choose to implement whichever
methods you require. The main thing that needs setting is the list of
``package_names``, which must be a list of strings containing the Android package
names that will be used during resource resolution to locate the relevant apk
file for the workload. Additionally, the workload parameters will need updating
to any relevant parameters required by the workload, as well as the
description.
The ``uiauto`` directory will contain a framework for performing the UI
automation on the target; the file you will be most interested in is
``uiauto/app/src/main/java/arm/wa/uiauto/UiAutomation.java``, which will contain
the actual code of the automation and will look something like:
.. code-block:: java
package com.arm.wa.uiauto.google_docs;
import android.app.Activity;
import android.os.Bundle;
import org.junit.Test;
import org.junit.runner.RunWith;
import android.support.test.runner.AndroidJUnit4;
import android.util.Log;
import android.view.KeyEvent;
// Import the uiautomator libraries
import android.support.test.uiautomator.UiObject;
import android.support.test.uiautomator.UiObjectNotFoundException;
import android.support.test.uiautomator.UiScrollable;
import android.support.test.uiautomator.UiSelector;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import com.arm.wa.uiauto.BaseUiAutomation;
@RunWith(AndroidJUnit4.class)
public class UiAutomation extends BaseUiAutomation {
protected Bundle parameters;
protected int example_parameter;
public static String TAG = "google_docs";
@Before
public void initialize() throws Exception {
// Perform any parameter initialization here
parameters = getParams(); // Required to decode passed parameters.
packageID = getPackageID(parameters);
example_parameter = parameters.getInt("example_parameter");
}
@Test
public void setup() throws Exception {
// Optional: Perform any setup required before the main workload
// is run, e.g. dismissing welcome screens
}
@Test
public void runWorkload() throws Exception {
// The main UI Automation code goes here
}
@Test
public void extractResults() throws Exception {
// Optional: Extract any relevant results from the workload,
}
@Test
public void teardown() throws Exception {
// Optional: Perform any clean up for the workload
}
}
A few items to note from the template:
- Each of the stages of execution, for example ``setup``, ``runWorkload`` etc.,
is decorated with the ``@Test`` decorator. This is important to allow
these methods to be called at the appropriate time; however, any additional
methods you may add do not require this decorator.
- The ``initialize`` method has the ``@Before`` decorator, this is there to
ensure that this method is called before executing any of the workload
stages and therefore is used to decode and initialize any parameters that
are passed in.
- The code currently retrieves the ``example_parameter`` that was
provided to the python workload as an Integer; there are similar calls to
retrieve parameters of different types, e.g. ``getString``, ``getBoolean``,
``getDouble`` etc.
Once you have implemented your java workload you can use the
``uiauto/build.sh`` script to compile your automation into an apk file. The
apk will be generated with the package name
``com.arm.wa.uiauto.<workload_name>``, which, when running your workload, will
be automatically detected by the resource getters and deployed to the device.
Adding a ReventApk Workload Example
------------------------------------
If we wish to create a workload to automate the testing of a UI based workload
that we cannot / do not wish to use UiAutomator then we can perform the
automation using revent. In this example we would want to automatically deploy
and install an apk file to the target, therefore we would choose the
:ref:`ApkRevent workload <apkrevent-workload>` type with the following
command::
$ wa create workload -k apkrevent my_game
Workload created in $WA_USER_DIRECTORY/plugins/my_game
This will generate a revent based workload. You will end up with a very similar
python file to the one outlined in generating a :ref:`UiAutomator based
workload <apkuiautomator-example>`, however without the accompanying java
automation files.
The main difference between the two is that this workload will subclass
``ApkReventWorkload`` instead of ``ApkUiautoWorkload``, as shown below.
.. code-block:: python
from wa import ApkReventWorkload
class MyGame(ApkReventWorkload):
name = 'mygame'
package_names = ['com.mylogo.mygame']
# ..
---------------------------------------------------------------
.. _adding-an-instrument-example:
Adding an Instrument Example
=============================
This is an example of how we would create an instrument which will trace device
errors using a custom "trace" binary file. For more detailed information please see
:ref:`here <instrument-reference>`. The first thing to do is to subclass
:class:`Instrument`, set the ``name`` attribute to what we want our instrument
to be called, and locate the binary for our instrument.
::
class TraceErrorsInstrument(Instrument):
name = 'trace-errors'
def __init__(self, target):
super(TraceErrorsInstrument, self).__init__(target)
self.binary_name = 'trace'
self.binary_file = os.path.join(os.path.dirname(__file__), self.binary_name)
self.trace_on_target = None
We then declare and implement the required methods as detailed
:ref:`here <instrument-api>`. For the ``initialize`` method, we want to install
the executable file to the target so we can use the target's ``install``
method which will try to copy the file to a location on the device that
supports execution, change the file mode appropriately and return the
file path on the target. ::
def initialize(self, context):
self.trace_on_target = self.target.install(self.binary_file)
Then we implement the start method, which will simply run the file to start
tracing. Supposing that the call to this binary requires some overhead to begin
collecting errors we might want to decorate the method with the ``@slow``
decorator to try and reduce the impact on other running instruments. For more
information on prioritization please see :ref:`here <prioritization>`. ::
@slow
def start(self, context):
self.target.execute('{} start'.format(self.trace_on_target))
Lastly, we need to stop tracing once the workload stops, and this happens in the
stop method. Assuming that stopping the collection also requires some overhead, we
have again decorated the method. ::
@slow
def stop(self, context):
self.target.execute('{} stop'.format(self.trace_on_target))
Once we have generated our result data we need to retrieve it from the device
for further processing or to add it directly to WA's output for that job. For
example, for trace data we will want to pull it to the host and add it as an
:ref:`artifact <artifact>` to WA's :ref:`context <context>`, as shown below::
def extract_results(self, context):
# pull the trace file from the target
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
Once we have retrieved the data we can now do any further processing and add any
relevant :ref:`Metrics <metrics>` to the :ref:`context <context>`. For this we
will use the ``add_metric`` method to add the results to the final output
for that workload. The method can be passed 4 parameters, which are the metric
`key`, `value`, `unit` and `lower_is_better`. ::
def update_output(self, context):
# parse the file if needs to be parsed, or add result directly to
# context.
metric = # ..
context.add_metric('number_of_errors', metric, lower_is_better=True)
At the end of each job we might want to delete any files generated by the
instruments, and the code to clear these files goes in the teardown method. ::
def teardown(self, context):
self.target.remove(os.path.join(self.target.working_directory, 'trace.txt'))
At the very end of the run we would want to uninstall the binary we deployed earlier. ::
def finalize(self, context):
self.target.uninstall(self.binary_name)
So the full example would look something like::
class TraceErrorsInstrument(Instrument):
name = 'trace-errors'
def __init__(self, target):
super(TraceErrorsInstrument, self).__init__(target)
self.binary_name = 'trace'
self.binary_file = os.path.join(os.path.dirname(__file__), self.binary_name)
self.trace_on_target = None
def initialize(self, context):
self.trace_on_target = self.target.install(self.binary_file)
@slow
def start(self, context):
self.target.execute('{} start'.format(self.trace_on_target))
@slow
def stop(self, context):
self.target.execute('{} stop'.format(self.trace_on_target))
def extract_results(self, context):
self.result = os.path.join(self.target.working_directory, 'trace.txt')
self.target.pull(self.result, context.working_directory)
context.add_artifact('error_trace', self.result, kind='export')
def update_output(self, context):
metric = # ..
context.add_metric('number_of_errors', metric, lower_is_better=True)
def teardown(self, context):
self.target.remove(os.path.join(self.target.working_directory, 'trace.txt'))
def finalize(self, context):
self.target.uninstall(self.binary_name)
Adding an Output Processor Example
===================================
This is an example of how we would create an output processor which will format
the run metrics as a column-aligned table. The first thing to do is to subclass
:class:`OutputProcessor`, set the ``name`` attribute to what we want our
processor to be called, and provide a short description.
Next we need to implement any relevant methods (please see
:ref:`adding an output processor <adding-an-output-processor>` for all the
available methods). In this case we only want to implement the
``export_run_output`` method as we are not generating any new artifacts and
we only care about the overall output rather than the individual job
outputs. The implementation is very simple, it just loops through all
the available metrics for all the available jobs and adds them to a list
which is written to file and then added as an :ref:`artifact <artifact>` to
the :ref:`context <context>`.
.. code-block:: python
import os
from wa import OutputProcessor
from wa.utils.misc import write_table
class Table(OutputProcessor):
name = 'table'
description = 'Generates a text file containing a column-aligned table of run results.'
def export_run_output(self, output, target_info):
rows = []
for job in output.jobs:
for metric in job.metrics:
rows.append([metric.name, str(metric.value), metric.units or '',
metric.lower_is_better and '-' or '+'])
outfile = output.get_path('table.txt')
with open(outfile, 'w') as wfh:
write_table(rows, wfh)
output.add_artifact('results_table', 'table.txt', 'export')
.. _adding-custom-target-example:
Adding a Custom Target Example
===============================
This is an example of how we would create a customised target. This is typically
needed where we want to augment the existing functionality, for example on
development boards where additional actions have to be performed to implement
some behaviour. In this example we are going to assume that this particular
device is running Android and requires a special "wakeup" command to be sent before it
can execute any other command.
To add a new target to WA we will first create a new file in
``$WA_USER_DIRECTORY/plugins/example_target.py``. To facilitate creating a new
target, WA provides a helper function that creates a description for
the specified target class and the specified components. For components that are
not explicitly specified, it will attempt to guess sensible defaults based on the
target class' bases.
.. code-block:: python
# Import our helper function
from wa import add_description_for_target
# Import the Target that our custom implementation will be based on
from devlib import AndroidTarget
class ExampleTarget(AndroidTarget):
# Provide the name that will be used to identify your custom target
name = 'example_target'
# Override our custom method(s)
def execute(self, *args, **kwargs):
super(ExampleTarget, self).execute('wakeup', check_exit_code=False)
return super(ExampleTarget, self).execute(*args, **kwargs)
description = '''An Android target which requires an explicit "wakeup" command
to be sent before accepting any other command'''
# Call the helper function with our newly created target class and its description.
add_description_for_target(ExampleTarget, description)