mirror of https://github.com/ARM-software/workload-automation.git

doc: Restructure developer information

Re-organise and add additional information to the developer guide.

Commit 2f94d12b57 (parent a64bb3a26f) by Marc Bonnici, 2018-06-25 09:30:14 +01:00, committed by setrofim.
9 changed files with 455 additions and 538 deletions.


*********************
Developer Information
*********************

.. contents::
   :depth: 4
   :local:
------------------

.. include:: developer_information/how_to.rst

------------------

.. include:: developer_information/developer_guide.rst

------------------

.. include:: developer_information/developer_reference.rst

.. _developer_guide:
********************
Developer Guide
********************
.. contents::
   :depth: 3
   :local:
.. include:: developer_information/developer_guide/writing_plugins.rst

Notable methods of the context are:
:context.get_resource(resource, strict=True):
This method should be used to retrieve a resource using the resource getters
rather than using the ResourceResolver directly, as this method additionally
records the hash of any found resources in the output metadata.
:context.add_artifact(name, host_file_path, kind, description=None, classifier=None):
Plugins can add :ref:`artifacts <artifact>` of various kinds to the run
output directory for WA and associate them with a description and/or
:ref:`classifier <classifiers>`.
:context.add_metric(name, value, units=None, lower_is_better=False, classifiers=None):
This method should be used to add :ref:`metrics <metrics>` that have been
generated from a workload, this will allow WA to process the results
accordingly depending on which output processors are enabled.
Notable attributes of the context are:
:context.workload:
:class:`wa.framework.workload` object that is currently being executed.
:context.tm:
This is the target manager that can be used to access various information
about the target including initialization parameters.
:context.current_job:
This is an instance of :class:`wa.framework.job.Job` and contains all
the information relevant to the workload job currently being executed.
:context.current_job.spec:
The current workload specification being executed. This is an
instance of :class:`wa.framework.configuration.core.JobSpec`
and defines the workload and the parameters under which it is
being executed.
:context.current_job.current_iteration:
The current iteration of the spec that is being executed. Note that this
is the iteration for that spec, i.e. the number of times that spec has
been run, *not* the total number of all iterations have been executed so
far.
:context.job_output:
This is the output object for the current iteration which
is an instance of :class:`wa.framework.output.JobOutput`. It contains
the status of the iteration as well as the metrics and artifacts
generated by the workload.
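As a sketch of how these attributes might be used together, the snippet below builds a label for the current job from its spec and iteration number. The context, job, and spec objects here are minimal stand-ins for illustration only, not WA's real ``ExecutionContext``/``Job``/``JobSpec`` classes:

```python
# Minimal stand-ins, for illustration only -- in WA these objects are
# provided by the framework (wa.framework.execution.ExecutionContext etc.).
class FakeSpec:
    workload_name = 'dhrystone'

class FakeJob:
    spec = FakeSpec()
    current_iteration = 3

class FakeContext:
    current_job = FakeJob()

def label_for_current_job(context):
    """Build a human-readable label from the current job's spec and iteration."""
    spec = context.current_job.spec
    return '{}-iter{}'.format(spec.workload_name,
                              context.current_job.current_iteration)

print(label_for_current_job(FakeContext()))  # dhrystone-iter3
```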
In addition to these, context also defines a few useful paths (see below).
.. note:: Output processors, unlike workloads and instruments, do not have their
own target attribute, as they are designed to be able to be run offline.
.. _metrics:
Metrics
^^^^^^^
This is what WA uses to store a single metric collected from executing a workload.
:name: the name of the metric. Uniquely identifies the metric
within the results.
:value: The numerical value of the metric for this execution of a
workload. This can be either an int or a float.
:units: Units for the collected value. Can be None if the value
has no units (e.g. it's a count or a standardised score).
:lower_is_better: Boolean flag indicating whether lower values are
better than higher ones. Defaults to False.
:classifiers: A set of key-value pairs to further classify this
metric beyond current iteration (e.g. this can be used
to identify sub-tests).
Metrics can be added to WA output via the context:
.. code-block:: python

    context.add_metric("score", 9001)
    context.add_metric("time", 2.35, "seconds", lower_is_better=True)
You only need to specify the name and the value for the metric. Units and
classifiers are optional and, unless specified otherwise, it will be assumed
that higher values are better (``lower_is_better=False``).
The metric will be added to the result for the current job, if there is one;
otherwise, it will be added to the overall run result.
.. _artifact:
Artifacts
^^^^^^^^^
This is an artifact generated during execution/post-processing of a workload.
Unlike :ref:`metrics <metrics>`, this represents an actual artifact that was
generated, such as a file. This may be "output", such as a trace, or it could
be "metadata", such as logs. These are distinguished using the ``kind``
attribute, which also helps WA decide how it should be handled. Currently
supported kinds are:
:log: A log file. Not part of the "output" as such, but contains
information about the run/workload execution that may be useful for
diagnostics/meta analysis.
:meta: A file containing metadata. This is not part of the "output", but
contains information that may be necessary to reproduce the
results (contrast with ``log`` artifacts which are *not*
necessary).
:data: This file contains new data, not available otherwise and should
be considered part of the "output" generated by WA. Most traces
would fall into this category.
:export: Exported version of results or some other artifact. This
signifies that this artifact does not contain any new data
that is not available elsewhere and that it may be safely
discarded without losing information.
:raw: Signifies that this is a raw dump/log that is normally processed
to extract useful information and is then discarded. In a sense,
it is the opposite of ``export``, but in general may also be
discarded.
.. note:: whether a file is marked as ``log``/``data`` or ``raw``
depends on how important it is to preserve this file,
e.g. when archiving, vs how much space it takes up.
Unlike ``export`` artifacts which are (almost) always
ignored by other exporters as that would never result
in data loss, ``raw`` files *may* be processed by
exporters if they decide that the risk of losing
potentially (though unlikely) useful data is greater
than the time/space cost of handling the artifact (e.g.
a database uploader may choose to ignore ``raw``
artifacts, whereas a network filer archiver may choose
to archive them).
.. note:: The ``kind`` parameter is intended to represent the logical
          function of a particular artifact, not its intended means of
          processing -- this is left entirely up to the output
          processors.
As with :ref:`metrics`, artifacts are added via the context:
.. code-block:: python

    context.add_artifact("benchmark-output", "bench-out.txt", kind="raw",
                         description="stdout from running the benchmark")
.. note:: The file *must* exist on the host by the point at which the artifact
is added, otherwise an error will be raised.
The artifact will be added to the result of the current job, if there is one;
otherwise, it will be added to the overall run result. In some situations, you
may wish to add an artifact to the overall run while inside a job context;
this can be done with ``add_run_artifact``:
.. code-block:: python

    context.add_run_artifact("score-summary", "scores.txt", kind="export",
                             description="""
                             Summary of the scores so far. Updated after
                             every job.
                             """)
In this case, you also need to make sure that the file represented by the
artifact is written to the output directory for the run and not the current job.
.. _metadata:
Metadata
^^^^^^^^
There may be additional data collected by your plugin that you want to record as
part of the result, but that does not fall under the definition of a "metric".
For example, you may want to record the version of the binary you're executing.
You can do this by adding a metadata entry:
.. code-block:: python

    context.add_metadata("exe-version", 1.3)
Metadata will be added either to the current job result, or to the run result,
depending on the current context. Metadata values can be scalars or nested
structures of dicts/sequences; the only constraint is that all constituent
objects of the value must be POD (Plain Old Data) types -- see :ref:`WA POD
types <wa-pods>`.
There is special support for handling metadata entries that are dicts of values.
The following call adds a metadata entry ``"versions"`` whose value is
``{"my_exe": 1.3}``:
.. code-block:: python

    context.add_metadata("versions", "my_exe", 1.3)
If you attempt to add a metadata entry that already exists, an error will be
raised, unless ``force=True`` is specified, in which case, it will be
overwritten.
Updating an existing entry whose value is a collection can be done with
``update_metadata``:
.. code-block:: python

    context.update_metadata("ran_apps", "my_exe")
    context.update_metadata("versions", "my_other_exe", "2.3.0")
The first call appends ``"my_exe"`` to the list at metadata entry
``"ran_apps"``. The second call updates the ``"versions"`` dict in the metadata
with an entry for ``"my_other_exe"``.
If an entry does not exist, ``update_metadata`` will create it, so it's
recommended to always use that for non-scalar entries, unless the intention is
specifically to ensure that the entry does not exist at the time of the call.
Classifiers
^^^^^^^^^^^
Classifiers are key-value pairs of tags that can be attached to metrics,
artifacts, jobs, or the entire run. Run and job classifiers get propagated to
metrics and artifacts. Classifier keys should be strings, and their values
should be simple scalars (i.e. strings, numbers, or bools).
Classifiers can be thought of as "tags" that are used to annotate metrics and
artifacts, in order to make it easier to sort through them later. WA itself does
not do anything with them, however output processors will augment the output
they generate with them (for example, ``csv`` processor can add additional
columns for classifier keys).
Classifiers are typically added by the user to attach some domain-specific
information (e.g. experiment configuration identifier) to the results, see
:ref:`classifiers`. However, plugins can also attach additional classifiers by
specifying them in ``add_metric()`` and ``add_artifact()`` calls.
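For example, a plugin might tag a metric with a classifier when adding it, so that post-processing can tell which sub-test it came from. The context object below is a stand-in for illustration; in WA, the real ``add_metric`` accepts the same ``classifiers`` argument:

```python
class FakeContext:
    """Stand-in for WA's ExecutionContext, for illustration only."""
    def __init__(self):
        self.metrics = []

    def add_metric(self, name, value, units=None,
                   lower_is_better=False, classifiers=None):
        # Record the metric along with any classifiers attached to it.
        self.metrics.append((name, value, units, lower_is_better,
                             classifiers or {}))

context = FakeContext()
# Tag the metric so post-processing tools can tell which sub-test produced it.
context.add_metric('fps', 58.3, classifiers={'test': 'scrolling'})
```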
Metadata vs Classifiers
^^^^^^^^^^^^^^^^^^^^^^^
Both metadata and classifiers are sets of essentially opaque key-value pairs
that get included in WA output. While they may seem somewhat similar and
interchangeable, they serve different purposes and are handled differently by
the framework.
Classifiers are used to annotate generated metrics and artifacts in order to
assist post-processing tools in sorting through them. Metadata is used to record
additional information that is not necessary for processing the results, but
that may be needed in order to reproduce them or to make sense of them in a
grander context.
These are specific differences in how they are handled:
- Classifiers are often provided by the user via the agenda (though they can
  also be added by plugins). Metadata is only created by the framework and
  plugins.
- Classifier values must be simple scalars; metadata values can be nested
collections, such as lists or dicts.
- Classifiers are used by output processors to augment the output they
  generate; metadata typically isn't.
- Classifiers are essentially associated with the individual metrics and
artifacts (though in the agenda they're specified at workload, section, or
global run levels); metadata is associated with a particular job or run, and
not with metrics or artifacts.
.. _resource-resolution:
Dynamic Resource Resolution
additional assets should have their on-target paths added to the workload's
``deployed_assets`` attribute, or the corresponding ``remove_assets`` method
should also be implemented.
.. _plugin-parmeters:
Parameters
----------
All plugins can be parametrized. Parameters are specified using the
``parameters`` class attribute. This should be a list of ``Parameter``
instances. Some of a plugin's attributes may depend on the values of
the parameters; in that case, the dependent attribute should be left unspecified
on creation and should instead be set inside ``validate``.
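The pattern can be sketched as follows. ``Parameter`` here is a simplified stand-in for WA's real class (which supports further options such as ``kind`` constraints and allowed values), and the derived ``timeout`` attribute is a hypothetical example of a value computed inside ``validate``:

```python
class Parameter:
    """Simplified stand-in for wa.Parameter, for illustration only."""
    def __init__(self, name, kind=str, default=None, description=''):
        self.name = name
        self.kind = kind
        self.default = default
        self.description = description

class MyWorkload:
    name = 'my_workload'  # hypothetical example plugin
    parameters = [
        Parameter('duration', kind=int, default=30,
                  description='Run duration in seconds.'),
    ]

    def __init__(self, **config):
        self.duration = config.get('duration', 30)
        # Dependent attribute: left unset on creation...
        self.timeout = None

    def validate(self):
        # ...and derived from the parameter value inside validate().
        self.timeout = self.duration + 60

wl = MyWorkload()
wl.validate()
```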
Logging
-------
Every plugin class has its own logger that you can access through
``self.logger`` inside the plugin's methods. Generally, a :class:`Target` will
log everything it is doing, so additional logging is mostly useful for
operations that take a significant amount of time, such as downloading a file.
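The usage pattern looks like this. The stdlib ``logging`` module is used here so the sketch is self-contained; in WA, ``self.logger`` is created for you by the framework:

```python
import logging

class MyInstrument:
    def __init__(self):
        # In WA the framework assigns self.logger; stdlib shown for illustration.
        self.logger = logging.getLogger('my_instrument')

    def setup(self, context=None):
        self.logger.debug('Clearing logs on the target')
        self.logger.info('Downloading the benchmark binary (may take a while)')

MyInstrument().setup()
```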
Documenting
-----------
All plugins and their parameters should be documented. For plugins
themselves, this is done through the ``description`` class attribute.
Parameters are documented via their ``description`` attribute, which should
state what the plugin expects (including what the valid values are).
Error Notification
------------------
When you detect an error condition, you should raise an appropriate exception to
notify the user. The exception would typically be :class:`ConfigError` or
If the plugin itself is capable of recovering from the error and carrying
on, it may make more sense to log an ERROR or WARNING level message using the
plugin's logger and to continue operation.
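For instance, a plugin could validate its configuration like this. ``ConfigError`` normally comes from WA's exception hierarchy; a stand-in is defined here so the sketch is self-contained, and the ``iterations`` check is a hypothetical example:

```python
class ConfigError(Exception):
    """Stand-in for WA's ConfigError, for illustration only."""

def validate_iterations(iterations):
    # Raise on invalid configuration so the user is notified early.
    if iterations < 1:
        raise ConfigError('iterations must be a positive integer, '
                          'got {}'.format(iterations))
    return iterations
```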
.. _decorators:
Execution Decorators
---------------------
The following decorators are available to control how often a method may be
executed.
For example, if we want to ensure that, no matter how many iterations of a
particular workload are run, we only execute the ``initialize`` method for that
instance once, we would use the decorator as follows:
.. code-block:: python

    from wa.utils.exec_control import once

    @once
    def initialize(self, context):
        # Perform one-time initialization, e.g. installing a binary to target
        # ..
@once_per_instance
^^^^^^^^^^^^^^^^^^
The specified method will be invoked only once for every bound instance within
the environment.
@once_per_class
^^^^^^^^^^^^^^^
The specified method will be invoked only once for all instances of a class
within the environment.
@once
^^^^^
The specified method will be invoked only once within the environment.
.. warning:: If a method containing a super call is decorated, this will also
             stop propagation up the hierarchy. Unless this is the desired
             effect, additional functionality should be implemented in a
             separate, undecorated method which can then be called, allowing
             normal propagation to be retained.
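The recommended pattern can be sketched as follows. A minimal per-instance ``once`` is re-implemented here so the example runs on its own; WA's real decorator lives in ``wa.utils.exec_control`` and tracks execution per environment, so this is a simplification:

```python
def once(method):
    """Minimal per-instance stand-in for wa.utils.exec_control.once."""
    def wrapper(self, *args, **kwargs):
        flag = '_ran_' + method.__name__
        if getattr(self, flag, False):
            return  # already executed once; do nothing
        setattr(self, flag, True)
        return method(self, *args, **kwargs)
    return wrapper

class Base:
    def __init__(self):
        self.base_calls = 0
        self.one_time_calls = 0

    def initialize(self, context):
        self.base_calls += 1  # must run on every call

class MyWorkload(Base):
    def initialize(self, context):
        # Undecorated, so the super call propagates normally every time...
        super().initialize(context)
        self._initialize_once(context)

    @once
    def _initialize_once(self, context):
        # ...while the one-time work lives in a separate decorated method.
        self.one_time_calls += 1

wl = MyWorkload()
wl.initialize(None)
wl.initialize(None)
# base_calls is now 2, one_time_calls is 1
```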
Utils
^^^^^
Workload Automation defines a number of utilities collected under
:mod:`wa.utils` subpackage. These utilities were created to help with the
implementation of the framework itself, but may also be useful when
implementing plugins.
Workloads
---------
.. _workload-types:
Workload Types
^^^^^^^^^^^^^^^^
.. _basic-workload:
Basic (:class:`wa.Workload <wa.framework.workload.Workload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the simplest type of workload; its full functionality is left to the
developer to implement.
.. _apk-workload:
Apk (:class:`wa.ApkWorkload <wa.framework.workload.ApkWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This workload will simply deploy and launch an Android app in its basic form
with no UI interaction.
.. _uiautomator-workload:
UiAuto (:class:`wa.UiautoWorkload <wa.framework.workload.UiautoWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This workload is for Android targets which will use UiAutomator to interact
with UI elements without a specific Android app, for example performing
manipulation of Android itself. This is the preferred type of automation, as
the results are more portable and reproducible due to being able to wait for UI
elements to appear rather than having to rely on human recordings.
.. _apkuiautomator-workload:
ApkUiAuto (:class:`wa.ApkUiautoWorkload <wa.framework.workload.ApkUiautoWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the same as the UiAuto workload, however it is also associated with an
Android app, e.g. Adobe Reader, and will automatically deploy and launch the
app before running the automation.
.. _revent-workload:
Revent (:class:`wa.ReventWorkload <wa.framework.workload.ReventWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Revent workloads are designed primarily for games, as these cannot be
automated with UiAutomator because they are rendered within a single UI
element. They require a recording to be performed manually and currently need
re-recording for each different device. For more information on revent
workloads, please see :ref:`revent_files_creation`.
.. _apkrevent-workload:
APKRevent (:class:`wa.ApkReventWorkload <wa.framework.workload.ApkReventWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the same as the Revent workload, however it is also associated with an
Android app, e.g. Angry Birds, and will automatically deploy and launch the app
before running the automation.
.. _workload-interface:
Workload Interface
^^^^^^^^^^^^^^^^^^^
The workload interface should be implemented as follows:
.. class:: <workload_type>(TargetedPlugin)
.. attribute:: name
This identifies the workload (e.g. it is used to specify the
workload in the :ref:`agenda <agenda>`).
.. method:: init_resources(context)
This method may be optionally overridden to implement dynamic
resource discovery for the workload. This method executes
early on, before the device has been initialized, so it
should only be used to initialize resources that do not
depend on the device to resolve. This method is executed
once per run for each workload instance.
.. method:: validate()
This method can be used to validate any assumptions your workload
makes about the environment (e.g. that required files are
present, environment variables are set, etc) and should raise a
:class:`wa.WorkloadError <wa.framework.exception.WorkloadError>`
if that is not the case. The base class implementation only makes
sure that the name attribute has been set.
.. method:: initialize(context)
This method is decorated with the ``@once_per_instance`` decorator
(for more information please see `Execution Decorators`_),
therefore it will be executed exactly once per run (no matter
how many instances of the workload there are). It will run
after the device has been initialized, so it may be used to
perform device-dependent initialization that does not need to
be repeated on each iteration (e.g. installing executables
required by the workload on the device).
.. method:: setup(context)
Everything that needs to be in place for workload execution should
be done in this method. This includes copying files to the device,
starting up an application, configuring communications channels,
etc.
.. method:: setup_rerun(context)

The counterpart of ``setup`` that is invoked when a job is being
re-run rather than executed for the first time; anything that needs
to be re-established before the workload runs again should be done
here.
.. method:: run(context)
This method should perform the actual task that is being measured.
When this method exits, the task is assumed to be complete.
.. note:: Instruments are kicked off just before calling this
method and disabled right after, so everything in this
method is being measured. Therefore this method should
contain the least code possible to perform the operations
you are interested in measuring. Specifically, things like
installing or starting applications, processing results, or
copying files to/from the device should be done elsewhere if
possible.
.. method:: extract_results(context)
This method gets invoked after the task execution has finished and
should be used to extract metrics from the target.
.. method:: update_output(context)
This method should be used to update the output within the specified
execution context with the metrics and artifacts from this
workload iteration.
.. method:: teardown(context)
This could be used to perform any cleanup you may wish to do, e.g.
uninstalling applications, deleting files on the device, etc.
.. method:: finalize(context)
This is the complement to ``initialize``. This will be executed
exactly once at the end of the run. This should be used to
perform any final clean up (e.g. uninstalling binaries installed
in the ``initialize``).
Workload methods (except for ``validate``) take a single argument that is a
:class:`wa.framework.execution.ExecutionContext` instance. This object keeps
track of the current execution state (such as the current workload, iteration
number, etc), and contains, among other things, a
:class:`wa.framework.output.JobOutput` instance that should be populated from
the ``update_output`` method with the results of the execution. For more
information please see `the context`_ documentation. ::

    # ...

    def update_output(self, context):
        # ...
        context.add_metric('energy', 23.6, 'Joules', lower_is_better=True)
        # ...
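Putting the pieces together, a bare-bones workload might look like the sketch below. The ``Workload`` base class and the context are stand-ins so the example runs on its own; a real plugin would subclass ``wa.Workload``, and the ``sleeper`` workload itself is hypothetical:

```python
import time

class Workload:
    """Stand-in base class; real plugins subclass wa.Workload."""
    def setup(self, context): pass
    def run(self, context): pass
    def extract_results(self, context): pass
    def update_output(self, context): pass
    def teardown(self, context): pass

class Sleeper(Workload):
    name = 'sleeper'  # hypothetical example workload

    def run(self, context):
        # The "task" being measured: keep this method as small as possible.
        start = time.monotonic()
        time.sleep(0.01)
        self.elapsed = time.monotonic() - start

    def update_output(self, context):
        # Populate the job output with the collected metric.
        context.add_metric('elapsed', self.elapsed, 'seconds',
                           lower_is_better=True)

class FakeContext:
    """Stand-in for wa.framework.execution.ExecutionContext."""
    def __init__(self):
        self.metrics = []
    def add_metric(self, name, value, units=None, lower_is_better=False):
        self.metrics.append((name, value, units, lower_is_better))

ctx = FakeContext()
wl = Sleeper()
wl.run(ctx)
wl.update_output(ctx)
```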
.. _ReventWorkload:
Adding Revent Workload
-----------------------
There are two base classes that can be subclassed to create revent-based
workloads, depending on whether or not the workload is associated with an
Android APK:
:class:`wa.ApkReventWorkload <wa.framework.workload.ApkReventWorkload>` and
:class:`wa.ReventWorkload <wa.framework.workload.ReventWorkload>` respectively.
They both implement all the methods needed to push the files to the device and
run them.
The revent workload classes define the following interfaces::
    class ReventWorkload(Workload):

        name = None

    class ApkReventWorkload(Workload):

        name = None
        package_names = []
The interface should be implemented as follows:

:name: This identifies the workload (e.g. it is used to specify it in the
:ref:`agenda <agenda>`).

:package_names: This is a list of the Android application APK package names
that are required to run the workload.
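A minimal ApkRevent-based workload might then look like the sketch below. The base class is stubbed so the example is self-contained, and both the workload name and the package name are hypothetical:

```python
class ApkReventWorkload:
    """Stand-in for wa.ApkReventWorkload, for illustration only."""
    name = None
    package_names = []

class AngryBirds(ApkReventWorkload):
    # Hypothetical example; the real package name may differ.
    name = 'angrybirds'
    package_names = ['com.rovio.angrybirds']
```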
.. _instrument-reference:
Adding an Instrument
.. _instrument-api:
To implement your own instrument the relevant methods of the interface shown
below should be implemented:
.. class:: Instrument(TargetedInstrument)
:name:
The name of the instrument, this must be unique to WA.
:description:
A description of what the instrument can be used for.
:parameters:
A list of additional :class:`Parameters` the instrument can take.
:initialize(context):
This method will only be called once during the run, therefore
operations that only need to be performed initially should be
performed here, for example pushing files to the target device and
installing them.
:setup(context):
This method is invoked after the workload is set up. All the
necessary setup should go inside this method; this includes
operations like clearing logs, additional configuration, etc.
:start(context):
This is invoked just before the workload starts execution. This is
where the instrument's measurements should start being registered/taken.
:stop(context):
This is invoked just after the workload execution stops. This is where
the measurements should stop being taken/registered.
:update_output(context):
This is invoked after the workload has updated its result. This is
where the measurements taken should be added to the result so they can
be processed by Workload Automation.
:teardown(context):
This is invoked after the workload is torn down. It is a good place
to clean up any logs generated by the instrument.
:finalize(context):
This method is the complement to the ``initialize`` method and will also
only be called once, so it should be used to delete/uninstall any files
pushed to the device during initialization.
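As a sketch, a tiny instrument that measures how long the workload's ``run`` takes could look like this. The ``Instrument`` base class and context are stand-ins so the example runs on its own, and ``execution_timer`` is a hypothetical name:

```python
import time

class Instrument:
    """Stand-in base for WA's Instrument class, for illustration only."""

class ExecutionTimer(Instrument):
    name = 'execution_timer'  # hypothetical instrument

    def start(self, context):
        # Called just before the workload's run() -- begin measuring.
        self._start = time.monotonic()

    def stop(self, context):
        # Called right after run() returns -- stop measuring.
        self._elapsed = time.monotonic() - self._start

    def update_output(self, context):
        # Record the measurement so WA can process it.
        context.add_metric('execution_time', self._elapsed, 'seconds',
                           lower_is_better=True)

class FakeContext:
    """Stand-in execution context that just collects metrics."""
    def __init__(self):
        self.metrics = []
    def add_metric(self, name, value, units=None, lower_is_better=False):
        self.metrics.append((name, value, units))

ctx = FakeContext()
timer = ExecutionTimer()
timer.start(ctx)
timer.stop(ctx)
timer.update_output(ctx)
```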
You can add your own output processors by creating a Python file in
``~/.workload_automation/plugins`` with a class that derives from
:class:`wa.OutputProcessor <wa.framework.processor.OutputProcessor>`, and should
implement the relevant methods from the following interface:
.. class:: OutputProcessor(Plugin):
:name:
The name of the output processor, this must be unique to WA.
:description:
A description of what the output processor can be used for.
:parameters:
A list of additional :class:`Parameters` the output processor can take.
:initialize():
This method will only be called once during the run, therefore
operations that only need to be performed initially should be
performed here.
:process_job_output(output, target_info, run_output):
This method should be used to process the output of an individual
job. This is where any additional artifacts should be generated, if
applicable.
:export_job_output(output, target_info, run_output):
This method should be used to export the existing data
collected/generated for an individual job, e.g. uploading it to a
database.
:process_run_output(output, target_info):
This method should be used to process the output of the run as a
whole. This is where any additional artifacts should be generated, if
applicable.
:export_run_output(output, target_info):
This method should be used to export the existing data
collected/generated for the run as a whole, e.g. uploading it to a
database.
:finalize():
This method is the complement to the initialize method and will also
only be called once.
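As a sketch, a minimal processor that renders the run's metrics as CSV text might look like this. The ``OutputProcessor`` base class and the run-output object are stand-ins so the example runs on its own, and ``simple_csv`` is a hypothetical name:

```python
import csv
import io

class OutputProcessor:
    """Stand-in base for wa.OutputProcessor, for illustration only."""

class SimpleCsv(OutputProcessor):
    name = 'simple_csv'  # hypothetical processor

    def process_run_output(self, output, target_info):
        # Render all collected metrics as CSV text.
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(['metric', 'value', 'units'])
        for name, value, units in output.metrics:
            writer.writerow([name, value, units])
        return buf.getvalue()

class FakeRunOutput:
    """Stand-in for wa.framework.output.RunOutput, metrics as plain tuples."""
    metrics = [('score', 9001, None), ('time', 2.35, 'seconds')]

text = SimpleCsv().process_run_output(FakeRunOutput(), None)
```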

-----------------

.. include:: developer_information/developer_reference/plugins.rst

-----------------

.. include:: developer_information/developer_reference/serialization.rst

-----------------

.. include:: developer_information/developer_reference/contributing.rst

.. _plugins:
Plugins
=======
Workload Automation offers several plugin points (or plugin types). The most
interesting of these are:
:workloads: These are the tasks that get executed and measured on the device. These
can be benchmarks, high-level use cases, or pretty much anything else.
:targets: These are interfaces to the physical devices (development boards or end-user
devices, such as smartphones) that use cases run on. Typically each model of a
physical device would require its own interface class (though some functionality
may be reused by subclassing from an existing base).
:instruments: Instruments allow collecting additional data from workload execution (e.g.
system traces). Instruments are not specific to a particular workload. Instruments
can hook into any stage of workload execution.
:output processors: These are used to format the results of workload execution once they have been
collected. Depending on the callback used, these will run either after each
iteration and/or at the end of the run, after all of the results have been
collected.
You can create a plugin by subclassing the appropriate base class, defining
appropriate methods and attributes, and putting the .py file containing the
class into the "plugins" subdirectory under ``~/.workload_automation`` (or
equivalent) where it will be automatically picked up by WA.
Plugin Basics
--------------
This section contains reference information common to plugins of all types.
.. _metrics:
Metrics
^^^^^^^
This is what WA uses to store a single metric collected from executing a workload.
:name: the name of the metric. Uniquely identifies the metric
within the results.
:value: The numerical value of the metric for this execution of a
workload. This can be either an int or a float.
:units: Units for the collected value. Can be None if the value
has no units (e.g. it's a count or a standardised score).
:lower_is_better: Boolean flag indicating where lower values are
better than higher ones. Defaults to False.
:classifiers: A set of key-value pairs to further classify this
metric beyond current iteration (e.g. this can be used
to identify sub-tests).
Metrics can be added to WA output via the context:
.. code-block:: python
context.add_metric("score", 9001)
context.add_metric("time", 2.35, "seconds", lower_is_better=True)
You only need to specify the name and the value for the metric. Units and
classifiers are optional, and, if not specified otherwise, it will be assumed
that higher values are better (lower_is_better=False).
The metric will be added to the result for the current job, if there is one;
otherwise, it will be added to the overall run result.
.. _artifact:
Artifacts
^^^^^^^^^
This is an artifact generated during execution/post-processing of a workload.
Unlike :ref:`metrics <metrics>`, this represents an actual artifact, such as a
file, generated. This may be "output", such as trace, or it could be "meta
data" such as logs. These are distinguished using the ``kind`` attribute, which
also helps WA decide how it should be handled. Currently supported kinds are:
:log: A log file. Not part of the "output" as such but contains
information about the run/workload execution that be useful for
diagnostics/meta analysis.
:meta: A file containing metadata. This is not part of the "output", but
contains information that may be necessary to reproduce the
results (contrast with ``log`` artifacts which are *not*
necessary).
:data: This file contains new data, not available otherwise and should
be considered part of the "output" generated by WA. Most traces
would fall into this category.
:export: Exported version of results or some other artifact. This
signifies that this artifact does not contain any new data
that is not available elsewhere and that it may be safely
discarded without losing information.
:raw: Signifies that this is a raw dump/log that is normally processed
to extract useful information and is then discarded. In a sense,
it is the opposite of ``export``, but in general may also be
discarded.
.. note:: whether a file is marked as ``log``/``data`` or ``raw``
depends on how important it is to preserve this file,
e.g. when archiving, vs how much space it takes up.
Unlike ``export`` artifacts which are (almost) always
ignored by other exporters as that would never result
in data loss, ``raw`` files *may* be processed by
exporters if they decided that the risk of losing
potentially (though unlikely) useful data is greater
than the time/space cost of handling the artifact (e.g.
a database uploader may choose to ignore ``raw``
artifacts, whereas a network filer archiver may choose
to archive them).
.. note:: The ``kind`` parameter is intended to represent the logical
          function of a particular artifact, not its intended means of
          processing -- this is left entirely up to the output
          processors.
As with :ref:`metrics`, artifacts are added via the context:
.. code-block:: python
context.add_artifact("benchmark-output", "bech-out.txt", kind="raw",
description="stdout from running the benchmark")
.. note:: The file *must* exist on the host by the point at which the artifact
is added, otherwise an error will be raised.
The artifact will be added to the result of the current job, if there is one;
otherwise, it will be added to the overall run result. In some situations, you
may wish to add an artifact to the overall run while inside a job context; this
can be done with ``add_run_artifact``:
.. code-block:: python
context.add_run_artifact("score-summary", "scores.txt", kind="export",
description="""
Summary of the scores so far. Updated after
every job.
""")
In this case, you also need to make sure that the file represented by the
artifact is written to the output directory for the run and not the current job.
.. _metadata:
Metadata
^^^^^^^^
There may be additional data collected by your plugin that you want to record as
part of the result, but that does not fall under the definition of a "metric".
For example, you may want to record the version of the binary you're executing.
You can do this by adding a metadata entry:
.. code-block:: python
context.add_metadata("exe-version", 1.3)
Metadata will be added either to the current job result, or to the run result,
depending on the current context. Metadata values can be scalars or nested
structures of dicts/sequences; the only constraint is that all constituent
objects of the value must be POD (Plain Old Data) types -- see :ref:`WA POD
types <wa-pods>`.
There is special support for handling metadata entries that are dicts of values.
The following call adds a metadata entry ``"versions"`` whose value is
``{"my_exe": 1.3}``:
.. code-block:: python
context.add_metadata("versions", "my_exe", 1.3)
If you attempt to add a metadata entry that already exists, an error will be
raised, unless ``force=True`` is specified, in which case, it will be
overwritten.
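The add-or-raise behaviour can be pictured with a small self-contained sketch. This is only an illustrative model of the semantics described above, not WA's actual implementation:

```python
# Illustrative sketch of the add/force semantics described above;
# this is not WA's actual implementation.
def add_entry(metadata, name, value, force=False):
    if name in metadata and not force:
        raise ValueError('Metadata entry "{}" already exists.'.format(name))
    metadata[name] = value

meta = {}
add_entry(meta, 'exe-version', 1.3)
add_entry(meta, 'exe-version', 1.4, force=True)  # overwrites the entry
```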
Updating an existing entry whose value is a collection can be done with
``update_metadata``:
.. code-block:: python
context.update_metadata("ran_apps", "my_exe")
context.update_metadata("versions", "my_other_exe", "2.3.0")
The first call appends ``"my_exe"`` to the list at metadata entry
``"ran_apps"``. The second call updates the ``"versions"`` dict in the metadata
with an entry for ``"my_other_exe"``.
If an entry does not exist, ``update_metadata`` will create it, so it's
recommended to always use it for non-scalar entries, unless the intention is
specifically to ensure that the entry does not exist at the time of the call.
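The create-or-update behaviour can be sketched as follows. This is an illustrative model of the semantics described above (appending to lists, merging into dicts), not WA's actual code:

```python
# Illustrative model of ``update_metadata`` semantics; not WA's code.
def update_entry(metadata, name, *args):
    if name not in metadata:
        # Create the entry: a dict for a key-value pair, a list otherwise.
        metadata[name] = dict([args]) if len(args) == 2 else list(args)
    elif isinstance(metadata[name], dict):
        metadata[name][args[0]] = args[1]
    elif isinstance(metadata[name], list):
        metadata[name].extend(args)

meta = {'ran_apps': [], 'versions': {'my_exe': 1.3}}
update_entry(meta, 'ran_apps', 'my_exe')
update_entry(meta, 'versions', 'my_other_exe', '2.3.0')
```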
.. _classifiers:
Classifiers
^^^^^^^^^^^
Classifiers are key-value pairs of tags that can be attached to metrics,
artifacts, jobs, or the entire run. Run and job classifiers get propagated to
metrics and artifacts. Classifier keys should be strings, and their values
should be simple scalars (i.e. strings, numbers, or bools).
Classifiers can be thought of as "tags" that are used to annotate metrics and
artifacts, in order to make it easier to sort through them later. WA itself does
not do anything with them, however output processors will augment the output
they generate with them (for example, ``csv`` processor can add additional
columns for classifier keys).
Classifiers are typically added by the user to attach some domain-specific
information (e.g. experiment configuration identifier) to the results, see
:ref:`using classifiers <using-classifiers>`. However, plugins can also attach
additional classifiers by specifying them in ``add_metric()`` and
``add_artifact()`` calls.
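The propagation of run/job classifiers down to individual metrics can be pictured with a short sketch. This is an illustrative model only; the actual merging is performed internally by the framework:

```python
# Illustrative model of classifier propagation; not WA's actual classes.
def merged_classifiers(run_classifiers, metric_classifiers=None):
    # Run/job classifiers are propagated to the metric; metric-level
    # classifiers are layered on top.
    merged = dict(run_classifiers)
    merged.update(metric_classifiers or {})
    return merged

result = merged_classifiers({'experiment': 'baseline'}, {'core': 'big'})
```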
Metadata vs Classifiers
^^^^^^^^^^^^^^^^^^^^^^^
Both metadata and classifiers are sets of essentially opaque key-value pairs
that get included in WA output. While they may seem somewhat similar and
interchangeable, they serve different purposes and are handled differently by
the framework.
Classifiers are used to annotate generated metrics and artifacts in order to
assist post-processing tools in sorting through them. Metadata is used to record
additional information that is not necessary for processing the results, but
that may be needed in order to reproduce them or to make sense of them in a
grander context.
These are specific differences in how they are handled:
- Classifiers are often provided by the user via the agenda (though they can
  also be added by plugins). Metadata is only created by the framework and
  plugins.
- Classifier values must be simple scalars; metadata values can be nested
collections, such as lists or dicts.
- Classifiers are used by output processors to augment the output they
  generate; metadata typically isn't.
- Classifiers are essentially associated with the individual metrics and
artifacts (though in the agenda they're specified at workload, section, or
global run levels); metadata is associated with a particular job or run, and
not with metrics or artifacts.
.. _execution-decorators:
Execution Decorators
---------------------
The following decorators are available for use in order to control how often a
method may be executed.

For example, if we want to ensure that no matter how many iterations of a
particular workload are run, we only execute the ``initialize`` method for that
instance once, we would use the decorator as follows:
.. code-block:: python
from wa.utils.exec_control import once
@once
def initialize(self, context):
# Perform one time initialization e.g. installing a binary to target
# ..
@once_per_instance
^^^^^^^^^^^^^^^^^^
The specified method will be invoked only once for every bound instance within
the environment.
@once_per_class
^^^^^^^^^^^^^^^
The specified method will be invoked only once for all instances of a class
within the environment.
@once
^^^^^
The specified method will be invoked only once within the environment.
.. warning:: If a method containing a super call is decorated, this will also
             stop propagation up the hierarchy. Unless this is the desired
             effect, additional functionality should be implemented in a
             separate decorated method which can then be called, allowing
             normal propagation to be retained.
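The behaviour of ``@once`` can be approximated with a simple sketch. The real implementation lives in ``wa.utils.exec_control`` and is more involved (e.g. it tracks an execution environment that can be reset); this is only a simplified model:

```python
import functools

# Simplified sketch of a run-once decorator; the real @once in
# wa.utils.exec_control additionally tracks an execution environment.
# A free function is used here for brevity; in WA the decorator is
# applied to plugin methods.
def once(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return method(*args, **kwargs)
    wrapper.has_run = False
    return wrapper

calls = []

@once
def initialize():
    calls.append('init')

initialize()
initialize()  # second call is a no-op
```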
Utils
^^^^^
Workload Automation defines a number of utilities collected under the
:mod:`wa.utils` subpackage. These utilities were created to help with the
implementation of the framework itself, but may also be useful when
implementing plugins.
--------------------
Workloads
---------
All of the workload types inherit from the same base :class:`Workload` class,
whose API can be seen in the :ref:`API <workload-api>` section.
Workload methods (except for ``validate``) take a single argument that is a
:class:`wa.framework.execution.ExecutionContext` instance. This object keeps
track of the current execution state (such as the current workload, iteration
number, etc), and contains, among other things, a
:class:`wa.framework.output.JobOutput` instance that should be populated from
the ``update_output`` method with the results of the execution. For more
information please see `the context`_ documentation. ::
# ...
def update_output(self, context):
# ...
context.add_metric('energy', 23.6, 'Joules', lower_is_better=True)
# ...
.. _workload-types:
Workload Types
^^^^^^^^^^^^^^^^
There are multiple workload types that you can inherit from depending on the
purpose of your workload; the different types, along with an outline of their
intended use cases, are described below.
.. _basic-workload:
Basic (:class:`wa.Workload <wa.framework.workload.Workload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the simplest type of workload; it is left to the developer to
implement its full functionality.
.. _apk-workload:
Apk (:class:`wa.ApkWorkload <wa.framework.workload.ApkWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This workload will simply deploy and launch an android app in its basic form
with no UI interaction.
.. _uiautomator-workload:
UiAuto (:class:`wa.UiautoWorkload <wa.framework.workload.UiautoWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This workload is for android targets which will use UiAutomator to interact with
UI elements without a specific android app, for example performing manipulation
of android itself. This is the preferred type of automation as the results are
more portable and reproducible due to being able to wait for UI elements to
appear rather than having to rely on human recordings.
.. _apkuiautomator-workload:
ApkUiAuto (:class:`wa.ApkUiautoWorkload <wa.framework.workload.ApkUiautoWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the same as the UiAuto workload, however it is also associated with an
android app, e.g. AdobeReader, and will automatically deploy and launch the
android app before running the automation.
.. _revent-workload:
Revent (:class:`wa.ReventWorkload <wa.framework.workload.ReventWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Revent workloads are designed primarily for games, as these cannot be
automated with UiAutomator because they are rendered within a single UI
element. They require a recording to be performed manually and currently will
need re-recording for each different device. For more information on revent
workloads please see :ref:`revent_files_creation`.
.. _apkrevent-workload:
ApkRevent (:class:`wa.ApkReventWorkload <wa.framework.workload.ApkReventWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the same as the Revent workload, however it is also associated with an
android app, e.g. AngryBirds, and will automatically deploy and launch the
android app before running the automation.
@@ -7,7 +7,7 @@ Installing binaries for a particular plugin should generally only be performed
 once during a run. This should typically be done in the ``initialize`` method,
 if the only functionality performed in the method is to install the required binaries
 then the ``initialize`` method should be decorated with the ``@once``
-:ref:`decorator <decorators>` otherwise this should be placed into a dedicated
+:ref:`decorator <execution-decorators>` otherwise this should be placed into a dedicated
 method which is decorated instead. Please note if doing this then any installed
 paths should be added as class attributes rather than instance variables. As a
 general rule if binaries are installed as part of ``initialize`` then they
@@ -75,7 +75,7 @@ to compress a file of a particular size on the device.
 .. note:: This is intended as an example of how to implement the Workload
-          :ref:`interface <workload-interface>`. The methodology used to
+          :ref:`interface <workload-api>`. The methodology used to
           perform the actual measurement is not necessarily sound, and this
           Workload should not be used to collect real measurements.
@ -1,29 +0,0 @@
.. _developer_reference:
====================
Developer Reference
====================
.. contents::
:depth: 3
:local:
--------------------------------------------------------------------------------
.. include:: developer_reference/execution_model.rst
-----------------
.. include:: developer_reference/writing_plugins.rst
-----------------
.. include:: developer_reference/contributing.rst
-----------------
.. include:: developer_reference/revent.rst
-----------------
.. include:: developer_reference/serialization.rst
@@ -167,9 +167,10 @@ Python Workload Structure
   any results from the target back to the host system and to update the output
   with any metrics or artefacts for the specific workload iteration respectively.
-- WA now features :ref:`decorators <decorators>` which can be used to allow for more efficient
-  binary deployment and that they are only installed to the device once per run. For
-  more information of implementing this please see
+- WA now features :ref:`execution decorators <execution-decorators>` which can
+  be used to allow for more efficient binary deployment and that they are only
+  installed to the device once per run. For more information of implementing
+  this please see
   :ref:`deploying executables to a target <deploying-executables>`.
@@ -332,7 +332,7 @@ parameters used.
     name: cyclictest
     iterations: 10
-.. _classifiers:
+.. _using-classifiers:
 Classifiers
 ------------