.. _plugins:

Plugins
=======

Workload Automation offers several plugin points (or plugin types). The most
interesting of these are:

:workloads: These are the tasks that get executed and measured on the device.
            These can be benchmarks, high-level use cases, or pretty much
            anything else.
:targets: These are interfaces to the physical devices (development boards or
          end-user devices, such as smartphones) that use cases run on.
          Typically, each model of a physical device would require its own
          interface class (though some functionality may be reused by
          subclassing from an existing base).
:instruments: Instruments allow collecting additional data from workload
              execution (e.g. system traces). Instruments are not specific to a
              particular workload. Instruments can hook into any stage of
              workload execution.
:output processors: These are used to format the results of workload execution
                    once they have been collected. Depending on the callback
                    used, these will run either after each iteration and/or at
                    the end of the run, after all of the results have been
                    collected.

You can create a plugin by subclassing the appropriate base class, defining
appropriate methods and attributes, and putting the .py file containing the
class into the "plugins" subdirectory under ``~/.workload_automation`` (or
equivalent) where it will be automatically picked up by WA.


Plugin Basics
--------------

This section contains reference information common to plugins of all types.

.. _context:

The Context
^^^^^^^^^^^

The majority of methods in plugins accept a context argument. This is an
instance of :class:`wa.framework.execution.ExecutionContext`. It contains
information about the current state of execution of WA and keeps track of
things like which workload is currently running.
Notable methods of the context are:

:context.get_resource(resource, strict=True): This method should be used to
   retrieve a resource using the resource getters rather than using the
   ResourceResolver directly, as this method additionally records the hash of
   any found resource in the output metadata.

:context.add_artifact(name, host_file_path, kind, description=None, classifier=None):
   Plugins can add :ref:`artifacts <artifact>` of various kinds to the run
   output directory for WA and associate them with a description and/or
   :ref:`classifier <classifiers>`.

:context.add_metric(name, value, units=None, lower_is_better=False, classifiers=None):
   This method should be used to add :ref:`metrics <metrics>` that have been
   generated from a workload. This will allow WA to process the results
   accordingly, depending on which output processors are enabled.

Notable attributes of the context are:

:context.workload: The :class:`wa.framework.workload` object that is currently
   being executed.

:context.tm: This is the target manager that can be used to access various
   information about the target, including initialization parameters.

:context.current_job: This is an instance of :class:`wa.framework.job.Job` and
   contains all the information relevant to the workload job currently being
   executed.

:context.current_job.spec: The current workload specification being executed.
   This is an instance of :class:`wa.framework.configuration.core.JobSpec` and
   defines the workload and the parameters under which it is being executed.

:context.current_job.current_iteration: The current iteration of the spec that
   is being executed. Note that this is the iteration for that spec, i.e. the
   number of times that spec has been run, *not* the total number of all
   iterations that have been executed so far.

:context.job_output: This is the output object for the current iteration, which
   is an instance of :class:`wa.framework.output.JobOutput`. It contains the
   status of the iteration as well as the metrics and artifacts generated by
   the workload.
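As an illustration, the methods above might be used together from a workload's
``update_output`` method. The sketch below uses a minimal ``FakeContext``
stand-in so that it can run outside WA; in a real plugin, the framework passes
in the actual :class:`ExecutionContext`, and the metric and file names here are
made up for illustration:

```python
# Minimal stand-in for ExecutionContext, so this sketch runs outside WA.
class FakeContext:
    def __init__(self):
        self.metrics = []
        self.artifacts = []

    def add_metric(self, name, value, units=None, lower_is_better=False,
                   classifiers=None):
        self.metrics.append((name, value, units, lower_is_better))

    def add_artifact(self, name, path, kind, description=None,
                     classifier=None):
        self.artifacts.append((name, path, kind))


def update_output(context):
    # Record metrics produced by this iteration of the workload.
    context.add_metric("score", 42)
    context.add_metric("time", 2.35, "seconds", lower_is_better=True)
    # Attach an output file generated by the workload as a raw artifact.
    context.add_artifact("raw-output", "output.log", kind="raw",
                         description="Raw output from the workload binary.")


context = FakeContext()
update_output(context)
```

In a real plugin, ``update_output`` would be a method on the workload class and
the file passed to ``add_artifact`` would already exist in the iteration's
output directory.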
In addition to these, context also defines a few useful paths (see below).


Paths
^^^^^

You should avoid using hard-coded absolute paths in your plugins whenever
possible, as they make your code too dependent on a particular environment and
may mean having to make adjustments when moving to new (host and/or device)
platforms. To help avoid hard-coded absolute paths, WA defines a number of
standard locations. You should strive to define your paths relative to one of
these.

On the host
~~~~~~~~~~~

Host paths are available through the context object, which is passed to most
plugin methods.

context.run_output_directory
    This is the top-level output directory for all WA results (by default,
    this will be "wa_output" in the directory in which WA was invoked).

context.output_directory
    This is the output directory for the current iteration. This will be an
    iteration-specific subdirectory under the main results location. If there
    is no current iteration (e.g. when processing overall run results) this
    will point to the same location as ``run_output_directory``.

Additionally, the global ``wa.settings`` object exposes one other location:

settings.dependency_directory
    This is the root directory for all plugin dependencies (e.g. media files,
    assets etc) that are not included within the plugin itself.

As per Python best practice, it is recommended that the methods and values in
the ``os.path`` standard library module are used for host path manipulation.

On the target
~~~~~~~~~~~~~

Workloads and instruments have a ``target`` attribute, which is an interface to
the target used by WA. It defines the following location:

target.working_directory
    This is the directory for all WA-related files on the target. All files
    deployed to the target should be pushed to somewhere under this location
    (the only exception being executables installed with the ``target.install``
    method).
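To make this concrete, here is a small sketch of composing host- and
target-side paths for deploying a dependency. The directory values are made up
for illustration; in a plugin they would come from ``wa.settings`` and
``self.target``, and ``posixpath`` stands in for the target-side path module
discussed next:

```python
import os
import posixpath  # stand-in for self.target.path on a Linux/Android target

# Hypothetical example values -- in a real plugin these come from
# wa.settings.dependency_directory and self.target.working_directory.
dependency_directory = os.path.join("home", "user", ".workload_automation",
                                    "dependencies", "mybench")
working_directory = "/data/local/tmp/wa-working"

# Host-side path to a media file shipped as a plugin dependency,
# composed with os.path as recommended above.
host_file = os.path.join(dependency_directory, "video.mp4")

# Target-side destination under the WA working directory.
target_file = posixpath.join(working_directory, "video.mp4")

# In a workload this would then typically be deployed with:
#     self.target.push(host_file, target_file)
```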
Since there could be a mismatch between the path notation used by the host and
the target, the ``os.path`` module should *not* be used for on-target path
manipulation. Instead, the target has an equivalent module exposed through the
``target.path`` attribute. This has all the same attributes and behaves the
same way as ``os.path``, but is guaranteed to produce valid paths for the
target, irrespective of the host's path notation. For example:

.. code:: python

   result_file = self.target.path.join(self.target.working_directory, "result.txt")
   self.command = "{} -a -b -c {}".format(target_binary, result_file)

.. note:: Output processors, unlike workloads and instruments, do not have
          their own target attribute, as they are designed to be able to be run
          offline.

.. _plugin-parmeters:

Parameters
^^^^^^^^^^^

All plugins can be parametrized. Parameters are specified using the
``parameters`` class attribute. This should be a list of
:class:`wa.framework.plugin.Parameter` instances. The following attributes can
be specified on parameter creation:

:name: This is the only mandatory argument. The name will be used to create a
       corresponding attribute in the plugin instance, so it must be a valid
       Python identifier.

:kind: This is the type of the value of the parameter. This must be a
       callable. Normally, this should be a standard Python type, e.g. ``int``
       or ``float``, or one of the types defined in :mod:`wa.utils.types`. If
       not explicitly specified, this will default to ``str``.

       .. note:: Irrespective of the ``kind`` specified, ``None`` is always a
                 valid value for a parameter. If you don't want to allow
                 ``None``, then set ``mandatory`` (see below) to ``True``.

:allowed_values: A list of the only allowed values for this parameter.

       .. note:: For composite types, such as ``list_of_strings`` or
                 ``list_of_ints`` in :mod:`wa.utils.types`, each element of the
                 value will be checked against ``allowed_values`` rather than
                 the composite value itself.
:default: The default value to be used for this parameter if one has not been
          specified by the user. Defaults to ``None``.

:mandatory: A ``bool`` indicating whether this parameter is mandatory. Setting
            this to ``True`` will make ``None`` an illegal value for the
            parameter. Defaults to ``False``.

            .. note:: Specifying a ``default`` will mean that ``mandatory``
                      will, effectively, be ignored (unless the user sets the
                      param to ``None``).

            .. note:: Mandatory parameters are *bad*. If at all possible, you
                      should strive to provide a sensible ``default`` or to
                      make do without the parameter. Only when the param is
                      absolutely necessary, and there really is no sensible
                      default that could be given (e.g. something like login
                      credentials), should you consider making it mandatory.

:constraint: This is an additional constraint to be enforced on the parameter
             beyond its type or fixed allowed values set. This should be a
             predicate (a function that takes a single argument -- the
             user-supplied value -- and returns a ``bool`` indicating whether
             the constraint has been satisfied).

:override: A parameter name must be unique not only within a plugin but also
           within that plugin's class hierarchy. If you try to declare a
           parameter with the same name as one that already exists, you will
           get an error. If you do want to override a parameter from further up
           in the inheritance hierarchy, you can indicate that by setting the
           ``override`` attribute to ``True``. When overriding, you do not need
           to specify every other attribute of the parameter, just the ones you
           want to override. Values for the rest will be taken from the
           parameter in the base class.


Validation and cross-parameter constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A plugin will get validated at some point after construction. When exactly this
occurs depends on the plugin type, but it *will* be validated before it is
used.
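To tie the parameter attributes above together, here is a sketch of a
``parameters`` declaration. ``SimpleParameter`` is a cut-down stand-in for
:class:`wa.framework.plugin.Parameter` so that the example is self-contained;
a real plugin would use ``Parameter`` directly, and the parameter names here
are invented for illustration:

```python
# Cut-down stand-in for wa.framework.plugin.Parameter, illustration only.
class SimpleParameter:
    def __init__(self, name, kind=str, default=None, allowed_values=None,
                 mandatory=False, constraint=None, override=False,
                 description=''):
        self.name = name
        self.kind = kind
        self.default = default
        self.allowed_values = allowed_values
        self.mandatory = mandatory
        self.constraint = constraint
        self.override = override
        self.description = description


parameters = [
    # A positive-int parameter with a sensible default (preferred over
    # making it mandatory).
    SimpleParameter('iterations', kind=int, default=1,
                    constraint=lambda value: value > 0,
                    description='Number of times to run the benchmark loop.'),
    # A parameter restricted to a fixed set of allowed values.
    SimpleParameter('mode', allowed_values=['fast', 'thorough'],
                    default='fast',
                    description='Trade-off between run time and accuracy.'),
]
```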
You can implement a ``validate`` method in your plugin (that takes no arguments
beyond ``self``) to perform any additional *internal* validation in your
plugin. By "internal", I mean that you cannot make assumptions about the
surrounding environment (e.g. that the device has been initialized).

The contract for the ``validate`` method is that it should raise an exception
(either ``wa.framework.exception.ConfigError`` or a plugin-specific exception
type -- see further on this page) if some validation condition has not been,
and cannot be, met. If the method returns without raising an exception, then
the plugin is in a valid internal state.

Note that ``validate`` can be used not only to verify, but also to impose a
valid internal state. In particular, this is where cross-parameter constraints
can be resolved. If the ``default`` or ``allowed_values`` of one parameter
depend on another parameter, there is no way to express that declaratively when
specifying the parameters. In that case, the dependent attribute should be left
unspecified on creation and should instead be set inside ``validate``.

Logging
^^^^^^^

Every plugin class has its own logger that you can access through
``self.logger`` inside the plugin's methods. Generally, a :class:`Target` will
log everything it is doing, so you shouldn't need to add much additional
logging for device actions. However, you might want to log additional
information, e.g. what settings your plugin is using, what it is doing on the
host, etc. (Operations on the host will not normally be logged, so your plugin
should definitely log what it is doing on the host).

One situation in particular where you should add logging is before doing
something that might take a significant amount of time, such as downloading a
file.

Documenting
^^^^^^^^^^^

All plugins and their parameters should be documented. For plugins themselves,
this is done through the ``description`` class attribute.
The convention for a plugin description is that the first paragraph should be a
short summary description of what the plugin does and why one would want to use
it (among other things, this will get extracted and used by the ``wa list``
command). Subsequent paragraphs (separated by blank lines) can then provide a
more detailed description, including any limitations and setup instructions.

For parameters, the description is passed as an argument on creation. Please
note that if ``default``, ``allowed_values``, or ``constraint`` are set in the
parameter, they do not need to be explicitly mentioned in the description (WA
documentation utilities will automatically pull those). If the ``default`` is
set in ``validate`` or additional cross-parameter constraints exist, this
*should* be documented in the parameter description.

Both plugins and their parameters should be documented using reStructuredText
markup (standard markup for Python documentation). See:

http://docutils.sourceforge.net/rst.html

Aside from that, it is up to you how you document your plugin. You should try
to provide enough information so that someone unfamiliar with your plugin is
able to use it, e.g. you should document all settings and parameters your
plugin expects (including what the valid values are).

Error Notification
^^^^^^^^^^^^^^^^^^

When you detect an error condition, you should raise an appropriate exception
to notify the user. The exception would typically be :class:`ConfigError` or
(depending on the type of the plugin)
:class:`WorkloadError`/:class:`DeviceError`/:class:`InstrumentError`/:class:`OutputProcessorError`.
All these errors are defined in the :mod:`wa.framework.exception` module.

A :class:`ConfigError` should be raised where there is a problem in the
configuration specified by the user (either through the agenda or config
files). These errors are meant to be resolvable by simple adjustments to the
configuration (and the error message should suggest what adjustments need to be
made).
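For example, a configuration problem might be reported like this. This is a
sketch: ``ConfigError`` is redefined locally so the snippet is self-contained,
but in a plugin it would be imported from :mod:`wa.framework.exception`, and a
check like this would typically live in ``validate``; the parameter and limit
are invented for illustration:

```python
class ConfigError(Exception):
    """Local stand-in for wa.framework.exception.ConfigError."""


def check_sampling_rate(sampling_rate, max_rate=1000):
    # The message tells the user exactly what to adjust in their config.
    if sampling_rate > max_rate:
        raise ConfigError(
            'sampling_rate is set to {}, but must not exceed {}; '
            'please lower it in your agenda or config file.'
            .format(sampling_rate, max_rate))
```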
For all other errors, such as missing dependencies, a mis-configured
environment, or problems performing operations, the plugin type-specific
exceptions should be used.

If the plugin itself is capable of recovering from the error and carrying on,
it may make more sense to log an ERROR or WARNING level message using the
plugin's logger and to continue operation.

.. _metrics:

Metrics
^^^^^^^

This is what WA uses to store a single metric collected from executing a
workload.

:name: The name of the metric. Uniquely identifies the metric within the
       results.

:value: The numerical value of the metric for this execution of a workload.
        This can be either an int or a float.

:units: Units for the collected value. Can be None if the value has no units
        (e.g. it's a count or a standardised score).

:lower_is_better: Boolean flag indicating whether lower values are better than
                  higher ones. Defaults to False.

:classifiers: A set of key-value pairs to further classify this metric beyond
              the current iteration (e.g. this can be used to identify
              sub-tests).

Metrics can be added to the WA output via the :ref:`context <context>`:

.. code-block:: python

   context.add_metric("score", 9001)
   context.add_metric("time", 2.35, "seconds", lower_is_better=True)

You only need to specify the name and the value for the metric. Units and
classifiers are optional, and, if not specified otherwise, it will be assumed
that higher values are better (``lower_is_better=False``).

The metric will be added to the result for the current job, if there is one;
otherwise, it will be added to the overall run result.

.. _artifact:

Artifacts
^^^^^^^^^

This is an artifact generated during execution/post-processing of a workload.
Unlike :ref:`metrics <metrics>`, this represents an actual artifact, such as a
file, that has been generated. This may be "output", such as a trace, or it
could be "meta data" such as logs. These are distinguished using the ``kind``
attribute, which also helps WA decide how it should be handled.
Currently supported kinds are:

:log: A log file. Not part of the "output" as such, but contains information
      about the run/workload execution that may be useful for diagnostics/meta
      analysis.

:meta: A file containing metadata. This is not part of the "output", but
       contains information that may be necessary to reproduce the results
       (contrast with ``log`` artifacts which are *not* necessary).

:data: This file contains new data, not available otherwise, and should be
       considered part of the "output" generated by WA. Most traces would fall
       into this category.

:export: Exported version of results or some other artifact. This signifies
         that this artifact does not contain any new data that is not available
         elsewhere and that it may be safely discarded without losing
         information.

:raw: Signifies that this is a raw dump/log that is normally processed to
      extract useful information and is then discarded. In a sense, it is the
      opposite of ``export``, but in general may also be discarded.

      .. note:: Whether a file is marked as ``log``/``data`` or ``raw`` depends
                on how important it is to preserve this file, e.g. when
                archiving, vs how much space it takes up. Unlike ``export``
                artifacts, which are (almost) always ignored by other exporters
                as that would never result in data loss, ``raw`` files *may* be
                processed by exporters if they decide that the risk of losing
                potentially (though unlikely) useful data is greater than the
                time/space cost of handling the artifact (e.g. a database
                uploader may choose to ignore ``raw`` artifacts, whereas a
                network filer archiver may choose to archive them).

.. note:: The kind parameter is intended to represent the logical function of a
          particular artifact, not its intended means of processing -- this is
          left entirely up to the output processors.

As with :ref:`metrics`, artifacts are added via the :ref:`context <context>`:

.. code-block:: python

   context.add_artifact("benchmark-output", "bench-out.txt", kind="raw",
                        description="stdout from running the benchmark")

.. note:: The file *must* exist on the host by the point at which the artifact
          is added, otherwise an error will be raised.

The artifact will be added to the result of the current job, if there is one;
otherwise, it will be added to the overall run result. In some situations, you
may wish to add an artifact to the overall run while being inside a job
context; this can be done with ``add_run_artifact``:

.. code-block:: python

   context.add_run_artifact("score-summary", "scores.txt", kind="export",
                            description="""
                            Summary of the scores so far. Updated after
                            every job.
                            """)

In this case, you also need to make sure that the file represented by the
artifact is written to the output directory for the run and not the current
job.

.. _metadata:

Metadata
^^^^^^^^

There may be additional data collected by your plugin that you want to record
as part of the result, but that does not fall under the definition of a
"metric". For example, you may want to record the version of the binary you're
executing. You can do this by adding a metadata entry:

.. code-block:: python

   context.add_metadata("exe-version", 1.3)

Metadata will be added either to the current job result, or to the run result,
depending on the current context. Metadata values can be scalars or nested
structures of dicts/sequences; the only constraint is that all constituent
objects of the value must be POD (Plain Old Data) types -- see :ref:`WA POD
types `.

There is special support for handling metadata entries that are dicts of
values. The following call adds a metadata entry ``"versions"`` whose value is
``{"my_exe": 1.3}``:

.. code-block:: python

   context.add_metadata("versions", "my_exe", 1.3)

If you attempt to add a metadata entry that already exists, an error will be
raised, unless ``force=True`` is specified, in which case it will be
overwritten.

Updating an existing entry whose value is a collection can be done with
``update_metadata``:

.. code-block:: python

   context.update_metadata("ran_apps", "my_exe")
   context.update_metadata("versions", "my_other_exe", "2.3.0")

The first call appends ``"my_exe"`` to the list at metadata entry
``"ran_apps"``. The second call updates the ``"versions"`` dict in the metadata
with an entry for ``"my_other_exe"``.

If an entry does not exist, ``update_metadata`` will create it, so it's
recommended to always use that for non-scalar entries, unless the intention is
specifically to ensure that the entry does not exist at the time of the call.

.. _classifiers:

Classifiers
^^^^^^^^^^^

Classifiers are key-value pairs of tags that can be attached to metrics,
artifacts, jobs, or the entire run. Run and job classifiers get propagated to
metrics and artifacts. Classifier keys should be strings, and their values
should be simple scalars (i.e. strings, numbers, or bools).

Classifiers can be thought of as "tags" that are used to annotate metrics and
artifacts, in order to make it easier to sort through them later. WA itself
does not do anything with them; however, output processors will augment the
output they generate with them (for example, the ``csv`` processor can add
additional columns for classifier keys).

Classifiers are typically added by the user to attach some domain-specific
information (e.g. an experiment configuration identifier) to the results; see
:ref:`using classifiers `. However, plugins can also attach additional
classifiers, by specifying them in ``add_metric()`` and ``add_artifact()``
calls.

Metadata vs Classifiers
^^^^^^^^^^^^^^^^^^^^^^^

Both metadata and classifiers are sets of essentially opaque key-value pairs
that get included in WA output. While they may seem somewhat similar and
interchangeable, they serve different purposes and are handled differently by
the framework.

Classifiers are used to annotate generated metrics and artifacts in order to
assist post-processing tools in sorting through them.
Metadata is used to record additional information that is not necessary for
processing the results, but that may be needed in order to reproduce them or to
make sense of them in a grander context.

These are the specific differences in how they are handled:

- Classifiers are often provided by the user via the agenda (though they can
  also be added by plugins). Metadata is only created by the framework and
  plugins.
- Classifier values must be simple scalars; metadata values can be nested
  collections, such as lists or dicts.
- Classifiers are used by output processors to augment the output the latter
  generate; metadata typically isn't.
- Classifiers are essentially associated with the individual metrics and
  artifacts (though in the agenda they're specified at workload, section, or
  global run levels); metadata is associated with a particular job or run, and
  not with metrics or artifacts.

--------------------

.. _execution-decorators:

Execution Decorators
---------------------

The following decorators are available for use in order to control how often a
method should be able to be executed. For example, if we want to ensure that,
no matter how many iterations of a particular workload are run, we only execute
the initialize method for that instance once, we would use the decorator as
follows:

.. code-block:: python

   from wa.utils.exec_control import once

   @once
   def initialize(self, context):
       # Perform one-time initialization, e.g. installing a binary to the
       # target.
       # ..

@once_per_instance
^^^^^^^^^^^^^^^^^^

The specified method will be invoked only once for every bound instance within
the environment.

@once_per_class
^^^^^^^^^^^^^^^

The specified method will be invoked only once for all instances of a class
within the environment.

@once
^^^^^

The specified method will be invoked only once within the environment.

.. warning:: If a method containing a super call is decorated, this will also
             stop propagation up the hierarchy. Unless this is the desired
             effect, additional functionality should be implemented in a
             separate decorated method, which can then be called, allowing
             normal propagation to be retained.

--------------------

Utils
-----

Workload Automation defines a number of utilities collected under the
:mod:`wa.utils` subpackage. These utilities were created to help with the
implementation of the framework itself, but may also be useful when
implementing plugins.

--------------------

Workloads
---------

All workload types inherit from the same base :class:`Workload` class, whose
API can be seen in the :ref:`API ` section.

Workload methods (except for ``validate``) take a single argument that is a
:class:`wa.framework.execution.ExecutionContext` instance. This object keeps
track of the current execution state (such as the current workload, iteration
number, etc.), and contains, among other things, a
:class:`wa.framework.output.JobOutput` instance that should be populated from
the ``update_output`` method with the results of the execution. For more
information please see `the context`_ documentation. ::

   # ...

   def update_output(self, context):
       # ...
       context.add_metric('energy', 23.6, 'Joules', lower_is_better=True)

   # ...

.. _workload-types:

Workload Types
^^^^^^^^^^^^^^^^

There are multiple workload types that you can inherit from, depending on the
purpose of your workload. The different types, along with an outline of their
intended use cases, are described below.

.. _basic-workload:

Basic (:class:`wa.Workload <wa.framework.workload.Workload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the simplest type of workload; it is left to the developer to implement
its full functionality.

.. _apk-workload:

Apk (:class:`wa.ApkWorkload <wa.framework.workload.ApkWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This workload will simply deploy and launch an Android app in its basic form,
with no UI interaction.

.. _uiautomator-workload:

UiAuto (:class:`wa.UiautoWorkload <wa.framework.workload.UiautoWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This workload is for Android targets and will use UiAutomator to interact with
UI elements without a specific Android app, for example performing manipulation
of Android itself. This is the preferred type of automation, as the results are
more portable and reproducible due to being able to wait for UI elements to
appear rather than having to rely on human recordings.

.. _apkuiautomator-workload:

ApkUiAuto (:class:`wa.ApkUiautoWorkload <wa.framework.workload.ApkUiautoWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the same as the UiAuto workload, however it is also associated with an
Android app, e.g. AdobeReader, and will automatically deploy and launch that
app before running the automation.

.. _revent-workload:

Revent (:class:`wa.ReventWorkload <wa.framework.workload.ReventWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Revent workloads are designed primarily for games, as these cannot be automated
with UiAutomator due to the fact that they are rendered within a single UI
element. They require a recording to be performed manually and currently will
need re-recording for each different device. For more information on revent
workloads please see :ref:`revent_files_creation`.

.. _apkrevent-workload:

ApkRevent (:class:`wa.ApkReventWorkload <wa.framework.workload.ApkReventWorkload>`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the same as the Revent workload, however it is also associated with an
Android app, e.g. AngryBirds, and will automatically deploy and launch that app
before running the automation.
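As a closing illustration, here is a sketch of a basic workload. ``Workload``
below is an empty stand-in so the example runs on its own; an actual plugin
would use ``from wa import Workload``, and ``run`` would drive the target
rather than computing on the host. The name and behaviour are invented for
illustration:

```python
class Workload:
    """Empty stand-in for wa.Workload, for illustration only."""


class SumWorkload(Workload):

    name = 'sum_example'
    description = '''
    Sums the first N integers.

    The first paragraph above is the short summary picked up by ``wa list``;
    subsequent paragraphs, like this one, can give more detail.
    '''

    def setup(self, context):
        # Prepare everything needed for the measured part of the run.
        self.n = 1000

    def run(self, context):
        # The portion of the workload that actually gets measured.
        self.result = sum(range(self.n))

    def update_output(self, context):
        # Populate the job output with the collected results.
        context.add_metric('sum', self.result)
```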