Save classifiers at Result as well as Metric level. Reason: when
processing output, one might want to filter complete results as well as
individual metrics. While it is in theory possible to get a job's
classifiers by extracting those common to all of its metrics, this
fails when a job generates no metrics (note that one might still want
to process the output in this case, e.g. for the artifacts).
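
Roughly, the intent (class and attribute names below are illustrative,
not necessarily the actual API):

    class Metric(object):
        def __init__(self, name, value, classifiers=None):
            self.name = name
            self.value = value
            self.classifiers = classifiers or {}

    class Result(object):
        def __init__(self, classifiers=None):
            self.classifiers = classifiers or {}  # duplicated at Result level
            self.metrics = []
            self.artifacts = []

    def filter_results(results, **classifiers):
        """Select complete results matching the classifiers, even for
        jobs that generated no metrics (e.g. artifact-only output)."""
        return [r for r in results
                if all(r.classifiers.get(k) == v
                       for k, v in classifiers.items())]
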
Do not explode if a result file for a job is missing when loading a
RunOutput. Set the job status to "UNKNOWN" and add the exception raised
while attempting to load the file to the events.
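
A minimal sketch of the loading behaviour (the file name, container
class, and event API here are assumptions):

    import json
    import os

    class JobOutput(object):
        def __init__(self, job_dir):
            self.job_dir = job_dir
            self.status = None
            self.result = None
            self.events = []

        def add_event(self, message):
            self.events.append(message)

    def load_job_output(job_dir):
        job = JobOutput(job_dir)
        result_path = os.path.join(job_dir, 'result.json')
        try:
            with open(result_path) as fh:
                job.result = json.load(fh)
            job.status = job.result.get('status', 'UNKNOWN')
        except (IOError, OSError, ValueError) as e:
            # Missing or unreadable result file: don't explode, but
            # record why the status could not be determined.
            job.status = 'UNKNOWN'
            job.add_event('Could not load result file: {}'.format(e))
        return job
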
Make TargetInfo an attribute of RunOutput, replacing the read/write
methods for the targetfile. Instead, always load it on creation if the
targetfile exists (useful for external scripts), and provide a method
to set it after creation (useful during a WA run, where the output is
created before connecting to the target).
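
Something along these lines (the file name and pod serialization are
assumptions):

    import json
    import os

    class TargetInfo(object):
        @staticmethod
        def from_pod(pod):
            info = TargetInfo()
            info.__dict__.update(pod)
            return info

        def to_pod(self):
            return dict(self.__dict__)

    class RunOutput(object):
        TARGET_FILE = 'target_info.json'  # assumed name of the targetfile

        def __init__(self, output_dir):
            self.output_dir = output_dir
            self.target_info = None
            path = os.path.join(output_dir, self.TARGET_FILE)
            if os.path.isfile(path):  # pre-existing output: load eagerly
                with open(path) as fh:
                    self.target_info = TargetInfo.from_pod(json.load(fh))

        def set_target_info(self, info):
            # For a live WA run: the output directory is created before
            # the target connection, so the info arrives later.
            self.target_info = info
            path = os.path.join(self.output_dir, self.TARGET_FILE)
            with open(path, 'w') as fh:
                json.dump(info.to_pod(), fh)
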
Don't construct an ArtifactType in Output.add_artifact; the Artifact
class does that for us. Also fix the use of the nonexistent attribute
Artifact.valid_kinds.
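
The shape of the fix, approximately (kind values and signatures are
assumed):

    ARTIFACT_KINDS = ['log', 'meta', 'data', 'export', 'raw']

    class Artifact(object):
        def __init__(self, name, path, kind):
            if kind not in ARTIFACT_KINDS:  # validation lives here, once
                raise ValueError('Invalid artifact kind: {}'.format(kind))
            self.name = name
            self.path = path
            self.kind = kind

    class Output(object):
        def __init__(self):
            self.artifacts = []

        def add_artifact(self, name, path, kind):
            # Pass the raw kind straight through; no ArtifactType
            # construction on the caller's side.
            self.artifacts.append(Artifact(name, path, kind))
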
Added a workload type to handle workloads that consist of both an
application APK and an associated automation JAR. Added a benchmarkpi
implementation using the new workload type.
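
In outline, the new type deploys both pieces and then drives the app
through the JAR (class and helper names below are illustrative; the
real implementation goes through the device abstraction rather than
raw adb):

    import subprocess

    def adb(*args):
        subprocess.check_call(('adb',) + args)

    class ApkJarWorkload(object):
        """Workloads that ship an application APK plus an automation
        JAR that drives it (e.g. via uiautomator)."""

        automation_class = None  # entry class inside the JAR

        def __init__(self, apk_file, jar_file):
            self.apk_file = apk_file
            self.jar_file = jar_file

        def setup(self):
            adb('install', '-r', self.apk_file)
            adb('push', self.jar_file, '/data/local/tmp/')

        def run(self):
            jar = '/data/local/tmp/' + self.jar_file.rsplit('/', 1)[-1]
            adb('shell', 'uiautomator', 'runtest', jar,
                '-c', self.automation_class)
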
- Implemented result processor infrastructure.
- Corrected some status tracking issues (the status differed between
  internal states and the output).
- Added "csv" and "status" result processors (these will be enabled by
  default).
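
Sketch of the processor shape (the base class and hook names are
assumptions; the real plugin interface may differ):

    import csv

    class ResultProcessor(object):
        name = None

        def process_run_output(self, output):
            raise NotImplementedError()

    class CsvProcessor(ResultProcessor):
        name = 'csv'

        def process_run_output(self, output):
            # Flatten every job's metrics into a single CSV table.
            with open('results.csv', 'w', newline='') as fh:
                writer = csv.writer(fh)
                writer.writerow(['workload', 'metric', 'value', 'units'])
                for job in output.jobs:
                    for metric in job.metrics:
                        writer.writerow([job.label, metric.name,
                                         metric.value, metric.units])
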
Changing the way target descriptions work from a static mapping to
something that is dynamically generated and extensible via plugins.
Also moving core target implementation code under "framework".
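
Roughly (names here are illustrative): descriptions are gathered from
registered describers rather than read out of a fixed dict, so plugins
can contribute new targets without touching core.

    class TargetDescription(object):
        def __init__(self, name, conn_params=None, platform_params=None):
            self.name = name
            self.conn_params = conn_params or {}
            self.platform_params = platform_params or {}

    class TargetDescriber(object):
        """Plugin interface: each describer yields the targets it knows."""
        def get_descriptions(self):
            raise NotImplementedError()

    _describers = []

    def register_describer(describer):
        _describers.append(describer)

    def list_target_descriptions():
        descriptions = {}
        for describer in _describers:
            for desc in describer.get_descriptions():
                descriptions[desc.name] = desc
        return descriptions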