mirror of https://github.com/ARM-software/devlib.git synced 2025-09-23 12:21:54 +01:00

104 Commits
v1.1.1 ... v1.2

Author SHA1 Message Date
Marc Bonnici
5ff278b133 Version bump for release 2019-12-20 15:57:57 +00:00
Marc Bonnici
b72fb470e7 docs: Update to include Collector information 2019-12-20 15:16:32 +00:00
Marc Bonnici
a4fd57f023 devlib/__init__: Export LogcatCollector in devlib package 2019-12-20 15:16:32 +00:00
Marc Bonnici
cf8ebf6668 devlib/collector: Update Collectors to implement collector interface 2019-12-20 15:16:32 +00:00
Marc Bonnici
15a77a841d collector/screencapture: Refactor to use new collector interface
Update the interface to make use of the collector interface.
The notable change is the removal of the `output_path` argument provided
on initialisation, which is now supplied via the dedicated `set_output`
method.
2019-12-20 15:16:32 +00:00
Marc Bonnici
9bf9f2dd1b collector: Update the Collector Interface
Update `get_trace` to `get_data` to better reflect its purpose.
The return type of said method is now a `CollectorOutput` object, which
contains one or more `CollectorOutputEntry` objects; these provide the
`path` and `path_kind` attributes to indicate the path to the
obtained output and its type (currently a "file" or "directory")
respectively.
2019-12-20 15:16:32 +00:00
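The described interface can be sketched as below. This is a minimal, self-contained mimic based only on the commit message; the real devlib classes may differ in detail (for instance, devlib normalises `path_kind` case-insensitively).

```python
class CollectorOutputEntry:
    """Points at one piece of collected output and records its kind."""
    path_kinds = ['file', 'directory']

    def __init__(self, path, path_kind):
        if path_kind not in self.path_kinds:
            raise ValueError('{} is not a valid path_kind'.format(path_kind))
        self.path = path
        self.path_kind = path_kind


class CollectorOutput(list):
    """A list of CollectorOutputEntry objects, as returned by get_data()."""


# A collector's get_data() would return something like:
output = CollectorOutput([CollectorOutputEntry('/tmp/trace.dat', 'file')])
print(output[0].path, output[0].path_kind)
```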
Marc Bonnici
19887de71e devlib/trace: Refactor trace to be collector
We now have multiple `trace` instruments that do not match that
description, so they are moved into a more suitably named
hierarchy.
2019-12-20 15:16:32 +00:00
Marc Bonnici
baa7ad1650 devlib/AndroidTarget: Move adb specific commands into the ADB connection
The `AndroidTarget` class should not depend on ADB-specific commands, as
it is possible to use this target with other connection types, e.g. ssh.
Therefore move the adb-specific commands into the `AdbConnection`.

- `wait_for_device` and `reboot_bootloader` are now exposed in AndroidTarget
as generic methods and call through to the connection method.
- `adb_kill_server` is now a standalone function of the AdbConnection.
2019-12-20 15:15:45 +00:00
Marc Bonnici
75621022be devlib/AndroidTarget: Move ADB disconnect code into connection.
The `AndroidTarget` would ensure that when connecting to an IP target
it disconnected first, to prevent the connection getting stuck if
the previous connection was not closed correctly. Move this code into the
`AdbConnection` instead, as it is more relevant there.
2019-12-20 15:15:45 +00:00
Valentin Schneider
01dd80df34 module/sched: Fix get_capacities() on !SCHED_DEBUG kernels
While reading the DT-provided capacity values (exposed in sysfs) is
sufficient, get_capacities() also unconditionally fetches data from the
sched_domain procfs, which is only populated on kernels compiled with
CONFIG_SCHED_DEBUG.

Tweak the logic to only call get_sd_info() if it is both possible and
required.
2019-12-13 15:32:01 +00:00
Sergei Trofimov
eb0661a6b4 utils/android: update SDK versions map
Update the entry for API level 28 and add an entry for API level 29.
2019-12-06 16:25:11 +00:00
Marc Bonnici
f303d1326b exception/get_traceback: Fix type error
Passing a `BytesIO` object to `print_tb` raises a `TypeError`; change
this to a `StringIO` object instead.
2019-12-06 08:20:12 +00:00
Marc Bonnici
abd88548d2 instrument/frames: Fix missing import 2019-12-06 08:20:12 +00:00
Marc Bonnici
2a934288eb instrument/daq: Fix error message 2019-12-06 08:20:12 +00:00
Douglas RAILLARD
2bf4d8a433 target: Return a bool in Target.check_responsive()
Since bool is a subclass of int, turning 0 into False and 1 into True should not
break any user code.
2019-12-05 18:26:09 +00:00
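The compatibility claim above rests on a property of the language itself: `bool` is a subclass of `int`, so `False`/`True` compare and compute exactly like `0`/`1`.

```python
# bool is a subclass of int in Python, so returning True/False instead of
# 1/0 keeps comparisons and arithmetic working for existing callers.
print(issubclass(bool, int))   # True
print(False == 0, True == 1)   # True True
print(True + 1)                # 2
```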
Valentin Schneider
cf26dee308 trace/ftrace: Support the 'function' tracer
This tracer is similar to the 'function_graph' tracer in that it helps us
trace function calls. It is however more lightweight, and only traces
functions entries (along with the caller of the function). It can also
happen that the kernel has support for the 'function' tracer but not for
'function_graph' (the opposite cannot be true, however).
2019-12-04 11:17:13 +00:00
Valentin Schneider
e7bd2a5b22 trace/ftrace: Memoize traceable functions
This is similar to what is already done for events and tracers. Also, use
this opportunity to use read_value() instead of target.execute('cat {}').
2019-12-04 11:17:13 +00:00
Valentin Schneider
72be3d01f8 trace/ftrace: Only require CONFIG_FUNCTION_PROFILER for the function profiling
We currently raise an exception when trying to use the 'function' or
'function_graph' tracer if the kernel wasn't compiled with
CONFIG_FUNCTION_PROFILER, but that is a completely valid use.
2019-12-04 11:17:13 +00:00
Marc Bonnici
745dc9499a modules/flash: Add a connect parameter to the flash method
Adds a `connect` parameter to the flash method to specify whether
devlib should attempt to connect to the target after flashing has
completed.
2019-11-28 17:11:24 +00:00
Sergei Trofimov
6c9f80ff76 target: get model from platform
Move the resolution of the model name from targets into Platform's
_set_model_from_target() (which was already attempting to do that via
dmidecode method).
2019-11-28 11:07:58 +00:00
Javi Merino
182f4e7b3f daq: Fix teardown() removing temporary files
The teardown() method was introduced in bb1552151a ("instruments:
Add teardown method to clean up tempfiles") but it uses an undeclared
variable tempdir. Make tempdir an object variable so that it can be
used in teardown().
2019-11-26 16:29:04 +00:00
Javi Merino
4df2b9a4c4 daq: move to daqpower 2.0
daqpower 2.0 has a new interface and it is more stable.
2019-11-26 16:29:04 +00:00
Peter Puhov
aa64951398 Add NUMA nodes 2019-11-22 16:48:28 +00:00
Michalis Spyrou
0fa91d6c4c Add options to ssh connection
The user can pass a dictionary containing the key and value
pairs with the extra ssh configuration options. Multiple
options will be passed as '-o key1=value1 -o key2=value2'.

Signed-off-by: Michalis Spyrou <michalis.spyrou@arm.com>
2019-11-21 14:19:34 +00:00
Douglas RAILLARD
0e6280ae31 ftrace: Ensure /proc/kallsyms contains symbol addresses
The content of /proc/kallsyms depends on the value of
/proc/sys/kernel/kptr_restrict:
* If 0, restriction is lifted and kallsyms contains real addresses
* If 1, kallsyms will contain null pointers

Since trace-cmd records the content of kallsyms into the trace.dat and uses that
to pretty-print function names (function tracer/grapher), ensure that its
content is available.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-11-18 09:02:29 +00:00
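The condition the commit guards against can be checked mechanically: with `kptr_restrict` set to 1, every address column in `/proc/kallsyms` reads back as zeros. The helper below is a hypothetical illustration of that check, run here on sample strings rather than the live procfs file.

```python
def kallsyms_has_addresses(kallsyms_text):
    """Return True if at least one symbol has a non-null address."""
    for line in kallsyms_text.splitlines():
        fields = line.split()
        # First field is the symbol address in hex; all-zero means restricted.
        if fields and int(fields[0], 16) != 0:
            return True
    return False

# Sample lines mimicking kptr_restrict == 1 and kptr_restrict == 0:
restricted = '0000000000000000 T _text\n0000000000000000 T _stext\n'
unrestricted = 'ffffffc000080000 T _text\nffffffc000080040 T _stext\n'
print(kallsyms_has_addresses(restricted), kallsyms_has_addresses(unrestricted))
```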
Douglas RAILLARD
2650a534f3 exception: Fix DevlibError unpickling
Unpickling of BaseException is done by feeding self.args to the exception type.
This self.args attribute is initialized in two places: in
BaseException.__new__ (before __init__ is called) and in BaseException.__init__
as well.

The following code ends up with self.args == ('hello',), instead of (1, 2):

    class MyExcep(BaseException):
        def __init__(self, foo, bar):
            print('before super().__init__()', self.args)
            super().__init__('hello')
            print('after super().__init__()', self.args)

    MyExcep(1, 2)
    # Prints:
    # before super().__init__() (1, 2)
    # after super().__init__() ('hello',)

When unpickling such an instance, ('hello',) will be fed to MyExcep.__init__(),
which will fail with a TypeError since it requires 2 positional arguments.

In order to fix that, super().__init__() needs to be handwritten instead of
getting the one from BaseException:

    class MyBase(BaseException):
        def __init__(self, msg):
            self.msg = msg

    class MyExcep(MyBase):
        def __init__(self, foo, bar):
            print('before super().__init__()', self.args)
            super().__init__('hello')
            print('after super().__init__()', self.args)

    MyExcep(1, 2)
    # Prints:
    # before super().__init__() (1, 2)
    # after super().__init__() (1, 2)

This will correctly initialize self.args == (1, 2), allowing unpickling to work.
2019-11-13 16:43:07 +00:00
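The failure and the fix can both be demonstrated with an actual pickle round-trip. The class names below are the ones used in the commit message, not devlib's real exception classes.

```python
import pickle

class Broken(BaseException):
    def __init__(self, foo, bar):
        super().__init__('hello')   # self.args becomes ('hello',)

class MyBase(BaseException):
    def __init__(self, msg):
        self.msg = msg              # self.args stays as set by __new__

class Fixed(MyBase):
    def __init__(self, foo, bar):
        super().__init__('hello')   # self.args remains (foo, bar)

# Unpickling re-calls __init__ with self.args, so Broken fails:
try:
    pickle.loads(pickle.dumps(Broken(1, 2)))
except TypeError as e:
    print('broken:', e)

# With the handwritten base __init__, args survive and unpickling works:
restored = pickle.loads(pickle.dumps(Fixed(1, 2)))
print('fixed:', restored.args)      # fixed: (1, 2)
```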
Javi Merino
c212ef2146 module/cgroups: Really move all tasks in Controller.move_all_tasks_to()
The docstring of Controller.move_all_tasks_to() says that the function
moves all the tasks to the "dest" cgroup.  However, it iterates over
self._cgroups, which is a dictionary that is lazily populated when you
call Controller.cgroup().  For example, this doesn't work:

cpuset_cg = target.cgroups.controller("cpuset")
cpuset_cg.move_all_tasks_to("top-app")

Because you haven't populated self._cgroups yet.  You need to manually
populate the dictionary with something like:

for group in cpuset_cg.list_all():
    cpuset_cg.cgroup(group)

before you can use move_all_tasks_to().  Iterate through
self.list_all() instead of self._cgroups to really move all tasks
to the destination cgroup.

Controller.move_tasks() has a try-except block to get the cgroups of
the source and destination groups.  Controller.cgroup() caches the
groups in self._cgroups and populates it if it hasn't been already.
Simplify move_tasks() and let it deal with source and dest cgroups
that exist but the controller hasn't loaded yet.
2019-11-05 10:28:43 +00:00
Javi Merino
5b5da7c392 module/cgroups: log to the class' logger
All classes in the module have a logger.  Avoid using the root logger
and use the class' logger.
2019-11-05 10:28:43 +00:00
Marc Bonnici
3801fe1d67 trace-cmd: Respect strict when setting saved_cmdlines_size
Not all devices have the `saved_cmdlines_size` node exposed, and therefore
attempting to set it can fail. Raise an error for this only when
`strict` is set to `True`; otherwise log a warning instead.
2019-11-04 17:26:29 +00:00
Douglas RAILLARD
43673e3fc5 ftrace: Report unavailable events all at once
Emit one warning message or one exception referring to the whole list of
unavailable events, rather than spreading it through multiple calls. In strict
mode, this allows the user to fix the whole list of bogus events at once rather
than incrementally.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-11-04 17:20:03 +00:00
Douglas RAILLARD
bbe3bb6adb ftrace: Expose FtraceCollector.available_events
Expose the list of events the kernel supports.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-11-04 17:20:03 +00:00
Douglas RAILLARD
656da00d2a ftrace: Add tracer name validation
Check that the asked tracer is supported by the kernel.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-11-04 17:20:03 +00:00
Douglas RAILLARD
6b0b12d833 ftrace: Enable alternative tracers
"function_graph" tracer allows getting funcgraph_entry/funcgraph_exit events for
listed functions. This allows getting precise information on when a given
function was called, and how long its execution took (to build a time-based
heatmap for example).

This can be enabled using:
     FtraceCollector(target, functions=['foo', 'bar'], tracer='function_graph')

If needed, children functions can also be traced with
trace_children_functions=True .

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-11-04 17:20:03 +00:00
Douglas RAILLARD
56cdc2e6c3 ftrace: Allow setting the number of cmdlines saved by ftrace
While tracing, ftrace records a mapping of PIDs to cmdlines. By default, it will
only record up to 128 such entries, which is not enough for a typical android
system. The consequence is trace-cmd reporting "<...>" as cmdline.

Allow setting that number to a higher value, and default to a comfortable 4096
entries.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-11-01 14:06:02 +00:00
Douglas RAILLARD
def235064b ftrace: Allow choosing clock source
trace-cmd start -C <clock> allows selecting the ftrace clock. Expose that in
FtraceCollector API.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-11-01 14:06:02 +00:00
Marc Bonnici
4d1299d678 Target: Allow for any TargetError when checking for root
On some unrooted devices the checking of root status can cause
other error types, therefore update `except` statement to accommodate
these.
2019-10-24 13:47:57 +01:00
Marc Bonnici
d4f3316120 doc/target: Update documentation for install_module 2019-10-22 17:58:34 +01:00
Marc Bonnici
76ef9e0364 target: Improve error reporting of module installation
If an Exception occurs when installing a module log it explicitly to
make it clearer to the user what went wrong.
2019-10-22 17:58:34 +01:00
Marc Bonnici
249b8336b5 target: Add method to install device modules after initial setup
Allow for installing additional device modules once a target has already
been initialized.
2019-10-22 17:58:34 +01:00
Marc Bonnici
c5d06ee3d6 doc/target: Correct terminology 2019-10-22 17:58:34 +01:00
Douglas RAILLARD
207291e940 module/thermal: List directories with as_root=target.is_rooted
Listing thermal zone directories in sysfs fails on some systems when
running as non-root.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
2019-10-16 14:26:57 +01:00
Marc Bonnici
6b72b50c40 docs/instrumentation: Document teardown behaviour for instrument API 2019-10-03 11:36:11 +01:00
Marc Bonnici
c73266c3a9 docs/instrumentation: Fix typos 2019-10-03 11:36:11 +01:00
Marc Bonnici
0d6c6883dd instruments: Add keep_raw parameter to control teardown deletion
Add a `keep_raw` parameter that prevents raw files from being deleted
during teardown in case they are still required.
2019-10-03 11:36:11 +01:00
Marc Bonnici
bb1552151a instruments: Add teardown method to clean up tempfiles
Implement the `teardown` method in instruments that utilise tempfiles
which were previously left behind.
2019-10-03 11:36:11 +01:00
Robert Freeman
5e69f06d77 Add simpleperf type to perf TraceCollector
* Added simpleperf type to trace collector
* Added record command to allow for perf/simpleperf
  recording and reporting
2019-09-18 12:55:54 +01:00
Marc Bonnici
9e6cfde832 AndroidTarget: Fix additional parameter to adb_root
Remove the target `adb_name` from the call as this method is an instance
method.
2019-09-16 14:17:12 +01:00
Marc Bonnici
4fe0b2cb64 rendering/SurfaceFlingerFrameCollector: Update parser to ignore text
On newer devices dumpsys output is more explanatory and does not only
contain numerical data. Update the parser to ignore non-numerical
data, for example the arguments that were passed and section headers.
2019-09-12 16:01:58 +01:00
Marc Bonnici
b9654c694c target/install: Add timeout parameters to additional install methods
Not all install methods supported a timeout parameter, which could cause
issues when installing large binaries to the target via some paths.
2019-09-12 14:19:16 +01:00
Marc Bonnici
ed135febde LocalConnection: Implement connected_as_root parameter
As of commit 5601fdb108 the
`connected_as_root` status is tracked in the connection. Add missing
implementation to `LocalConnection`.
2019-09-12 09:15:41 +01:00
Marc Bonnici
5d4315c5d2 AndroidTarget: Add guards / workarounds for adb specific functionality
A `Target` should be independent of the connection type used; however we
do have some adb-specific functionality as part of the `Target` for
speed/compatibility reasons. For the cases where we can perform the
operation in a connection-agnostic manner, add alternative implementations,
and for those where we cannot, raise an error to inform the user of the issue.
2019-09-11 10:46:00 +01:00
Marc Bonnici
9982f810e1 AndroidTarget: Utilise the adb root functionality of the connection
Adb specific functionality does not belong in the target however for now
rely on the connection to perform (un)rooting of the connection.
2019-09-11 10:46:00 +01:00
Marc Bonnici
5601fdb108 target.py: Track connected_as_root in the connection
Move the tracking of `connected_as_root` from the target to the
connection to allow it to perform its own caching.
2019-09-11 10:46:00 +01:00
Marc Bonnici
4e36bad2ab target.py: Un-memoize the is_rooted property
Un-memoize the `is_rooted` property of the connection and perform our
own caching instead as the state can be changed depending on the
connection status.
2019-09-11 10:46:00 +01:00
Marc Bonnici
72e4443b7d AdbConnection: Enable adb_as_root as a connection parameter
To allow for connecting to an `AndroidTarget` as root before the target
has been initialised, allow for passing `adb_as_root` as a connection
parameter to the `AdbConnection`. This will restart `adbd` as root
before attempting to connect to the target and will restart as unrooted
once all connections to that target have been closed.
2019-09-11 10:46:00 +01:00
Marc Bonnici
9ddf763650 AdbConnection: Add adb rooting to the connection to allow tracking
Add a method to `AdbConnection` to control whether adb is
connected as root. This allows the connection to track whether it is
connected as root for a particular device across all instances of a
connection.
2019-09-11 10:46:00 +01:00
Marc Bonnici
18830b74da SshConnection: Implement tracking of connected_as_root status
Improve the detection of being `connected_as_root` by checking the
actual id of the user rather than comparing the username, and export
this as a property of the connection.
2019-09-11 10:46:00 +01:00
Marc Bonnici
66de30799b doc/connection: Update connection documentation 2019-09-11 10:46:00 +01:00
Sergei Trofimov
156915f26f cpuidle: fix exist() --> exists() typo. 2019-09-05 09:17:26 +01:00
Douglas RAILLARD
74edfcbe43 target: Fix quoting of PATH components
Make sure the components of PATH are properly quoted.
2019-09-04 16:09:06 +01:00
Douglas RAILLARD
aa62a52ee3 target: Make sure subprocesses of Target.execute() inherit PATH
Make sure that the subprocesses of the command that is spawned see the same
value of PATH env var, so that the tools installed by devlib are available from
scripts that could be started as well.
2019-09-04 16:09:06 +01:00
Douglas RAILLARD
9c86174ff5 target: Add Target.execute(force_locale='C') parameter
To avoid locale-specific variations in the output of commands, set LC_ALL=C by
default. This can be disabled by using None, or set to another locale.
2019-09-04 16:09:06 +01:00
Douglas RAILLARD
ea19235aed trace: dmesg: Add KernelLogEntry.from_dmesg_output() classmethod
Allow building a list of KernelLogEntry from a full dmesg output, in addition to
building just one entry using KernelLogEntry.from_str() .
2019-09-04 16:08:53 +01:00
Valentin Schneider
e1fb6cf911 module/cpufreq: Make use_governor() affect only online CPUs
Turns out you can't change cpufreq attributes on an offlined
CPU (no big surprise!), so use_governor() will fail if a whole
frequency domain has been hotplugged out.

Change the default behaviour to only target online CPUs.
2019-09-03 09:27:22 +01:00
Douglas RAILLARD
d9d187471f trace: dmesg: Ignore empty lines
dmesg output seems to sometimes include empty lines. Ignore them, so we
don't fail to match with the regexes.
2019-09-03 09:27:10 +01:00
Marc Bonnici
c944d34593 utils/android: Fix echoing of commands.
The fix in commit 964fde2 caused issues with certain command structures,
for example running in the background. To prevent this run the original
command as a subshell.
2019-08-14 07:46:50 +01:00
Marc Bonnici
964fde2fef utils/android: Echo the exit code of the actual command
When executing a command using `su`, the `echo` command was returning the
error code of the invocation of `su` rather than the command itself.
Usually `su` should mimic the return code of the command it is executing
however this is not always the case which can cause issues.
2019-08-09 16:18:25 +01:00
Douglas RAILLARD
988de69b61 target: Add Target.revertable_write_value()
Same as write_value(), but returns a context manager that will write
back the old value on exit.

Also add batch_revertable_write_value() that takes a list of kwargs
dict, and will call revertable_write_value() on each of them, returning
a single combined context manager.
2019-07-30 18:05:21 +01:00
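The revert-on-exit pattern described above can be sketched with `contextlib`. Here `store` is a stand-in dict rather than devlib's `Target`, and the helper name is illustrative, not devlib's actual API.

```python
from contextlib import contextmanager

@contextmanager
def revertable_write(store, key, value):
    old = store[key]
    store[key] = value          # write the new value on entry
    try:
        yield
    finally:
        store[key] = old        # write the old value back on exit

# Stand-in for a sysfs file backing store:
sysfs = {'/sys/x/governor': 'ondemand'}
with revertable_write(sysfs, '/sys/x/governor', 'performance'):
    print(sysfs['/sys/x/governor'])   # performance
print(sysfs['/sys/x/governor'])       # ondemand
```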
Douglas RAILLARD
ded30eef00 misc: Add batch_contextmanager
Convenience wrapper around standard contextlib.ExitStack class.
2019-07-30 18:05:21 +01:00
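`contextlib.ExitStack`, the standard-library class this wrapper builds on, combines any number of context managers into a single one, with exits running in reverse order of entry:

```python
from contextlib import ExitStack, contextmanager

events = []

@contextmanager
def tracked(name):
    events.append('enter ' + name)
    try:
        yield
    finally:
        events.append('exit ' + name)

# Enter an arbitrary number of context managers as one combined manager.
with ExitStack() as stack:
    for name in ('a', 'b', 'c'):
        stack.enter_context(tracked(name))
# Exits run in reverse order: c, b, a.
print(events)
```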
Javi Merino
71bd8b10ed trace/systrace: make start() return when tracing has started
In SystraceCollector, start() returns after executing
subprocess.Popen() for systrace. That doesn't mean that systrace is
running, though: the function can return even before systrace has had a
chance to execute anything. Therefore, when you run the command
you want to trace, systrace will miss the first seconds of the
execution.

Run systrace with -u to unbuffer its stdin and wait for it to print
"Starting tracing (stop with enter)" before returning.

Fixes #403
2019-07-30 15:07:28 +01:00
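The wait-for-marker technique can be sketched with `subprocess`. The child process below is a stand-in that announces readiness the way systrace does; the helper name is illustrative.

```python
import subprocess
import sys

def start_and_wait(cmd, marker):
    # Spawn the process and block until it prints the marker line; this
    # mirrors waiting for systrace's "Starting tracing (stop with enter)".
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            universal_newlines=True)
    for line in proc.stdout:
        if marker in line:
            return proc
    raise RuntimeError('process exited before printing the marker')

# Stand-in child that prints the marker, then keeps running:
child_cmd = [sys.executable, '-u', '-c',
             "print('setup'); print('Starting tracing (stop with enter)'); "
             "import time; time.sleep(5)"]
proc = start_and_wait(child_cmd, 'Starting tracing')
# At this point the child is known to have started tracing.
proc.terminate()
```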
Marc Bonnici
986261bc7e utils/android: Move private method to end of class 2019-07-30 13:44:52 +01:00
Marc Bonnici
dc5f4c6b49 android/adb: Enable fall back for su command
Commit 89c40fb switched from using `echo CMD | su` to `su -c CMD`;
however, not all modern versions of `su` support this format.
Automatically try to detect whether the new style is supported when
connecting and, if not, fall back to the old implementation.
2019-07-30 13:44:52 +01:00
Marc Bonnici
88f8c9e9ac module/cpuidle: Add fallback for reading governor
As per #407, if the kernel is compiled with the ability to switch cpuidle
governors via sysfs, `current_governor_ro` is replaced with
`current_governor`, so check if the initial path exists before reading.
2019-07-30 09:52:34 +01:00
Marc Bonnici
0c434e8a1b setup.py: Remove Python2 as a supported version 2019-07-19 17:07:41 +01:00
Marc Bonnici
5848369846 Version Bump 2019-07-19 17:07:41 +01:00
Marc Bonnici
002ade33a8 Version Bump 2019-07-19 16:37:04 +01:00
Marc Bonnici
2e8d42db79 setup.py Update classifiers 2019-07-19 16:37:04 +01:00
Pierre-Clément Tosi
6b414cc291 utils.adb_shell: Move from 'echo CMD | su' to '-c'
Move from the current implementation (piping the command to su) which
has unexpected behaviours to the '-c' su flag (which then becomes
required).
2019-07-19 16:36:01 +01:00
Pierre-Clément Tosi
0d798f1c4f utils.adb_shell: Improve stability (Py3)
Move from pipes.quote (private) to shlex.quote (Py3.3+ standard).

Make tests of inputs against None (their default value) instead of based
on their truthiness.

Improve logging through quoted commands (runnable as-is, less confusing).

Make the command-building process straightforward for readability and
maintainability.
2019-07-19 16:36:01 +01:00
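`shlex.quote` is what makes the logged commands runnable as-is: strings that need no quoting pass through untouched, and anything else is safely single-quoted.

```python
import shlex

print(shlex.quote('ls'))                 # ls
print(shlex.quote('file with spaces'))   # 'file with spaces'
# Even embedded quotes come out shell-safe:
cmd = 'cat {}'.format(shlex.quote("it's got quotes"))
print(cmd)
```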
Marc Bonnici
1325e59b1a target/KernelConfig: Implement the __bool__ method
To aid in checking whether any information is contained in the
`KernelConfig`, ensure that the `__bool__` method indicates the
presence of parsed input.
2019-07-18 15:12:30 +01:00
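The idea is that truthiness reflects whether anything was parsed. The sketch below mimics the behaviour described in the commit; it is not devlib's actual `KernelConfig` implementation.

```python
class KernelConfigSketch:
    """Toy config holder: truthy only if some input was parsed."""

    def __init__(self, text=''):
        self._config = {}
        for line in text.splitlines():
            if '=' in line and not line.startswith('#'):
                key, _, value = line.partition('=')
                self._config[key.strip()] = value.strip()

    def __bool__(self):
        # Truthiness indicates the presence of parsed input.
        return bool(self._config)

print(bool(KernelConfigSketch()))                # False
print(bool(KernelConfigSketch('CONFIG_SMP=y')))  # True
```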
Marc Bonnici
f141899dae target/KernelConfig: Ensure get_config_name is static
`get_config_name` was previously treated as a bound method, so
ensure it is defined as static as expected.
2019-07-18 15:12:30 +01:00
Valentin Schneider
984556bc8e module/sched: Make SchedModule probing more accurate
Right now, this module won't be loaded if the sched_domain procfs
entries are not present on the target. However, other pieces of
information may be present in which case it would make sense to load
the module.

For instance, mainline kernels compiled without SCHED_DEBUG can still
expose the cpu_capacity sysfs entry. As such, try to get a better idea
of what's available and only disable the loading of the module if it
can provide absolutely nothing.
2019-07-09 15:36:13 +01:00
Valentin Schneider
03a469fc38 module/sched: Expose the remote CPU capacity sysfs path
A later change needs to access this outside of a SchedModule instance,
so make the information available as a classmethod.
2019-07-09 15:36:13 +01:00
Valentin Schneider
2d86474682 module/sched: Expose a classmethod variant of SchedModule.has_debug
A later change needs to access this outside of a SchedModule instance,
so make the information available as a classmethod.
2019-07-09 15:36:13 +01:00
Valentin Schneider
ada318f27b module/sched: Fix None check
As mentioned in the previous commit, CPU numbers would be passed to
SchedProcFSData's __init__() (instead of a proper sysfs path). When
done with CPU0, that path would be evaluated as False and the code
would carry on with the default path, which was quite confusing.

This has now been fixed (and 0 isn't such a great path to give
anyway); nevertheless this check should just cater for None.
2019-07-09 15:36:13 +01:00
Valentin Schneider
b8f7b24790 module/sched: Fix incorrect SchedProcFSData usage
Rather than using the conveniently provided `get_cpu_sd_info()` helper
method, `has_em()` and `get_em_capacity()` would build a
`SchedProcFSData` with `path=<CPU number>`, which is obviously broken.

Do the right thing and use `get_cpu_sd_info()` in those places.
2019-07-09 15:36:13 +01:00
Josh Choo
a9b9938b0f module/sched: Return the correct maximum capacity
The existing behaviour assumes that the cap_states file contains a list
of capacity|cost pairs, and attempts to return the maximum capacity by
selecting the value at the second last index of the list.

This assumption fails on some newer Qualcomm kernels where the
cap_states file contains a list of capacity|frequency|cost triplets.
Consequently, the maximum frequency would be erroneously returned
instead of the maximum capacity.

Fix the problem by dynamically calculating the index of the maximum
capacity by dividing the number of entries in cap_states by the value in
nr_cap_states.

---

For example, on a certain Snapdragon 845 device:

/proc/sys/kernel/sched_domain/cpu0/domain0/group0/energy/cap_states
        54 entries:

        CAP     FREQ     COST
        ---------------------
        65      300000   12
        87      403200   17
        104     480000   21
        125     576000   27
        141     652800   31
        162     748800   37
        179     825600   42
        195     902400   47
        212     979200   52
        228     1056000  57
        245     1132800  62
        266     1228800  70
        286     1324800  78
        307     1420800  89
        328     1516800  103
        348     1612800  122
        365     1689600  141
        381     1766400  160

/proc/sys/kernel/sched_domain/cpu0/domain0/group0/energy/nr_cap_states
        18

Max capacity = 381 (third-last index)
2019-07-09 09:04:34 +01:00
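The index arithmetic from the commit can be written out directly: dividing the number of values in cap_states by nr_cap_states gives the field count per state (2 for capacity|cost pairs, 3 for capacity|frequency|cost triplets), and the maximum capacity sits that many positions from the end. The sample values are a truncated slice of the Snapdragon 845 data above; the function name is illustrative.

```python
def max_capacity(cap_states_values, nr_cap_states):
    # Fields per state: 2 for cap|cost, 3 for cap|freq|cost.
    fields_per_state = len(cap_states_values) // nr_cap_states
    # The last state's capacity is fields_per_state positions from the end.
    return int(cap_states_values[-fields_per_state])

# Truncated sample of the 845's cap|freq|cost triplets (first, second, last):
values = [65, 300000, 12, 87, 403200, 17, 381, 1766400, 160]
print(max_capacity(values, 3))  # 381
```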
Marc Bonnici
f619f1dd07 setup.py: Set maximum package version for python2.7 support
In the latest versions of pandas and numpy, python2.7 support has been
dropped; therefore restrict the maximum version of these packages.
2019-07-08 13:46:19 +01:00
Marc Bonnici
ad350c9267 bin/perf: Update binaries
In the previous version there appears to be a bug causing perf to
segfault as per https://github.com/ARM-software/devlib/issues/395.
Therefore update provided binaries to v3.19 which does not appear to
have this issue.
2019-06-11 13:05:37 +01:00
Douglas RAILLARD
8343794d34 module/thermal: Gracefully handle unexpected sysfs names
Instead of raising an exception, log a warning and carry on.
2019-06-05 15:52:20 +01:00
Douglas RAILLARD
f2bc5dbc14 devlib: Re-export DmesgCollector in devlib package
Allow using 'import devlib.DmesgCollector', just like
devlib.FtraceCollector.
2019-06-03 14:16:28 +01:00
Patrick Bellasi
6f42f67e95 target: Ensure we use installed binaries
Apart from busybox, devlib itself makes use of other system provided binaries.
For example, the DmesgCollector module uses the system provided dmesg.
In cases where the system-provided binary does not support some of the
features required by devlib, we currently just fail with an error.

For the user it is still possible to deploy a custom/updated version of a
required binary via the Target::install API. However, that binary is not
automatically considered by devlib.

Let's ensure that all Target::execute commands use a PATH which gives priority
to devlib installed binaries.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2019-05-24 17:47:18 +01:00
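Giving priority to devlib-installed binaries boils down to prepending the install directory to PATH in the environment handed to spawned commands. A minimal sketch, with an illustrative directory path:

```python
import os

def env_with_tools_first(executables_dir):
    # Copy the current environment and put the devlib install directory
    # ahead of everything else, so deployed binaries shadow system ones.
    env = dict(os.environ)
    env['PATH'] = executables_dir + os.pathsep + env.get('PATH', '')
    return env

env = env_with_tools_first('/data/local/tmp/bin')
print(env['PATH'].split(os.pathsep)[0])  # /data/local/tmp/bin
```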
Marc Bonnici
ae7f01fd19 target: Use root if available when determining the number of CPUs
On some targets some entries in `/sys/devices/system/cpu` require root
to list, otherwise a permission error is returned.
2019-05-24 11:18:54 +01:00
Pierre-Clément Tosi
b5f36610ad trace/perf: Soften POSIX signal for termination
Replace the default SIGKILL signal sent to perf to "request" its
termination by a SIGINT, allowing it to handle the signal by cleaning up
before exit. This should address issues regarding corrupted perf.data
output files.
2019-05-15 14:30:18 +01:00
Douglas RAILLARD
4c8f2430e2 trace: dmesg: Allow using old util-linux binary
Old util-linux binaries don't support --force-prefix. Multi-line entry
parsing will break on these, but at least we can collect the log.

Also decode the raw priority, so only the facility is not decoded in
case busybox or old util-linux is used.
2019-03-26 09:38:58 +00:00
Douglas RAILLARD
a8b6e56874 trace: dmesg: Call dmesg -c as root
Clearing the kernel ring buffer needs root permission.
2019-03-25 14:57:33 +00:00
Douglas RAILLARD
c92756d65a trace: Fix dmesg collector when using util-linux dmesg
Set missing "facility" attribute on DmesgCollector instances.
2019-03-25 14:57:33 +00:00
Douglas RAILLARD
8512f116fc trace: Add DmesgCollector
Allows collecting dmesg output and parses it for easy filtering.
2019-03-19 13:52:04 +00:00
Valentin Schneider
be8b87d559 module/sched: Fix/simplify procfs packing behaviour
Back when I first wrote this I tried to make something smart that
would automatically detect which procfs entries to pack into a
mapping, the condition to do so being "the entry ends with a
digit and there is another entry with the same name but a different
digit".

I wrongly assumed this would always work for the sched_domain entries,
but it's possible to have a domain with a single group and thus a
single "group0" entry.

Since we know which entries we want to pack, let's hard-code these and
be less smart about it.
2019-03-19 13:48:29 +00:00
Valentin Schneider
d76c2d63fe module/sched: Make get_capacities() work with hotplugged CPUs 2019-03-19 13:48:29 +00:00
Valentin Schneider
8bfa050226 module/sched: SchedProcFSData: Don't assume SD name is always present
The existence of that field is gated by SCHED_DEBUG, so look for an
always-present field instead.
2019-03-19 13:48:29 +00:00
Chris Redpath
8871fe3c25 devlib/sched: Change order of CPU capacity algorithms
There are two ways we can load CPU capacity. Up until 4.14, supported
kernels did not have the /sys/devices/system/cpu/cpuX/cpu_capacity file
and the only way to read cpu capacity was by grepping the EM from
procfs sched_domain output. After 4.14, that route still exists but is
complicated due to a change in the format once support for
frequency-power models was merged.

In order to avoid rewriting the procfs EM grepping code, lets switch the
order of algorithms we try to use when loading CPU capacity. All newer
kernels provide the dedicated sysfs node and all kernels which do not
have this node use the old format for the EM in sched_domain output.

Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2019-03-18 14:29:38 +00:00
Sergei Trofimov
aa50b2d42d host: expect shell syntax inside LocalConnection.execute
When using sudo with LocalConnection, execute the input command via 'sh
-c' to ensure any shell syntax within the command is handled properly.
2019-03-07 09:34:23 +00:00
Marc Bonnici
ebcb1664e7 utils/version: Development version bump 2019-02-27 10:55:20 +00:00
46 changed files with 1535 additions and 497 deletions

View File

@@ -45,9 +45,11 @@ from devlib.derived import DerivedMeasurements, DerivedMetric
 from devlib.derived.energy import DerivedEnergyMeasurements
 from devlib.derived.fps import DerivedGfxInfoStats, DerivedSurfaceFlingerStats
-from devlib.trace.ftrace import FtraceCollector
+from devlib.collector.ftrace import FtraceCollector
-from devlib.trace.perf import PerfCollector
+from devlib.collector.perf import PerfCollector
-from devlib.trace.serial_trace import SerialTraceCollector
+from devlib.collector.serial_trace import SerialTraceCollector
+from devlib.collector.dmesg import DmesgCollector
+from devlib.collector.logcat import LogcatCollector
 from devlib.host import LocalConnection
 from devlib.utils.android import AdbConnection

BIN  devlib/bin/arm/simpleperf      (new executable file, binary not shown)
BIN  devlib/bin/arm64/simpleperf    (new executable file, binary not shown)
BIN  devlib/bin/x86/simpleperf      (new executable file, binary not shown)
BIN  devlib/bin/x86_64/simpleperf   (new executable file, binary not shown)


@@ -15,12 +15,14 @@
 import logging
 
+from devlib.utils.types import caseless_string
 
-class TraceCollector(object):
+class CollectorBase(object):
 
     def __init__(self, target):
         self.target = target
         self.logger = logging.getLogger(self.__class__.__name__)
+        self.output_path = None
 
     def reset(self):
         pass
@@ -31,6 +33,12 @@ class TraceCollector(object):
     def stop(self):
         pass
 
+    def set_output(self, output_path):
+        self.output_path = output_path
+
+    def get_data(self):
+        return CollectorOutput()
+
     def __enter__(self):
         self.reset()
         self.start()
@@ -39,5 +47,29 @@ class TraceCollector(object):
     def __exit__(self, exc_type, exc_value, traceback):
         self.stop()
 
-    def get_trace(self, outfile):
-        pass
+
+class CollectorOutputEntry(object):
+    path_kinds = ['file', 'directory']
+
+    def __init__(self, path, path_kind):
+        self.path = path
+
+        path_kind = caseless_string(path_kind)
+        if path_kind not in self.path_kinds:
+            msg = '{} is not a valid path_kind [{}]'
+            raise ValueError(msg.format(path_kind, ' '.join(self.path_kinds)))
+        self.path_kind = path_kind
+
+    def __str__(self):
+        return self.path
+
+    def __repr__(self):
+        return '<{} ({})>'.format(self.path, self.path_kind)
+
+    def __fspath__(self):
+        """Allow using with os.path operations"""
+        return self.path
+
+
+class CollectorOutput(list):
+    pass
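The new `set_output()`/`get_data()` protocol introduced here can be exercised with a short standalone sketch. The classes below re-state the interface for illustration only (they are not imported from devlib, and `EchoCollector` is a hypothetical collector, not one shipped in this release):

```python
import os
import tempfile

class CollectorOutputEntry(object):
    path_kinds = ['file', 'directory']

    def __init__(self, path, path_kind):
        if path_kind not in self.path_kinds:
            raise ValueError('{} is not a valid path_kind'.format(path_kind))
        self.path = path
        self.path_kind = path_kind

    def __fspath__(self):  # lets os.path functions accept the entry directly
        return self.path

class CollectorOutput(list):
    pass

class EchoCollector(object):
    """Trivial collector following the set_output()/get_data() protocol."""
    def __init__(self):
        self.output_path = None

    def set_output(self, output_path):
        self.output_path = output_path

    def get_data(self):
        if self.output_path is None:
            raise RuntimeError("Output path was not set.")
        with open(self.output_path, 'w') as f:
            f.write('collected\n')
        return CollectorOutput([CollectorOutputEntry(self.output_path, 'file')])

collector = EchoCollector()
collector.set_output(os.path.join(tempfile.mkdtemp(), 'out.txt'))
entries = collector.get_data()
```

Note how `__fspath__` means each returned entry can be handed straight to `os.path` helpers, while `path_kind` tells the caller whether it points at a file or a directory.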

devlib/collector/dmesg.py (new file, 208 lines)

@@ -0,0 +1,208 @@
# Copyright 2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import division
import re
from itertools import takewhile
from datetime import timedelta
from devlib.collector import (CollectorBase, CollectorOutput,
CollectorOutputEntry)
class KernelLogEntry(object):
"""
Entry of the kernel ring buffer.
:param facility: facility the entry comes from
:type facility: str
:param level: log level
:type level: str
:param timestamp: Timestamp of the entry
:type timestamp: datetime.timedelta
:param msg: Content of the entry
:type msg: str
"""
_TIMESTAMP_MSG_REGEX = re.compile(r'\[(.*?)\] (.*)')
_RAW_LEVEL_REGEX = re.compile(r'<([0-9]+)>(.*)')
_PRETTY_LEVEL_REGEX = re.compile(r'\s*([a-z]+)\s*:([a-z]+)\s*:\s*(.*)')
def __init__(self, facility, level, timestamp, msg):
self.facility = facility
self.level = level
self.timestamp = timestamp
self.msg = msg
@classmethod
def from_str(cls, line):
"""
Parses a "dmesg --decode" output line, formatted as following:
kern :err : [3618282.310743] nouveau 0000:01:00.0: systemd-logind[988]: nv50cal_space: -16
Or the more basic output given by "dmesg -r":
<3>[3618282.310743] nouveau 0000:01:00.0: systemd-logind[988]: nv50cal_space: -16
"""
def parse_raw_level(line):
match = cls._RAW_LEVEL_REGEX.match(line)
if not match:
raise ValueError('dmesg entry format not recognized: {}'.format(line))
level, remainder = match.groups()
levels = DmesgCollector.LOG_LEVELS
# BusyBox dmesg can output numbers that need to wrap around
level = levels[int(level) % len(levels)]
return level, remainder
def parse_pretty_level(line):
match = cls._PRETTY_LEVEL_REGEX.match(line)
facility, level, remainder = match.groups()
return facility, level, remainder
def parse_timestamp_msg(line):
match = cls._TIMESTAMP_MSG_REGEX.match(line)
timestamp, msg = match.groups()
timestamp = timedelta(seconds=float(timestamp.strip()))
return timestamp, msg
line = line.strip()
# If we can parse the raw prio directly, that is a basic line
try:
level, remainder = parse_raw_level(line)
facility = None
except ValueError:
facility, level, remainder = parse_pretty_level(line)
timestamp, msg = parse_timestamp_msg(remainder)
return cls(
facility=facility,
level=level,
timestamp=timestamp,
msg=msg.strip(),
)
@classmethod
def from_dmesg_output(cls, dmesg_out):
"""
Return a generator of :class:`KernelLogEntry` for each line of the
output of dmesg command.
.. note:: The same restrictions on the dmesg output format as for
:meth:`from_str` apply.
"""
for line in dmesg_out.splitlines():
if line.strip():
yield cls.from_str(line)
def __str__(self):
facility = self.facility + ': ' if self.facility else ''
return '{facility}{level}: [{timestamp}] {msg}'.format(
facility=facility,
level=self.level,
timestamp=self.timestamp.total_seconds(),
msg=self.msg,
)
class DmesgCollector(CollectorBase):
"""
Dmesg output collector.
:param level: Minimum log level to enable. All levels that are more
critical will be collected as well.
:type level: str
:param facility: Facility to record, see dmesg --help for the list.
:type level: str
.. warning:: If BusyBox dmesg is used, facility and level will be ignored,
and the parsed entries will also lack that information.
"""
# taken from "dmesg --help"
# This list needs to be ordered by priority
LOG_LEVELS = [
"emerg", # system is unusable
"alert", # action must be taken immediately
"crit", # critical conditions
"err", # error conditions
"warn", # warning conditions
"notice", # normal but significant condition
"info", # informational
"debug", # debug-level messages
]
def __init__(self, target, level=LOG_LEVELS[-1], facility='kern'):
super(DmesgCollector, self).__init__(target)
self.output_path = None
if level not in self.LOG_LEVELS:
raise ValueError('level needs to be one of: {}'.format(
', '.join(self.LOG_LEVELS)
))
self.level = level
# Check if dmesg is the BusyBox one, or the one from util-linux in a
# recent version.
# Note: BusyBox dmesg does not support -h, but will still print the
# help with an exit code of 1
self.basic_dmesg = '--force-prefix' not in \
self.target.execute('dmesg -h', check_exit_code=False)
self.facility = facility
self.reset()
@property
def entries(self):
return KernelLogEntry.from_dmesg_output(self.dmesg_out)
def reset(self):
self.dmesg_out = None
def start(self):
self.reset()
# Empty the dmesg ring buffer
self.target.execute('dmesg -c', as_root=True)
def stop(self):
levels_list = list(takewhile(
lambda level: level != self.level,
self.LOG_LEVELS
))
levels_list.append(self.level)
if self.basic_dmesg:
cmd = 'dmesg -r'
else:
cmd = 'dmesg --facility={facility} --force-prefix --decode --level={levels}'.format(
levels=','.join(levels_list),
facility=self.facility,
)
self.dmesg_out = self.target.execute(cmd)
def set_output(self, output_path):
self.output_path = output_path
def get_data(self):
if self.output_path is None:
raise RuntimeError("Output path was not set.")
with open(self.output_path, 'wt') as f:
f.write(self.dmesg_out + '\n')
return CollectorOutput([CollectorOutputEntry(self.output_path, 'file')])
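The two dmesg line formats handled by `KernelLogEntry` above can be demonstrated with a standalone sketch. This is simplified to one regex per format (the real parser splits prefix and timestamp parsing into separate steps), but the regexes and the BusyBox level wrap-around mirror the code in the diff:

```python
import re
from datetime import timedelta

RAW_RE = re.compile(r'<([0-9]+)>\[(.*?)\] (.*)')                       # "dmesg -r"
PRETTY_RE = re.compile(r'\s*([a-z]+)\s*:([a-z]+)\s*:\s*\[(.*?)\] (.*)')  # "dmesg --decode"
LOG_LEVELS = ["emerg", "alert", "crit", "err", "warn", "notice", "info", "debug"]

def parse_dmesg_line(line):
    line = line.strip()
    match = RAW_RE.match(line)
    if match:
        level, timestamp, msg = match.groups()
        # BusyBox dmesg can emit out-of-range numbers, hence the modulo.
        level = LOG_LEVELS[int(level) % len(LOG_LEVELS)]
        facility = None
    else:
        facility, level, timestamp, msg = PRETTY_RE.match(line).groups()
    return facility, level, timedelta(seconds=float(timestamp)), msg.strip()
```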


@@ -20,11 +20,14 @@ import time
 import re
 import subprocess
 import sys
+import contextlib
+from pipes import quote
 
-from devlib.trace import TraceCollector
+from devlib.collector import (CollectorBase, CollectorOutput,
+                              CollectorOutputEntry)
 from devlib.host import PACKAGE_BIN_DIRECTORY
 from devlib.exception import TargetStableError, HostError
-from devlib.utils.misc import check_output, which
+from devlib.utils.misc import check_output, which, memoized
 
 TRACE_MARKER_START = 'TRACE_MARKER_START'
@@ -48,12 +51,14 @@ TIMEOUT = 180
 CPU_RE = re.compile(r' Function \(CPU([0-9]+)\)')
 STATS_RE = re.compile(r'([^ ]*) +([0-9]+) +([0-9.]+) us +([0-9.]+) us +([0-9.]+) us')
 
-class FtraceCollector(TraceCollector):
+class FtraceCollector(CollectorBase):
 
     # pylint: disable=too-many-locals,too-many-branches,too-many-statements
     def __init__(self, target,
                  events=None,
                  functions=None,
+                 tracer=None,
+                 trace_children_functions=False,
                  buffer_size=None,
                  buffer_size_step=1000,
                  tracing_path='/sys/kernel/debug/tracing',
@@ -63,26 +68,34 @@ class FtraceCollector(TraceCollector):
                  no_install=False,
                  strict=False,
                  report_on_target=False,
+                 trace_clock='local',
+                 saved_cmdlines_nr=4096,
                  ):
         super(FtraceCollector, self).__init__(target)
         self.events = events if events is not None else DEFAULT_EVENTS
         self.functions = functions
+        self.tracer = tracer
+        self.trace_children_functions = trace_children_functions
         self.buffer_size = buffer_size
         self.buffer_size_step = buffer_size_step
         self.tracing_path = tracing_path
         self.automark = automark
         self.autoreport = autoreport
         self.autoview = autoview
+        self.strict = strict
         self.report_on_target = report_on_target
         self.target_output_file = target.path.join(self.target.working_directory, OUTPUT_TRACE_FILE)
         text_file_name = target.path.splitext(OUTPUT_TRACE_FILE)[0] + '.txt'
         self.target_text_file = target.path.join(self.target.working_directory, text_file_name)
+        self.output_path = None
         self.target_binary = None
         self.host_binary = None
         self.start_time = None
         self.stop_time = None
         self.event_string = None
         self.function_string = None
+        self.trace_clock = trace_clock
+        self.saved_cmdlines_nr = saved_cmdlines_nr
         self._reset_needed = True
 
         # pylint: disable=bad-whitespace
@@ -94,6 +107,9 @@ class FtraceCollector(TraceCollector):
         self.function_profile_file = self.target.path.join(self.tracing_path, 'function_profile_enabled')
         self.marker_file = self.target.path.join(self.tracing_path, 'trace_marker')
         self.ftrace_filter_file = self.target.path.join(self.tracing_path, 'set_ftrace_filter')
+        self.trace_clock_file = self.target.path.join(self.tracing_path, 'trace_clock')
+        self.save_cmdlines_size_file = self.target.path.join(self.tracing_path, 'saved_cmdlines_size')
+        self.available_tracers_file = self.target.path.join(self.tracing_path, 'available_tracers')
 
         self.host_binary = which('trace-cmd')
         self.kernelshark = which('kernelshark')
@@ -113,51 +129,98 @@ class FtraceCollector(TraceCollector):
             self.target_binary = 'trace-cmd'
 
         # Validate required events to be traced
-        available_events = self.target.execute(
-                'cat {}'.format(self.available_events_file),
-                as_root=True).splitlines()
-        selected_events = []
-        for event in self.events:
-            # Convert globs supported by FTrace into valid regexp globs
-            _event = event
-            if event[0] != '*':
-                _event = '*' + event
-            event_re = re.compile(_event.replace('*', '.*'))
-            # Select events matching the required ones
-            if not list(filter(event_re.match, available_events)):
-                message = 'Event [{}] not available for tracing'.format(event)
-                if strict:
-                    raise TargetStableError(message)
-                self.target.logger.warning(message)
-            else:
-                selected_events.append(event)
-        # If function profiling is enabled we always need at least one event.
-        # Thus, if not other events have been specified, try to add at least
-        # a tracepoint which is always available and possibly triggered few
-        # times.
-        if self.functions and not selected_events:
-            selected_events = ['sched_wakeup_new']
-        self.event_string = _build_trace_events(selected_events)
+        def event_to_regex(event):
+            if not event.startswith('*'):
+                event = '*' + event
+
+            return re.compile(event.replace('*', '.*'))
+
+        def event_is_in_list(event, events):
+            return any(
+                event_to_regex(event).match(_event)
+                for _event in events
+            )
+
+        unavailable_events = [
+            event
+            for event in self.events
+            if not event_is_in_list(event, self.available_events)
+        ]
+        if unavailable_events:
+            message = 'Events not available for tracing: {}'.format(
+                ', '.join(unavailable_events)
+            )
+            if self.strict:
+                raise TargetStableError(message)
+            else:
+                self.target.logger.warning(message)
+
+        selected_events = sorted(set(self.events) - set(unavailable_events))
+
+        if self.tracer and self.tracer not in self.available_tracers:
+            raise TargetStableError('Unsupported tracer "{}". Available tracers: {}'.format(
+                self.tracer, ', '.join(self.available_tracers)))
 
         # Check for function tracing support
         if self.functions:
-            if not self.target.file_exists(self.function_profile_file):
-                raise TargetStableError('Function profiling not supported. '\
-                        'A kernel build with CONFIG_FUNCTION_PROFILER enable is required')
             # Validate required functions to be traced
-            available_functions = self.target.execute(
-                    'cat {}'.format(self.available_functions_file),
-                    as_root=True).splitlines()
             selected_functions = []
             for function in self.functions:
-                if function not in available_functions:
-                    message = 'Function [{}] not available for profiling'.format(function)
-                    if strict:
+                if function not in self.available_functions:
+                    message = 'Function [{}] not available for tracing/profiling'.format(function)
+                    if self.strict:
                         raise TargetStableError(message)
                     self.target.logger.warning(message)
                 else:
                     selected_functions.append(function)
-            self.function_string = _build_trace_functions(selected_functions)
+
+            # Function profiling
+            if self.tracer is None:
+                if not self.target.file_exists(self.function_profile_file):
+                    raise TargetStableError('Function profiling not supported. '\
+                            'A kernel build with CONFIG_FUNCTION_PROFILER enable is required')
+                self.function_string = _build_trace_functions(selected_functions)
+                # If function profiling is enabled we always need at least one event.
+                # Thus, if not other events have been specified, try to add at least
+                # a tracepoint which is always available and possibly triggered few
+                # times.
+                if not selected_events:
+                    selected_events = ['sched_wakeup_new']
+            # Function tracing
+            elif self.tracer == 'function':
+                self.function_string = _build_graph_functions(selected_functions, False)
+            # Function graphing
+            elif self.tracer == 'function_graph':
+                self.function_string = _build_graph_functions(selected_functions, trace_children_functions)
+
+        self.event_string = _build_trace_events(selected_events)
+
+    @property
+    @memoized
+    def available_tracers(self):
+        """
+        List of ftrace tracers supported by the target's kernel.
+        """
+        return self.target.read_value(self.available_tracers_file).split(' ')
+
+    @property
+    @memoized
+    def available_events(self):
+        """
+        List of ftrace events supported by the target's kernel.
+        """
+        return self.target.read_value(self.available_events_file).splitlines()
+
+    @property
+    @memoized
+    def available_functions(self):
+        """
+        List of functions whose tracing/profiling is supported by the target's kernel.
+        """
+        return self.target.read_value(self.available_functions_file).splitlines()
 
     def reset(self):
         if self.buffer_size:
@@ -170,8 +233,40 @@ class FtraceCollector(TraceCollector):
         self.start_time = time.time()
         if self._reset_needed:
             self.reset()
-        self.target.execute('{} start {}'.format(self.target_binary, self.event_string),
-                            as_root=True)
+
+        if self.tracer is not None and 'function' in self.tracer:
+            tracecmd_functions = self.function_string
+        else:
+            tracecmd_functions = ''
+
+        tracer_string = '-p {}'.format(self.tracer) if self.tracer else ''
+
+        # Ensure kallsyms contains addresses if possible, so that function the
+        # collected trace contains enough data for pretty printing
+        with contextlib.suppress(TargetStableError):
+            self.target.write_value('/proc/sys/kernel/kptr_restrict', 0)
+
+        self.target.write_value(self.trace_clock_file, self.trace_clock, verify=False)
+        try:
+            self.target.write_value(self.save_cmdlines_size_file, self.saved_cmdlines_nr)
+        except TargetStableError as e:
+            message = 'Could not set "save_cmdlines_size"'
+            if self.strict:
+                self.logger.error(message)
+                raise e
+            else:
+                self.logger.warning(message)
+                self.logger.debug(e)
+
+        self.target.execute(
+            '{} start {events} {tracer} {functions}'.format(
+                self.target_binary,
+                events=self.event_string,
+                tracer=tracer_string,
+                functions=tracecmd_functions,
+            ),
+            as_root=True,
+        )
+
         if self.automark:
             self.mark_start()
         if 'cpufreq' in self.target.modules:
@@ -181,7 +276,7 @@ class FtraceCollector(TraceCollector):
             self.logger.debug('Trace CPUIdle states')
             self.target.cpuidle.perturb_cpus()
         # Enable kernel function profiling
-        if self.functions:
+        if self.functions and self.tracer is None:
             self.target.execute('echo nop > {}'.format(self.current_tracer_file),
                                 as_root=True)
             self.target.execute('echo 0 > {}'.format(self.function_profile_file),
@@ -194,7 +289,7 @@ class FtraceCollector(TraceCollector):
 
     def stop(self):
         # Disable kernel function profiling
-        if self.functions:
+        if self.functions and self.tracer is None:
             self.target.execute('echo 1 > {}'.format(self.function_profile_file),
                                 as_root=True)
         if 'cpufreq' in self.target.modules:
@@ -207,9 +302,14 @@ class FtraceCollector(TraceCollector):
                             timeout=TIMEOUT, as_root=True)
         self._reset_needed = True
 
-    def get_trace(self, outfile):
-        if os.path.isdir(outfile):
-            outfile = os.path.join(outfile, os.path.basename(self.target_output_file))
+    def set_output(self, output_path):
+        if os.path.isdir(output_path):
+            output_path = os.path.join(output_path, os.path.basename(self.target_output_file))
+        self.output_path = output_path
+
+    def get_data(self):
+        if self.output_path is None:
+            raise RuntimeError("Output path was not set.")
         self.target.execute('{0} extract -o {1}; chmod 666 {1}'.format(self.target_binary,
                                                                        self.target_output_file),
                             timeout=TIMEOUT, as_root=True)
@@ -218,23 +318,27 @@ class FtraceCollector(TraceCollector):
         # Therefore timout for the pull command must also be adjusted
         # accordingly.
         pull_timeout = 10 * (self.stop_time - self.start_time)
-        self.target.pull(self.target_output_file, outfile, timeout=pull_timeout)
-        if not os.path.isfile(outfile):
+        self.target.pull(self.target_output_file, self.output_path, timeout=pull_timeout)
+        output = CollectorOutput()
+        if not os.path.isfile(self.output_path):
             self.logger.warning('Binary trace not pulled from device.')
         else:
+            output.append(CollectorOutputEntry(self.output_path, 'file'))
             if self.autoreport:
-                textfile = os.path.splitext(outfile)[0] + '.txt'
+                textfile = os.path.splitext(self.output_path)[0] + '.txt'
                 if self.report_on_target:
                     self.generate_report_on_target()
                     self.target.pull(self.target_text_file,
                                      textfile, timeout=pull_timeout)
                 else:
-                    self.report(outfile, textfile)
+                    self.report(self.output_path, textfile)
+                output.append(CollectorOutputEntry(textfile, 'file'))
             if self.autoview:
-                self.view(outfile)
+                self.view(self.output_path)
+        return output
 
     def get_stats(self, outfile):
-        if not self.functions:
+        if not (self.functions and self.tracer is None):
             return
 
         if os.path.isdir(outfile):
@@ -351,3 +455,10 @@ def _build_trace_events(events):
 
 def _build_trace_functions(functions):
     function_string = " ".join(functions)
    return function_string
+
+def _build_graph_functions(functions, trace_children_functions):
+    opt = 'g' if trace_children_functions else 'l'
+    return ' '.join(
+        '-{} {}'.format(opt, quote(f))
+        for f in functions
+    )


@@ -16,14 +16,16 @@
 import os
 import shutil
 
-from devlib.trace import TraceCollector
+from devlib.collector import (CollectorBase, CollectorOutput,
+                              CollectorOutputEntry)
 from devlib.utils.android import LogcatMonitor
 
-class LogcatCollector(TraceCollector):
+class LogcatCollector(CollectorBase):
 
     def __init__(self, target, regexps=None):
         super(LogcatCollector, self).__init__(target)
         self.regexps = regexps
+        self.output_path = None
         self._collecting = False
         self._prev_log = None
         self._monitor = None
@@ -45,12 +47,14 @@ class LogcatCollector(TraceCollector):
         """
         Start collecting logcat lines
         """
+        if self.output_path is None:
+            raise RuntimeError("Output path was not set.")
         self._monitor = LogcatMonitor(self.target, self.regexps)
         if self._prev_log:
             # Append new data collection to previous collection
             self._monitor.start(self._prev_log)
         else:
-            self._monitor.start()
+            self._monitor.start(self.output_path)
 
         self._collecting = True
@@ -65,9 +69,10 @@ class LogcatCollector(TraceCollector):
         self._collecting = False
         self._prev_log = self._monitor.logfile
 
-    def get_trace(self, outfile):
-        """
-        Output collected logcat lines to designated file
-        """
-        # copy self._monitor.logfile to outfile
-        shutil.copy(self._monitor.logfile, outfile)
+    def set_output(self, output_path):
+        self.output_path = output_path
+
+    def get_data(self):
+        if self.output_path is None:
+            raise RuntimeError("No data collected.")
+        return CollectorOutput([CollectorOutputEntry(self.output_path, 'file')])

devlib/collector/perf.py (new file, 253 lines)

@@ -0,0 +1,253 @@
# Copyright 2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import re
import time
from past.builtins import basestring, zip
from devlib.host import PACKAGE_BIN_DIRECTORY
from devlib.collector import (CollectorBase, CollectorOutput,
CollectorOutputEntry)
from devlib.utils.misc import ensure_file_directory_exists as _f
PERF_COMMAND_TEMPLATE = '{binary} {command} {options} {events} sleep 1000 > {outfile} 2>&1 '
PERF_REPORT_COMMAND_TEMPLATE= '{binary} report {options} -i {datafile} > {outfile} 2>&1 '
PERF_RECORD_COMMAND_TEMPLATE= '{binary} record {options} {events} -o {outfile}'
PERF_DEFAULT_EVENTS = [
'cpu-migrations',
'context-switches',
]
SIMPLEPERF_DEFAULT_EVENTS = [
'raw-cpu-cycles',
'raw-l1-dcache',
'raw-l1-dcache-refill',
'raw-br-mis-pred',
'raw-instruction-retired',
]
DEFAULT_EVENTS = {'perf':PERF_DEFAULT_EVENTS, 'simpleperf':SIMPLEPERF_DEFAULT_EVENTS}
class PerfCollector(CollectorBase):
"""
Perf is a Linux profiling with performance counters.
Simpleperf is an Android profiling tool with performance counters.
It is highly recomended to use perf_type = simpleperf when using this instrument
on android devices, since it recognises android symbols in record mode and is much more stable
when reporting record .data files. For more information see simpleperf documentation at:
https://android.googlesource.com/platform/system/extras/+/master/simpleperf/doc/README.md
Performance counters are CPU hardware registers that count hardware events
such as instructions executed, cache-misses suffered, or branches
mispredicted. They form a basis for profiling applications to trace dynamic
control flow and identify hotspots.
pref accepts options and events. If no option is given the default '-a' is
used. For events, the default events are migrations and cs for perf and raw-cpu-cycles,
raw-l1-dcache, raw-l1-dcache-refill, raw-instructions-retired. They both can
be specified in the config file.
Events must be provided as a list that contains them and they will look like
this ::
perf_events = ['migrations', 'cs']
Events can be obtained by typing the following in the command line on the
device ::
perf list
simpleperf list
Whereas options, they can be provided as a single string as following ::
perf_options = '-a -i'
Options can be obtained by running the following in the command line ::
man perf-stat
"""
def __init__(self,
target,
perf_type='perf',
command='stat',
events=None,
optionstring=None,
report_options=None,
labels=None,
force_install=False):
super(PerfCollector, self).__init__(target)
self.force_install = force_install
self.labels = labels
self.report_options = report_options
self.output_path = None
# Validate parameters
if isinstance(optionstring, list):
self.optionstrings = optionstring
else:
self.optionstrings = [optionstring]
if perf_type in ['perf', 'simpleperf']:
self.perf_type = perf_type
else:
raise ValueError('Invalid perf type: {}, must be perf or simpleperf'.format(perf_type))
if not events:
self.events = DEFAULT_EVENTS[self.perf_type]
else:
self.events = events
if isinstance(self.events, basestring):
self.events = [self.events]
if not self.labels:
self.labels = ['perf_{}'.format(i) for i in range(len(self.optionstrings))]
if len(self.labels) != len(self.optionstrings):
raise ValueError('The number of labels must match the number of optstrings provided for perf.')
if command in ['stat', 'record']:
self.command = command
else:
raise ValueError('Unsupported perf command, must be stat or record')
self.binary = self.target.get_installed(self.perf_type)
if self.force_install or not self.binary:
self.binary = self._deploy_perf()
self._validate_events(self.events)
self.commands = self._build_commands()
def reset(self):
self.target.killall(self.perf_type, as_root=self.target.is_rooted)
self.target.remove(self.target.get_workpath('TemporaryFile*'))
for label in self.labels:
filepath = self._get_target_file(label, 'data')
self.target.remove(filepath)
filepath = self._get_target_file(label, 'rpt')
self.target.remove(filepath)
def start(self):
for command in self.commands:
self.target.kick_off(command)
def stop(self):
self.target.killall(self.perf_type, signal='SIGINT',
as_root=self.target.is_rooted)
# perf doesn't transmit the signal to its sleep call so handled here:
self.target.killall('sleep', as_root=self.target.is_rooted)
# NB: we hope that no other "important" sleep is on-going
def set_output(self, output_path):
self.output_path = output_path
def get_data(self):
if self.output_path is None:
raise RuntimeError("Output path was not set.")
output = CollectorOutput()
for label in self.labels:
if self.command == 'record':
self._wait_for_data_file_write(label, self.output_path)
path = self._pull_target_file_to_host(label, 'rpt', self.output_path)
output.append(CollectorOutputEntry(path, 'file'))
else:
path = self._pull_target_file_to_host(label, 'out', self.output_path)
output.append(CollectorOutputEntry(path, 'file'))
return output
def _deploy_perf(self):
host_executable = os.path.join(PACKAGE_BIN_DIRECTORY,
                                       self.target.abi, self.perf_type)
        return self.target.install(host_executable)

    def _get_target_file(self, label, extension):
        return self.target.get_workpath('{}.{}'.format(label, extension))

    def _build_commands(self):
        commands = []
        for opts, label in zip(self.optionstrings, self.labels):
            if self.command == 'stat':
                commands.append(self._build_perf_stat_command(opts, self.events, label))
            else:
                commands.append(self._build_perf_record_command(opts, label))
        return commands

    def _build_perf_stat_command(self, options, events, label):
        event_string = ' '.join(['-e {}'.format(e) for e in events])
        command = PERF_COMMAND_TEMPLATE.format(binary=self.binary,
                                               command=self.command,
                                               options=options or '',
                                               events=event_string,
                                               outfile=self._get_target_file(label, 'out'))
        return command

    def _build_perf_report_command(self, report_options, label):
        command = PERF_REPORT_COMMAND_TEMPLATE.format(binary=self.binary,
                                                      options=report_options or '',
                                                      datafile=self._get_target_file(label, 'data'),
                                                      outfile=self._get_target_file(label, 'rpt'))
        return command

    def _build_perf_record_command(self, options, label):
        event_string = ' '.join(['-e {}'.format(e) for e in self.events])
        command = PERF_RECORD_COMMAND_TEMPLATE.format(binary=self.binary,
                                                      options=options or '',
                                                      events=event_string,
                                                      outfile=self._get_target_file(label, 'data'))
        return command

    def _pull_target_file_to_host(self, label, extension, output_path):
        target_file = self._get_target_file(label, extension)
        host_relpath = os.path.basename(target_file)
        host_file = _f(os.path.join(output_path, host_relpath))
        self.target.pull(target_file, host_file)
        return host_file

    def _wait_for_data_file_write(self, label, output_path):
        data_file_finished_writing = False
        max_tries = 80
        current_tries = 0
        while not data_file_finished_writing:
            files = self.target.execute('cd {} && ls'.format(self.target.get_workpath('')))
            # Perf stores data in temporary files whilst writing to the data
            # output file. Check whether they have been removed.
            if 'TemporaryFile' in files and current_tries <= max_tries:
                time.sleep(0.25)
                current_tries += 1
            else:
                if current_tries >= max_tries:
                    self.logger.warning('''writing {}.data file took longer than expected,
                                        file may not have written correctly'''.format(label))
                data_file_finished_writing = True
        report_command = self._build_perf_report_command(self.report_options, label)
        self.target.execute(report_command)

    def _validate_events(self, events):
        available_events_string = self.target.execute('{} list'.format(self.perf_type))
        available_events = available_events_string.splitlines()
        for available_event in available_events:
            if available_event == '':
                continue
            if 'OR' in available_event:
                available_events.append(available_event.split('OR')[1])
            available_events[available_events.index(available_event)] = available_event.split()[0].strip()
        # Raw hex event codes, which do not appear in the perf/simpleperf list,
        # can also be passed in, prefixed with 'r'.
        raw_event_code_regex = re.compile(r"^r(0x|0X)?[A-Fa-f0-9]+$")
        for event in events:
            if event in available_events or re.match(raw_event_code_regex, event):
                continue
            else:
                raise ValueError('Event: {} is not in available event list for {}'.format(event, self.perf_type))
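The raw-event check in `_validate_events` hinges on a single regex; a minimal standalone sketch of just that check (the event names below are hypothetical, the pattern is the one used above):

```python
import re

# Same pattern as _validate_events: an 'r' prefix followed by an
# (optionally 0x-prefixed) hexadecimal event code.
RAW_EVENT_RE = re.compile(r"^r(0x|0X)?[A-Fa-f0-9]+$")

def is_raw_event(event):
    """Return True if 'event' looks like a raw hex event code, e.g. 'r1A' or 'r0x11'."""
    return re.match(RAW_EVENT_RE, event) is not None

print(is_raw_event('r0x11'))   # True
print(is_raw_event('r1A'))     # True
print(is_raw_event('cycles'))  # False: named events are checked against 'perf list' instead
```

Named events that fail this test must appear in the `{perf} list` output, otherwise a `ValueError` is raised.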


@@ -19,13 +19,14 @@ import sys
 import threading
 import time

-from devlib.trace import TraceCollector
+from devlib.collector import (CollectorBase, CollectorOutput,
+                              CollectorOutputEntry)
 from devlib.exception import WorkerThreadError


 class ScreenCapturePoller(threading.Thread):

-    def __init__(self, target, period, output_path=None, timeout=30):
+    def __init__(self, target, period, timeout=30):
         super(ScreenCapturePoller, self).__init__()
         self.target = target
         self.logger = logging.getLogger('screencapture')
@@ -36,11 +37,16 @@ class ScreenCapturePoller(threading.Thread):
         self.last_poll = 0
         self.daemon = True
         self.exc = None
+        self.output_path = None
+
+    def set_output(self, output_path):
         self.output_path = output_path

     def run(self):
         self.logger.debug('Starting screen capture polling')
         try:
+            if self.output_path is None:
+                raise RuntimeError("Output path was not set.")
             while True:
                 if self.stop_signal.is_set():
                     break
@@ -66,24 +72,33 @@ class ScreenCapturePoller(threading.Thread):
         self.target.capture_screen(os.path.join(self.output_path, "screencap_{ts}.png"))


-class ScreenCaptureCollector(TraceCollector):
+class ScreenCaptureCollector(CollectorBase):

-    def __init__(self, target, output_path=None, period=None):
+    def __init__(self, target, period=None):
         super(ScreenCaptureCollector, self).__init__(target)
         self._collecting = False
-        self.output_path = output_path
+        self.output_path = None
         self.period = period
         self.target = target
-        self._poller = ScreenCapturePoller(self.target, self.period,
-                                           self.output_path)
+
+    def set_output(self, output_path):
+        self.output_path = output_path

     def reset(self):
-        pass
+        self._poller = ScreenCapturePoller(self.target, self.period)
+
+    def get_data(self):
+        if self.output_path is None:
+            raise RuntimeError("No data collected.")
+        return CollectorOutput([CollectorOutputEntry(self.output_path, 'directory')])

     def start(self):
         """
         Start collecting the screenshots
         """
+        if self.output_path is None:
+            raise RuntimeError("Output path was not set.")
+        self._poller.set_output(self.output_path)
         self._poller.start()
         self._collecting = True
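The hunks above converge on a single collector lifecycle: `set_output()`, then `start()`/`stop()`, then `get_data()` returning a `CollectorOutput` of `CollectorOutputEntry` objects. A minimal sketch of that lifecycle with a toy collector (the `ToyCollector` class and the stand-in output classes are hypothetical, not part of devlib):

```python
# Toy stand-ins, just to illustrate the call order the new interface expects.
class CollectorOutputEntry:
    def __init__(self, path, path_kind):
        self.path = path
        self.path_kind = path_kind  # 'file' or 'directory'

class CollectorOutput(list):
    pass

class ToyCollector:
    def __init__(self, target=None):
        self.output_path = None
        self._collecting = False

    def set_output(self, output_path):
        self.output_path = output_path

    def start(self):
        # Mirrors the guard added above: collection cannot begin without a destination.
        if self.output_path is None:
            raise RuntimeError("Output path was not set.")
        self._collecting = True

    def stop(self):
        self._collecting = False

    def get_data(self):
        if self.output_path is None:
            raise RuntimeError("No data collected.")
        return CollectorOutput([CollectorOutputEntry(self.output_path, 'file')])

collector = ToyCollector()
collector.set_output('/tmp/trace.out')
collector.start()
collector.stop()
entry = collector.get_data()[0]
print(entry.path, entry.path_kind)  # /tmp/trace.out file
```

Calling `start()` before `set_output()` raises, which is exactly the failure mode the new `RuntimeError` guards encode.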


@@ -17,11 +17,12 @@ import shutil
 from tempfile import NamedTemporaryFile

 from pexpect.exceptions import TIMEOUT

-from devlib.trace import TraceCollector
+from devlib.collector import (CollectorBase, CollectorOutput,
+                              CollectorOutputEntry)
 from devlib.utils.serial_port import get_connection


-class SerialTraceCollector(TraceCollector):
+class SerialTraceCollector(CollectorBase):

     @property
     def collecting(self):
@@ -32,33 +33,35 @@ class SerialTraceCollector(TraceCollector):
         self.serial_port = serial_port
         self.baudrate = baudrate
         self.timeout = timeout
+        self.output_path = None

         self._serial_target = None
         self._conn = None
-        self._tmpfile = None
+        self._outfile_fh = None
         self._collecting = False

     def reset(self):
         if self._collecting:
             raise RuntimeError("reset was called whilst collecting")

-        if self._tmpfile:
-            self._tmpfile.close()
-            self._tmpfile = None
+        if self._outfile_fh:
+            self._outfile_fh.close()
+            self._outfile_fh = None

     def start(self):
         if self._collecting:
             raise RuntimeError("start was called whilst collecting")
+        if self.output_path is None:
+            raise RuntimeError("Output path was not set.")
+        self._outfile_fh = open(self.output_path, 'w')

-        self._tmpfile = NamedTemporaryFile()
         start_marker = "-------- Starting serial logging --------\n"
-        self._tmpfile.write(start_marker.encode('utf-8'))
+        self._outfile_fh.write(start_marker.encode('utf-8'))

         self._serial_target, self._conn = get_connection(port=self.serial_port,
                                                          baudrate=self.baudrate,
                                                          timeout=self.timeout,
-                                                         logfile=self._tmpfile,
+                                                         logfile=self._outfile_fh,
                                                          init_dtr=0)
         self._collecting = True
@@ -78,17 +81,19 @@ class SerialTraceCollector(TraceCollector):
         del self._conn

         stop_marker = "-------- Stopping serial logging --------\n"
-        self._tmpfile.write(stop_marker.encode('utf-8'))
+        self._outfile_fh.write(stop_marker.encode('utf-8'))
+        self._outfile_fh.flush()
+        self._outfile_fh.close()
+        self._outfile_fh = None

         self._collecting = False

-    def get_trace(self, outfile):
+    def set_output(self, output_path):
+        self.output_path = output_path
+
+    def get_data(self):
         if self._collecting:
-            raise RuntimeError("get_trace was called whilst collecting")
-
-        self._tmpfile.flush()
-
-        shutil.copy(self._tmpfile.name, outfile)
-        self._tmpfile.close()
-        self._tmpfile = None
+            raise RuntimeError("get_data was called whilst collecting")
+        if self.output_path is None:
+            raise RuntimeError("No data collected.")
+        return CollectorOutput([CollectorOutputEntry(self.output_path, 'file')])


@@ -19,8 +19,9 @@ import subprocess
 from shutil import copyfile
 from tempfile import NamedTemporaryFile

+from devlib.collector import (CollectorBase, CollectorOutput,
+                              CollectorOutputEntry)
 from devlib.exception import TargetStableError, HostError
-from devlib.trace import TraceCollector
 import devlib.utils.android
 from devlib.utils.misc import memoized
@@ -33,7 +34,7 @@ DEFAULT_CATEGORIES = [
     'idle'
 ]

-class SystraceCollector(TraceCollector):
+class SystraceCollector(CollectorBase):
     """
     A trace collector based on Systrace
@@ -74,9 +75,10 @@ class SystraceCollector(TraceCollector):
         self.categories = categories or DEFAULT_CATEGORIES
         self.buffer_size = buffer_size
+        self.output_path = None

         self._systrace_process = None
-        self._tmpfile = None
+        self._outfile_fh = None

         # Try to find a systrace binary
         self.systrace_binary = None
@@ -104,12 +106,12 @@ class SystraceCollector(TraceCollector):
         self.reset()

     def _build_cmd(self):
-        self._tmpfile = NamedTemporaryFile()
+        self._outfile_fh = open(self.output_path, 'w')

         # pylint: disable=attribute-defined-outside-init
-        self.systrace_cmd = '{} -o {} -e {}'.format(
+        self.systrace_cmd = 'python2 -u {} -o {} -e {}'.format(
             self.systrace_binary,
-            self._tmpfile.name,
+            self._outfile_fh.name,
             self.target.adb_name
         )
@@ -122,13 +124,11 @@ class SystraceCollector(TraceCollector):
         if self._systrace_process:
             self.stop()

-        if self._tmpfile:
-            self._tmpfile.close()
-            self._tmpfile = None
-
     def start(self):
         if self._systrace_process:
             raise RuntimeError("Tracing is already underway, call stop() first")
+        if self.output_path is None:
+            raise RuntimeError("Output path was not set.")

         self.reset()
@@ -137,9 +137,11 @@ class SystraceCollector(TraceCollector):
         self._systrace_process = subprocess.Popen(
             self.systrace_cmd,
             stdin=subprocess.PIPE,
+            stdout=subprocess.PIPE,
             shell=True,
             universal_newlines=True
         )
+        self._systrace_process.stdout.read(1)

     def stop(self):
         if not self._systrace_process:
@@ -149,11 +151,16 @@ class SystraceCollector(TraceCollector):
         self._systrace_process.communicate('\n')
         self._systrace_process = None

-    def get_trace(self, outfile):
+        if self._outfile_fh:
+            self._outfile_fh.close()
+            self._outfile_fh = None
+
+    def set_output(self, output_path):
+        self.output_path = output_path
+
+    def get_data(self):
         if self._systrace_process:
             raise RuntimeError("Tracing is underway, call stop() first")
-
-        if not self._tmpfile:
-            raise RuntimeError("No tracing data available")
-        copyfile(self._tmpfile.name, outfile)
+        if self.output_path is None:
+            raise RuntimeError("No data collected.")
+        return CollectorOutput([CollectorOutputEntry(self.output_path, 'file')])


@@ -15,10 +15,16 @@

 class DevlibError(Exception):
     """Base class for all Devlib exceptions."""

+    def __init__(self, *args):
+        message = args[0] if args else None
+        self._message = message
+
     @property
     def message(self):
-        if self.args:
-            return self.args[0]
+        if self._message is not None:
+            return self._message
         else:
             return str(self)
@@ -127,7 +133,7 @@ def get_traceback(exc=None):
     if not exc:
         return None
     tb = exc[2]
-    sio = io.BytesIO()
+    sio = io.StringIO()
     traceback.print_tb(tb, file=sio)
     del tb  # needs to be done explicitly see: http://docs.python.org/2/library/sys.html#sys.exc_info
     return sio.getvalue()
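The `BytesIO` → `StringIO` switch above matters because under Python 3 `traceback.print_tb` writes `str`, not `bytes`; a standalone sketch of the same pattern:

```python
import io
import sys
import traceback

def get_traceback(exc=None):
    # Same shape as the fixed helper: format the in-flight exception's
    # traceback into a text buffer (print_tb writes str, hence StringIO).
    if exc is None:
        exc = sys.exc_info()
    if not exc:
        return None
    tb = exc[2]
    sio = io.StringIO()
    traceback.print_tb(tb, file=sio)
    del tb  # break the reference cycle explicitly
    return sio.getvalue()

try:
    raise ValueError('boom')
except ValueError:
    text = get_traceback()
print('File' in text)  # True: each frame is rendered as 'File "...", line N, ...'
```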


@@ -38,9 +38,21 @@ class LocalConnection(object):

     name = 'local'

+    @property
+    def connected_as_root(self):
+        if self._connected_as_root is None:
+            result = self.execute('id', as_root=False)
+            self._connected_as_root = 'uid=0(' in result
+        return self._connected_as_root
+
+    @connected_as_root.setter
+    def connected_as_root(self, state):
+        self._connected_as_root = state
+
     # pylint: disable=unused-argument
     def __init__(self, platform=None, keep_password=True, unrooted=False,
                  password=None, timeout=None):
+        self._connected_as_root = None
         self.logger = logging.getLogger('local_connection')
         self.keep_password = keep_password
         self.unrooted = unrooted
@@ -67,11 +79,11 @@ class LocalConnection(object):
     def execute(self, command, timeout=None, check_exit_code=True,
                 as_root=False, strip_colors=True, will_succeed=False):
         self.logger.debug(command)
-        if as_root:
+        if as_root and not self.connected_as_root:
             if self.unrooted:
                 raise TargetStableError('unrooted')
             password = self._get_password()
-            command = 'echo {} | sudo -S '.format(quote(password)) + command
+            command = 'echo {} | sudo -S -- sh -c '.format(quote(password)) + quote(command)
         ignore = None if check_exit_code else 'all'
         try:
             return check_output(command, shell=True, timeout=timeout, ignore=ignore)[0]
@@ -84,7 +96,7 @@ class LocalConnection(object):
             raise TargetStableError(message)

     def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False):
-        if as_root:
+        if as_root and not self.connected_as_root:
             if self.unrooted:
                 raise TargetStableError('unrooted')
             password = self._get_password()
@@ -97,6 +109,12 @@ class LocalConnection(object):
     def cancel_running_command(self):
         pass

+    def wait_for_device(self, timeout=30):
+        return
+
+    def reboot_bootloader(self, timeout=30):
+        raise NotImplementedError()
+
     def _get_password(self):
         if self.password:
             return self.password
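The changed `sudo` invocation above wraps the whole command in `sh -c`, so pipelines and redirections run as root rather than just the first word. A sketch of just the string composition (no privileged execution; `build_sudo_command` is a hypothetical helper, and Python's `shlex.quote` stands in for devlib's `quote`):

```python
from shlex import quote

def build_sudo_command(command, password):
    # New form: quote the whole command and hand it to 'sh -c', so compound
    # shell commands are executed as root in their entirety.
    return 'echo {} | sudo -S -- sh -c '.format(quote(password)) + quote(command)

cmd = build_sudo_command("cat /sys/kernel/debug/sched_features > /tmp/out", "s3cret")
print(cmd)
```

With the old form (`... sudo -S ' + command`), the redirection in the example would have been performed by the unprivileged outer shell.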


@@ -58,12 +58,14 @@ class AcmeCapeInstrument(Instrument):
                  iio_capture=which('iio-capture'),
                  host='baylibre-acme.local',
                  iio_device='iio:device0',
-                 buffer_size=256):
+                 buffer_size=256,
+                 keep_raw=False):
         super(AcmeCapeInstrument, self).__init__(target)
         self.iio_capture = iio_capture
         self.host = host
         self.iio_device = iio_device
         self.buffer_size = buffer_size
+        self.keep_raw = keep_raw
         self.sample_rate_hz = 100
         if self.iio_capture is None:
             raise HostError('Missing iio-capture binary')
@@ -159,3 +161,8 @@ class AcmeCapeInstrument(Instrument):

     def get_raw(self):
         return [self.raw_data_file]
+
+    def teardown(self):
+        if not self.keep_raw:
+            if os.path.isfile(self.raw_data_file):
+                os.remove(self.raw_data_file)
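The same `keep_raw` teardown pattern is added to several instruments in this series; a minimal sketch of the idea with a hypothetical instrument and a real temporary file:

```python
import os
import tempfile

class RawFileInstrument:
    # Hypothetical instrument: on teardown, the raw capture file is removed
    # unless the caller asked to keep it via keep_raw=True.
    def __init__(self, raw_data_file, keep_raw=False):
        self.raw_data_file = raw_data_file
        self.keep_raw = keep_raw

    def teardown(self):
        if not self.keep_raw:
            if os.path.isfile(self.raw_data_file):
                os.remove(self.raw_data_file)

fd, path = tempfile.mkstemp()
os.close(fd)
instrument = RawFileInstrument(path, keep_raw=False)
instrument.teardown()
print(os.path.isfile(path))  # False: raw file removed unless keep_raw=True
```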


@@ -71,7 +71,7 @@ class ArmEnergyProbeInstrument(Instrument):

     MAX_CHANNELS = 12 # 4 Arm Energy Probes

-    def __init__(self, target, config_file='./config-aep', ):
+    def __init__(self, target, config_file='./config-aep', keep_raw=False):
         super(ArmEnergyProbeInstrument, self).__init__(target)
         self.arm_probe = which('arm-probe')
         if self.arm_probe is None:
@@ -80,6 +80,7 @@ class ArmEnergyProbeInstrument(Instrument):
         self.attributes = ['power', 'voltage', 'current']
         self.sample_rate_hz = 10000
         self.config_file = config_file
+        self.keep_raw = keep_raw

         self.parser = AepParser()
         #TODO make it generic
@@ -142,3 +143,8 @@ class ArmEnergyProbeInstrument(Instrument):

     def get_raw(self):
         return [self.output_file_raw]
+
+    def teardown(self):
+        if not self.keep_raw:
+            if os.path.isfile(self.output_file_raw):
+                os.remove(self.output_file_raw)


@@ -14,6 +14,7 @@
 #

 import os
+import shutil
 import tempfile
 from itertools import chain
@@ -23,11 +24,11 @@ from devlib.utils.csvutil import csvwriter, create_reader
 from devlib.utils.misc import unique

 try:
-    from daqpower.client import execute_command, Status
-    from daqpower.config import DeviceConfiguration, ServerConfiguration
+    from daqpower.client import DaqClient
+    from daqpower.config import DeviceConfiguration
 except ImportError as e:
-    execute_command, Status = None, None
-    DeviceConfiguration, ServerConfiguration, ConfigurationError = None, None, None
+    DaqClient = None
+    DeviceConfiguration = None
     import_error_mesg = e.args[0] if e.args else str(e)
@@ -44,26 +45,28 @@ class DaqInstrument(Instrument):
                  dv_range=0.2,
                  sample_rate_hz=10000,
                  channel_map=(0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23),
+                 keep_raw=False
                  ):
         # pylint: disable=no-member
         super(DaqInstrument, self).__init__(target)
+        self.keep_raw = keep_raw
         self._need_reset = True
         self._raw_files = []
-        if execute_command is None:
+        self.tempdir = None
+        if DaqClient is None:
             raise HostError('Could not import "daqpower": {}'.format(import_error_mesg))
         if labels is None:
             labels = ['PORT_{}'.format(i) for i in range(len(resistor_values))]
         if len(labels) != len(resistor_values):
             raise ValueError('"labels" and "resistor_values" must be of the same length')

-        self.server_config = ServerConfiguration(host=host,
-                                                 port=port)
-        result = self.execute('list_devices')
-        if result.status == Status.OK:
-            if device_id not in result.data:
-                msg = 'Device "{}" is not found on the DAQ server. Available devices are: "{}"'
-                raise ValueError(msg.format(device_id, ', '.join(result.data)))
-        elif result.status != Status.OKISH:
-            raise HostError('Problem querying DAQ server: {}'.format(result.message))
+        self.daq_client = DaqClient(host, port)
+        try:
+            devices = self.daq_client.list_devices()
+            if device_id not in devices:
+                msg = 'Device "{}" is not found on the DAQ server. Available devices are: "{}"'
+                raise ValueError(msg.format(device_id, ', '.join(devices)))
+        except Exception as e:
+            raise HostError('Problem querying DAQ server: {}'.format(e))

         self.device_config = DeviceConfiguration(device_id=device_id,
                                                  v_range=v_range,
@@ -80,29 +83,27 @@ class DaqInstrument(Instrument):

     def reset(self, sites=None, kinds=None, channels=None):
         super(DaqInstrument, self).reset(sites, kinds, channels)
-        self.execute('close')
-        result = self.execute('configure', config=self.device_config)
-        if not result.status == Status.OK:  # pylint: disable=no-member
-            raise HostError(result.message)
+        self.daq_client.close()
+        self.daq_client.configure(self.device_config)
         self._need_reset = False
         self._raw_files = []

     def start(self):
         if self._need_reset:
             self.reset()
-        self.execute('start')
+        self.daq_client.start()

     def stop(self):
-        self.execute('stop')
+        self.daq_client.stop()
         self._need_reset = True

     def get_data(self, outfile):  # pylint: disable=R0914
-        tempdir = tempfile.mkdtemp(prefix='daq-raw-')
-        self.execute('get_data', output_directory=tempdir)
+        self.tempdir = tempfile.mkdtemp(prefix='daq-raw-')
+        self.daq_client.get_data(self.tempdir)
         raw_file_map = {}
-        for entry in os.listdir(tempdir):
+        for entry in os.listdir(self.tempdir):
             site = os.path.splitext(entry)[0]
-            path = os.path.join(tempdir, entry)
+            path = os.path.join(self.tempdir, entry)
             raw_file_map[site] = path
             self._raw_files.append(path)
@@ -118,7 +119,7 @@ class DaqInstrument(Instrument):
                 file_handles.append(fh)
             except KeyError:
                 message = 'Could not get DAQ trace for {}; Obtained traces are in {}'
-                raise HostError(message.format(site, tempdir))
+                raise HostError(message.format(site, self.tempdir))

         # The first row is the headers
         channel_order = []
@@ -153,7 +154,7 @@ class DaqInstrument(Instrument):
         return self._raw_files

     def teardown(self):
-        self.execute('close')
-
-    def execute(self, command, **kwargs):
-        return execute_command(self.server_config, command, **kwargs)
+        self.daq_client.close()
+        if not self.keep_raw:
+            if os.path.isdir(self.tempdir):
+                shutil.rmtree(self.tempdir)


@@ -34,9 +34,11 @@ class EnergyProbeInstrument(Instrument):
     def __init__(self, target, resistor_values,
                  labels=None,
                  device_entry='/dev/ttyACM0',
+                 keep_raw=False
                  ):
         super(EnergyProbeInstrument, self).__init__(target)
         self.resistor_values = resistor_values
+        self.keep_raw = keep_raw
         if labels is not None:
             self.labels = labels
         else:
@@ -126,3 +128,8 @@ class EnergyProbeInstrument(Instrument):

     def get_raw(self):
         return [self.raw_data_file]
+
+    def teardown(self):
+        if not self.keep_raw:
+            if os.path.isfile(self.raw_data_file):
+                os.remove(self.raw_data_file)


@@ -14,6 +14,8 @@
 #

 from __future__ import division
+import os
+
 from devlib.instrument import (Instrument, CONTINUOUS,
                                MeasurementsCsv, MeasurementType)
 from devlib.utils.rendering import (GfxinfoFrameCollector,
@@ -70,6 +72,11 @@ class FramesInstrument(Instrument):
     def _init_channels(self):
         raise NotImplementedError()

+    def teardown(self):
+        if not self.keep_raw:
+            if os.path.isfile(self._raw_file):
+                os.remove(self._raw_file)
+

 class GfxInfoFramesInstrument(FramesInstrument):


@@ -91,7 +91,7 @@ class FlashModule(Module):

     kind = 'flash'

-    def __call__(self, image_bundle=None, images=None, boot_config=None):
+    def __call__(self, image_bundle=None, images=None, boot_config=None, connect=True):
         raise NotImplementedError()


@@ -54,7 +54,7 @@ class FastbootFlashModule(FlashModule):
     def probe(target):
         return target.os == 'android'

-    def __call__(self, image_bundle=None, images=None, bootargs=None):
+    def __call__(self, image_bundle=None, images=None, bootargs=None, connect=True):
         if bootargs:
             raise ValueError('{} does not support boot configuration'.format(self.name))
         self.prelude_done = False
@@ -67,6 +67,7 @@ class FastbootFlashModule(FlashModule):
             self.logger.debug('flashing {}'.format(partition))
             self._flash_image(self.target, partition, expand_path(image_path))
         fastboot_command('reboot')
-        self.target.connect(timeout=180)
+        if connect:
+            self.target.connect(timeout=180)

     def _validate_image_bundle(self, image_bundle):


@@ -124,11 +124,10 @@ class Controller(object):
     def move_tasks(self, source, dest, exclude=None):
         if exclude is None:
             exclude = []
-        try:
-            srcg = self._cgroups[source]
-            dstg = self._cgroups[dest]
-        except KeyError as e:
-            raise ValueError('Unknown group: {}'.format(e))
+        srcg = self.cgroup(source)
+        dstg = self.cgroup(dest)
         self.target._execute_util(  # pylint: disable=protected-access
             'cgroups_tasks_move {} {} \'{}\''.format(
                 srcg.directory, dstg.directory, exclude),
@@ -158,18 +157,18 @@ class Controller(object):
             raise ValueError('wrong type for "exclude" parameter, '
                              'it must be a str or a list')

-        logging.debug('Moving all tasks into %s', dest)
+        self.logger.debug('Moving all tasks into %s', dest)

         # Build list of tasks to exclude
         grep_filters = ''
         for comm in exclude:
             grep_filters += '-e {} '.format(comm)
-        logging.debug('  using grep filter: %s', grep_filters)
+        self.logger.debug('  using grep filter: %s', grep_filters)
         if grep_filters != '':
-            logging.debug('  excluding tasks which name matches:')
-            logging.debug('  %s', ', '.join(exclude))
+            self.logger.debug('  excluding tasks which name matches:')
+            self.logger.debug('  %s', ', '.join(exclude))

-        for cgroup in self._cgroups:
+        for cgroup in self.list_all():
             if cgroup != dest:
                 self.move_tasks(cgroup, dest, grep_filters)
@@ -288,10 +287,8 @@ class CGroup(object):
     def get(self):
         conf = {}

-        logging.debug('Reading %s attributes from:',
-                      self.controller.kind)
-        logging.debug('  %s',
-                      self.directory)
+        self.logger.debug('Reading %s attributes from:', self.controller.kind)
+        self.logger.debug('  %s', self.directory)

         output = self.target._execute_util(  # pylint: disable=protected-access
             'cgroups_get_attributes {} {}'.format(
                 self.directory, self.controller.kind),
@@ -330,7 +327,7 @@ class CGroup(object):

     def get_tasks(self):
         task_ids = self.target.read_value(self.tasks_file).split()
-        logging.debug('Tasks: %s', task_ids)
+        self.logger.debug('Tasks: %s', task_ids)
         return list(map(int, task_ids))

     def add_task(self, tid):
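In `move_all_tasks_to` above, the exclusion list is flattened into a chain of grep `-e` options before being handed to `move_tasks`. A tiny standalone sketch of that string building (the task names are hypothetical):

```python
def build_grep_filters(exclude):
    # One '-e <pattern>' per task name to exclude, in the same shape
    # move_all_tasks_to() assembles before calling move_tasks().
    grep_filters = ''
    for comm in exclude:
        grep_filters += '-e {} '.format(comm)
    return grep_filters

print(build_grep_filters(['adbd', 'logcat']))  # '-e adbd -e logcat '
```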


@@ -111,7 +111,7 @@ class CpufreqModule(Module):
         :Keyword Arguments: Governor tunables, See :meth:`set_governor_tunables`
         """
         if not cpus:
-            cpus = range(self.target.number_of_cpus)
+            cpus = self.target.list_online_cpus()

         # Setting a governor & tunables for a cpu will set them for all cpus
         # in the same clock domain, so only manipulating one cpu per domain


@@ -173,4 +173,7 @@ class Cpuidle(Module):
         return self.target.read_value(self.target.path.join(self.root_path, 'current_driver'))

     def get_governor(self):
-        return self.target.read_value(self.target.path.join(self.root_path, 'current_governor_ro'))
+        path = self.target.path.join(self.root_path, 'current_governor_ro')
+        if not self.target.path.exists(path):
+            path = self.target.path.join(self.root_path, 'current_governor')
+        return self.target.read_value(path)


@@ -52,6 +52,12 @@ class SchedProcFSNode(object):
_re_procfs_node = re.compile(r"(?P<name>.*\D)(?P<digits>\d+)$") _re_procfs_node = re.compile(r"(?P<name>.*\D)(?P<digits>\d+)$")
PACKABLE_ENTRIES = [
"cpu",
"domain",
"group"
]
@staticmethod @staticmethod
def _ends_with_digits(node): def _ends_with_digits(node):
if not isinstance(node, basestring): if not isinstance(node, basestring):
@@ -71,18 +77,19 @@ class SchedProcFSNode(object):
""" """
:returns: The name of the procfs node :returns: The name of the procfs node
""" """
return re.search(SchedProcFSNode._re_procfs_node, node).group("name") match = re.search(SchedProcFSNode._re_procfs_node, node)
if match:
return match.group("name")
@staticmethod return node
def _packable(node, entries):
@classmethod
def _packable(cls, node):
""" """
:returns: Whether it makes sense to pack a node into a common entry :returns: Whether it makes sense to pack a node into a common entry
""" """
return (SchedProcFSNode._ends_with_digits(node) and return (SchedProcFSNode._ends_with_digits(node) and
any([SchedProcFSNode._ends_with_digits(x) and SchedProcFSNode._node_name(node) in cls.PACKABLE_ENTRIES)
SchedProcFSNode._node_digits(x) != SchedProcFSNode._node_digits(node) and
SchedProcFSNode._node_name(x) == SchedProcFSNode._node_name(node)
for x in entries]))
@staticmethod @staticmethod
def _build_directory(node_name, node_data): def _build_directory(node_name, node_data):
@@ -119,7 +126,7 @@ class SchedProcFSNode(object):
# Find which entries can be packed into a common entry # Find which entries can be packed into a common entry
packables = { packables = {
node : SchedProcFSNode._node_name(node) + "s" node : SchedProcFSNode._node_name(node) + "s"
for node in list(nodes.keys()) if SchedProcFSNode._packable(node, list(nodes.keys())) for node in list(nodes.keys()) if SchedProcFSNode._packable(node)
} }
self._dyn_attrs = {} self._dyn_attrs = {}
@@ -228,13 +235,13 @@ class SchedProcFSData(SchedProcFSNode):
# Even if we have a CPU entry, it can be empty (e.g. hotplugged out) # Even if we have a CPU entry, it can be empty (e.g. hotplugged out)
# Make sure some data is there # Make sure some data is there
for cpu in cpus: for cpu in cpus:
if target.file_exists(target.path.join(path, cpu, "domain0", "name")): if target.file_exists(target.path.join(path, cpu, "domain0", "flags")):
return True return True
return False return False
def __init__(self, target, path=None): def __init__(self, target, path=None):
if not path: if path is None:
path = self.sched_domain_root path = self.sched_domain_root
procfs = target.read_tree_values(path, depth=self._read_depth) procfs = target.read_tree_values(path, depth=self._read_depth)
@@ -252,7 +259,21 @@ class SchedModule(Module):
         logger = logging.getLogger(SchedModule.name)
         SchedDomainFlag.check_version(target, logger)

-        return SchedProcFSData.available(target)
+        # It makes sense to load this module if at least one of those
+        # functionalities is enabled
+        schedproc = SchedProcFSData.available(target)
+        debug = SchedModule.target_has_debug(target)
+        dmips = any([target.file_exists(SchedModule.cpu_dmips_capacity_path(target, cpu))
+                     for cpu in target.list_online_cpus()])
+
+        logger.info("Scheduler sched_domain procfs entries %s",
+                    "found" if schedproc else "not found")
+        logger.info("Detected kernel compiled with SCHED_DEBUG=%s",
+                    "y" if debug else "n")
+        logger.info("CPU capacity sysfs entries %s",
+                    "found" if dmips else "not found")
+
+        return schedproc or debug or dmips

     def get_kernel_attributes(self, matching=None, check_exit_code=True):
         """
@@ -306,12 +327,16 @@ class SchedModule(Module):
         path = '/proc/sys/kernel/sched_' + attr
         self.target.write_value(path, value, verify)

+    @classmethod
+    def target_has_debug(cls, target):
+        if target.config.get('SCHED_DEBUG') != 'y':
+            return False
+        return target.file_exists('/sys/kernel/debug/sched_features')
+
     @property
     @memoized
     def has_debug(self):
-        if self.target.config.get('SCHED_DEBUG') != 'y':
-            return False;
-        return self.target.file_exists('/sys/kernel/debug/sched_features')
+        return self.target_has_debug(self.target)

     def get_features(self):
         """
@@ -386,17 +411,26 @@ class SchedModule(Module):
         :returns: Whether energy model data is available for 'cpu'
         """
         if not sd:
-            sd = SchedProcFSData(self.target, cpu)
+            sd = self.get_cpu_sd_info(cpu)

         return sd.procfs["domain0"].get("group0", {}).get("energy", {}).get("cap_states") != None

+    @classmethod
+    def cpu_dmips_capacity_path(cls, target, cpu):
+        """
+        :returns: The target sysfs path where the dmips capacity data should be
+        """
+        return target.path.join(
+            cls.cpu_sysfs_root,
+            'cpu{}/cpu_capacity'.format(cpu))
+
     @memoized
     def has_dmips_capacity(self, cpu):
         """
         :returns: Whether dmips capacity data is available for 'cpu'
         """
         return self.target.file_exists(
-            self.target.path.join(self.cpu_sysfs_root, 'cpu{}/cpu_capacity'.format(cpu))
+            self.cpu_dmips_capacity_path(self.target, cpu)
         )

     @memoized
@@ -405,10 +439,13 @@ class SchedModule(Module):
         :returns: The maximum capacity value exposed by the EAS energy model
         """
         if not sd:
-            sd = SchedProcFSData(self.target, cpu)
+            sd = self.get_cpu_sd_info(cpu)

         cap_states = sd.domains[0].groups[0].energy.cap_states
-        return int(cap_states.split('\t')[-2])
+        cap_states_list = cap_states.split('\t')
+        num_cap_states = sd.domains[0].groups[0].energy.nr_cap_states
+        max_cap_index = -1 * int(len(cap_states_list) / num_cap_states)
+        return int(cap_states_list[max_cap_index])

     @memoized
     def get_dmips_capacity(self, cpu):
@@ -416,14 +453,9 @@ class SchedModule(Module):
         :returns: The capacity value generated from the capacity-dmips-mhz DT entry
         """
         return self.target.read_value(
-            self.target.path.join(
-                self.cpu_sysfs_root,
-                'cpu{}/cpu_capacity'.format(cpu)
-            ),
-            int
+            self.cpu_dmips_capacity_path(self.target, cpu), int
         )

-    @memoized
     def get_capacities(self, default=None):
         """
         :param default: Default capacity value to find if no data is
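The new indexing above assumes `cap_states` is a flat, tab-separated sequence of per-state tuples. A minimal sketch of why `-1 * (len / nr_cap_states)` picks out the capacity of the highest state (the sample string and values below are illustrative, not taken from a real target):

```python
# Hypothetical cap_states string: 3 states, each "capacity\tpower" (2 fields).
cap_states = "512\t80\t768\t150\t1024\t300"

cap_states_list = cap_states.split('\t')
nr_cap_states = 3

# Fields per state = 6 / 3 = 2, so index -2 is the capacity
# entry of the last (i.e. highest) state.
max_cap_index = -1 * int(len(cap_states_list) / nr_cap_states)
max_capacity = int(cap_states_list[max_cap_index])
print(max_capacity)  # 1024
```

Unlike the old hard-coded `[-2]`, this stays correct if the kernel exposes more than two fields per capacity state.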
@@ -434,16 +466,30 @@ class SchedModule(Module):
         :raises RuntimeError: Raised when no capacity information is
             found and 'default' is None
         """
-        cpus = list(range(self.target.number_of_cpus))
+        cpus = self.target.list_online_cpus()

         capacities = {}
-        sd_info = self.get_sd_info()

         for cpu in cpus:
+            if self.has_dmips_capacity(cpu):
+                capacities[cpu] = self.get_dmips_capacity(cpu)
+
+        missing_cpus = set(cpus).difference(capacities.keys())
+        if not missing_cpus:
+            return capacities
+
+        if not SchedProcFSData.available(self.target):
+            if default != None:
+                capacities.update({cpu : default for cpu in missing_cpus})
+                return capacities
+            else:
+                raise RuntimeError(
+                    'No capacity data for cpus {}'.format(sorted(missing_cpus)))
+
+        sd_info = self.get_sd_info()
+        for cpu in missing_cpus:
             if self.has_em(cpu, sd_info.cpus[cpu]):
                 capacities[cpu] = self.get_em_capacity(cpu, sd_info.cpus[cpu])
-            elif self.has_dmips_capacity(cpu):
-                capacities[cpu] = self.get_dmips_capacity(cpu)
             else:
                 if default != None:
                     capacities[cpu] = default
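The reworked lookup above can be summarised as a pure-Python sketch of the fallback order: dmips sysfs entries first, then the EAS energy model, then `default`. The dict-based stand-ins below are illustrative, not the devlib API:

```python
def resolve_capacities(cpus, dmips, em, default=None):
    """dmips/em are {cpu: capacity} stand-ins for the sysfs and
    energy-model sources; mirrors the lookup order of get_capacities."""
    # First pass: take any capacity exposed via the dmips sysfs files.
    capacities = {cpu: dmips[cpu] for cpu in cpus if cpu in dmips}
    missing = set(cpus) - set(capacities)
    if not missing:
        return capacities
    # Second pass: fall back to the energy model, then to the default.
    for cpu in sorted(missing):
        if cpu in em:
            capacities[cpu] = em[cpu]
        elif default is not None:
            capacities[cpu] = default
        else:
            raise RuntimeError('No capacity data for cpu {}'.format(cpu))
    return capacities

print(resolve_capacities([0, 1], {0: 512}, {1: 1024}))  # {0: 512, 1: 1024}
```

The design point of the commit is visible here: cheap sysfs reads are tried for every CPU before the comparatively expensive sched_domain procfs parse is attempted at all.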


@@ -48,7 +48,7 @@ class ThermalZone(object):
         self.path = target.path.join(root, self.name)
         self.trip_points = {}

-        for entry in self.target.list_directory(self.path):
+        for entry in self.target.list_directory(self.path, as_root=target.is_rooted):
             re_match = re.match('^trip_point_([0-9]+)_temp', entry)
             if re_match is not None:
                 self.add_trip_point(re_match.group(1))
@@ -88,6 +88,9 @@ class ThermalModule(Module):
         for entry in target.list_directory(self.thermal_root):
             re_match = re.match('^(thermal_zone|cooling_device)([0-9]+)', entry)

+            if not re_match:
+                self.logger.warning('unknown thermal entry: %s', entry)
+                continue
+
             if re_match.group(1) == 'thermal_zone':
                 self.add_thermal_zone(re_match.group(2))


@@ -325,7 +325,7 @@ class VersatileExpressFlashModule(FlashModule):
         self.timeout = timeout
         self.short_delay = short_delay

-    def __call__(self, image_bundle=None, images=None, bootargs=None):
+    def __call__(self, image_bundle=None, images=None, bootargs=None, connect=True):
         self.target.hard_reset()
         with open_serial_connection(port=self.target.platform.serial_port,
                                     baudrate=self.target.platform.baudrate,
@@ -346,6 +346,7 @@ class VersatileExpressFlashModule(FlashModule):
                 msg = 'Could not deploy images to {}; got: {}'
                 raise TargetStableError(msg.format(self.vemsd_mount, e))
         self.target.boot()
-        self.target.connect(timeout=30)
+        if connect:
+            self.target.connect(timeout=30)

     def _deploy_image_bundle(self, bundle):


@@ -78,7 +78,16 @@ class Platform(object):
     def _set_model_from_target(self, target):
         if target.os == 'android':
-            self.model = target.getprop('ro.product.model')
+            try:
+                self.model = target.getprop(prop='ro.product.device')
+            except KeyError:
+                self.model = target.getprop('ro.product.model')
+        elif target.file_exists("/proc/device-tree/model"):
+            # There is currently no better way to do this cross platform.
+            # ARM does not have dmidecode
+            raw_model = target.execute("cat /proc/device-tree/model")
+            device_model_to_return = '_'.join(raw_model.split()[:2])
+            return device_model_to_return.rstrip(' \t\r\n\0')
         elif target.is_rooted:
             try:
                 self.model = target.execute('dmidecode -s system-version',


@@ -29,6 +29,7 @@ import threading
 import xml.dom.minidom
 import copy
 from collections import namedtuple, defaultdict
+from contextlib import contextmanager
 from pipes import quote
 from past.builtins import long
 from past.types import basestring
@@ -45,12 +46,14 @@ from devlib.module import get_module
 from devlib.platform import Platform
 from devlib.exception import (DevlibTransientError, TargetStableError,
                               TargetNotRespondingError, TimeoutError,
-                              TargetTransientError, KernelConfigKeyError) # pylint: disable=redefined-builtin
+                              TargetTransientError, KernelConfigKeyError,
+                              TargetError) # pylint: disable=redefined-builtin
 from devlib.utils.ssh import SshConnection
 from devlib.utils.android import AdbConnection, AndroidProperties, LogcatMonitor, adb_command, adb_disconnect, INTENT_FLAGS
 from devlib.utils.misc import memoized, isiterable, convert_new_lines
 from devlib.utils.misc import commonprefix, merge_lists
 from devlib.utils.misc import ABI_MAP, get_cpu_name, ranges_to_list
+from devlib.utils.misc import batch_contextmanager
 from devlib.utils.types import integer, boolean, bitmask, identifier, caseless_string, bytes_regex
@@ -70,7 +73,6 @@ GOOGLE_DNS_SERVER_ADDRESS = '8.8.8.8'
 installed_package_info = namedtuple('installed_package_info', 'apk_path package')

 class Target(object):

     path = None
@@ -107,21 +109,18 @@ class Target(object):
     @property
     def connected_as_root(self):
-        if self._connected_as_root is None:
-            result = self.execute('id')
-            self._connected_as_root = 'uid=0(' in result
-        return self._connected_as_root
+        return self.conn and self.conn.connected_as_root

     @property
-    @memoized
     def is_rooted(self):
-        if self.connected_as_root:
-            return True
-        try:
-            self.execute('ls /', timeout=5, as_root=True)
-            return True
-        except (TargetStableError, TimeoutError):
-            return False
+        if self._is_rooted is None:
+            try:
+                self.execute('ls /', timeout=5, as_root=True)
+                self._is_rooted = True
+            except(TargetError, TimeoutError):
+                self._is_rooted = False
+        return self._is_rooted or self.connected_as_root

     @property
     @memoized
@@ -137,6 +136,10 @@ class Target(object):
     def os_version(self):  # pylint: disable=no-self-use
         return {}

+    @property
+    def model(self):
+        return self.platform.model
+
     @property
     def abi(self):  # pylint: disable=no-self-use
         return None
@@ -155,12 +158,33 @@ class Target(object):
     def number_of_cpus(self):
         num_cpus = 0
         corere = re.compile(r'^\s*cpu\d+\s*$')
-        output = self.execute('ls /sys/devices/system/cpu')
+        output = self.execute('ls /sys/devices/system/cpu', as_root=self.is_rooted)
         for entry in output.split():
             if corere.match(entry):
                 num_cpus += 1
         return num_cpus

+    @property
+    @memoized
+    def number_of_nodes(self):
+        num_nodes = 0
+        nodere = re.compile(r'^\s*node\d+\s*$')
+        output = self.execute('ls /sys/devices/system/node', as_root=self.is_rooted)
+        for entry in output.split():
+            if nodere.match(entry):
+                num_nodes += 1
+        return num_nodes
+
+    @property
+    @memoized
+    def list_nodes_cpus(self):
+        nodes_cpus = []
+        for node in range(self.number_of_nodes):
+            path = self.path.join('/sys/devices/system/node/node{}/cpulist'.format(node))
+            output = self.read_value(path)
+            nodes_cpus.append(ranges_to_list(output))
+        return nodes_cpus
+
     @property
     @memoized
     def config(self):
@@ -213,7 +237,7 @@ class Target(object):
                  conn_cls=None,
                  is_container=False
                  ):
-        self._connected_as_root = None
+        self._is_rooted = None
         self.connection_settings = connection_settings or {}
         # Set self.platform: either it's given directly (by platform argument)
         # or it's given in the connection_settings argument
@@ -322,7 +346,7 @@ class Target(object):
             timeout = max(timeout - reset_delay, 10)
             if self.has('boot'):
                 self.boot()  # pylint: disable=no-member
-            self._connected_as_root = None
+            self.conn.connected_as_root = None
             if connect:
                 self.connect(timeout=timeout)
@@ -384,7 +408,19 @@ class Target(object):
     # execution

     def execute(self, command, timeout=None, check_exit_code=True,
-                as_root=False, strip_colors=True, will_succeed=False):
+                as_root=False, strip_colors=True, will_succeed=False,
+                force_locale='C'):
+
+        # Force the locale if necessary for more predictable output
+        if force_locale:
+            # Use an explicit export so that the command is allowed to be any
+            # shell statement, rather than just a command invocation
+            command = 'export LC_ALL={} && {}'.format(quote(force_locale), command)
+
+        # Ensure deployed commands are used when available
+        if self.executables_directory:
+            command = "export PATH={}:$PATH && {}".format(quote(self.executables_directory), command)
+
         return self.conn.execute(command, timeout=timeout,
                                  check_exit_code=check_exit_code, as_root=as_root,
                                  strip_colors=strip_colors, will_succeed=will_succeed)
@@ -478,6 +514,18 @@ class Target(object):
     def read_bool(self, path):
         return self.read_value(path, kind=boolean)

+    @contextmanager
+    def revertable_write_value(self, path, value, verify=True):
+        orig_value = self.read_value(path)
+        try:
+            self.write_value(path, value, verify)
+            yield
+        finally:
+            self.write_value(path, orig_value, verify)
+
+    def batch_revertable_write_value(self, kwargs_list):
+        return batch_contextmanager(self.revertable_write_value, kwargs_list)
+
     def write_value(self, path, value, verify=True):
         value = str(value)
         self.execute('echo {} > {}'.format(quote(value), quote(path)), check_exit_code=False, as_root=True)
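The `revertable_write_value` context manager added above restores the original value even if the body raises, because the restore sits in the `finally` clause. A standalone sketch of the same pattern with a dict-backed stand-in for a target (the `FakeTarget` class is illustrative, not part of devlib):

```python
from contextlib import contextmanager

class FakeTarget:
    """Stands in for a devlib Target; backs sysfs paths with a dict."""
    def __init__(self, files):
        self.files = files

    def read_value(self, path):
        return self.files[path]

    def write_value(self, path, value, verify=True):
        self.files[path] = str(value)

@contextmanager
def revertable_write_value(target, path, value, verify=True):
    # Save the current value, write the new one, and always restore
    # the original on exit, even if the body raises.
    orig_value = target.read_value(path)
    try:
        target.write_value(path, value, verify)
        yield
    finally:
        target.write_value(path, orig_value, verify)

target = FakeTarget({'/sys/knob': '0'})
with revertable_write_value(target, '/sys/knob', 1):
    assert target.read_value('/sys/knob') == '1'
assert target.read_value('/sys/knob') == '0'  # restored on exit
```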
@@ -493,16 +541,16 @@ class Target(object):
         except (DevlibTransientError, subprocess.CalledProcessError):
             # on some targets "reboot" doesn't return gracefully
             pass
-        self._connected_as_root = None
+        self.conn.connected_as_root = None

     def check_responsive(self, explode=True):
         try:
             self.conn.execute('ls /', timeout=5)
-            return 1
+            return True
         except (DevlibTransientError, subprocess.CalledProcessError):
             if explode:
                 raise TargetNotRespondingError('Target {} is not responding'.format(self.conn.name))
-            return 0
+            return False

     # process management
@@ -619,12 +667,12 @@ class Target(object):
     which = get_installed

-    def install_if_needed(self, host_path, search_system_binaries=True):
+    def install_if_needed(self, host_path, search_system_binaries=True, timeout=None):

         binary_path = self.get_installed(os.path.split(host_path)[1],
                                          search_system_binaries=search_system_binaries)
         if not binary_path:
-            binary_path = self.install(host_path)
+            binary_path = self.install(host_path, timeout=timeout)
         return binary_path

     def is_installed(self, name):
@@ -774,6 +822,18 @@ class Target(object):
                                             strip_null_chars)
         return _build_path_tree(value_map, path, self.path.sep, dictcls)

+    def install_module(self, mod, **params):
+        mod = get_module(mod)
+        if mod.stage == 'early':
+            msg = 'Module {} cannot be installed after device setup has already occurred.'
+            raise TargetStableError(msg)
+        if mod.probe(self):
+            self._install_module(mod, **params)
+        else:
+            msg = 'Module {} is not supported by the target'.format(mod.name)
+            raise TargetStableError(msg)
+
     # internal methods

     def _setup_shutils(self):
def _install_module(self, mod, **params): def _install_module(self, mod, **params):
if mod.name not in self._installed_modules: if mod.name not in self._installed_modules:
self.logger.debug('Installing module {}'.format(mod.name)) self.logger.debug('Installing module {}'.format(mod.name))
try:
mod.install(self, **params) mod.install(self, **params)
except Exception as e:
self.logger.error('Module "{}" failed to install on target'.format(mod.name))
raise
self._installed_modules[mod.name] = mod self._installed_modules[mod.name] = mod
else: else:
self.logger.debug('Module {} is already installed.'.format(mod.name)) self.logger.debug('Module {} is already installed.'.format(mod.name))
@@ -920,17 +984,6 @@ class LinuxTarget(Target):
                 os_version[name] = convert_new_lines(output.strip()).replace('\n', ' ')
         return os_version

-    @property
-    @memoized
-    # There is currently no better way to do this cross platform.
-    # ARM does not have dmidecode
-    def model(self):
-        if self.file_exists("/proc/device-tree/model"):
-            raw_model = self.execute("cat /proc/device-tree/model")
-            device_model_to_return = '_'.join(raw_model.split()[:2])
-            return device_model_to_return.rstrip(' \t\r\n\0')
-        return None
-
     @property
     @memoized
     def system_id(self):
@@ -1003,7 +1056,7 @@ class LinuxTarget(Target):
     def install(self, filepath, timeout=None, with_name=None):  # pylint: disable=W0221
         destpath = self.path.join(self.executables_directory,
                                   with_name and with_name or self.path.basename(filepath))
-        self.push(filepath, destpath)
+        self.push(filepath, destpath, timeout=timeout)
         self.execute('chmod a+x {}'.format(quote(destpath)), timeout=timeout)
         self._installed_binaries[self.path.basename(destpath)] = destpath
         return destpath
@@ -1103,14 +1156,6 @@ class AndroidTarget(Target):
         output = self.execute('content query --uri content://settings/secure --projection value --where "name=\'android_id\'"').strip()
         return output.split('value=')[-1]

-    @property
-    @memoized
-    def model(self):
-        try:
-            return self.getprop(prop='ro.product.device')
-        except KeyError:
-            return None
-
     @property
     @memoized
     def system_id(self):
@@ -1165,7 +1210,7 @@ class AndroidTarget(Target):
         except (DevlibTransientError, subprocess.CalledProcessError):
             # on some targets "reboot" doesn't return gracefully
             pass
-        self._connected_as_root = None
+        self.conn.connected_as_root = None

     def wait_boot_complete(self, timeout=10):
         start = time.time()
def connect(self, timeout=30, check_boot_completed=True): # pylint: disable=arguments-differ def connect(self, timeout=30, check_boot_completed=True): # pylint: disable=arguments-differ
device = self.connection_settings.get('device') device = self.connection_settings.get('device')
if device and ':' in device:
# ADB does not automatically remove a network device from it's
# devices list when the connection is broken by the remote, so the
# adb connection may have gone "stale", resulting in adb blocking
# indefinitely when making calls to the device. To avoid this,
# always disconnect first.
adb_disconnect(device)
super(AndroidTarget, self).connect(timeout=timeout, check_boot_completed=check_boot_completed) super(AndroidTarget, self).connect(timeout=timeout, check_boot_completed=check_boot_completed)
def kick_off(self, command, as_root=None): def kick_off(self, command, as_root=None):
@@ -1227,7 +1265,7 @@ class AndroidTarget(Target):
         if ext == '.apk':
             return self.install_apk(filepath, timeout)
         else:
-            return self.install_executable(filepath, with_name)
+            return self.install_executable(filepath, with_name, timeout)

     def uninstall(self, name):
         if self.package_is_installed(name):
@@ -1390,7 +1428,14 @@ class AndroidTarget(Target):
             if self.get_sdk_version() >= 23:
                 flags.append('-g')  # Grant all runtime permissions
             self.logger.debug("Replace APK = {}, ADB flags = '{}'".format(replace, ' '.join(flags)))
-            return adb_command(self.adb_name, "install {} {}".format(' '.join(flags), quote(filepath)), timeout=timeout)
+            if isinstance(self.conn, AdbConnection):
+                return adb_command(self.adb_name, "install {} {}".format(' '.join(flags), quote(filepath)), timeout=timeout)
+            else:
+                dev_path = self.get_workpath(filepath.rsplit(os.path.sep, 1)[-1])
+                self.push(quote(filepath), dev_path, timeout=timeout)
+                result = self.execute("pm install {} {}".format(' '.join(flags), quote(dev_path)), timeout=timeout)
+                self.remove(dev_path)
+                return result
         else:
             raise TargetStableError('Can\'t install {}: unsupported format.'.format(filepath))
@@ -1437,21 +1482,25 @@ class AndroidTarget(Target):
                   '-n com.android.providers.media/.MediaScannerReceiver'
         self.execute(command.format(quote('file://'+dirpath)), as_root=as_root)

-    def install_executable(self, filepath, with_name=None):
+    def install_executable(self, filepath, with_name=None, timeout=None):
         self._ensure_executables_directory_is_writable()
         executable_name = with_name or os.path.basename(filepath)
         on_device_file = self.path.join(self.working_directory, executable_name)
         on_device_executable = self.path.join(self.executables_directory, executable_name)
-        self.push(filepath, on_device_file)
+        self.push(filepath, on_device_file, timeout=timeout)
         if on_device_file != on_device_executable:
-            self.execute('cp {} {}'.format(quote(on_device_file), quote(on_device_executable)), as_root=self.needs_su)
+            self.execute('cp {} {}'.format(quote(on_device_file), quote(on_device_executable)),
+                         as_root=self.needs_su, timeout=timeout)
             self.remove(on_device_file, as_root=self.needs_su)
         self.execute("chmod 0777 {}".format(quote(on_device_executable)), as_root=self.needs_su)
         self._installed_binaries[executable_name] = on_device_executable
         return on_device_executable

     def uninstall_package(self, package):
-        adb_command(self.adb_name, "uninstall {}".format(quote(package)), timeout=30)
+        if isinstance(self.conn, AdbConnection):
+            adb_command(self.adb_name, "uninstall {}".format(quote(package)), timeout=30)
+        else:
+            self.execute("pm uninstall {}".format(quote(package)), timeout=30)

     def uninstall_executable(self, executable_name):
         on_device_executable = self.path.join(self.executables_directory, executable_name)
@@ -1461,34 +1510,31 @@ class AndroidTarget(Target):
     def dump_logcat(self, filepath, filter=None, append=False, timeout=30):  # pylint: disable=redefined-builtin
         op = '>>' if append else '>'
         filtstr = ' -s {}'.format(quote(filter)) if filter else ''
-        command = 'logcat -d{} {} {}'.format(filtstr, op, quote(filepath))
-        adb_command(self.adb_name, command, timeout=timeout)
+        if isinstance(self.conn, AdbConnection):
+            command = 'logcat -d{} {} {}'.format(filtstr, op, quote(filepath))
+            adb_command(self.adb_name, command, timeout=timeout)
+        else:
+            dev_path = self.get_workpath('logcat')
+            command = 'logcat -d{} {} {}'.format(filtstr, op, quote(dev_path))
+            self.execute(command, timeout=timeout)
+            self.pull(dev_path, filepath)
+            self.remove(dev_path)

     def clear_logcat(self):
         with self.clear_logcat_lock:
-            adb_command(self.adb_name, 'logcat -c', timeout=30)
+            if isinstance(self.conn, AdbConnection):
+                adb_command(self.adb_name, 'logcat -c', timeout=30)
+            else:
+                self.execute('logcat -c', timeout=30)

     def get_logcat_monitor(self, regexps=None):
         return LogcatMonitor(self, regexps)

-    def adb_kill_server(self, timeout=30):
-        adb_command(self.adb_name, 'kill-server', timeout)
+    def wait_for_device(self, timeout=30):
+        self.conn.wait_for_device()

-    def adb_wait_for_device(self, timeout=30):
-        adb_command(self.adb_name, 'wait-for-device', timeout)
+    def reboot_bootloader(self, timeout=30):
+        self.conn.reboot_bootloader()

-    def adb_reboot_bootloader(self, timeout=30):
-        adb_command(self.adb_name, 'reboot-bootloader', timeout)
-
-    def adb_root(self, enable=True, force=False):
-        if enable:
-            if self._connected_as_root and not force:
-                return
-            adb_command(self.adb_name, 'root', timeout=30)
-            self._connected_as_root = True
-            return
-        adb_command(self.adb_name, 'unroot', timeout=30)
-        self._connected_as_root = False

     def is_screen_on(self):
         output = self.execute('dumpsys power')
@@ -2006,6 +2052,9 @@ class KernelConfig(object):
     This class does not provide a Mapping API and only return string values.
     """

+    @staticmethod
+    def get_config_name(name):
+        return TypedKernelConfig.get_config_name(name)
+
     def __init__(self, text):
         # Expose typed_config as a non-private attribute, so that user code
@@ -2014,7 +2063,9 @@ class KernelConfig(object):
         # Expose the original text for backward compatibility
         self.text = text

-    get_config_name = TypedKernelConfig.get_config_name
+    def __bool__(self):
+        return bool(self.typed_config)
+
     not_set_regex = TypedKernelConfig.not_set_regex

     def iteritems(self):


@@ -1,137 +0,0 @@
# Copyright 2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import re
from past.builtins import basestring, zip
from devlib.host import PACKAGE_BIN_DIRECTORY
from devlib.trace import TraceCollector
from devlib.utils.misc import ensure_file_directory_exists as _f
PERF_COMMAND_TEMPLATE = '{} stat {} {} sleep 1000 > {} 2>&1 '
PERF_COUNT_REGEX = re.compile(r'^(CPU\d+)?\s*(\d+)\s*(.*?)\s*(\[\s*\d+\.\d+%\s*\])?\s*$')
DEFAULT_EVENTS = [
'migrations',
'cs',
]
class PerfCollector(TraceCollector):
"""
Perf is a Linux profiling tool based on performance counters.
Performance counters are CPU hardware registers that count hardware events
such as instructions executed, cache-misses suffered, or branches
mispredicted. They form a basis for profiling applications to trace dynamic
control flow and identify hotspots.
perf accepts options and events. If no option is given the default '-a' is
used. For events, the default events are migrations and cs. They both can
be specified in the config file.
Events must be provided as a list that contains them and they will look like
this ::
perf_events = ['migrations', 'cs']
Events can be obtained by typing the following in the command line on the
device ::
perf list
Options, on the other hand, can be provided as a single string as follows ::
perf_options = '-a -i'
Options can be obtained by running the following in the command line ::
man perf-stat
"""
def __init__(self, target,
events=None,
optionstring=None,
labels=None,
force_install=False):
super(PerfCollector, self).__init__(target)
self.events = events if events else DEFAULT_EVENTS
self.force_install = force_install
self.labels = labels
# Validate parameters
if isinstance(optionstring, list):
self.optionstrings = optionstring
else:
self.optionstrings = [optionstring]
if self.events and isinstance(self.events, basestring):
self.events = [self.events]
if not self.labels:
self.labels = ['perf_{}'.format(i) for i in range(len(self.optionstrings))]
if len(self.labels) != len(self.optionstrings):
raise ValueError('The number of labels must match the number of optstrings provided for perf.')
self.binary = self.target.get_installed('perf')
if self.force_install or not self.binary:
self.binary = self._deploy_perf()
self.commands = self._build_commands()
def reset(self):
self.target.killall('perf', as_root=self.target.is_rooted)
for label in self.labels:
filepath = self._get_target_outfile(label)
self.target.remove(filepath)
def start(self):
for command in self.commands:
self.target.kick_off(command)
def stop(self):
self.target.killall('sleep', as_root=self.target.is_rooted)
# pylint: disable=arguments-differ
def get_trace(self, outdir):
for label in self.labels:
target_file = self._get_target_outfile(label)
host_relpath = os.path.basename(target_file)
host_file = _f(os.path.join(outdir, host_relpath))
self.target.pull(target_file, host_file)
def _deploy_perf(self):
host_executable = os.path.join(PACKAGE_BIN_DIRECTORY,
self.target.abi, 'perf')
return self.target.install(host_executable)
def _build_commands(self):
commands = []
for opts, label in zip(self.optionstrings, self.labels):
commands.append(self._build_perf_command(opts, self.events, label))
return commands
def _get_target_outfile(self, label):
return self.target.get_workpath('{}.out'.format(label))
def _build_perf_command(self, options, events, label):
event_string = ' '.join(['-e {}'.format(e) for e in events])
command = PERF_COMMAND_TEMPLATE.format(self.binary,
options or '',
event_string,
self._get_target_outfile(label))
return command
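The command construction in `_build_perf_command` can be sketched standalone; `PERF_COMMAND_TEMPLATE` below is an assumed stand-in for the template devlib defines elsewhere in this module, not the exact upstream value.

```python
# Standalone sketch of how PerfCollector composes its perf command lines.
# PERF_COMMAND_TEMPLATE is an assumed stand-in for devlib's real template.
PERF_COMMAND_TEMPLATE = '{} stat {} {} sleep 1000 > {} 2>&1 '

def build_perf_command(binary, options, events, outfile):
    # Each event becomes its own '-e' flag; options pass through verbatim.
    event_string = ' '.join('-e {}'.format(e) for e in events)
    return PERF_COMMAND_TEMPLATE.format(binary, options or '', event_string, outfile)

print(build_perf_command('/data/local/tmp/perf', '-a',
                         ['migrations', 'cs'], '/data/local/tmp/perf_0.out'))
```

Note that `options or ''` mirrors the class above: a `None` option string collapses to nothing rather than the literal text `None`.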

View File

@@ -31,7 +31,10 @@ import pexpect
 import xml.etree.ElementTree
 import zipfile

-from pipes import quote
+try:
+    from shlex import quote
+except ImportError:
+    from pipes import quote

 from devlib.exception import TargetTransientError, TargetStableError, HostError
 from devlib.utils.misc import check_output, which, ABI_MAP
@@ -45,7 +48,8 @@ AM_START_ERROR = re.compile(r"Error: Activity.*")
 # See:
 # http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels
 ANDROID_VERSION_MAP = {
-    28: 'P',
+    29: 'Q',
+    28: 'PIE',
     27: 'OREO_MR1',
     26: 'OREO',
     25: 'NOUGAT_MR1',
@@ -234,43 +238,43 @@ class AdbConnection(object):
     # maintains the count of parallel active connections to a device, so that
     # adb disconnect is not invoked untill all connections are closed
     active_connections = defaultdict(int)
+    # Track connected as root status per device
+    _connected_as_root = defaultdict(lambda: None)
     default_timeout = 10
     ls_command = 'ls'
+    su_cmd = 'su -c {}'

     @property
     def name(self):
         return self.device

-    # Again, we need to handle boards where the default output format from ls is
-    # single column *and* boards where the default output is multi-column.
-    # We need to do this purely because the '-1' option causes errors on older
-    # versions of the ls tool in Android pre-v7.
-    def _setup_ls(self):
-        command = "shell '(ls -1); echo \"\n$?\"'"
-        try:
-            output = adb_command(self.device, command, timeout=self.timeout, adb_server=self.adb_server)
-        except subprocess.CalledProcessError as e:
-            raise HostError(
-                'Failed to set up ls command on Android device. Output:\n'
-                + e.output)
-        lines = output.splitlines()
-        retval = lines[-1].strip()
-        if int(retval) == 0:
-            self.ls_command = 'ls -1'
-        else:
-            self.ls_command = 'ls'
-        logger.debug("ls command is set to {}".format(self.ls_command))
+    @property
+    def connected_as_root(self):
+        if self._connected_as_root[self.device] is None:
+            result = self.execute('id')
+            self._connected_as_root[self.device] = 'uid=0(' in result
+        return self._connected_as_root[self.device]
+
+    @connected_as_root.setter
+    def connected_as_root(self, state):
+        self._connected_as_root[self.device] = state

     # pylint: disable=unused-argument
-    def __init__(self, device=None, timeout=None, platform=None, adb_server=None):
+    def __init__(self, device=None, timeout=None, platform=None, adb_server=None,
+                 adb_as_root=False):
         self.timeout = timeout if timeout is not None else self.default_timeout
         if device is None:
             device = adb_get_device(timeout=timeout, adb_server=adb_server)
         self.device = device
         self.adb_server = adb_server
+        self.adb_as_root = adb_as_root
+        if self.adb_as_root:
+            self.adb_root(enable=True)
         adb_connect(self.device)
         AdbConnection.active_connections[self.device] += 1
         self._setup_ls()
+        self._setup_su()
     def push(self, source, dest, timeout=None):
         if timeout is None:
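The per-device root-status cache introduced on `AdbConnection` can be illustrated standalone: a class-level `defaultdict` keyed by device serial, where `None` means "not yet probed". This is a self-contained reimplementation for illustration; nothing here talks to a real device.

```python
# Sketch of AdbConnection's lazy root-status cache: probe once per device
# via the output of `id`, then serve the cached answer.
from collections import defaultdict

_connected_as_root = defaultdict(lambda: None)  # None == not yet probed

def connected_as_root(device, id_output):
    # 'uid=0(' in the `id` output means the shell runs as root.
    if _connected_as_root[device] is None:
        _connected_as_root[device] = 'uid=0(' in id_output
    return _connected_as_root[device]

print(connected_as_root('emulator-5554', 'uid=0(root) gid=0(root)'))   # True
print(connected_as_root('0123456789AB', 'uid=2000(shell) gid=2000(shell)'))  # False
```

The real property additionally exposes a setter so `adb root`/`adb unroot` can update the cache without re-probing.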
@@ -300,7 +304,7 @@ class AdbConnection(object):
                 as_root=False, strip_colors=True, will_succeed=False):
         try:
             return adb_shell(self.device, command, timeout, check_exit_code,
-                             as_root, adb_server=self.adb_server)
+                             as_root, adb_server=self.adb_server, su_cmd=self.su_cmd)
         except TargetStableError as e:
             if will_succeed:
                 raise TargetTransientError(e)
@@ -313,6 +317,8 @@ class AdbConnection(object):
     def close(self):
         AdbConnection.active_connections[self.device] -= 1
         if AdbConnection.active_connections[self.device] <= 0:
+            if self.adb_as_root:
+                self.adb_root(self.device, enable=False)
             adb_disconnect(self.device)
             del AdbConnection.active_connections[self.device]
@@ -322,6 +328,50 @@ class AdbConnection(object):
         # before the next one can be issued.
         pass

+    def adb_root(self, enable=True):
+        cmd = 'root' if enable else 'unroot'
+        output = adb_command(self.device, cmd, timeout=30)
+        if 'cannot run as root in production builds' in output:
+            raise TargetStableError(output)
+        AdbConnection._connected_as_root[self.device] = enable
+
+    def wait_for_device(self, timeout=30):
+        adb_command(self.device, 'wait-for-device', timeout)
+
+    def reboot_bootloader(self, timeout=30):
+        adb_command(self.device, 'reboot-bootloader', timeout)
+
+    # Again, we need to handle boards where the default output format from ls is
+    # single column *and* boards where the default output is multi-column.
+    # We need to do this purely because the '-1' option causes errors on older
+    # versions of the ls tool in Android pre-v7.
+    def _setup_ls(self):
+        command = "shell '(ls -1); echo \"\n$?\"'"
+        try:
+            output = adb_command(self.device, command, timeout=self.timeout, adb_server=self.adb_server)
+        except subprocess.CalledProcessError as e:
+            raise HostError(
+                'Failed to set up ls command on Android device. Output:\n'
+                + e.output)
+        lines = output.splitlines()
+        retval = lines[-1].strip()
+        if int(retval) == 0:
+            self.ls_command = 'ls -1'
+        else:
+            self.ls_command = 'ls'
+        logger.debug("ls command is set to {}".format(self.ls_command))
+
+    def _setup_su(self):
+        try:
+            # Try the new style of invoking `su`
+            self.execute('ls', timeout=self.timeout, as_root=True,
+                         check_exit_code=True)
+        # If failure assume either old style or unrooted. Here we will assume
+        # old style and root status will be verified later.
+        except (TargetStableError, TargetTransientError, TimeoutError):
+            self.su_cmd = 'echo {} | su'
+        logger.debug("su command is set to {}".format(quote(self.su_cmd)))
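The two `su` invocation styles that `_setup_su` probes for can be shown with plain string formatting; `wrap_as_root` is a hypothetical helper for illustration, and nothing here runs on a device.

```python
# Standalone sketch of the two `su` invocation styles probed by _setup_su.
try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2 fallback

def wrap_as_root(command, su_cmd):
    # Both templates take the quoted command as their single placeholder.
    return su_cmd.format(quote(command))

# Modern Android: `su -c <cmd>` runs the command directly.
print(wrap_as_root('id', 'su -c {}'))      # su -c id
# Older Android: pipe the command into an interactive `su`.
print(wrap_as_root('id', 'echo {} | su'))  # echo id | su
```

Quoting the command before substitution is what keeps arguments with spaces or shell metacharacters intact once `su` re-evaluates them.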
 def fastboot_command(command, timeout=None, device=None):
     _check_env()
@@ -381,6 +431,12 @@ def adb_connect(device, timeout=None, attempts=MAX_ATTEMPTS):
         tries += 1
         if device:
             if "." in device:  # Connect is required only for ADB-over-IP
+                # ADB does not automatically remove a network device from its
+                # devices list when the connection is broken by the remote, so the
+                # adb connection may have gone "stale", resulting in adb blocking
+                # indefinitely when making calls to the device. To avoid this,
+                # always disconnect first.
+                adb_disconnect(device)
                 command = 'adb connect {}'.format(quote(device))
                 logger.debug(command)
                 output, _ = check_output(command, shell=True, timeout=timeout)
@@ -420,25 +476,27 @@ def _ping(device):

 # pylint: disable=too-many-locals
 def adb_shell(device, command, timeout=None, check_exit_code=False,
-              as_root=False, adb_server=None):  # NOQA
+              as_root=False, adb_server=None, su_cmd='su -c {}'):  # NOQA
     _check_env()
-    if as_root:
-        command = 'echo {} | su'.format(quote(command))
-    device_part = []
-    if adb_server:
-        device_part = ['-H', adb_server]
-    device_part += ['-s', device] if device else []

     # On older combinations of ADB/Android versions, the adb host command always
     # exits with 0 if it was able to run the command on the target, even if the
     # command failed (https://code.google.com/p/android/issues/detail?id=3254).
     # Homogenise this behaviour by running the command then echoing the exit
-    # code.
-    adb_shell_command = '({}); echo \"\n$?\"'.format(command)
-    actual_command = ['adb'] + device_part + ['shell', adb_shell_command]
-    logger.debug('adb {} shell {}'.format(' '.join(device_part), command))
+    # code of the executed command itself.
+    command = r'({}); echo "\n$?"'.format(command)
+
+    parts = ['adb']
+    if adb_server is not None:
+        parts += ['-H', adb_server]
+    if device is not None:
+        parts += ['-s', device]
+    parts += ['shell',
+              command if not as_root else su_cmd.format(quote(command))]
+
+    logger.debug(' '.join(quote(part) for part in parts))
     try:
-        raw_output, _ = check_output(actual_command, timeout, shell=False, combined_output=True)
+        raw_output, _ = check_output(parts, timeout, shell=False, combined_output=True)
     except subprocess.CalledProcessError as e:
         raise TargetStableError(str(e))
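The exit-code homogenisation trick can be demonstrated with pure string handling: wrap the command so it echoes `$?` on its own line, then split that trailing line back off the combined output on the host. The helper names below are illustrative, not devlib's.

```python
# Host-side sketch of adb_shell's exit-code trick (no adb involved).
def wrap(command):
    # Run in a subshell, then echo the subshell's exit status on a new line.
    return '({}); echo "$?"'.format(command)

def split_status(raw_output):
    # The last non-empty line is the exit code of the wrapped command;
    # everything before it is the command's own output.
    output, _, exit_code = raw_output.rstrip().rpartition('\n')
    return output, int(exit_code)

out, status = split_status('hello world\n0')
print(out, status)  # hello world 0
```

This is why the adb host process exiting 0 no longer matters: the status of the target-side command travels back in-band with its output.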
@@ -492,6 +550,8 @@ def adb_background_shell(device, command,
     logger.debug(full_command)
     return subprocess.Popen(full_command, stdout=stdout, stderr=stderr, shell=True)

+def adb_kill_server(self, timeout=30):
+    adb_command(None, 'kill-server', timeout)

 def adb_list_devices(adb_server=None):
     output = adb_command(None, 'devices', adb_server=adb_server)

View File

@@ -19,11 +19,13 @@ Miscellaneous functions that don't fit anywhere else.
 """
 from __future__ import division

+from contextlib import contextmanager
 from functools import partial, reduce
 from itertools import groupby
 from operator import itemgetter

 import ctypes
+import functools
 import logging
 import os
 import pkgutil

@@ -38,6 +40,11 @@ import wrapt
 import warnings

+try:
+    from contextlib import ExitStack
+except AttributeError:
+    from contextlib2 import ExitStack
+
 from past.builtins import basestring

 # pylint: disable=redefined-builtin
@@ -695,3 +702,19 @@ def memoized(wrapped, instance, args, kwargs):  # pylint: disable=unused-argumen
         return __memo_cache[id_string]

     return memoize_wrapper(*args, **kwargs)
+
+
+@contextmanager
+def batch_contextmanager(f, kwargs_list):
+    """
+    Return a context manager that will call the ``f`` callable with the keyword
+    arguments dict in the given list, in one go.
+
+    :param f: Callable expected to return a context manager.
+    :param kwargs_list: list of kwargs dictionaries to be used to call ``f``.
+    :type kwargs_list: list(dict)
+    """
+    with ExitStack() as stack:
+        for kwargs in kwargs_list:
+            stack.enter_context(f(**kwargs))
+        yield
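A minimal usage sketch of `batch_contextmanager`, reimplemented here so the example is self-contained (the real helper lives in `devlib.utils.misc`): all contexts are entered before the body runs, and `ExitStack` unwinds them in reverse order afterwards.

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def batch_contextmanager(f, kwargs_list):
    # Enter f(**kwargs) for each dict; exit them all (LIFO) when done.
    with ExitStack() as stack:
        for kwargs in kwargs_list:
            stack.enter_context(f(**kwargs))
        yield

events = []

@contextmanager
def tracked(name):
    events.append('enter ' + name)
    yield
    events.append('exit ' + name)

with batch_contextmanager(tracked, [{'name': 'a'}, {'name': 'b'}]):
    events.append('body')

print(events)  # ['enter a', 'enter b', 'body', 'exit b', 'exit a']
```

Note the `yield` sits inside the `with` block: moving it outside would close every context before the caller's body ran.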

View File

@@ -147,32 +147,44 @@ class SurfaceFlingerFrameCollector(FrameCollector):
         return text.replace('\r\n', '\n').replace('\r', '\n').split('\n')

     def _process_raw_file(self, fh):
+        found = False
         text = fh.read().replace('\r\n', '\n').replace('\r', '\n')
         for line in text.split('\n'):
             line = line.strip()
-            if line:
-                self._process_trace_line(line)
+            if not line:
+                continue
+            if 'SurfaceFlinger appears to be unresponsive, dumping anyways' in line:
+                self.unresponsive_count += 1
+                continue
+            parts = line.split()
+            # We only want numerical data, ignore textual data.
+            try:
+                parts = list(map(int, parts))
+            except ValueError:
+                continue
+            found = True
+            self._process_trace_parts(parts)
+        if not found:
+            logger.warning('Could not find expected SurfaceFlinger output.')

-    def _process_trace_line(self, line):
-        parts = line.split()
+    def _process_trace_parts(self, parts):
         if len(parts) == 3:
-            frame = SurfaceFlingerFrame(*list(map(int, parts)))
+            frame = SurfaceFlingerFrame(*parts)
             if not frame.frame_ready_time:
                 return  # "null" frame
             if frame.frame_ready_time <= self.last_ready_time:
                 return  # duplicate frame
             if (frame.frame_ready_time - frame.desired_present_time) > self.drop_threshold:
-                logger.debug('Dropping bogus frame {}.'.format(line))
+                logger.debug('Dropping bogus frame {}.'.format(' '.join(map(str, parts))))
                 return  # bogus data
             self.last_ready_time = frame.frame_ready_time
             self.frames.append(frame)
         elif len(parts) == 1:
-            self.refresh_period = int(parts[0])
+            self.refresh_period = parts[0]
             self.drop_threshold = self.refresh_period * 1000
-        elif 'SurfaceFlinger appears to be unresponsive, dumping anyways' in line:
-            self.unresponsive_count += 1
         else:
-            logger.warning('Unexpected SurfaceFlinger dump output: {}'.format(line))
+            msg = 'Unexpected SurfaceFlinger dump output: {}'.format(' '.join(map(str, parts)))
+            logger.warning(msg)
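The parsing refactored above can be sketched against a synthetic dump: one line carries the refresh period, subsequent lines carry triplets of (desired-present, actual-present, frame-ready) timestamps. The sample data below is made up for illustration and only mirrors the line shapes the collector expects.

```python
# Standalone sketch of the SurfaceFlinger latency-dump parsing.
sample_dump = """\
16666666
0 0 0
10000000 10008000 10012000
20000000 20005000 20009000
"""

refresh_period = None
frames = []
for line in sample_dump.splitlines():
    parts = line.split()
    try:
        parts = list(map(int, parts))  # skip any textual lines
    except ValueError:
        continue
    if len(parts) == 1:
        refresh_period = parts[0]
    elif len(parts) == 3 and parts[2]:  # drop "null" frames (ready time 0)
        frames.append(tuple(parts))

print(refresh_period, len(frames))  # 16666666 2
```

Converting every token to `int` up front is what lets the real collector route lines by shape alone, instead of pattern-matching text.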
 def read_gfxinfo_columns(target):

View File

@@ -54,7 +54,15 @@ sshpass = None
 logger = logging.getLogger('ssh')
 gem5_logger = logging.getLogger('gem5-connection')

-def ssh_get_shell(host, username, password=None, keyfile=None, port=None, timeout=10, telnet=False, original_prompt=None):
+def ssh_get_shell(host,
+                  username,
+                  password=None,
+                  keyfile=None,
+                  port=None,
+                  timeout=10,
+                  telnet=False,
+                  original_prompt=None,
+                  options=None):
     _check_env()
     start_time = time.time()
     while True:

@@ -63,7 +71,8 @@ def ssh_get_shell(host, username, password=None, keyfile=None, port=None, timeou
                 raise ValueError('keyfile may not be used with a telnet connection.')
             conn = TelnetPxssh(original_prompt=original_prompt)
         else:  # ssh
-            conn = pxssh.pxssh(echo=False)
+            conn = pxssh.pxssh(options=options,
+                               echo=False)

         try:
             if keyfile:
@@ -158,6 +167,18 @@ class SshConnection(object):
     def name(self):
         return self.host

+    @property
+    def connected_as_root(self):
+        if self._connected_as_root is None:
+            # Execute directly to prevent deadlocking of connection
+            result = self._execute_and_wait_for_prompt('id', as_root=False)
+            self._connected_as_root = 'uid=0(' in result
+        return self._connected_as_root
+
+    @connected_as_root.setter
+    def connected_as_root(self, state):
+        self._connected_as_root = state
+
     # pylint: disable=unused-argument,super-init-not-called
     def __init__(self,
                  host,

@@ -170,8 +191,10 @@ class SshConnection(object):
                  password_prompt=None,
                  original_prompt=None,
                  platform=None,
-                 sudo_cmd="sudo -- sh -c {}"
+                 sudo_cmd="sudo -- sh -c {}",
+                 options=None
                  ):
+        self._connected_as_root = None
         self.host = host
         self.username = username
         self.password = password

@@ -182,7 +205,16 @@ class SshConnection(object):
         self.sudo_cmd = sanitize_cmd_template(sudo_cmd)
         logger.debug('Logging in {}@{}'.format(username, host))
         timeout = timeout if timeout is not None else self.default_timeout
-        self.conn = ssh_get_shell(host, username, password, self.keyfile, port, timeout, False, None)
+        self.options = options if options is not None else {}
+        self.conn = ssh_get_shell(host,
+                                  username,
+                                  password,
+                                  self.keyfile,
+                                  port,
+                                  timeout,
+                                  False,
+                                  None,
+                                  self.options)
         atexit.register(self.close)

     def push(self, source, dest, timeout=30):
@@ -232,9 +264,17 @@ class SshConnection(object):
         try:
             port_string = '-p {}'.format(self.port) if self.port else ''
             keyfile_string = '-i {}'.format(self.keyfile) if self.keyfile else ''
-            if as_root:
+            if as_root and not self.connected_as_root:
                 command = self.sudo_cmd.format(command)
-            command = '{} {} {} {}@{} {}'.format(ssh, keyfile_string, port_string, self.username, self.host, command)
+            options = " ".join(["-o {}={}".format(key, val)
+                                for key, val in self.options.items()])
+            command = '{} {} {} {} {}@{} {}'.format(ssh,
+                                                    options,
+                                                    keyfile_string,
+                                                    port_string,
+                                                    self.username,
+                                                    self.host,
+                                                    command)
             logger.debug(command)
             if self.password:
                 command, _ = _give_password(self.password, command)
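The new `options` dict is rendered into `-o Key=Value` flags on the ssh command line; the rendering is pure string formatting and can be shown in isolation (`format_ssh_options` is an illustrative helper, not devlib API).

```python
# Sketch of how SshConnection renders its options dict into ssh flags.
def format_ssh_options(options):
    return ' '.join('-o {}={}'.format(key, val) for key, val in options.items())

opts = {'StrictHostKeyChecking': 'no'}
print(format_ssh_options(opts))  # -o StrictHostKeyChecking=no
```

An empty dict renders to an empty string, so the default behaviour of the `ssh` and `scp` command lines is unchanged when no options are given.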
@@ -259,9 +299,15 @@ class SshConnection(object):
             return True
         return False

+    def wait_for_device(self, timeout=30):
+        return
+
+    def reboot_bootloader(self, timeout=30):
+        raise NotImplementedError()
+
     def _execute_and_wait_for_prompt(self, command, timeout=None, as_root=False, strip_colors=True, log=True):
         self.conn.prompt(0.1)  # clear an existing prompt if there is one.

-        if self.username == 'root':
+        if as_root and self.connected_as_root:
             # As we're already root, there is no need to use sudo.
             as_root = False
         if as_root:
@@ -305,7 +351,14 @@ class SshConnection(object):
         # only specify -P for scp if the port is *not* the default.
         port_string = '-P {}'.format(quote(str(self.port))) if (self.port and self.port != 22) else ''
         keyfile_string = '-i {}'.format(quote(self.keyfile)) if self.keyfile else ''
-        command = '{} -r {} {} {} {}'.format(scp, keyfile_string, port_string, quote(source), quote(dest))
+        options = " ".join(["-o {}={}".format(key, val)
+                            for key, val in self.options.items()])
+        command = '{} {} -r {} {} {} {}'.format(scp,
+                                                options,
+                                                keyfile_string,
+                                                port_string,
+                                                quote(source),
+                                                quote(dest))
         command_redacted = command
         logger.debug(command)
         if self.password:
@@ -587,6 +640,19 @@ class Gem5Connection(TelnetConnection):
             # Delete the lock file
             os.remove(self.lock_file_name)

+    def wait_for_device(self, timeout=30):
+        """
+        Wait for Gem5 to be ready for interaction, with a timeout.
+        """
+        for _ in attempts(timeout):
+            if self.ready:
+                return
+            time.sleep(1)
+        raise TimeoutError('Gem5 is not ready for interaction')
+
+    def reboot_bootloader(self, timeout=30):
+        raise NotImplementedError()
+
     # Functions only to be called by the Gem5 connection itself
     def _connect_gem5_platform(self, platform):
         port = platform.gem5_port
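The polling pattern behind `Gem5Connection.wait_for_device` is a plain retry-until-timeout loop; below is a simplified stand-in using `range`, since `attempts` is assumed to be a devlib helper yielding roughly one iteration per second of the timeout.

```python
# Sketch of the poll-with-timeout pattern used by Gem5Connection.
import time

def wait_for(ready, timeout=30, sleep=time.sleep):
    for _ in range(timeout):
        if ready():
            return
        sleep(1)
    raise TimeoutError('not ready for interaction')

# Simulated readiness: becomes true on the third poll.
state = {'n': 0}
def ready():
    state['n'] += 1
    return state['n'] >= 3

wait_for(ready, timeout=30, sleep=lambda s: None)  # sleep stubbed out
print(state['n'])  # 3
```

Injecting the sleep function keeps the loop testable without real one-second delays.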

View File

@@ -21,7 +21,7 @@ from subprocess import Popen, PIPE

 VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev'])

-version = VersionTuple(1, 1, 1, '')
+version = VersionTuple(1, 2, 0, '')

 def get_devlib_version():

doc/collectors.rst Normal file
View File

@@ -0,0 +1,153 @@
.. _collector:
Collectors
==========
The ``Collector`` API provides a consistent way of collecting arbitrary data from
a target. Data is collected via an instance of a class derived from
:class:`CollectorBase`.
Example
-------
The following example shows how to use a collector to read the logcat output
from an Android target.
.. code-block:: python
# import and instantiate the Target and the collector
# (note: this assumes exactly one android target connected
# to the host machine).
In [1]: from devlib import AndroidTarget, LogcatCollector
In [2]: t = AndroidTarget()
# Set up the collector on the Target.
In [3]: collector = LogcatCollector(t)
# Configure the output file path for the collector to use.
In [4]: collector.set_output('adb_log.txt')
# Reset the Collector to perform any required configuration or preparation.
In [5]: collector.reset()
# Start Collecting
In [6]: collector.start()
# Wait for some output to be generated
In [7]: sleep(10)
# Stop Collecting
In [8]: collector.stop()
# Retrieve the collected data
In [9]: output = collector.get_data()
# Display the returned ``CollectorOutput`` Object.
In [10]: output
Out[10]: [<adb_log.txt (file)>]
In [11]: log_file = output[0]
# Get the path kind of the returned CollectorOutputEntry.
In [12]: log_file.path_kind
Out[12]: 'file'
# Get the path of the returned CollectorOutputEntry.
In [13]: log_file.path
Out[13]: 'adb_log.txt'
# Find the full path to the log file.
In [14]: os.path.join(os.getcwd(), log_file)
Out[14]: '/tmp/adb_log.txt'
API
---
.. collector:
.. module:: devlib.collector
CollectorBase
~~~~~~~~~~~~~
.. class:: CollectorBase(target, \*\*kwargs)
A ``CollectorBase`` is the base class and API that should be
implemented to allow collecting various data from a target, e.g. traces,
logs etc.
.. method:: CollectorBase.setup(\*args, \*\*kwargs)
This will set up the collector on the target. Parameters this method takes
are particular to subclasses (see documentation for specific collectors
below). What actions are performed by this method are also
collector-specific. Usually these will be things like installing
executables, starting services, deploying assets, etc. Typically, this method
needs to be invoked at most once per reboot of the target (unless
``teardown()`` has been called), but see documentation for the collector
you're interested in.
.. method:: CollectorBase.reset()
This can be used to configure a collector for collection. This must be invoked
before ``start()`` is called to begin collection.
.. method:: CollectorBase.start()
Starts collecting from the target.
.. method:: CollectorBase.stop()
Stops collecting from target. Must be called after
:func:`start()`.
.. method:: CollectorBase.set_output(output_path)
Configure the output path for the particular collector. This will be either
a directory or file path which will be used when storing the data. Please see
the individual Collector documentation for more information.
.. method:: CollectorBase.get_data()
The collected data will be returned via the previously specified output_path.
This method will return a ``CollectorOutput`` object, which is a subclassed
list containing individual ``CollectorOutputEntry`` objects with details
about each output entry.
CollectorOutputEntry
~~~~~~~~~~~~~~~~~~~~
This object is designed to allow the output of a collector to be processed
generically. The object behaves as a regular string containing the underlying
output path and can be used directly in ``os.path`` operations.
.. attribute:: CollectorOutputEntry.path
The file path for the corresponding output item.
.. attribute:: CollectorOutputEntry.path_kind
The type of output that is specified in the ``path`` attribute. Current valid
kinds are: ``file`` and ``directory``.
.. method:: CollectorOutputEntry.__init__(path, path_kind)
Initialises a ``CollectorOutputEntry`` object with the desired file path and
kind of file path specified.
.. collectors:
Available Collectors
---------------------
This section lists collectors that are currently part of devlib.
.. todo:: Add collectors

View File

@@ -100,7 +100,7 @@ class that implements the following methods.

 Connection Types
 ----------------

-.. class:: AdbConnection(device=None, timeout=None)
+.. class:: AdbConnection(device=None, timeout=None, adb_server=None, adb_as_root=False)

     A connection to an android device via ``adb`` (Android Debug Bridge).
     ``adb`` is part of the Android SDK (though stand-alone versions are also

@@ -113,10 +113,13 @@ Connection Types
     :param timeout: Connection timeout in seconds. If a connection to the device
                     is not established within this period, :class:`HostError`
                     is raised.
+    :param adb_server: Allows specifying the address of the adb server to use.
+    :param adb_as_root: Specify whether the adb server should be restarted in root mode.

 .. class:: SshConnection(host, username, password=None, keyfile=None, port=None,\
-                         timeout=None, password_prompt=None)
+                         timeout=None, password_prompt=None, \
+                         sudo_cmd="sudo -- sh -c {}", options=None)

     A connection to a device on the network over SSH.

@@ -141,6 +144,8 @@ Connection Types
     :param password_prompt: A string with the password prompt used by
                             ``sshpass``. Set this if your version of ``sshpass``
                             uses something other than ``"[sudo] password"``.
+    :param sudo_cmd: Specify the format of the command used to grant sudo access.
+    :param options: A dictionary with extra ssh configuration options.

 .. class:: TelnetConnection(host, username, password=None, port=None,\

View File

@@ -19,6 +19,7 @@ Contents:

    target
    modules
    instrumentation
+   collectors
    derived_measurements
    platform
    connection

View File

@@ -1,3 +1,5 @@
+.. _instrumentation:
+
 Instrumentation
 ===============

@@ -164,10 +166,21 @@ Instrument

 .. method:: Instrument.get_raw()

    Returns a list of paths to files containing raw output from the underlying
-   source(s) that is used to produce the data CSV. If now raw output is
+   source(s) that is used to produce the data CSV. If no raw output is
    generated or saved, an empty list will be returned. The format of the
    contents of the raw files is entirely source-dependent.

+   .. note:: This method is not guaranteed to return valid filepaths after the
+             :meth:`teardown` method has been invoked as the raw files may have
+             been deleted. Please ensure that copies are created manually
+             prior to calling :meth:`teardown` if the files are to be retained.
+
+.. method:: Instrument.teardown()
+
+   Performs any required clean up of the instrument. This usually includes
+   removing temporary and raw files (if ``keep_raw`` is set to ``False`` on relevant
+   instruments), stopping services etc.
+
 .. attribute:: Instrument.sample_rate_hz

    Sample rate of the instrument in Hz. Assumed to be the same for all channels.

@@ -400,7 +413,7 @@ For reference, the software stack on the host is roughly given by:

 Ethernet was the only IIO Interface used and tested during the development of
 this instrument. However,
-`USB seems to be supported<https://gitlab.com/baylibre-acme/ACME/issues/2>`_.
+`USB seems to be supported <https://gitlab.com/baylibre-acme/ACME/issues/2>`_.
 The IIO library also provides "Local" and "XML" connections but these are to be
 used when the IIO devices are directly connected to the host *i.e.* in our
 case, if we were to run Python and devlib on the BBB. These are also untested.

View File

@@ -322,7 +322,7 @@ FlashModule

    "flash"

-   .. method:: __call__(image_bundle=None, images=None, boot_config=None)
+   .. method:: __call__(image_bundle=None, images=None, boot_config=None, connect=True)

       Must be implemented by derived classes.

@@ -338,6 +338,7 @@ FlashModule
       :param boot_config: Some platforms require specifying boot arguments at the
                           time of flashing the images, rather than during each
                           reboot. For other platforms, this will be ignored.
+      :param connect: Specify whether to try and connect to the target after flashing.

 Module Registration

View File

@@ -6,8 +6,7 @@ There are currently four target interfaces:
 - :class:`LinuxTarget` for interacting with Linux devices over SSH.
 - :class:`AndroidTarget` for interacting with Android devices over adb.
-- :class:`ChromeOsTarget`: for interacting with ChromeOS devices over SSH, and
-  their Android containers over adb.
+- :class:`ChromeOsTarget`: for interacting with ChromeOS devices over SSH, and their Android containers over adb.
 - :class:`LocalLinuxTarget`: for interacting with the local Linux host.
 
 They all work in more-or-less the same way, with the major difference being in
@@ -307,12 +306,22 @@ has been successfully installed on a target, you can use ``has()`` method, e.g.
 Please see the modules documentation for more detail.
 
-Measurement and Trace
----------------------
+Instruments and Collectors
+--------------------------
 
-You can collected traces (currently, just ftrace) using
-:class:`TraceCollector`\ s. For example
+You can retrieve multiple types of data from a target. There are two categories
+of classes that allow for this:
+
+- An :class:`Instrument` which may be used to collect measurements (such as power) from
+  targets that support it. Please see the
+  :ref:`instruments documentation <Instrumentation>` for more details.
+- A :class:`Collector` may be used to collect arbitrary data from a ``Target``, varying
+  from screenshots to trace data. Please see the
+  :ref:`collectors documentation <collector>` for more details.
+
+An example workflow using :class:`FTraceCollector` is as follows:
 .. code:: python
@@ -333,16 +342,12 @@ You can collected traces (currently, just ftrace) using
     import time; time.sleep(5)
 
     # extract the trace file from the target into a local file
-    trace.get_trace('/tmp/trace.bin')
+    trace.get_data('/tmp/trace.bin')
 
     # View trace file using Kernelshark (must be installed on the host).
     trace.view('/tmp/trace.bin')
 
     # Convert binary trace into text format. This would normally be done
-    # automatically during get_trace(), unless autoreport is set to False during
+    # automatically during get_data(), unless autoreport is set to False during
     # instantiation of the trace collector.
     trace.report('/tmp/trace.bin', '/tmp/trace.txt')
 
-In a similar way, :class:`Instrument` instances may be used to collect
-measurements (such as power) from targets that support it. Please see
-instruments documentation for more details.
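The new ``Collector`` lifecycle described above (``reset``/``start``/``stop``, then ``set_output`` and ``get_data`` returning path entries tagged with a kind) can be pictured with a toy stand-in. This is a hypothetical sketch, not devlib code; the class and its ``record`` helper are invented for illustration:

```python
import os
import tempfile

# Toy stand-in for the Collector lifecycle (hypothetical class, not
# devlib code): reset, start, stop, set_output, then get_data
# returning (path, path_kind) entries.
class FakeCollector:
    def __init__(self):
        self._lines = []
        self._running = False
        self._output_path = None

    def reset(self):
        self._lines = []

    def start(self):
        self._running = True

    def stop(self):
        self._running = False

    def record(self, line):
        # Pretend data arrives while the collector is running.
        if self._running:
            self._lines.append(line)

    def set_output(self, output_path):
        self._output_path = output_path

    def get_data(self):
        # Write collected data out and describe what was produced.
        with open(self._output_path, 'w') as f:
            f.write('\n'.join(self._lines))
        return [(self._output_path, 'file')]

collector = FakeCollector()
collector.reset()
collector.start()
collector.record('event: sched_switch')
collector.stop()
out = os.path.join(tempfile.mkdtemp(), 'trace.txt')
collector.set_output(out)
entries = collector.get_data()
print(entries[0][1])  # 'file'
```

The real interface returns a ``CollectorOutput`` of ``CollectorOutputEntry`` objects with ``path`` and ``path_kind`` attributes; plain tuples are used here only to keep the sketch self-contained.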


@@ -232,7 +232,7 @@ Target
    :param timeout: timeout (in seconds) for the transfer; if the transfer does
                    not complete within this period, an exception will be raised.
 
-.. method:: Target.execute(command [, timeout [, check_exit_code [, as_root [, strip_colors [, will_succeed]]]]])
+.. method:: Target.execute(command [, timeout [, check_exit_code [, as_root [, strip_colors [, will_succeed [, force_locale]]]]]])
 
    Execute the specified command on the target device and return its output.
@@ -252,6 +252,9 @@ Target
       will make the method always raise an instance of a subclass of
       :class:`DevlibTransientError` when the command fails, instead of a
       :class:`DevlibStableError`.
+   :param force_locale: Prepend ``LC_ALL=<force_locale>`` in front of the
+       command to get predictable output that can be more safely parsed.
+       If ``None``, no locale is prepended.
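The documented ``force_locale`` behaviour amounts to prefixing the command string. A minimal sketch of that prefixing (a hypothetical helper, not devlib's internal implementation):

```python
def with_locale(command, force_locale='C'):
    # Mirrors the documented behaviour of force_locale:
    # prepend LC_ALL=<locale> unless it is None.
    if force_locale is None:
        return command
    return 'LC_ALL={} {}'.format(force_locale, command)

print(with_locale('ls -l /sys'))        # LC_ALL=C ls -l /sys
print(with_locale('ls -l /sys', None))  # ls -l /sys
```

Forcing the ``C`` locale is useful because tools like ``ls`` or ``date`` otherwise localise their output, which breaks naive parsing.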
 .. method:: Target.background(command [, stdout [, stderr [, as_root]]])
@@ -346,6 +349,18 @@ Target
       some sysfs entries silently failing to set the written value without
       returning an error code.
 
+.. method:: Target.revertable_write_value(path, value [, verify])
+
+   Same as :meth:`Target.write_value`, but as a context manager that will write
+   back the previous value on exit.
+
+.. method:: Target.batch_revertable_write_value(kwargs_list)
+
+   Calls :meth:`Target.revertable_write_value` with each of the keyword-argument
+   dictionaries given in the list. This is a convenience method to update
+   multiple files at once, leaving them in their original state on exit. If one
+   write fails, all the already-performed writes will be reverted as well.
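The revert-on-exit behaviour that ``revertable_write_value`` documents can be sketched with a plain context manager. This uses an in-memory dict in place of sysfs files and is a hypothetical stand-in, not the devlib implementation:

```python
from contextlib import contextmanager

# Minimal sketch of the revert-on-exit idea: remember the old value,
# write the new one, and restore the old value when the block exits,
# even if the body raises.
@contextmanager
def revertable_write(store, path, value):
    previous = store[path]
    store[path] = value
    try:
        yield
    finally:
        store[path] = previous  # restore the original value on exit

files = {'/sys/devices/system/cpu/cpu0/online': '1'}
with revertable_write(files, '/sys/devices/system/cpu/cpu0/online', '0'):
    inside = files['/sys/devices/system/cpu/cpu0/online']   # '0'
restored = files['/sys/devices/system/cpu/cpu0/online']     # '1'
```

``batch_revertable_write_value`` extends the same idea to a list of writes, unwinding any writes already performed if a later one fails.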
 .. method:: Target.read_tree_values(path, depth=1, dictcls=dict, [, tar [, decode_unicode [, strip_null_char ]]]):
 
    Read values of all sysfs (or similar) file nodes under ``path``, traversing
@@ -530,6 +545,15 @@ Target
    :returns: ``True`` if internet seems available, ``False`` otherwise.
 
+.. method:: Target.install_module(mod, **params)
+
+   :param mod: The module name or object to be installed to the target.
+   :param params: Keyword arguments used to instantiate the module.
+
+   Installs an additional module to the target after the initial setup has been
+   performed.
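Conceptually, installing a module resolves the module by name or class, instantiates it against the target with the given parameters, and attaches it to the target object. The sketch below is a hypothetical illustration of that pattern (the registry, class names, and ``install_module`` helper are invented; devlib's own mechanism differs in detail):

```python
# Hypothetical sketch of module installation: resolve by name,
# instantiate with the supplied keyword arguments, attach to the target.
class DummyTarget:
    pass

class DummyModule:
    name = 'dummy'
    def __init__(self, target, **params):
        self.target = target
        self.params = params

REGISTRY = {'dummy': DummyModule}

def install_module(target, mod, **params):
    # Accept either a registered name or a module class directly.
    cls = REGISTRY[mod] if isinstance(mod, str) else mod
    instance = cls(target, **params)
    setattr(target, cls.name, instance)
    return instance

target = DummyTarget()
install_module(target, 'dummy', level=3)
print(target.dummy.params)  # {'level': 3}
```

After installation the module is reachable as an attribute of the target, which matches how installed devlib modules are used (e.g. ``target.cpufreq``).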
 Android Target
 ---------------
@@ -624,6 +648,14 @@ Android Target
    Returns ``True`` if the target's screen is currently on and ``False``
    otherwise.
 
+.. method:: AndroidTarget.wait_for_target(timeout=30)
+
+   Returns when the device becomes available within the given timeout,
+   otherwise raises a ``TimeoutError``.
+
+.. method:: AndroidTarget.reboot_bootloader(timeout=30)
+
+   Attempts to reboot the target into its bootloader.
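The wait-until-available semantics of ``wait_for_target`` can be pictured as a bounded polling loop. This is a hypothetical helper for illustration only; the real method calls through to the underlying connection (for ADB, the connection's own wait mechanism) rather than polling like this:

```python
import time

def wait_for(is_available, timeout=30, poll_period=0.1):
    # Sketch of a poll-until-available loop: probe repeatedly until the
    # device responds or the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_available():
            return True
        time.sleep(poll_period)
    raise TimeoutError(
        'device did not become available within {}s'.format(timeout))

# A fake device that "comes back" on the third probe:
state = {'polls': 0}
def fake_probe():
    state['polls'] += 1
    return state['polls'] >= 3

result = wait_for(fake_probe, timeout=5)
print(result)  # True
```

A pattern like this pairs naturally with ``reboot_bootloader``: trigger the reboot, then block until the device answers again before issuing further commands.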
 .. method:: AndroidTarget.homescreen()
 
    Returns the device to its home screen.
@@ -665,7 +697,7 @@ ChromeOS Target
    :param android_executables_directory: This is the location of the
        executables directory to be used for the android container. If not
-       specified will default to a ``bin`` subfolder in the
+       specified will default to a ``bin`` subdirectory in the
        ``android_working_directory.``
 
    :param package_data_directory: This is the location of the data stored


@@ -85,21 +85,24 @@ params = dict(
         'wrapt',  # Basic for construction of decorator functions
         'future',  # Python 2-3 compatibility
         'enum34;python_version<"3.4"',  # Enums for Python < 3.4
-        'pandas',
-        'numpy',
+        'contextlib2;python_version<"3.0"',  # Python 3 contextlib backport for Python 2
+        'numpy<=1.16.4; python_version<"3"',
+        'numpy; python_version>="3"',
+        'pandas<=0.24.2; python_version<"3"',
+        'pandas; python_version>"3"',
     ],
     extras_require={
-        'daq': ['daqpower'],
+        'daq': ['daqpower>=2'],
         'doc': ['sphinx'],
         'monsoon': ['python-gflags'],
         'acme': ['pandas', 'numpy'],
     },
     # https://pypi.python.org/pypi?%3Aaction=list_classifiers
     classifiers=[
-        'Development Status :: 4 - Beta',
+        'Development Status :: 5 - Production/Stable',
         'License :: OSI Approved :: Apache Software License',
         'Operating System :: POSIX :: Linux',
-        'Programming Language :: Python :: 2.7',
+        'Programming Language :: Python :: 3',
     ],
 )