mirror of https://github.com/ARM-software/devlib.git synced 2025-09-22 20:01:53 +01:00

61 Commits

Author SHA1 Message Date
Marc Bonnici
002ade33a8 Version Bump 2019-07-19 16:37:04 +01:00
Marc Bonnici
2e8d42db79 setup.py Update classifiers 2019-07-19 16:37:04 +01:00
Pierre-Clément Tosi
6b414cc291 utils.adb_shell: Move from 'echo CMD | su' to '-c'
Move from the current implementation (piping the command to su) which
has unexpected behaviours to the '-c' su flag (which then becomes
required).
2019-07-19 16:36:01 +01:00
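For illustration only, a minimal sketch contrasting the two elevation forms described above (the helper names are hypothetical; the real change is in utils/android.py, shown further down in the diff):

    from shlex import quote

    def as_root_old(command):
        # previous approach: pipe the command into an interactive 'su' session
        return 'echo {} | su'.format(quote(command))

    def as_root_new(command):
        # new approach: hand the command to su directly via its '-c' flag
        return 'su -c {}'.format(quote(command))

    print(as_root_old('ls /data'))   # echo 'ls /data' | su
    print(as_root_new('ls /data'))   # su -c 'ls /data'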
Pierre-Clément Tosi
0d798f1c4f utils.adb_shell: Improve stability (Py3)
Move from pipes.quote (private) to shlex.quote (Py3.3+ standard).

Test inputs against None (their default value) instead of relying on
their truthiness.

Improve logging through quoted commands (runnable as-is, less confusing).

Make the command-building process straightforward for readability and
maintainability.
2019-07-19 16:36:01 +01:00
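A minimal sketch of the first two points above, assuming an argument-building helper of this shape (the helper itself is hypothetical):

    try:
        from shlex import quote    # standard since Python 3.3
    except ImportError:
        from pipes import quote    # fallback for Python 2

    def build_adb_parts(device=None, adb_server=None):
        # compare against None (the default) rather than truthiness, so
        # values such as '' are not silently treated as "not provided"
        parts = ['adb']
        if adb_server is not None:
            parts += ['-H', adb_server]
        if device is not None:
            parts += ['-s', device]
        return parts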
Marc Bonnici
1325e59b1a target/KernelConfig: Implement the __bool__ method
To aid in checking whether any information is contained in the
`KernelConfig`, ensure that the `__bool__` method value indicates the
presence of parsed input.
2019-07-18 15:12:30 +01:00
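As a result, an empty config evaluates as falsy; a small sketch of the intended behaviour (assuming devlib is importable on the host):

    from devlib.target import KernelConfig

    assert not KernelConfig('')               # nothing parsed -> falsy
    assert KernelConfig('CONFIG_SMP=y\n')     # parsed options -> truthy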
Marc Bonnici
f141899dae target/KernelConfig: Ensure get_config_name is static
`get_config_name` was previously treated as a bound method, so ensure
that it is defined as static as expected.
2019-07-18 15:12:30 +01:00
Valentin Schneider
984556bc8e module/sched: Make SchedModule probing more accurate
Right now, this module won't be loaded if the sched_domain procfs
entries are not present on the target. However, other pieces of
information may be present, in which case it would make sense to load
the module.

For instance, mainline kernels compiled without SCHED_DEBUG can still
expose the cpu_capacity sysfs entry. As such, try to get a better idea
of what's available and only disable the loading of the module if it
can provide absolutely nothing.
2019-07-09 15:36:13 +01:00
Valentin Schneider
03a469fc38 module/sched: Expose the remote CPU capacity sysfs path
A later change needs to access this outside of a SchedModule instance,
so make the information available as a classmethod.
2019-07-09 15:36:13 +01:00
Valentin Schneider
2d86474682 module/sched: Expose a classmethod variant of SchedModule.has_debug
A later change needs to access this outside of a SchedModule instance,
so make the information available as a classmethod.
2019-07-09 15:36:13 +01:00
Valentin Schneider
ada318f27b module/sched: Fix None check
As mentioned in the previous commit, CPU numbers would be passed to
SchedProcFSData's __init__() (instead of a proper sysfs path). When
done with CPU0, that path would be evaluated as False and the code
would carry on with the default path, which was quite confusing.

This has now been fixed (and 0 isn't such a great path to give
anyway); nevertheless, this check should just cater to None.
2019-07-09 15:36:13 +01:00
Valentin Schneider
b8f7b24790 module/sched: Fix incorrect SchedProcFSData usage
Rather than using the conveniently provided `get_cpu_sd_info()` helper
method, `has_em()` and `get_em_capacity()` would build a
`SchedProcFSData` with `path=<CPU number>`, which is obviously broken.

Do the right thing and use `get_cpu_sd_info()` in those places.
2019-07-09 15:36:13 +01:00
Josh Choo
a9b9938b0f module/sched: Return the correct maximum capacity
The existing behaviour assumes that the cap_states file contains a list
of capacity|cost pairs, and attempts to return the maximum capacity by
selecting the value at the second last index of the list.

This assumption fails on some newer Qualcomm kernels where the
cap_states file contains a list of capacity|frequency|cost triplets.
Consequently, the maximum frequency would be erroneously returned
instead of the maximum capacity.

Fix the problem by dynamically calculating the index of the maximum
capacity by dividing the number of entries in cap_states by the value in
nr_cap_states.

---

For example, on a certain Snapdragon 845 device:

/proc/sys/kernel/sched_domain/cpu0/domain0/group0/energy/cap_states
        54 entries:

        CAP     FREQ    COST
        --------------------
        65	300000	12
        87	403200	17
        104	480000	21
        125	576000	27
        141	652800	31
        162	748800	37
        179     825600	42
        195	902400	47
        212	979200	52
        228	1056000	57
        245	1132800	62
        266	1228800	70
        286	1324800 78
        307	1420800	89
        328	1516800	103
        348	1612800	122
        365	1689600	141
        381	1766400	160

/proc/sys/kernel/sched_domain/cpu0/domain0/group0/energy/nr_cap_states
        18

Max capacity = 381 (third-last index)
2019-07-09 09:04:34 +01:00
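The index arithmetic from the fix, applied to the example above:

    num_entries      = 54                            # entries in cap_states
    nr_cap_states    = 18                            # value of nr_cap_states
    fields_per_state = num_entries // nr_cap_states  # 3: capacity|frequency|cost
    max_cap_index    = -1 * fields_per_state         # -3
    # the maximum capacity is therefore the third-last entry of cap_states: 381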
Marc Bonnici
f619f1dd07 setup.py: Set maximum package version for python2.7 support
In the latest versions of pandas and numpy, python2.7 support has been
dropped, therefore restrict the maximum version of these packages.
2019-07-08 13:46:19 +01:00
Marc Bonnici
ad350c9267 bin/perf: Update binaries
In the previous version there appears to be a bug causing perf to
segfault as per https://github.com/ARM-software/devlib/issues/395.
Therefore update provided binaries to v3.19 which does not appear to
have this issue.
2019-06-11 13:05:37 +01:00
Douglas RAILLARD
8343794d34 module/thermal: Gracefully handle unexpected sysfs names
Instead of raising an exception, log a warning and carry on.
2019-06-05 15:52:20 +01:00
Douglas RAILLARD
f2bc5dbc14 devlib: Re-export DmesgCollector in devlib package
Allow using 'import devlib.DmesgCollector', just like
devlib.FtraceCollector.
2019-06-03 14:16:28 +01:00
Patrick Bellasi
6f42f67e95 target: Ensure we use installed binaries
Apart from busybox, devlib itself makes use of other system-provided binaries.
For example, the DmesgCollector module uses the system-provided dmesg.
In cases where the system-provided binary does not support some of the features
required by devlib, we currently just fail with an error.

For the user it is still possible to deploy a custom/updated version of a
required binary via the Target::install API. However, that binary is not
automatically considered by devlib.

Let's ensure that all Target::execute commands use a PATH which gives priority
to devlib installed binaries.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2019-05-24 17:47:18 +01:00
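In practice this boils down to prefixing every command executed through Target::execute with devlib's executables directory (the directory path below is illustrative):

    executables_directory = '/data/local/tmp/devlib-target/bin'
    command = 'dmesg -r'
    command = 'PATH={}:$PATH && {}'.format(executables_directory, command)
    # -> PATH=/data/local/tmp/devlib-target/bin:$PATH && dmesg -r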
Marc Bonnici
ae7f01fd19 target: Use root if available when determining number of cpus
On some targets, some entries in `/sys/devices/system/cpu` require root
to list, otherwise a permission error is returned.
2019-05-24 11:18:54 +01:00
Pierre-Clément Tosi
b5f36610ad trace/perf: Soften POSIX signal for termination
Replace the default SIGKILL signal sent to perf to "request" its
termination with a SIGINT, allowing it to handle the signal by cleaning up
before exiting. This should address issues regarding corrupted perf.data
output files.
2019-05-15 14:30:18 +01:00
Douglas RAILLARD
4c8f2430e2 trace: dmesg: Allow using old util-linux binary
Old util-linux binaries don't support --force-prefix. Multi-line entry
parsing will break on these, but at least we can collect the log.

Also decode the raw priority, so that only the facility remains undecoded
when busybox or an old util-linux is used.
2019-03-26 09:38:58 +00:00
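For reference, a sketch of the level decoding mentioned above, mirroring the parsing later added in devlib/trace/dmesg.py:

    LOG_LEVELS = ["emerg", "alert", "crit", "err",
                  "warn", "notice", "info", "debug"]

    def decode_raw_level(prio):
        # syslog encodes priority as facility * 8 + level, so a modulo
        # recovers the level name; the facility part stays undecoded
        return LOG_LEVELS[int(prio) % len(LOG_LEVELS)]

    decode_raw_level(3)    # 'err'
    decode_raw_level(14)   # 'info' (facility 1, level 6)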
Douglas RAILLARD
a8b6e56874 trace: dmesg: Call dmesg -c as root
Clearing the kernel ring buffer needs root permission.
2019-03-25 14:57:33 +00:00
Douglas RAILLARD
c92756d65a trace: Fix dmesg collector when using util-linux dmesg
Set missing "facility" attribute on DmesgCollector instances.
2019-03-25 14:57:33 +00:00
Douglas RAILLARD
8512f116fc trace: Add DmesgCollector
Allows collecting dmesg output and parsing it for easy filtering.
2019-03-19 13:52:04 +00:00
Valentin Schneider
be8b87d559 module/sched: Fix/simplify procfs packing behaviour
Back when I first wrote this I tried to make something smart that
would automatically detect which procfs entries to pack into a
mapping, the condition to do so being "the entry ends with a
digit and there is another entry with the same name but a different
digit".

I wrongly assumed this would always work for the sched_domain entries,
but it's possible to have a domain with a single group and thus a
single "group0" entry.

Since we know which entries we want to pack, let's hard-code these and
be less smart about it.
2019-03-19 13:48:29 +00:00
Valentin Schneider
d76c2d63fe module/sched: Make get_capacities() work with hotplugged CPUs 2019-03-19 13:48:29 +00:00
Valentin Schneider
8bfa050226 module/sched: SchedProcFSData: Don't assume SD name is always present
The existence of that field is gated by SCHED_DEBUG, so look for an
always-present field instead.
2019-03-19 13:48:29 +00:00
Chris Redpath
8871fe3c25 devlib/sched: Change order of CPU capacity algorithms
There are two ways we can load CPU capacity. Up until 4.14, supported
kernels did not have the /sys/devices/system/cpu/cpuX/cpu_capacity file
and the only way to read cpu capacity was by grepping the EM from
procfs sched_domain output. After 4.14, that route still exists but is
complicated due to a change in the format once support for
frequency-power models was merged.

In order to avoid rewriting the procfs EM grepping code, let's switch the
order of algorithms we try to use when loading CPU capacity. All newer
kernels provide the dedicated sysfs node and all kernels which do not
have this node use the old format for the EM in sched_domain output.

Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2019-03-18 14:29:38 +00:00
Sergei Trofimov
aa50b2d42d host: expect shell syntax inside LocalConnection.execute
When using sudo with LocalConnection, execute the input command via 'sh
-c' to ensure any shell syntax within the command is handled properly.
2019-03-07 09:34:23 +00:00
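A sketch of the resulting command strings (password and command are illustrative; this mirrors the one-line change to devlib/host.py in the diff below):

    from shlex import quote

    password = 'secret'
    command = 'cat /proc/kmsg | head -n 1'   # contains shell syntax

    # before: the pipe was interpreted outside of the sudo'ed command
    old = 'echo {} | sudo -S '.format(quote(password)) + command
    # after: the whole command is handed to 'sh -c' as one quoted argument
    new = 'echo {} | sudo -S -- sh -c '.format(quote(password)) + quote(command)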
Marc Bonnici
ebcb1664e7 utils/version: Development version bump 2019-02-27 10:55:20 +00:00
Marc Bonnici
0ff8628c9c utils/version: Version bump 2019-02-27 10:55:20 +00:00
Marc Bonnici
c0d8a98d90 setup.py: Use version_helper to generate devlib version
Instead of parsing the text of the file to extract the current version,
use the version_helper to access the newly added version tuple.
2019-02-27 10:55:20 +00:00
Marc Bonnici
441eea9897 devlib/version: Implement devlib's version as a namedtuple
Instead of defining devlib's version as a string, use a namedtuple and
add a revision field.
2019-02-27 10:55:20 +00:00
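A minimal sketch of the idea (the field names here are an assumption, not necessarily devlib's exact layout in utils/version.py):

    from collections import namedtuple

    DevlibVersion = namedtuple('DevlibVersion', ['major', 'minor', 'revision'])

    version = DevlibVersion(1, 1, 0)
    print('{}.{}.{}'.format(*version))   # 1.1.0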
Marc Bonnici
b0db2067a2 target: Fix missing import for Python3
The `long` type is no longer present in Python3 so import it from
`past.builtins` for compatibility.
2019-02-14 09:38:31 +00:00
Marc Bonnici
1417e81605 target/HexInt: Fix to inherit from long instead of int.
When using Python 2.7, `int`s have a maximum size which can be exceeded
when attempting to convert the hex representation back. Change `HexInt` to
be a `long` instead to avoid this issue.
2019-02-13 14:21:34 +00:00
Douglas RAILLARD
2e81a72b39 ssh: Fix command line echoing
Command line echoing was disabled, but that disabling did not take
effect. Another part of devlib was still expecting command lines to be
echoed. That is fixed by disabling echoing when creating the pxssh
connection, and removing the code that expected the line to be echoed.
2019-02-06 13:16:10 +00:00
Valentin Schneider
22f2c8b663 acmecape: Fix buffer_size use with pipes.quote()
pipes.quote() doesn't like integers:

>>> from pipes import quote
>>> quote(42)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/shlex.py", line 282, in quote
    if _find_unsafe(s) is None:
TypeError: expected string or bytes-like object

Convert buffer_size to str when quoting it
2019-02-06 13:15:49 +00:00
Michalis Spyrou
c2db6c17ab Add adb_server option in android background connection
Signed-off-by: Michalis Spyrou <michalis.spyrou@arm.com>
2019-02-06 09:28:22 +00:00
Quentin Perret
e01a76ef1b doc: Explain the 'tar' flag of target.read_tree_values
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
2019-02-01 11:21:09 +00:00
Quentin Perret
9fcca25031 hwmon: Robustify hwmon scan
The hwmon module reads sysfs entries from the target during init. Since
this step is known to be fragile on some hosts, make sure to specify
tar=True when calling target.read_tree_values() to improve the chances
of reading the data properly.

Signed-off-by: Quentin Perret <quentin.perret@arm.com>
2019-02-01 11:21:09 +00:00
Quentin Perret
a6b9542f0f target: Speed-up read_tree_values()
Since target.read_tree_values() has been modified to use tar as a way to
fetch the content of files from the target, it is more robust, but also
much slower. Since that level of robustness is in practice required only
for very specific use-cases, re-introduce the old way of doing read_tree
using find and grep.

read_tree_values() gains a new parameter to specify how files should be
read from the target, with or without tar. It defaults to the old way
of doing things.

Signed-off-by: Quentin Perret <quentin.perret@arm.com>
2019-02-01 11:21:09 +00:00
Marc Bonnici
413e83f5d6 target/kernelconfig: Add alias for iteritems
Add an 'items' alias for iteritems to avoid confusion when iterating in
different versions of Python.
2019-01-30 16:47:21 +00:00
Marc Bonnici
ac19873423 target/TypedKernelConfig: Fix converting to string method
Some strings are already quoted and therefore end up being quoted twice.
Strip off the existing quotes before quoting the value to prevent this.
2019-01-30 16:07:19 +00:00
Douglas RAILLARD
17d4b22b9f shutils: Fix read_tree_tgz_b64 on empty folder
Hide tar stderr output, so it does not get mixed with the base64 stream
in case the folder we are tarring is empty.
2019-01-29 15:21:28 +00:00
Douglas RAILLARD
f65130b7c7 target: Introduce TypedKernelConfig
Maps Kconfig types to appropriate Python types, and acts as a regular
mapping with an extended API:
    * tristate and bool values mapped to an Enum
    * int values to int
    * hex values to HexInt subclass of int that defaults to parsing and
      printing in hex format.

Implement KernelConfig as a shim on top of TypedKernelConfig so they
share most of the code. Code needing a TypedKernelConfig from a
KernelConfig producer such as Target.config can trivially access the
`typed_config` attribute of KernelConfig objects.
2019-01-28 15:34:22 +00:00
Douglas RAILLARD
5b51c2644e target: Introduce KernelConfigKeyError
Make a new exception type raised by KernelConfig that inherits from both
KeyError (exception raised by mappings) and IndexError (exception raised
by sequences, but also raised here for backward compatibility).
2019-01-28 15:34:22 +00:00
Douglas RAILLARD
a752f55956 target: Fix KernelConfig parsing
Use the canonical option name in the underlying mapping, so looking up
the canonicalized option name will work as expected.

Otherwise, looking up the key CONFIG_FONT_8x8 would not work, since the
internal representation uses 'CONFIG_FONT_8x8' and the user input is
turned into 'CONFIG_FONT_8X8' (note the capital "X").
2019-01-28 15:34:22 +00:00
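A worked illustration of the mismatch being fixed; the helper body below is an assumption consistent with the commit message and the truncated get_config_name hunk in the diff:

    def get_config_name(name):
        # assumed canonicalisation: upper-case and ensure the CONFIG_ prefix
        name = name.upper()
        if not name.startswith('CONFIG_'):
            name = 'CONFIG_' + name
        return name

    get_config_name('CONFIG_FONT_8x8')   # 'CONFIG_FONT_8X8'
    # before the fix the parser stored the raw key 'CONFIG_FONT_8x8', so a
    # lookup through the canonicalised form missed the entry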
Douglas RAILLARD
781f9b068d target: Move kernel config parsing in method
Split kernel config parsing in target.KernelConfig out of __init__ into
its own method.
2019-01-28 15:34:22 +00:00
Quentin Perret
7e79eeb9cb target: Robustify read_tree_values()
target.read_tree_values() has several weaknesses. It doesn't support
files with ':' in their name, and it fails when reading binary files.
In essence, these limitations are caused by its fragile implementation
based on grep in shutils.

In order to robustify read_tree_values(), use tar and base64 to send the
content of a tree to the host, which can then process it from there. In
the process, read_tree_values() gains two new arguments:
 - decode_unicode: must be set to work with text/utf-8 content;
 - strip_null_chars: must be set to remove '\00' chars from text files.

Both are set to true by default to keep backward compatibility with the
existing code.

Suggested-by: Douglas Raillard <douglas.raillard@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
2019-01-28 15:34:11 +00:00
Marc Bonnici
911a9f2ef4 utils/ssh: Implement work around for issue in pexpect
As detailed in https://github.com/pexpect/pexpect/issues/552, when
sending a command whose length is exactly the window width minus the
length of the prompt, an extra prompt is introduced into the returned data,
causing the matching to fail and only part of the command's output to be
returned. To work around this, check the length of the command to be
submitted and, if it is of that specific length, add an additional whitespace
character to the end of the command to prevent this behaviour.
2019-01-28 15:06:45 +00:00
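A hedged sketch of the workaround, with the padding step taken from the description above (the surrounding helper appears, truncated, at the end of this diff):

    def _sendline(self, command):
        # Workaround for https://github.com/pexpect/pexpect/issues/552: if the
        # command is exactly as long as the window width minus the prompt,
        # pad it so no spurious extra prompt ends up in the returned data
        if len(command) == self._get_window_size()[1] - self._get_prompt_length():
            command += ' '
        self.conn.sendline(command)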
Patrick Bellasi
cc0679e40f module: sched: add support to get/set scheduler features
Scheduler features are a debugging mechanism which allows tuning some
(usually experimental) features of the Linux scheduler at run time.

Let's add a proper API abstraction to easily access the list of
supported features and tune them.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2019-01-28 13:56:31 +00:00
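Usage looks roughly like this, assuming a connected, rooted devlib target with the sched module loaded (the feature name is only an example):

    sched = target.sched

    features = sched.get_features()   # e.g. {'GENTLE_FAIR_SLEEPERS': True, ...}
    if 'GENTLE_FAIR_SLEEPERS' in features:
        sched.set_feature('GENTLE_FAIR_SLEEPERS', False)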
Patrick Bellasi
5dea9f8bcf module: sched: add get/set methods for scheduler attributes
The Linux scheduler exposes a set of tunables via /proc/sys/kernel
attributes starting with "sched_".

Let's add a convenient API to the sched module to read and set the
values of these attributes.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2019-01-28 13:56:31 +00:00
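And correspondingly for the /proc/sys/kernel tunables (attribute name and values are illustrative; keys are returned with the "sched_" prefix stripped):

    attrs = target.sched.get_kernel_attributes(matching='latency')
    # e.g. {'latency_ns': 18000000, ...}
    target.sched.set_kernel_attribute('latency_ns', 24000000)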
Marc Bonnici
a9ee41855d GfxInfo: Update to use nanosecond measurements
GfxInfo reports in nanoseconds; however, it was incorrectly labelled as
microseconds. Update to consistently deal in nanoseconds.
2019-01-09 13:58:22 +00:00
Marc Bonnici
c13e3c260b instrument/measurment_types: Add nanoseconds
Add a 'nanosecond' measurement type and the appropriate conversions.
Also update the notation of the existing conversions to make things clearer.
2019-01-09 13:58:22 +00:00
Marc Bonnici
aabb74c8cb utils/rendering: Fix compatibility with Python3
Ensure output is encoded before attempting to write to file.
2019-01-09 13:58:22 +00:00
Marc Bonnici
a4c22cef71 utils/rendering: Fix incorrect debug message
This debug message is used for all derived FrameCollectors so do not
specify a particular method.
2019-01-09 13:58:22 +00:00
Marc Bonnici
3da7fbc9dd utils/rendering: Fix default value
Make sure to use an int default value, otherwise a type error can occur
when it is used for comparison.
2019-01-09 13:58:22 +00:00
Pierre-Clement Tosi
f2a87ce61c utils.android.ApkInfo: Retrieve APK class methods
Add a pseudo-attribute to ApkInfo giving all the methods available in
the APK as a list of (class, method) tuples. To keep backward
compatibility, this list is retrieved the first time the attribute is
accessed.
2019-01-04 11:22:48 +00:00
Pierre-Clement Tosi
2b6cb264cf utils.android.ApkInfo: Add reading activity names
Add an activities pseudo-attribute to the ApkInfo class that retrieves the
names of the activities (_i.e._ entry points) of the APK. To keep
backward compatibility, these are retrieved the first time the attribute is
accessed.
2019-01-04 11:22:48 +00:00
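Both pseudo-attributes are populated lazily on first access; usage is roughly as follows (the APK path is illustrative and aapt must be available on the host):

    from devlib.utils.android import ApkInfo

    info = ApkInfo('/path/to/app.apk')
    print(info.activities)   # activity (entry point) names
    print(info.methods)      # method/class name pairs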
Pierre-Clement Tosi
7e0e6e8706 utils.android: Add ApkInfo._run()
Add a _run() method to handle CLI calls from within the class.
2019-01-04 11:22:48 +00:00
Volker Eckert
4fabcae0b4 cgroups.py: strip slashes, don't drop first char
This makes it cope with names both with and without leading slashes.
2019-01-04 11:22:24 +00:00
Marc Bonnici
3c4a282c29 devlib/version: Update to development version 2019-01-04 11:22:03 +00:00
23 changed files with 899 additions and 133 deletions


@@ -48,15 +48,17 @@ from devlib.derived.fps import DerivedGfxInfoStats, DerivedSurfaceFlingerStats
from devlib.trace.ftrace import FtraceCollector
from devlib.trace.perf import PerfCollector
from devlib.trace.serial_trace import SerialTraceCollector
from devlib.trace.dmesg import DmesgCollector
from devlib.host import LocalConnection
from devlib.utils.android import AdbConnection
from devlib.utils.ssh import SshConnection, TelnetConnection, Gem5Connection
from devlib.utils.version import get_commit as __get_commit
from devlib.utils.version import (get_devlib_version as __get_devlib_version,
get_commit as __get_commit)
__version__ = '1.1.0'
__version__ = __get_devlib_version()
__commit = __get_commit()
if __commit:

Binary file not shown.

Binary file not shown.


@@ -238,6 +238,19 @@ hotplug_online_all() {
done
}
################################################################################
# Scheduler
################################################################################
sched_get_kernel_attributes() {
MATCH=${1:-'.*'}
[ -d /proc/sys/kernel/ ] || exit 1
$GREP '' /proc/sys/kernel/sched_* | \
$SED -e 's|/proc/sys/kernel/sched_||' | \
$GREP -e "$MATCH"
}
################################################################################
# Misc
################################################################################
@@ -264,6 +277,34 @@ read_tree_values() {
fi
}
read_tree_tgz_b64() {
BASEPATH=$1
MAXDEPTH=$2
TMPBASE=$3
if [ ! -e $BASEPATH ]; then
echo "ERROR: $BASEPATH does not exist"
exit 1
fi
cd $TMPBASE
TMP_FOLDER=$($BUSYBOX realpath $($BUSYBOX mktemp -d XXXXXX))
# 'tar' doesn't work as expected on debugfs, so copy the tree first to
# work around the issue
cd $BASEPATH
for CUR_FILE in $($BUSYBOX find . -follow -type f -maxdepth $MAXDEPTH); do
$BUSYBOX cp --parents $CUR_FILE $TMP_FOLDER/ 2> /dev/null
done
cd $TMP_FOLDER
$BUSYBOX tar cz * 2>/dev/null | $BUSYBOX base64
# Clean-up the tmp folder since we won't need it any more
cd $TMPBASE
rm -rf $TMP_FOLDER
}
get_linux_system_id() {
kernel=$($BUSYBOX uname -r)
hardware=$($BUSYBOX ip a | $BUSYBOX grep 'link/ether' | $BUSYBOX sed 's/://g' | $BUSYBOX awk '{print $2}' | $BUSYBOX tr -d '\n')
@@ -337,12 +378,18 @@ hotplug_online_all)
read_tree_values)
read_tree_values $*
;;
read_tree_tgz_b64)
read_tree_tgz_b64 $*
;;
get_linux_system_id)
get_linux_system_id $*
;;
get_android_system_id)
get_android_system_id $*
;;
sched_get_kernel_attributes)
sched_get_kernel_attributes $*
;;
*)
echo "Command [$CMD] not supported"
exit -1


@@ -106,17 +106,17 @@ class DerivedGfxInfoStats(DerivedFpsStats):
frame_count += 1
if start_vsync is None:
start_vsync = frame_data.Vsync_time_us
end_vsync = frame_data.Vsync_time_us
start_vsync = frame_data.Vsync_time_ns
end_vsync = frame_data.Vsync_time_ns
frame_time = frame_data.FrameCompleted_time_us - frame_data.IntendedVsync_time_us
frame_time = frame_data.FrameCompleted_time_ns - frame_data.IntendedVsync_time_ns
pff = 1e9 / frame_time
if pff > self.drop_threshold:
per_frame_fps.append([pff])
if frame_count:
duration = end_vsync - start_vsync
fps = (1e6 * frame_count) / float(duration)
fps = (1e9 * frame_count) / float(duration)
else:
duration = 0
fps = 0
@@ -133,15 +133,15 @@ class DerivedGfxInfoStats(DerivedFpsStats):
def _process_with_pandas(self, measurements_csv):
data = pd.read_csv(measurements_csv.path)
data = data[data.Flags_flags == 0]
frame_time = data.FrameCompleted_time_us - data.IntendedVsync_time_us
per_frame_fps = (1e6 / frame_time)
frame_time = data.FrameCompleted_time_ns - data.IntendedVsync_time_ns
per_frame_fps = (1e9 / frame_time)
keep_filter = per_frame_fps > self.drop_threshold
per_frame_fps = per_frame_fps[keep_filter]
per_frame_fps.name = 'fps'
frame_count = data.index.size
if frame_count > 1:
duration = data.Vsync_time_us.iloc[-1] - data.Vsync_time_us.iloc[0]
duration = data.Vsync_time_ns.iloc[-1] - data.Vsync_time_ns.iloc[0]
fps = (1e9 * frame_count) / float(duration)
else:
duration = 0


@@ -105,6 +105,16 @@ class WorkerThreadError(DevlibError):
super(WorkerThreadError, self).__init__(message)
class KernelConfigKeyError(KeyError, IndexError, DevlibError):
"""
Exception raised when a kernel config option cannot be found.
It inherits from :exc:`IndexError` for backward compatibility, and
:exc:`KeyError` to behave like a regular mapping.
"""
pass
def get_traceback(exc=None):
"""
Returns the string with the traceback for the specified exc


@@ -71,7 +71,7 @@ class LocalConnection(object):
if self.unrooted:
raise TargetStableError('unrooted')
password = self._get_password()
command = 'echo {} | sudo -S '.format(quote(password)) + command
command = 'echo {} | sudo -S -- sh -c '.format(quote(password)) + quote(command)
ignore = None if check_exit_code else 'all'
try:
return check_output(command, shell=True, timeout=timeout, ignore=ignore)[0]


@@ -97,20 +97,30 @@ _measurement_types = [
# convert without being familiar with individual instruments.
MeasurementType('time', 'seconds', 'time',
conversions={
'time_us': lambda x: x * 1000000,
'time_ms': lambda x: x * 1000,
'time_us': lambda x: x * 1e6,
'time_ms': lambda x: x * 1e3,
'time_ns': lambda x: x * 1e9,
}
),
MeasurementType('time_us', 'microseconds', 'time',
conversions={
'time': lambda x: x / 1000000,
'time_ms': lambda x: x / 1000,
'time': lambda x: x / 1e6,
'time_ms': lambda x: x / 1e3,
'time_ns': lambda x: x * 1e3,
}
),
MeasurementType('time_ms', 'milliseconds', 'time',
conversions={
'time': lambda x: x / 1000,
'time_us': lambda x: x * 1000,
'time': lambda x: x / 1e3,
'time_us': lambda x: x * 1e3,
'time_ns': lambda x: x * 1e6,
}
),
MeasurementType('time_ns', 'nanoseconds', 'time',
conversions={
'time': lambda x: x / 1e9,
'time_ms': lambda x: x / 1e6,
'time_us': lambda x: x / 1e3,
}
),


@@ -87,7 +87,8 @@ class AcmeCapeInstrument(Instrument):
params = dict(
iio_capture=self.iio_capture,
host=self.host,
buffer_size=self.buffer_size,
# This must be a string for quote()
buffer_size=str(self.buffer_size),
iio_device=self.iio_device,
outfile=self.raw_data_file
)


@@ -82,7 +82,7 @@ class GfxInfoFramesInstrument(FramesInstrument):
if entry == 'Flags':
self.add_channel('Flags', MeasurementType('flags', 'flags'))
else:
self.add_channel(entry, 'time_us')
self.add_channel(entry, 'time_ns')
self.header = [chan.label for chan in self.channels.values()]


@@ -262,8 +262,9 @@ class CGroup(object):
# Control cgroup path
self.directory = controller.mount_point
if name != '/':
self.directory = self.target.path.join(controller.mount_point, name[1:])
self.directory = self.target.path.join(controller.mount_point, name.strip('/'))
# Setup path for tasks file
self.tasks_file = self.target.path.join(self.directory, 'tasks')


@@ -137,7 +137,7 @@ class HwmonModule(Module):
self.scan()
def scan(self):
values_tree = self.target.read_tree_values(self.root, depth=3)
values_tree = self.target.read_tree_values(self.root, depth=3, tar=True)
for entry_id, fields in values_tree.items():
path = self.target.path.join(self.root, entry_id)
name = fields.pop('name', None)


@@ -21,6 +21,7 @@ from past.builtins import basestring
from devlib.module import Module
from devlib.utils.misc import memoized
from devlib.utils.types import boolean
class SchedProcFSNode(object):
@@ -51,6 +52,12 @@ class SchedProcFSNode(object):
_re_procfs_node = re.compile(r"(?P<name>.*\D)(?P<digits>\d+)$")
PACKABLE_ENTRIES = [
"cpu",
"domain",
"group"
]
@staticmethod
def _ends_with_digits(node):
if not isinstance(node, basestring):
@@ -70,18 +77,19 @@ class SchedProcFSNode(object):
"""
:returns: The name of the procfs node
"""
return re.search(SchedProcFSNode._re_procfs_node, node).group("name")
match = re.search(SchedProcFSNode._re_procfs_node, node)
if match:
return match.group("name")
@staticmethod
def _packable(node, entries):
return node
@classmethod
def _packable(cls, node):
"""
:returns: Whether it makes sense to pack a node into a common entry
"""
return (SchedProcFSNode._ends_with_digits(node) and
any([SchedProcFSNode._ends_with_digits(x) and
SchedProcFSNode._node_digits(x) != SchedProcFSNode._node_digits(node) and
SchedProcFSNode._node_name(x) == SchedProcFSNode._node_name(node)
for x in entries]))
SchedProcFSNode._node_name(node) in cls.PACKABLE_ENTRIES)
@staticmethod
def _build_directory(node_name, node_data):
@@ -118,7 +126,7 @@ class SchedProcFSNode(object):
# Find which entries can be packed into a common entry
packables = {
node : SchedProcFSNode._node_name(node) + "s"
for node in list(nodes.keys()) if SchedProcFSNode._packable(node, list(nodes.keys()))
for node in list(nodes.keys()) if SchedProcFSNode._packable(node)
}
self._dyn_attrs = {}
@@ -227,13 +235,13 @@ class SchedProcFSData(SchedProcFSNode):
# Even if we have a CPU entry, it can be empty (e.g. hotplugged out)
# Make sure some data is there
for cpu in cpus:
if target.file_exists(target.path.join(path, cpu, "domain0", "name")):
if target.file_exists(target.path.join(path, cpu, "domain0", "flags")):
return True
return False
def __init__(self, target, path=None):
if not path:
if path is None:
path = self.sched_domain_root
procfs = target.read_tree_values(path, depth=self._read_depth)
@@ -251,7 +259,128 @@ class SchedModule(Module):
logger = logging.getLogger(SchedModule.name)
SchedDomainFlag.check_version(target, logger)
return SchedProcFSData.available(target)
# It makes sense to load this module if at least one of those
# functionalities is enabled
schedproc = SchedProcFSData.available(target)
debug = SchedModule.target_has_debug(target)
dmips = any([target.file_exists(SchedModule.cpu_dmips_capacity_path(target, cpu))
for cpu in target.list_online_cpus()])
logger.info("Scheduler sched_domain procfs entries %s",
"found" if schedproc else "not found")
logger.info("Detected kernel compiled with SCHED_DEBUG=%s",
"y" if debug else "n")
logger.info("CPU capacity sysfs entries %s",
"found" if dmips else "not found")
return schedproc or debug or dmips
def get_kernel_attributes(self, matching=None, check_exit_code=True):
"""
Get the value of scheduler attributes.
:param matching: an (optional) substring to filter the scheduler
attributes to be returned.
The scheduler exposes a list of tunable attributes under:
/proc/sys/kernel
all starting with the "sched_" prefix.
This method returns a dictionary of all the "sched_" attributes exposed
by the target kernel, with the prefix removed.
It's possible to restrict the list of attributes by specifying a
substring to be matched.
returns: a dictionary of scheduler tunables
"""
command = 'sched_get_kernel_attributes {}'.format(
matching if matching else ''
)
output = self.target._execute_util(command, as_root=self.target.is_rooted,
check_exit_code=check_exit_code)
result = {}
for entry in output.strip().split('\n'):
if ':' not in entry:
continue
path, value = entry.strip().split(':', 1)
if value in ['0', '1']:
value = bool(int(value))
elif value.isdigit():
value = int(value)
result[path] = value
return result
def set_kernel_attribute(self, attr, value, verify=True):
"""
Set the value of a scheduler attribute.
:param attr: the attribute to set, without the "sched_" prefix
:param value: the value to set
:param verify: true to check that the requested value has been set
:raise TargetError: if the attribute cannot be set
"""
if isinstance(value, bool):
value = '1' if value else '0'
elif isinstance(value, int):
value = str(value)
path = '/proc/sys/kernel/sched_' + attr
self.target.write_value(path, value, verify)
@classmethod
def target_has_debug(cls, target):
if target.config.get('SCHED_DEBUG') != 'y':
return False
return target.file_exists('/sys/kernel/debug/sched_features')
@property
@memoized
def has_debug(self):
return self.target_has_debug(self.target)
def get_features(self):
"""
Get the status of each sched feature
:returns: a dictionary of features and their "is enabled" status
"""
if not self.has_debug:
raise RuntimeError("sched_features not available")
feats = self.target.read_value('/sys/kernel/debug/sched_features')
features = {}
for feat in feats.split():
value = True
if feat.startswith('NO'):
feat = feat.replace('NO_', '', 1)
value = False
features[feat] = value
return features
def set_feature(self, feature, enable, verify=True):
"""
Set the status of a specified scheduler feature
:param feature: the feature name to set
:param enable: true to enable the feature, false otherwise
:raise ValueError: if the specified enable value is not bool
:raise RuntimeError: if the specified feature cannot be set
"""
if not self.has_debug:
raise RuntimeError("sched_features not available")
feature = feature.upper()
feat_value = feature
if not boolean(enable):
feat_value = 'NO_' + feat_value
self.target.write_value('/sys/kernel/debug/sched_features',
feat_value, verify=False)
if not verify:
return
msg = 'Failed to set {}, feature not supported?'.format(feat_value)
features = self.get_features()
feat_value = features.get(feature, not enable)
if feat_value != enable:
raise RuntimeError(msg)
def get_cpu_sd_info(self, cpu):
"""
@@ -282,17 +411,26 @@ class SchedModule(Module):
:returns: Whether energy model data is available for 'cpu'
"""
if not sd:
sd = SchedProcFSData(self.target, cpu)
sd = self.get_cpu_sd_info(cpu)
return sd.procfs["domain0"].get("group0", {}).get("energy", {}).get("cap_states") != None
@classmethod
def cpu_dmips_capacity_path(cls, target, cpu):
"""
:returns: The target sysfs path where the dmips capacity data should be
"""
return target.path.join(
cls.cpu_sysfs_root,
'cpu{}/cpu_capacity'.format(cpu))
@memoized
def has_dmips_capacity(self, cpu):
"""
:returns: Whether dmips capacity data is available for 'cpu'
"""
return self.target.file_exists(
self.target.path.join(self.cpu_sysfs_root, 'cpu{}/cpu_capacity'.format(cpu))
self.cpu_dmips_capacity_path(self.target, cpu)
)
@memoized
@@ -301,10 +439,13 @@ class SchedModule(Module):
:returns: The maximum capacity value exposed by the EAS energy model
"""
if not sd:
sd = SchedProcFSData(self.target, cpu)
sd = self.get_cpu_sd_info(cpu)
cap_states = sd.domains[0].groups[0].energy.cap_states
return int(cap_states.split('\t')[-2])
cap_states_list = cap_states.split('\t')
num_cap_states = sd.domains[0].groups[0].energy.nr_cap_states
max_cap_index = -1 * int(len(cap_states_list) / num_cap_states)
return int(cap_states_list[max_cap_index])
@memoized
def get_dmips_capacity(self, cpu):
@@ -312,14 +453,9 @@ class SchedModule(Module):
:returns: The capacity value generated from the capacity-dmips-mhz DT entry
"""
return self.target.read_value(
self.target.path.join(
self.cpu_sysfs_root,
'cpu{}/cpu_capacity'.format(cpu)
),
int
self.cpu_dmips_capacity_path(self.target, cpu), int
)
@memoized
def get_capacities(self, default=None):
"""
:param default: Default capacity value to find if no data is
@@ -330,16 +466,16 @@ class SchedModule(Module):
:raises RuntimeError: Raised when no capacity information is
found and 'default' is None
"""
cpus = list(range(self.target.number_of_cpus))
cpus = self.target.list_online_cpus()
capacities = {}
sd_info = self.get_sd_info()
for cpu in cpus:
if self.has_em(cpu, sd_info.cpus[cpu]):
capacities[cpu] = self.get_em_capacity(cpu, sd_info.cpus[cpu])
elif self.has_dmips_capacity(cpu):
if self.has_dmips_capacity(cpu):
capacities[cpu] = self.get_dmips_capacity(cpu)
elif self.has_em(cpu, sd_info.cpus[cpu]):
capacities[cpu] = self.get_em_capacity(cpu, sd_info.cpus[cpu])
else:
if default != None:
capacities[cpu] = default


@@ -88,6 +88,9 @@ class ThermalModule(Module):
for entry in target.list_directory(self.thermal_root):
re_match = re.match('^(thermal_zone|cooling_device)([0-9]+)', entry)
if not re_match:
self.logger.warning('unknown thermal entry: %s', entry)
continue
if re_match.group(1) == 'thermal_zone':
self.add_thermal_zone(re_match.group(2))


@@ -13,6 +13,9 @@
# limitations under the License.
#
import io
import base64
import gzip
import os
import re
import time
@@ -27,13 +30,22 @@ import xml.dom.minidom
import copy
from collections import namedtuple, defaultdict
from pipes import quote
from past.builtins import long
from past.types import basestring
from numbers import Number
try:
from collections.abc import Mapping
except ImportError:
from collections import Mapping
from enum import Enum
from devlib.host import LocalConnection, PACKAGE_BIN_DIRECTORY
from devlib.module import get_module
from devlib.platform import Platform
from devlib.exception import (DevlibTransientError, TargetStableError,
TargetNotRespondingError, TimeoutError,
TargetTransientError) # pylint: disable=redefined-builtin
TargetTransientError, KernelConfigKeyError) # pylint: disable=redefined-builtin
from devlib.utils.ssh import SshConnection
from devlib.utils.android import AdbConnection, AndroidProperties, LogcatMonitor, adb_command, adb_disconnect, INTENT_FLAGS
from devlib.utils.misc import memoized, isiterable, convert_new_lines
@@ -143,7 +155,7 @@ class Target(object):
def number_of_cpus(self):
num_cpus = 0
corere = re.compile(r'^\s*cpu\d+\s*$')
output = self.execute('ls /sys/devices/system/cpu')
output = self.execute('ls /sys/devices/system/cpu', as_root=self.is_rooted)
for entry in output.split():
if corere.match(entry):
num_cpus += 1
@@ -373,6 +385,9 @@ class Target(object):
def execute(self, command, timeout=None, check_exit_code=True,
as_root=False, strip_colors=True, will_succeed=False):
# Ensure deployed commands are used when available
if self.executables_directory:
command = "PATH={}:$PATH && {}".format(self.executables_directory, command)
return self.conn.execute(command, timeout=timeout,
check_exit_code=check_exit_code, as_root=as_root,
strip_colors=strip_colors, will_succeed=will_succeed)
@@ -684,6 +699,43 @@ class Target(object):
timeout = duration + 10
self.execute('sleep {}'.format(duration), timeout=timeout)
def read_tree_tar_flat(self, path, depth=1, check_exit_code=True,
decode_unicode=True, strip_null_chars=True):
command = 'read_tree_tgz_b64 {} {} {}'.format(quote(path), depth,
quote(self.working_directory))
output = self._execute_util(command, as_root=self.is_rooted,
check_exit_code=check_exit_code)
result = {}
# Unpack the archive in memory
tar_gz = base64.b64decode(output)
tar_gz_bytes = io.BytesIO(tar_gz)
tar_buf = gzip.GzipFile(fileobj=tar_gz_bytes).read()
tar_bytes = io.BytesIO(tar_buf)
with tarfile.open(fileobj=tar_bytes) as tar:
for member in tar.getmembers():
try:
content_f = tar.extractfile(member)
# ignore exotic members like sockets
except Exception:
continue
# if it is a file and not a folder
if content_f:
content = content_f.read()
if decode_unicode:
try:
content = content.decode('utf-8').strip()
if strip_null_chars:
content = content.replace('\x00', '').strip()
except UnicodeDecodeError:
content = ''
name = self.path.join(path, member.name)
result[name] = content
return result
def read_tree_values_flat(self, path, depth=1, check_exit_code=True):
command = 'read_tree_values {} {}'.format(quote(path), depth)
output = self._execute_util(command, as_root=self.is_rooted,
@@ -699,8 +751,30 @@ class Target(object):
result = {k: '\n'.join(v).strip() for k, v in accumulator.items()}
return result
def read_tree_values(self, path, depth=1, dictcls=dict, check_exit_code=True):
value_map = self.read_tree_values_flat(path, depth, check_exit_code)
def read_tree_values(self, path, depth=1, dictcls=dict,
check_exit_code=True, tar=False, decode_unicode=True,
strip_null_chars=True):
"""
Reads the content of all files under a given tree
:path: path to the tree
:depth: maximum tree depth to read
:dictcls: type of the dict used to store the results
:check_exit_code: raise an exception if the shutil command fails
:tar: fetch the entire tree using tar rather than just the value (more
robust but slower in some use-cases)
:decode_unicode: decode the content of tar-ed files as utf-8
:strip_null_chars: remove '\x00' chars from the content of utf-8
decoded files
:returns: a tree-like dict with the content of files as leafs
"""
if not tar:
value_map = self.read_tree_values_flat(path, depth, check_exit_code)
else:
value_map = self.read_tree_tar_flat(path, depth, check_exit_code,
decode_unicode,
strip_null_chars)
return _build_path_tree(value_map, path, self.path.sep, dictcls)
# internal methods
@@ -1722,8 +1796,56 @@ class KernelVersion(object):
__repr__ = __str__
class KernelConfig(object):
class HexInt(long):
"""
Subclass of :class:`int` that uses hexadecimal formatting by default.
"""
def __new__(cls, val=0, base=16):
super_new = super(HexInt, cls).__new__
if isinstance(val, Number):
return super_new(cls, val)
else:
return super_new(cls, val, base=base)
def __str__(self):
return hex(self).strip('L')
class KernelConfigTristate(Enum):
YES = 'y'
NO = 'n'
MODULE = 'm'
def __bool__(self):
"""
Allow using this enum to represent bool Kconfig type, although it is
technically different from tristate.
"""
return self in (self.YES, self.MODULE)
def __nonzero__(self):
"""
For Python 2.x compatibility.
"""
return self.__bool__()
@classmethod
def from_str(cls, str_):
for state in cls:
if state.value == str_:
return state
raise ValueError('No kernel config tristate value matches "{}"'.format(str_))
class TypedKernelConfig(Mapping):
"""
Mapping-like typed version of :class:`KernelConfig`.
Values are either :class:`str`, :class:`int`,
:class:`KernelConfigTristate`, or :class:`HexInt`. ``hex`` Kconfig type is
mapped to :class:`HexInt` and ``bool`` to :class:`KernelConfigTristate`.
"""
not_set_regex = re.compile(r'# (\S+) is not set')
@staticmethod
@@ -1733,50 +1855,207 @@ class KernelConfig(object):
name = 'CONFIG_' + name
return name
def iteritems(self):
return iter(self._config.items())
def __init__(self, mapping=None):
mapping = mapping if mapping is not None else {}
self._config = {
# Ensure we use the canonical name of the config keys for internal
# representation
self.get_config_name(k): v
for k, v in dict(mapping).items()
}
def __init__(self, text):
self.text = text
self._config = {}
for line in text.split('\n'):
@classmethod
def from_str(cls, text):
"""
Build a :class:`TypedKernelConfig` out of the string content of a
Kconfig file.
"""
return cls(cls._parse_text(text))
@staticmethod
def _val_to_str(val):
"Convert back values to Kconfig-style string value"
# Special case to gracefully handle the output of get()
if val is None:
return None
elif isinstance(val, KernelConfigTristate):
return val.value
elif isinstance(val, basestring):
return '"{}"'.format(val.strip('"'))
else:
return str(val)
def __str__(self):
return '\n'.join(
'{}={}'.format(k, self._val_to_str(v))
for k, v in self.items()
)
@staticmethod
def _parse_val(k, v):
"""
Parse a value of types handled by Kconfig:
* string
* bool
* tristate
* hex
* int
Since bool cannot be distinguished from tristate, tristate is
always used. :meth:`KernelConfigTristate.__bool__` will allow using
it as a bool though, so it should not impact user code.
"""
if not v:
return None
# Handle "string" type
if v.startswith('"'):
# Strip enclosing "
return v[1:-1]
else:
try:
# Handles "bool" and "tristate" types
return KernelConfigTristate.from_str(v)
except ValueError:
pass
try:
# Handles "int" type
return int(v)
except ValueError:
pass
try:
# Handles "hex" type
return HexInt(v)
except ValueError:
pass
# If no type could be parsed
raise ValueError('Could not parse Kconfig key: {}={}'.format(
k, v
), k, v
)
@classmethod
def _parse_text(cls, text):
config = {}
for line in text.splitlines():
line = line.strip()
# skip empty lines
if not line:
continue
if line.startswith('#'):
match = self.not_set_regex.search(line)
match = cls.not_set_regex.search(line)
if match:
self._config[match.group(1)] = 'n'
elif '=' in line:
value = 'n'
name = match.group(1)
else:
continue
else:
name, value = line.split('=', 1)
self._config[name.strip()] = value.strip()
def get(self, name, strict=False):
name = cls.get_config_name(name.strip())
value = cls._parse_val(name, value.strip())
config[name] = value
return config
def __getitem__(self, name):
name = self.get_config_name(name)
res = self._config.get(name)
try:
return self._config[name]
except KeyError:
raise KernelConfigKeyError(
"{} is not exposed in kernel config".format(name),
name
)
if not res and strict:
raise IndexError("{} is not exposed in target's config")
def __iter__(self):
return iter(self._config)
return self._config.get(name)
def __len__(self):
return len(self._config)
def __contains__(self, name):
name = self.get_config_name(name)
return name in self._config
def like(self, name):
regex = re.compile(name, re.I)
result = {}
for k, v in self._config.items():
if regex.search(k):
result[k] = v
return result
return {
k: v for k, v in self.items()
if regex.search(k)
}
def is_enabled(self, name):
return self.get(name) == 'y'
return self.get(name) is KernelConfigTristate.YES
def is_module(self, name):
return self.get(name) == 'm'
return self.get(name) is KernelConfigTristate.MODULE
def is_not_set(self, name):
return self.get(name) == 'n'
return self.get(name) is KernelConfigTristate.NO
def has(self, name):
return self.get(name) in ['m', 'y']
return self.is_enabled(name) or self.is_module(name)
class KernelConfig(object):
"""
Backward compatibility shim on top of :class:`TypedKernelConfig`.
This class does not provide a Mapping API and only return string values.
"""
@staticmethod
def get_config_name(name):
return TypedKernelConfig.get_config_name(name)
def __init__(self, text):
# Expose typed_config as a non-private attribute, so that user code
# needing it can get it from any existing producer of KernelConfig.
self.typed_config = TypedKernelConfig.from_str(text)
# Expose the original text for backward compatibility
self.text = text
def __bool__(self):
return bool(self.typed_config)
not_set_regex = TypedKernelConfig.not_set_regex
def iteritems(self):
for k, v in self.typed_config.items():
yield (k, self.typed_config._val_to_str(v))
items = iteritems
def get(self, name, strict=False):
if strict:
val = self.typed_config[name]
else:
val = self.typed_config.get(name)
return self.typed_config._val_to_str(val)
def like(self, name):
return {
k: self.typed_config._val_to_str(v)
for k, v in self.typed_config.like(name).items()
}
def is_enabled(self, name):
return self.typed_config.is_enabled(name)
def is_module(self, name):
return self.typed_config.is_module(name)
def is_not_set(self, name):
return self.typed_config.is_not_set(name)
def has(self, name):
return self.typed_config.has(name)
class LocalLinuxTarget(LinuxTarget):

devlib/trace/dmesg.py (new file, 198 lines)

@@ -0,0 +1,198 @@
# Copyright 2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import division
import re
from itertools import takewhile
from datetime import timedelta
from devlib.trace import TraceCollector
class KernelLogEntry(object):
"""
Entry of the kernel ring buffer.
:param facility: facility the entry comes from
:type facility: str
:param level: log level
:type level: str
:param timestamp: Timestamp of the entry
:type timestamp: datetime.timedelta
:param msg: Content of the entry
:type msg: str
"""
_TIMESTAMP_MSG_REGEX = re.compile(r'\[(.*?)\] (.*)')
_RAW_LEVEL_REGEX = re.compile(r'<([0-9]+)>(.*)')
_PRETTY_LEVEL_REGEX = re.compile(r'\s*([a-z]+)\s*:([a-z]+)\s*:\s*(.*)')
def __init__(self, facility, level, timestamp, msg):
self.facility = facility
self.level = level
self.timestamp = timestamp
self.msg = msg
@classmethod
def from_str(cls, line):
"""
Parses a "dmesg --decode" output line, formatted as following:
kern :err : [3618282.310743] nouveau 0000:01:00.0: systemd-logind[988]: nv50cal_space: -16
Or the more basic output given by "dmesg -r":
<3>[3618282.310743] nouveau 0000:01:00.0: systemd-logind[988]: nv50cal_space: -16
"""
def parse_raw_level(line):
match = cls._RAW_LEVEL_REGEX.match(line)
if not match:
raise ValueError('dmesg entry format not recognized: {}'.format(line))
level, remainder = match.groups()
levels = DmesgCollector.LOG_LEVELS
# BusyBox dmesg can output numbers that need to wrap around
level = levels[int(level) % len(levels)]
return level, remainder
def parse_pretty_level(line):
match = cls._PRETTY_LEVEL_REGEX.match(line)
facility, level, remainder = match.groups()
return facility, level, remainder
def parse_timestamp_msg(line):
match = cls._TIMESTAMP_MSG_REGEX.match(line)
timestamp, msg = match.groups()
timestamp = timedelta(seconds=float(timestamp.strip()))
return timestamp, msg
line = line.strip()
# If we can parse the raw prio directly, that is a basic line
try:
level, remainder = parse_raw_level(line)
facility = None
except ValueError:
facility, level, remainder = parse_pretty_level(line)
timestamp, msg = parse_timestamp_msg(remainder)
return cls(
facility=facility,
level=level,
timestamp=timestamp,
msg=msg.strip(),
)
def __str__(self):
facility = self.facility + ': ' if self.facility else ''
return '{facility}{level}: [{timestamp}] {msg}'.format(
facility=facility,
level=self.level,
timestamp=self.timestamp.total_seconds(),
msg=self.msg,
)
class DmesgCollector(TraceCollector):
"""
Dmesg output collector.
:param level: Minimum log level to enable. All levels that are more
critical will be collected as well.
:type level: str
:param facility: Facility to record, see dmesg --help for the list.
:type facility: str
.. warning:: If BusyBox dmesg is used, facility and level will be ignored,
and the parsed entries will also lack that information.
"""
# taken from "dmesg --help"
# This list needs to be ordered by priority
LOG_LEVELS = [
"emerg", # system is unusable
"alert", # action must be taken immediately
"crit", # critical conditions
"err", # error conditions
"warn", # warning conditions
"notice", # normal but significant condition
"info", # informational
"debug", # debug-level messages
]
def __init__(self, target, level=LOG_LEVELS[-1], facility='kern'):
super(DmesgCollector, self).__init__(target)
if level not in self.LOG_LEVELS:
raise ValueError('level needs to be one of: {}'.format(
', '.join(self.LOG_LEVELS)
))
self.level = level
# Check if dmesg is the BusyBox one, or the one from util-linux in a
# recent version.
# Note: BusyBox dmesg does not support -h, but will still print the
# help with an exit code of 1
self.basic_dmesg = '--force-prefix' not in \
self.target.execute('dmesg -h', check_exit_code=False)
self.facility = facility
self.reset()
@property
def entries(self):
return self._parse_entries(self.dmesg_out)
@classmethod
def _parse_entries(cls, dmesg_out):
if not dmesg_out:
return []
else:
return [
KernelLogEntry.from_str(line)
for line in dmesg_out.splitlines()
]
def reset(self):
self.dmesg_out = None
def start(self):
self.reset()
# Empty the dmesg ring buffer
self.target.execute('dmesg -c', as_root=True)
def stop(self):
levels_list = list(takewhile(
lambda level: level != self.level,
self.LOG_LEVELS
))
levels_list.append(self.level)
if self.basic_dmesg:
cmd = 'dmesg -r'
else:
cmd = 'dmesg --facility={facility} --force-prefix --decode --level={levels}'.format(
levels=','.join(levels_list),
facility=self.facility,
)
self.dmesg_out = self.target.execute(cmd)
def get_trace(self, outfile):
with open(outfile, 'wt') as f:
f.write(self.dmesg_out + '\n')


@@ -104,7 +104,11 @@ class PerfCollector(TraceCollector):
self.target.kick_off(command)
def stop(self):
self.target.killall('perf', signal='SIGINT',
as_root=self.target.is_rooted)
# perf doesn't transmit the signal to its sleep call so handled here:
self.target.killall('sleep', as_root=self.target.is_rooted)
# NB: we hope that no other "important" sleep is on-going
# pylint: disable=arguments-differ
def get_trace(self, outdir):


@@ -28,7 +28,13 @@ import tempfile
import subprocess
from collections import defaultdict
import pexpect
from pipes import quote
import xml.etree.ElementTree
import zipfile
try:
from shlex import quote
except ImportError:
from pipes import quote
from devlib.exception import TargetTransientError, TargetStableError, HostError
from devlib.utils.misc import check_output, which, ABI_MAP
@@ -132,6 +138,7 @@ class ApkInfo(object):
version_regex = re.compile(r"name='(?P<name>[^']+)' versionCode='(?P<vcode>[^']+)' versionName='(?P<vname>[^']+)'")
name_regex = re.compile(r"name='(?P<name>[^']+)'")
permission_regex = re.compile(r"name='(?P<permission>[^']+)'")
activity_regex = re.compile(r'\s*A:\s*android:name\(0x\d+\)=".(?P<name>\w+)"')
def __init__(self, path=None):
self.path = path
@@ -147,15 +154,7 @@ class ApkInfo(object):
# pylint: disable=too-many-branches
def parse(self, apk_path):
_check_env()
command = [aapt, 'dump', 'badging', apk_path]
logger.debug(' '.join(command))
try:
output = subprocess.check_output(command, stderr=subprocess.STDOUT)
if sys.version_info[0] == 3:
output = output.decode(sys.stdout.encoding or 'utf-8', 'replace')
except subprocess.CalledProcessError as e:
raise HostError('Error parsing APK file {}. `aapt` says:\n{}'
.format(apk_path, e.output))
output = self._run([aapt, 'dump', 'badging', apk_path])
for line in output.split('\n'):
if line.startswith('application-label:'):
self.label = line.split(':')[1].strip().replace('\'', '')
@@ -188,6 +187,50 @@ class ApkInfo(object):
else:
pass # not interested
self._apk_path = apk_path
self._activities = None
self._methods = None
@property
def activities(self):
if self._activities is None:
cmd = [aapt, 'dump', 'xmltree', self._apk_path,
'AndroidManifest.xml']
matched_activities = self.activity_regex.finditer(self._run(cmd))
self._activities = [m.group('name') for m in matched_activities]
return self._activities
@property
def methods(self):
if self._methods is None:
with zipfile.ZipFile(self._apk_path, 'r') as z:
extracted = z.extract('classes.dex', tempfile.gettempdir())
dexdump = os.path.join(os.path.dirname(aapt), 'dexdump')
command = [dexdump, '-l', 'xml', extracted]
dump = self._run(command)
xml_tree = xml.etree.ElementTree.fromstring(dump)
package = next(i for i in xml_tree.iter('package')
if i.attrib['name'] == self.package)
self._methods = [(meth.attrib['name'], klass.attrib['name'])
for klass in package.iter('class')
for meth in klass.iter('method')]
return self._methods
def _run(self, command):
logger.debug(' '.join(command))
try:
output = subprocess.check_output(command, stderr=subprocess.STDOUT)
if sys.version_info[0] == 3:
output = output.decode(sys.stdout.encoding or 'utf-8', 'replace')
except subprocess.CalledProcessError as e:
raise HostError('Error while running "{}":\n{}'
.format(command, e.output))
return output
class AdbConnection(object):
@@ -268,7 +311,7 @@ class AdbConnection(object):
raise
def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False):
return adb_background_shell(self.device, command, stdout, stderr, as_root)
return adb_background_shell(self.device, command, stdout, stderr, as_root, adb_server=self.adb_server)
def close(self):
AdbConnection.active_connections[self.device] -= 1
@@ -382,23 +425,23 @@ def _ping(device):
def adb_shell(device, command, timeout=None, check_exit_code=False,
as_root=False, adb_server=None): # NOQA
_check_env()
if as_root:
command = 'echo {} | su'.format(quote(command))
device_part = []
if adb_server:
device_part = ['-H', adb_server]
device_part += ['-s', device] if device else []
parts = ['adb']
if adb_server is not None:
parts += ['-H', adb_server]
if device is not None:
parts += ['-s', device]
parts += ['shell',
command if not as_root else 'su -c {}'.format(quote(command))]
logger.debug(' '.join(quote(part) for part in parts))
# On older combinations of ADB/Android versions, the adb host command always
# exits with 0 if it was able to run the command on the target, even if the
# command failed (https://code.google.com/p/android/issues/detail?id=3254).
# Homogenise this behaviour by running the command then echoing the exit
# code.
adb_shell_command = '({}); echo \"\n$?\"'.format(command)
actual_command = ['adb'] + device_part + ['shell', adb_shell_command]
logger.debug('adb {} shell {}'.format(' '.join(device_part), command))
parts[-1] += ' ; echo "\n$?"'
try:
raw_output, _ = check_output(actual_command, timeout, shell=False, combined_output=True)
raw_output, _ = check_output(parts, timeout, shell=False, combined_output=True)
except subprocess.CalledProcessError as e:
raise TargetStableError(str(e))
@@ -439,12 +482,15 @@ def adb_shell(device, command, timeout=None, check_exit_code=False,
def adb_background_shell(device, command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
as_root=False):
as_root=False,
adb_server=None):
"""Runs the sepcified command in a subprocess, returning the the Popen object."""
_check_env()
if as_root:
command = 'echo {} | su'.format(quote(command))
device_string = ' -s {}'.format(device) if device else ''
device_string = ' -H {}'.format(adb_server) if adb_server else ''
device_string += ' -s {}'.format(device) if device else ''
full_command = 'adb{} shell {}'.format(device_string, quote(command))
logger.debug(full_command)
return subprocess.Popen(full_command, stdout=stdout, stderr=stderr, shell=True)
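A usage sketch for the background helper (the serial number and command are placeholders, and an adb binary on PATH plus an attached device are assumed):

    import subprocess

    proc = adb_background_shell('0123456789ABCDEF', 'logcat',
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                adb_server=None)
    # ... let the command stream in the background while doing other work ...
    proc.terminate()
    out, err = proc.communicate()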


@@ -49,12 +49,12 @@ class FrameCollector(threading.Thread):
self.refresh_period = None
self.drop_threshold = None
self.unresponsive_count = 0
self.last_ready_time = None
self.last_ready_time = 0
self.exc = None
self.header = None
def run(self):
logger.debug('Surface flinger frame data collection started.')
logger.debug('Frame data collection started.')
try:
self.stop_signal.clear()
fd, self.temp_file = tempfile.mkstemp()
@@ -71,7 +71,7 @@ class FrameCollector(threading.Thread):
except Exception as e: # pylint: disable=W0703
logger.warning('Exception on collector thread: {}({})'.format(e.__class__.__name__, e))
self.exc = WorkerThreadError(self.name, sys.exc_info())
logger.debug('Surface flinger frame data collection stopped.')
logger.debug('Frame data collection stopped.')
def stop(self):
self.stop_signal.set()
@@ -133,7 +133,7 @@ class SurfaceFlingerFrameCollector(FrameCollector):
def collect_frames(self, wfh):
for activity in self.list():
if activity == self.view:
wfh.write(self.get_latencies(activity))
wfh.write(self.get_latencies(activity).encode('utf-8'))
def clear(self):
self.target.execute('dumpsys SurfaceFlinger --latency-clear ')
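The added encode('utf-8') matters under Python 3 because the mkstemp-backed handle appears to be opened in binary mode, so it only accepts bytes. A standalone sketch of the same constraint (the latency line is illustrative):

    import os
    import tempfile

    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, 'wb') as wfh:
        latencies = '16954990 16962482 16970973\n'   # illustrative dumpsys-style line
        wfh.write(latencies.encode('utf-8'))         # writing the str directly would raise TypeError
    os.remove(path)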


@@ -41,7 +41,8 @@ from pexpect import EOF, TIMEOUT, spawn
# pylint: disable=redefined-builtin,wrong-import-position
from devlib.exception import (HostError, TargetStableError, TargetNotRespondingError,
TimeoutError, TargetTransientError)
from devlib.utils.misc import which, strip_bash_colors, check_output, sanitize_cmd_template
from devlib.utils.misc import (which, strip_bash_colors, check_output,
sanitize_cmd_template, memoized)
from devlib.utils.types import boolean
@@ -62,7 +63,7 @@ def ssh_get_shell(host, username, password=None, keyfile=None, port=None, timeou
raise ValueError('keyfile may not be used with a telnet connection.')
conn = TelnetPxssh(original_prompt=original_prompt)
else: # ssh
conn = pxssh.pxssh()
conn = pxssh.pxssh(echo=False)
try:
if keyfile:
@@ -253,7 +254,7 @@ class SshConnection(object):
# simulate impatiently hitting ^C until command prompt appears
logger.debug('Sending ^C')
for _ in range(self.max_cancel_attempts):
self.conn.sendline(chr(3))
self._sendline(chr(3))
if self.conn.prompt(0.1):
return True
return False
@@ -267,25 +268,21 @@ class SshConnection(object):
command = self.sudo_cmd.format(quote(command))
if log:
logger.debug(command)
self.conn.sendline(command)
self._sendline(command)
if self.password:
index = self.conn.expect_exact([self.password_prompt, TIMEOUT], timeout=0.5)
if index == 0:
self.conn.sendline(self.password)
self._sendline(self.password)
else: # not as_root
if log:
logger.debug(command)
self.conn.sendline(command)
self._sendline(command)
timed_out = self._wait_for_prompt(timeout)
# the regex removes line breaks potentially introduced when writing
# command to shell.
if sys.version_info[0] == 3:
output = process_backspaces(self.conn.before.decode(sys.stdout.encoding or 'utf-8', 'replace'))
else:
output = process_backspaces(self.conn.before)
output = re.sub(r'\r([^\n])', r'\1', output)
if '\r\n' in output: # strip the echoed command
output = output.split('\r\n', 1)[1]
if timed_out:
self.cancel_running_command()
raise TimeoutError(command, output)
@@ -321,6 +318,21 @@ class SshConnection(object):
except TimeoutError as e:
raise TimeoutError(command_redacted, e.output)
def _sendline(self, command):
# Workaround for https://github.com/pexpect/pexpect/issues/552
if len(command) == self._get_window_size()[1] - self._get_prompt_length():
command += ' '
self.conn.sendline(command)
@memoized
def _get_prompt_length(self):
self.conn.sendline()
self.conn.prompt()
return len(self.conn.after)
@memoized
def _get_window_size(self):
return self.conn.getwinsize()
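The padding condition can be illustrated on its own; the window width and prompt length below are made-up values standing in for the memoized lookups above:

    def pad_for_pexpect(command, window_cols=80, prompt_len=12):
        # Append a space only when the command would exactly fill the remaining
        # width of the terminal row (the case that trips pexpect issue #552).
        if len(command) == window_cols - prompt_len:
            command += ' '
        return command

    print(len(pad_for_pexpect('x' * 68)))   # 69 -- padded
    print(len(pad_for_pexpect('x' * 67)))   # 67 -- left untouched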
class TelnetConnection(SshConnection):


@@ -15,8 +15,23 @@
import os
import sys
from collections import namedtuple
from subprocess import Popen, PIPE
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev'])
version = VersionTuple(1, 1, 2, '')
def get_devlib_version():
version_string = '{}.{}.{}'.format(
version.major, version.minor, version.revision)
if version.dev:
version_string += '.{}'.format(version.dev)
return version_string
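With the tuple defined above, the helper yields the plain dotted form; a dev suffix, when present, is appended as an extra component:

    assert get_devlib_version() == '1.1.2'
    # e.g. VersionTuple(1, 1, 2, 'dev1') would format as '1.1.2.dev1'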
def get_commit():
p = Popen(['git', 'rev-parse', 'HEAD'], cwd=os.path.dirname(__file__),
stdout=PIPE, stderr=PIPE)


@@ -346,7 +346,7 @@ Target
some sysfs entries silently failing to set the written value without
returning an error code.
.. method:: Target.read_tree_values(path, depth=1, dictcls=dict):
.. method:: Target.read_tree_values(path, depth=1, dictcls=dict [, tar [, decode_unicode [, strip_null_char ]]]):
Read values of all sysfs (or similar) file nodes under ``path``, traversing
up to the maximum depth ``depth``.
@@ -358,9 +358,18 @@ Target
value is a dict-like object with a key for every entry under ``path``
mapping onto its value or further dict-like objects as appropriate.
Although the default behaviour should suit most users, issues can arise when
reading binary files or files with colons in their names, for example. In such
cases, the ``tar`` parameter can be set to force a full archive of the tree
using tar, which is more robust but can slow down the read significantly.
:param path: sysfs path to scan
:param depth: maximum depth to descend
:param dictcls: a dict-like type to be used for each level of the hierarchy.
:param tar: if set, read the files via a full tar archive rather than grep
:param decode_unicode: decode the content of tarred files as utf-8
:param strip_null_char: remove null characters from utf-8 decoded files
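For example, a subtree containing binary nodes might be read more robustly as
follows (the path is purely illustrative and assumes a connected target)::

    values = target.read_tree_values('/sys/devices/system/cpu/cpu0/topology',
                                     depth=2, tar=True,
                                     decode_unicode=True,
                                     strip_null_char=True)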
.. method:: Target.read_tree_values_flat(path, depth=1):


@@ -41,23 +41,13 @@ except OSError:
pass
with open(os.path.join(devlib_dir, '__init__.py')) as fh:
# Extract the version by parsing the text of the file,
# as it may not be loadable as a module yet.
for line in fh:
if '__version__' in line:
parts = line.split("'")
__version__ = parts[1]
break
else:
raise RuntimeError('Did not see __version__')
vh_path = os.path.join(devlib_dir, 'utils', 'version.py')
# can load this, as it does not have any devlib imports
version_helper = imp.load_source('version_helper', vh_path)
commit = version_helper.get_commit()
if commit:
__version__ = '{}+{}'.format(__version__, commit)
vh_path = os.path.join(devlib_dir, 'utils', 'version.py')
# can load this, as it does not have any devlib imports
version_helper = imp.load_source('version_helper', vh_path)
__version__ = version_helper.get_devlib_version()
commit = version_helper.get_commit()
if commit:
__version__ = '{}+{}'.format(__version__, commit)
packages = []
@@ -95,8 +85,10 @@ params = dict(
'wrapt', # Basic for construction of decorator functions
'future', # Python 2-3 compatibility
'enum34;python_version<"3.4"', # Enums for Python < 3.4
'pandas',
'numpy',
'numpy<=1.16.4; python_version<"3"',
'numpy; python_version>="3"',
'pandas<=0.24.2; python_version<"3"',
'pandas; python_version>="3"',
],
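The environment markers above are evaluated per interpreter at install time; a quick way to check one by hand (assuming the third-party 'packaging' library is available) is:

    from packaging.markers import Marker

    marker = Marker('python_version < "3"')
    print(marker.evaluate())   # False on any Python 3 interpreter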
extras_require={
'daq': ['daqpower'],
@@ -106,10 +98,11 @@ params = dict(
},
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
'Development Status :: 4 - Beta',
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: Apache Software License',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
],
)