mirror of https://github.com/ARM-software/devlib.git synced 2025-09-23 04:11:54 +01:00

137 Commits
v1.2 ... v1.3.1

Author SHA1 Message Date
Marc Bonnici
78938cf243 version: Bump revision number 2021-04-19 11:02:53 +01:00
Marc Bonnici
4941a7183a setup.py: Update project description 2021-04-19 10:58:59 +01:00
Marc Bonnici
3ab9d23a4a setup.py: Add long description to package
Use the readme as the long description; this will be
displayed on the PyPI page.
2021-04-19 10:58:59 +01:00
Marc Bonnici
5cf18a7b3c docs: Add note to hostid about PowerPC64 devices
Add caveat for the hostid on PowerPC64 devices due to the library
used when linking the included binary.
2021-04-19 10:46:46 +01:00
Benjamin Crawford
5bfeae08f4 bin: update aarch64 and x86_64 busybox binaries
The original BusyBox binaries were statically linked
against glibc, which caused segmentation faults to occur
on __nss_* calls. This switches the libc implementation
to uClibc in both cases.

busybox ver bump to 1.32.1 for arm64 and x86_64

update x86_64-linux-uclibc busybox

update aarch64-linux-uclibc busybox
2021-04-16 18:25:06 +01:00
Marc Bonnici
a87a1df0fb bin/busybox: Update ppc64le busybox
Update the ppc64le busybox implementation to v1.32.1
to match other architecture binaries.

Note: This binary is linked with musl, as glibc has been
reported to cause issues and uclibc does not appear
to be a supported build configuration.
2021-04-16 18:24:12 +01:00
Marc Bonnici
3bf763688e bin/busybox: Update 32 bit busybox binaries
Update the arm32 and add i386 v1.32.1 busybox binaries, statically linked
with uclibc rather than the previous version with glibc, which
caused segfaults on some devices.
2021-04-16 18:24:12 +01:00
Vincent Donnefort
a5cced85ce module/hotplug: Extend hotplug support with 'fail' and 'states' interfaces
The Linux HP path is composed of a series of states the CPU has to execute
to perform a complete hotplug or hotunplug. Devlib now exposes this
information: get_state() gives the current state of a CPU, and get_states()
gives the list of all the states that compose the HP path.

The Linux HP 'fail' interface makes it possible to simulate a failure during
the hotplug path. It helps to test the rollback mechanism. Devlib exposes
this interface with the method fail().
2021-04-12 13:38:42 +01:00
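The interfaces above map onto the kernel's CPU hotplug sysfs files. A minimal sketch, assuming the standard upstream paths (the per-CPU 'fail' file depends on kernel configuration, and the 'states' parser here is illustrative):

```python
# Sketch of the sysfs files behind get_state()/get_states()/fail().
# Paths follow the upstream kernel hotplug state machine; their
# availability depends on kernel config, so treat them as assumptions.
HOTPLUG_STATES_PATH = '/sys/devices/system/cpu/hotplug/states'
HOTPLUG_STATE_PATH = '/sys/devices/system/cpu/cpu{cpu}/hotplug/state'
HOTPLUG_FAIL_PATH = '/sys/devices/system/cpu/cpu{cpu}/hotplug/fail'

def parse_hotplug_states(text):
    """Parse the 'states' file, where each line is '<index>: <name>',
    into an {index: name} dict."""
    states = {}
    for line in text.splitlines():
        line = line.strip()
        if line:
            index, _, name = line.partition(':')
            states[int(index)] = name.strip()
    return states
```

Writing a state index into a CPU's 'fail' file then makes the next hotplug transition fail at that state, which is what fail() exercises.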
Javi Merino
9f55ae7603 connection: PopenTransferManager: Reset last_sample when we start a new transfer
last_sample is initialized in the PopenTransferManager constructor.
However, there is only one PopenTransferManager instance, which is
initialized during the construction of AdbConnection.  Afterwards, the
transfer manager is reused for every file transfer, without going
through __init__().  Therefore, after pushing/pulling a big file, the
next file transfer compares the current size to the last sample of
the previous file transfer.  This makes it believe that the transfer
is inactive.

Reinitialize last_sample every time we start a new transfer to avoid
this.
2021-04-09 18:44:55 +01:00
Marc Bonnici
e7bafd6e5b version: Bump dev version
Exposing additional target properties so bump the development
version accordingly.
2021-04-07 18:21:33 +01:00
Marc Bonnici
ca84124fae doc/target: Fix Typo 2021-04-07 18:21:33 +01:00
Marc Bonnici
1f41853341 docs/target: Add new Target attributes 2021-04-07 18:21:33 +01:00
Marc Bonnici
82a2f7d8b6 target: Expose hostname as target property 2021-04-07 18:21:33 +01:00
Marc Bonnici
2a633b783a target: Expose hostid as target property 2021-04-07 18:21:33 +01:00
douglas-raillard-arm
b6d1863e77 collector/dmesg: DmesgCollector: Avoid not rooted targets
Fail early if the target is not rooted, as root is required by
DmesgCollector.start() anyway.
2021-04-07 10:55:27 +01:00
Vincent Donnefort
bbc891341c module/cpufreq: Warn when cpuinfo doesn't reflect the cpufreq request
The cpufreq/scaling_* files reflect the policy configuration from a kernel
point of view. The actual frequency at which the CPU is running can be
found in cpuinfo_cur_freq. Warn when the requested frequency does not
match the actual one.

As the kernel does not provide any guarantee that the requested frequency
will actually be used (due to other parts of the system such as thermal or
the firmware), printing a warning instead of raising an error seems more
suitable here.
2021-04-07 10:55:08 +01:00
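The check can be sketched as follows (a hypothetical helper, not devlib's actual code): compare the requested scaling frequency with what cpuinfo_cur_freq reports and warn, rather than raise, on a mismatch:

```python
import logging

def check_frequency(requested_khz, actual_khz, tolerance_khz=0):
    """Warn (not raise) when the CPU is not running at the requested
    frequency; the kernel gives no guarantee the request is honoured."""
    if abs(requested_khz - actual_khz) <= tolerance_khz:
        return True
    logging.warning('Requested %d kHz but cpuinfo reports %d kHz',
                    requested_khz, actual_khz)
    return False
```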
Marc Bonnici
d14df074ee utils/android: Fix Typo 2021-03-31 11:03:59 +01:00
Marc Bonnici
81f9ee2c50 utils/android: Use busybox implementation of ps
On some older versions of ps the flags used to look up
the pids are not supported and result in no output.
Use the busybox implementation for consistency.
2021-03-31 11:03:59 +01:00
Marc Bonnici
09d0a0f500 docs: Fix Formatting / typos 2021-03-31 11:02:49 +01:00
Robert Freeman
fe2fe3ae04 collector/perf: run simpleperf report-sample in the target if requested
If simpleperf record is called with "-f N", we may want to run
"simpleperf report-sample" on the output to dump the periodic
records.  Let the PerfCollector() run report-sample in the target,
similar to how we run report.
2021-03-31 10:03:33 +01:00
Robert Freeman
4859e818fb collector/perf: Only kill sleep if running perf stat
The stop() command of the PerfCollector kills all sleep commands in
the target, saying "We hope that no other important sleep is
on-going".  This is only needed if we are running "perf stat",
simpleperf does not need this.  Background the
perf/simpleperf command and only kill all sleeps in that specific case.
2021-03-31 10:03:33 +01:00
douglas-raillard-arm
5d342044a2 host and ssh: Fix sudo invocation
Add -k to sudo invocation to avoid using cached credentials.

If cached credentials is used, sudo will not write a prompt again,
leading to the stderr fixup code to remove a char from stderr output.
2021-03-24 19:06:05 +00:00
douglas-raillard-arm
d953377ff3 doc: Extend doc of Target.background()
Add doc for parameters "force_locale" and "timeout".
2021-03-24 19:05:50 +00:00
douglas-raillard-arm
4f2d9fa66d target: Add Target.background(timeout=...) parameter
Use a Timer daemonic thread to cancel the command after the given
timeout.
2021-03-24 19:05:50 +00:00
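The mechanism can be sketched with the stdlib: a daemonic threading.Timer kills the background command once the timeout elapses (names here are illustrative, not devlib's API):

```python
import subprocess
import threading

def background_with_timeout(cmd, timeout):
    """Run cmd in the background; kill it after `timeout` seconds."""
    proc = subprocess.Popen(cmd)
    timer = threading.Timer(timeout, proc.kill)
    timer.daemon = True  # a pending timer must not keep the interpreter alive
    timer.start()
    return proc, timer
```

If the command finishes before the deadline, the caller disarms the timer with timer.cancel().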
douglas-raillard-arm
4e44863777 connection: Un-hardcode _KILL_TIMEOUT
Replace erroneous use of _KILL_TIMEOUT constant by the kill_timeout
parameter.
2021-03-23 15:40:28 +00:00
douglas-raillard-arm
6cabad14d0 connection: Make ConnectionBase.cancel() more robust
Check with poll() if the command is already finished first, to avoid
sending SIGKILL to unrelated processes due to PID recycling.

The race window still exists between the call to poll() and _cancel(),
but is reduced a great deal.
2021-03-23 15:40:28 +00:00
douglas-raillard-arm
31f7c1e8f9 connection: Remove trailing whitespace 2021-03-23 15:40:28 +00:00
douglas-raillard-arm
3bc98f855b target: Align Target.background() LC_ALL and PATH
Make Target.background() behave like Target.execute():

* Set LC_ALL if force_locale is provided (defaults to "C")
* Add the target bin folder to PATH
2021-03-23 15:40:28 +00:00
douglas-raillard-arm
d2b80ccaf9 modules/sched: Fix sched domain flags parsing
Recent kernels can have a space-separated list of textual flags rather
than a bitfield packed in an int.
2021-03-12 17:56:01 +00:00
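A parser coping with both representations can be sketched as below; the flag/bit mapping shown is a small illustrative subset, not the full kernel set:

```python
# Illustrative subset of scheduler domain flags (not the full set).
SD_FLAGS = {
    'SD_LOAD_BALANCE': 0x1,
    'SD_BALANCE_NEWIDLE': 0x2,
    'SD_BALANCE_EXEC': 0x4,
}

def parse_sd_flags(content):
    """Return a set of flag names from either a packed integer
    (older kernels) or a space-separated list of names (recent ones)."""
    content = content.strip()
    try:
        bitfield = int(content)
    except ValueError:
        # Recent kernels: textual flags such as 'SD_BALANCE_EXEC ...'
        return set(content.split())
    return {name for name, bit in SD_FLAGS.items() if bitfield & bit}
```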
douglas-raillard-arm
552040f390 devlib/collector/dmesg: handle CONFIG_SECURITY_DMESG_RESTRICT
Some kernels compiled with CONFIG_SECURITY_DMESG_RESTRICT can restrict
reading the dmesg buffer to root user as a security hardening measure.
Detect this case and use root accordingly.
2021-02-12 12:11:07 +00:00
Javi Merino
0d259be01b collector/perf: Run perf as root if the target is rooted
If you want to collect events by event id (e.g. in simpleperf, "rNNN"
with NNN a number), you must run it as root.  Otherwise, simpleperf
fails with a SIGABRT.

Run simpleperf as root by default if the target is rooted to avoid
this.
2021-02-02 11:30:09 +00:00
Marc Bonnici
792101819a utils/version: Prevent installation failure on systems without git
On systems that do not have git installed devlib will currently fail
to install with a FileNotFoundError exception. If git is not present then
we will not have a commit hash, so just ignore this error.
2021-01-12 17:53:27 +00:00
Javi Merino
3b8317d42e target: increase dump_logcat timeout
If WA is connected to a phone via a slow connection, dump_logcat() may
timeout when dumping logcat after the job has finished:

    2021-01-11 09:38:16,277 DEBUG       android:         adb -s X.Y.Z.X:5555 logcat -d -v threadtime > wa_output/wk1-wkld-1/logcat.log
    2021-01-11 09:38:46,317 DEBUG        signal:         Sending error-logged from <ErrorSignalHandler (DEBUG)>
    2021-01-11 09:38:46,318 DEBUG        signal:         Disconnecting <bound method Executor._error_signalled_callback of executor> from error-logged(<class 'louie.sender.Any'>)
    2021-01-11 09:38:46,317 ERROR        signal:         Timed out: adb -s X.Y.Z.X:5555 logcat -d -v threadtime > wa_output/wk1-wkld-1/logcat.log
    2021-01-11 09:38:46,317 ERROR        signal:         OUTPUT:
    2021-01-11 09:38:46,317 ERROR        signal:
    2021-01-11 09:38:46,317 ERROR        signal:

Increase the timeout to prevent this.
2021-01-11 10:11:42 +00:00
Marc Bonnici
e3da419e5b utils/android: Switch to using the lxml module
Using dexdump from versions 30.0.1-30.0.3 of the Android build tools
does not produce valid XML for certain APKs.
Use the lxml module for parsing XML as it is more robust and
better equipped to handle errors in input.
2021-01-08 17:22:12 +00:00
Marc Bonnici
e251b158b2 utils/android: Reorder imports 2021-01-08 17:22:12 +00:00
Marc Bonnici
c0a5765da5 utils/android: Fix aapt discovery with unexpected structure
If there is an additional file or directory in the `build_tools` directory
then WA can fail to find a working version of aapt(2), so ensure that at
least one of the binaries is a valid file path.
2021-01-08 17:22:12 +00:00
Marc Bonnici
b32f15bbdb utils/version: Bump to dev version 2020-12-11 16:42:57 +00:00
Marc Bonnici
5116d46141 utils/version: Bump release version 2020-12-11 16:31:00 +00:00
Marc Bonnici
beb3b011bd utils/apk_info: Handle apks that do not contain classes.dex
Some apks do not contain the file that we use to determine app methods
so return an empty list in this case.
2020-12-10 20:20:23 +00:00
douglas-raillard-arm
bf4e242129 host: Use "sh -c" for background() like execute()
Align LocalConnection.background() and LocalConnection.execute() by
using "sh -c" when running with sudo.
2020-11-25 10:17:57 +00:00
douglas-raillard-arm
b1538fd184 host: remove unneeded concatenation
'{}'.format(x) + y is equivalent to '{}{}'.format(x, y)
2020-11-25 10:17:57 +00:00
douglas-raillard-arm
5b37dfc50b host: Remove sudo prompt from stderr in execute()
Remove the leading space introduced on stderr by: sudo -S -p ' '
background() still gets the space, since we cannot easily apply
processing to its stderr.

Note: -p '' does not work on recent sudo, so we unfortunately cannot
just completely remove it for the time being.
2020-11-25 10:17:57 +00:00
douglas-raillard-arm
a948982700 host: Fix string literal
String literals are concatenated automatically in Python:

   assert 'a' 'b' == 'ab'

This means that adding ' ' in the middle of a literal delimited by '
will be a no-op. Fix that by changing the literal delimiter to ".
2020-11-25 10:17:57 +00:00
douglas-raillard-arm
d300b9e57f devlib.utils: Fix escape sequences
Fix invalid escape sequence, mostly in regex that were not r-strings.
2020-11-18 13:41:50 +00:00
Marc Bonnici
81db8200e2 utils/logcatmonitor: Ensure adb_server is specified
Ensure the adb_server is specified when monitoring the logcat.
2020-11-13 13:58:11 +00:00
Marc Bonnici
9e9af8c6de utils/version: Bump dev version
Bump the dev version to synchronise the additional exposed parameter.
2020-11-09 17:53:24 +00:00
Marc Bonnici
5473031ab7 utils/ssh: Split out the sudo_cmd template
Split out the `sudo_cmd` template to reduce duplication
for SSH based connections and for use from WA to ensure
the template stays in sync.
2020-11-09 17:53:24 +00:00
Javi Merino
a82db5ed37 instrument/daq: Use clock boottime for the time column of the energy measurements
92e16ee873 ("instrument/daq: Add an explicit time column to the DAQ
measurements") added a time column to the DAQ measurements in order to
help correlate them with those of other collectors like
FtraceCollector.  Sadly, FTrace uses CLOCK_BOOTTIME instead of
CLOCK_MONOTONIC by default.  CLOCK_MONOTONIC is like CLOCK_BOOTTIME,
except that it stops when the device suspends, which is why I hadn't
spotted the issue until now.

Switch to CLOCK_BOOTTIME to get the intended behaviour of the original
commit.
2020-11-09 17:27:21 +00:00
Valentin Schneider
1381944e5b utils/ssh, host: Remove sudo prompt from output
On a target where sudo is required, target.file_exists() erroneously
returns True despite the execute() output being:

  '[sudo] password for valsch01: 0\n'

The sudo prompt is being written to stderr (as per sudo -S), but this is
still merged into the final execute() output. Get rid of the prompt to
prevent it from interfering with any command output processor.

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
2020-11-09 17:26:02 +00:00
Marc Bonnici
822c50273f utils/check_subprocess_output: Fix std error output
Fix stderr output being dropped unless there is
also output on stdout.
2020-11-09 16:34:17 +00:00
Marc Bonnici
8f3200679c host: Concatenate stdout and stderr from command output
Previously any output from stderr was discarded for LocalTargets.
Align the behaviour of `execute` to append any stderr output
to stdout before returning.
2020-11-09 16:34:17 +00:00
Marc Bonnici
2cfb076e4c utils/check_output: Fix missing ignore parameter propagation 2020-11-04 18:12:56 +00:00
Marc Bonnici
98bc0a31e1 target/page_size_kb: Handle missing KernelPageSize
On some systems KernelPageSize is not exported, therefore
return 0 in this case.
2020-11-04 18:12:56 +00:00
Marc Bonnici
345a9ed199 fw/version: Development version bump
Bump the version due to additional parameters exposed for transfer
polling.
2020-11-03 10:02:16 +00:00
Jonathan Paynter
1fc9f6cc94 doc/target: Add polling to target push method
Update the documentation for ``target`` to mention transfer polling,
and redirect to more information in ``connection``.
2020-11-03 10:01:43 +00:00
Jonathan Paynter
4194b1dd5e utils/ssh: Add remote path formatter method 2020-11-03 10:01:43 +00:00
Jonathan Paynter
ef2d1a6fa4 doc/connection: Add transfer poll parameter info
Also update SSHConnection parameters to reflect the current state.
2020-11-03 10:01:43 +00:00
Jonathan Paynter
33397649b6 connection,targets: enable file transfer polling
For connections that allow for it (ADB and SSH using SFTP
and SCP) this change enables file transfers to be polled to check
if the transfer is still in progress after some period of time or
whether the transfer should be terminated.

If a timeout is specified in the call to ``pull`` or ``push`` then the
transfer will not be polled and will terminate ordinarily when either
the transfer completes or the timeout is reached. If a timeout is
not specified, then the transfer will be polled if ``poll_transfers`` is
set, otherwise the transfer will continue with no timeout at all.

SSH transfers supply a callback to the transfer, that is called
after every block of the source is transferred. If the callback has not
been triggered within one poll period, then the transfer is cancelled.

ADB transfers have the destination size polled every poll period, and
the size compared to the previous poll to check if the transfer has
stalled. When the destination is no longer growing in size, the poller
will attempt to kill the subprocess to end the transfer.

If the transfer is still active, but the total transfer time has
exceeded the ``total_timeout`` (default: 1 hour) the transfer will then
also be killed.

Transfer polling will only begin after the ``start_transfer_poll_delay``
time has elapsed.

Polling periods that are too small may incorrectly cancel transfers.
2020-11-03 10:01:43 +00:00
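The ADB-style stall detection above can be sketched as a small poller (an illustrative class, not devlib's PopenTransferManager); note the per-transfer reset of last_sample, the bug fixed in commit 9f55ae7603:

```python
class TransferPoller:
    """Poll a transfer's destination size; flag it as stalled when the
    size stops growing between two polls."""

    def __init__(self):
        self.last_sample = None

    def start_transfer(self):
        # Reset per transfer: a stale sample from a previous transfer
        # would make a fresh transfer look inactive.
        self.last_sample = None

    def is_stalled(self, current_size):
        stalled = (self.last_sample is not None
                   and current_size <= self.last_sample)
        self.last_sample = current_size
        return stalled
```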
Jonathan Paynter
ebf1c1a2e1 utils/ssh: Add paramiko based scp transfers
Using scp over paramiko allows scp transfers to be treated similarly to
sftp transfers, instead of requiring subprocesses, and provides
the ability to monitor an scp transfer using a callback as can be done
using sftp.
2020-11-03 10:01:43 +00:00
Jonathan Paynter
1d1ba7811d utils/misc: separate check_output functionality
The custom check_output function consisted of two main parts: fetching
the subprocess required for the command, and checking its output.

It is convenient to provide functions that implement these parts
distinctly, so that the output of any subprocess can be checked easily
and the creation of a typical Popen object wrapped inside
get_subprocess.
2020-11-03 10:01:43 +00:00
Jonathan Paynter
dc7faf46e4 connection: kill spawned child subprocesses:
Subprocesses that were spawned under the same pgid were not necessarily
being terminated when the parent was terminated, allowing them to
continue running. This change explicitly kills the process group
involved.
2020-11-03 10:01:43 +00:00
Marc Bonnici
0498017bf0 utils/apkinfo: Fix handling when no methods defined
Not all apks list their class methods so add handling of this
situation.
2020-09-18 18:12:09 +01:00
Jonathan Paynter
b2950686a7 devlib/target: Enable screen stay-on mode:
Adds the ability to set the android global setting
``stay_on_while_plugged_in``.

This setting has 4 main modes:
- 0: never stay on
- 1: stay on when plugged in to AC charger
- 2: stay on when plugged in to USB charger
- 4: stay on when wirelessly charged

These values can be OR-ed together to produce combinations.
2020-09-02 18:06:07 +01:00
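The OR-ing works as for any bitmask; the constant names below are illustrative, while the setting itself is the Android global stay_on_while_plugged_in:

```python
STAY_ON_NEVER = 0
STAY_ON_AC = 1
STAY_ON_USB = 2
STAY_ON_WIRELESS = 4

# Stay on whenever plugged in to either an AC or a USB charger.
mask = STAY_ON_AC | STAY_ON_USB

# Applied on the device with something like:
# target.execute('settings put global stay_on_while_plugged_in %d' % mask)
```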
Marc Bonnici
f2b5f85dab target/file_xfer: Fix incorrect method call 2020-07-24 16:30:32 +01:00
Marc Bonnici
c0f26e536a target.py: Fix incorrect parameter name 2020-07-24 16:30:32 +01:00
Marc Bonnici
1a02f77fdd target/pull: Use chmod from busybox
Not all implementations of chmod support the use of `--` so ensure
we use a known implementation from busybox.
2020-07-24 16:30:32 +01:00
Stephen Kyle
117686996b target: support threads in ps
Adds 'tid' attribute to PsEntry namedtuple. This is equal to the PID of
the process.

Adds a 'threads=True' parameter to target.ps(). When true, PsEntry tuples
will be returned for all threads on the target, not just processes. The
'tid' will be the distinct PID for each thread, rather than the owning
process's.
2020-07-21 14:11:55 +01:00
douglas-raillard-arm
8695344969 target/{host,ssh}: Align push/pull with cp/mv behaviour
When pushing or pulling a folder, replicate the mv/cp/scp/adb behaviour,
which is:
    * splitting the destination into (existing, new) components
    * if {new} component is empty, set it to the basename of the source.
    * mkdir {new} if necessary
    * merge the hierarchies of {src} and {existing}/{new}
2020-07-20 15:49:14 +01:00
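The splitting rule above can be sketched as follows (a hypothetical helper; `exists` stands in for a check performed on the target):

```python
import os

def split_dest(src, dst, exists):
    """Split `dst` into (existing, new), defaulting the new component
    to the source basename when `dst` already exists (cp-like)."""
    if exists(dst):
        # Copy *into* the existing destination under the source's name.
        return dst, os.path.basename(src.rstrip('/'))
    existing, new = os.path.split(dst.rstrip('/'))
    return existing, new
```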
douglas-raillard-arm
f23fbd22b6 target: Use Target._xfer_cache_file() context manager
Use the context manager to simplify the file transfer cache management.

Also introduce devlib.utils.misc.nullcontext() mirroring the behavior of
contextlib.nullcontext().
2020-07-20 15:49:14 +01:00
douglas-raillard-arm
24e6de67ae target: Add Target.{push,pull}(globbing=False) parameter
When globbing=True, the source is interpreted as a globbing pattern on
the target and is expanded before pulling the files or folders.

This also aligns the behaviour of all targets:
    * adb connection supported a limited form of globbing by default
      (only on the last component of the path)
    * SCP was supporting a limited form of globbing
    * GEM5 was not supporting globbing at all
    * paramiko was not supporting globbing at all

Also fix a race condition on push/pull as root, where pushing/pulling
the same file from multiple threads would have ended up using the same
temporary file.
2020-07-20 15:49:14 +01:00
douglas-raillard-arm
07bbf902ba docs/target: Update Target.{push,pull}() description
Document the fact that it accepts folders as source and destination in
addition to files.
2020-07-20 15:49:14 +01:00
douglas-raillard-arm
590069f01f target: Add Target.makedirs()
Create a directory on the target.
2020-07-20 15:49:14 +01:00
douglas-raillard-arm
bef1ec3afc target: Add option delimiter to rm command
Use a lone -- to make sure to not treat paths as options.
2020-07-20 15:49:14 +01:00
douglas-raillard-arm
0c72763d2a target/ssh: Fix improper use of os.path.basename()
os.path.basename() can give surprising results on folder names:

    os.path.basename('/foo/') == ''
    os.path.basename(os.path.normpath('/foo/')) == 'foo'
2020-07-20 15:49:14 +01:00
Marc Bonnici
2129d85422 utils/android: Use separate tmp dirs when extracting apk methods
Create a new temporary directory to use when extracting apk methods,
as when running multiple processes in parallel the extracted files could
be overwritten by each other.
2020-07-13 10:24:03 +01:00
Marc Bonnici
80bddf38a2 utils/android: Fix xmltree dump for aapt
Fix syntax error in dump command when using the aapt binary.
2020-07-09 15:06:12 +01:00
Marc Bonnici
00f3f5f690 android/background: Specify the device for background cmds
Ensure the device is passed when executing a background
command.
2020-07-06 17:24:48 +01:00
Marc Bonnici
bc9478c324 connection/send_signal: Use signal value instead of name
Some targets do not support killing via signal name so use the signal
number for greater compatibility.
2020-07-06 17:24:48 +01:00
Marc Bonnici
9a2c413372 target/reset: Ignore all TargetErrors when rebooting
Some targets can throw stable errors in addition to
transient errors when executing the `reboot` command.
We expect this command to not always complete cleanly,
so ignore all target errors.
2020-06-29 16:29:39 +01:00
Marc Bonnici
3cb2793e51 collector/serial_trace: Ensure log is opened in binary mode 2020-06-24 17:16:11 +01:00
Marc Bonnici
1ad2e895b3 collector/serial_trace: Fix typo 2020-06-24 17:16:11 +01:00
Marc Bonnici
3d5a164338 module/vexpress: Remove reference to android.
This method is also called when booting linux so remove specific
reference to Android.
2020-06-24 17:15:40 +01:00
Jonathan Paynter
af8c47151e utils/android: Fix inconsistent logfile read mode
As the exoplayer workload did not specify a pre-existing logfile, it is
created for it by default in LogcatMonitor. This default method opens
the logfile in 'byte' mode rather than the expected 'string' mode.

Regex operations that depend on the logfile for event triggering expect it to
be in 'string' mode, which was not the case.
2020-06-24 10:27:32 +01:00
Marc Bonnici
20d1eabaf0 module/cpuidle: Fix incorrect path check 2020-06-10 18:16:21 +01:00
Marc Bonnici
45ee68fdd4 utils/android: Add support for using aapt2
aapt is now deprecated in favour of aapt2, therefore prefer using the
newer binary if it is found on the system. If not present, fall back to
the old implementation.
Currently all invocations of aapt within devlib are compatible with
aapt2, however expose the `aapt_version` attribute to indicate which
version has been selected, to allow maintaining future compatibility.
2020-06-08 17:37:06 +01:00
Marc Bonnici
b52462440c utils/android: Update to discover android tools from PATH
Allow falling back to detecting the required android tools from PATH.
2020-06-08 17:37:06 +01:00
Marc Bonnici
bae741dc81 docs/overview: Fix python2 style print 2020-06-08 17:37:06 +01:00
douglas-raillard-arm
b717deb8e4 module/cpuidle: Simplify Cpuidle.__init__
Replace stateful loop with a nested comprehension that makes obvious:
    * that self._states is a dict(cpu, [CpuidleState])
    * the sysfs folder being used and the constraint applied to make use
      of each level (i.e. which subfolder is used)
    * that the states are sorted by index

As a side effect:
    * Gracefully handle non-contiguous idle state names like "state0,
      state2" without a state1 (not sure if that can happen)
    * Remove some antipatterns while iterating over a dict and counting
      iterations.
2020-06-05 17:21:44 +01:00
Marc Bonnici
ccde9de257 devlib/AndroidTarget: Update screen state methods to handle doze
Newer devices can have a "DOZE" or always on screen state.
Enable the screen state to handle these cases and report these
states as `ON`.
2020-06-05 17:12:35 +01:00
Marc Bonnici
c25852b210 utils/android: Allow instantiating an ApkInfo object without a path.
Do not assume that a path is provided upon creation of an ApkInfo
instance and only attempt to extract information if present.
2020-06-05 09:28:06 +01:00
Marc Bonnici
f7b7aaf527 utils/ssh: Do not attempt to push files recursively and add logging
No longer recursively attempt to push a file to the target. If the
second attempt goes wrong we assume there is something else wrong and
therefore let the error propagate.
Also log the original error in case it is not the error we were
expecting.
2020-06-05 09:27:37 +01:00
Javi Merino
569e4bd057 LogcatCollector: Learn to pass format to logcat
logcat -v lets you specify the format for the logcat output.  Add a
parameter to the LogcatCollector to allow us to pass that information
down to the logcat invocation.
2020-05-15 14:52:26 +01:00
Marc Bonnici
07cad78046 utils/version: dev version bump
Bump the dev version due to additional parameter exposed on SSHConnection.
2020-05-13 16:42:58 +01:00
Marc Bonnici
21cb10f550 utils/ssh: Add logging to sftp file transfer 2020-05-13 16:42:58 +01:00
Marc Bonnici
d2aea077b4 target/ChromeOsTarget: Update ssh parameter list 2020-05-13 16:42:58 +01:00
Marc Bonnici
d464053546 utils/ssh: Fix typo 2020-05-13 16:42:58 +01:00
Marc Bonnici
cfb28c47c0 utils/ssh: Allow SSH to use SCP as a file transfer method
Paramiko uses sftp for file transfer rather than scp as with the
previous implementation, however not all targets support this.
Expose a parameter to the SSHConnection to allow falling back to
the scp implementation.
2020-05-13 16:42:58 +01:00
Marc Bonnici
b941c6c5a6 utils/ssh: Move the scp transport method to the SSH base class
Move the implementation of the scp transport from the Telnet connection
to the base class to allow other types of connection to use the
functionality.
2020-05-13 16:42:58 +01:00
Marc Bonnici
ea9f9c878b docs/ssh: Add note about connecting to passwordless machines. 2020-05-13 16:42:58 +01:00
Marc Bonnici
4f10387688 utils/ssh: Only attempt loading ssh keys if no password is supplied
In the case of connecting to a system without a password the password
parameter needs to be set to an empty string which will currently still
cause ssh keys to be loaded. Only check for ssh keys when the password
is explicitly set to `None`.
2020-05-13 16:42:58 +01:00
Marc Bonnici
a4f9231707 collector/perf: Disable pager for perf event list.
Pipe the list of perf events via cat to ensure that a pager is not
used to display the output as this can cause some systems to hang
waiting for user input.
2020-05-12 10:25:46 +01:00
Marc Bonnici
3c85738f0d docs/target: Fix method name 2020-05-12 10:25:08 +01:00
Marc Bonnici
45881b9f0d utils/android: Expose connection_attempts argument to AdbConnection
Allow for configuring the number of connection attempts that will be
made to the device before failing to connect. This allows for waiting longer
periods of time for the device to come online.
2020-05-12 10:24:47 +01:00
Marc Bonnici
a8ff622f33 target: Propagate adb_server in all adb_commands
Add property to AndroidTarget to retrieve the adb server if using an
AdbConnection and ensure this is passed in remaining adb_commands.
2020-05-12 10:15:47 +01:00
Javi Merino
fcd2439b50 LogcatCollector: flush the log before terminating pexpect.spawn()
Unless we tell pexpect to expect something it will not read from the
process' buffer, or write anything to the logfile.  If we follow the
collector instructions from devlib's documentation:

  In [1]: from devlib import AndroidTarget, LogcatCollector

  In [2]: t = AndroidTarget()

  # Set up the collector on the Target.

  In [3]: collector = LogcatCollector(t)

  # Configure the output file path for the collector to use.
  In [4]: collector.set_output('adb_log.txt')

  # Reset the Collector to perform any required configuration or
  # preparation.
  In [5]: collector.reset()

  # Start Collecting
  In [6]: collector.start()

  # Wait for some output to be generated
  In [7]: sleep(10)

  # Stop Collecting
  In [8]: collector.stop()

  # Retrieved the collected data
  In [9]: output = collector.get_data()

adb_log.txt will be empty because between collector.start() and
collector.stop() there were no expect() calls to
LogcatMonitor._logcat.  As the get_log() function already has code to
flush the log, abstract it to a function and call it in stop() before
terminating the pexpect.spawn().
2020-05-11 13:09:34 +01:00
Javi Merino
3709e06b5c utils/android: LogcatMonitor: put pexpect.spawn() in str mode
By default, pexpect.spawn() is a bytes interface: its read methods
return bytes and its write/send and expect method expect bytes. Logcat
uses utf-8 strings, so put pexpect.spawn() in str mode.

This code can't have worked in python3 before, since the logcat file
is not opened in "b" mode.
2020-05-11 13:09:34 +01:00
Marc Bonnici
7c8573a416 README: Update to include installation notes for paramiko 2020-04-20 12:03:42 +01:00
Marc Bonnici
6f1ffee2b7 platform/arm: Decode IP address directly
Convert bytes to a string when acquired rather than in the calling
functions.
2020-04-16 09:45:06 +01:00
Marc Bonnici
7ade1b8bcc platform/arm: Don't specify "Android" in the debug print.
This function can be used to determine the IP address of other OSs,
e.g. linux.
2020-04-16 09:45:06 +01:00
Marc Bonnici
3c28c280de utils/check_output: Ensure output and error are always initialised.
Ensure that the `output` and `error` variables are always initialised
regardless of whether an error occurs during execution.
2020-03-30 16:21:46 +01:00
Marc Bonnici
b9d50ec164 utils/check_output: Only attempt to decode output if present.
If an error occurs while executing a command the `output` and `error`
variables may not get initialised; only attempt to decode their
contents if this is not the case.
2020-03-30 16:05:23 +01:00
Javi Merino
7780cfdd5c utils/android: Combine stdout and stderror by combining the strings in adb_shell()
check_output(combined_output=True) does not guarantee that stdout will
come before stderr, but the ordering is needed in case check_exit_code
is True, as we are expecting the exit code at the end of stdout.
Furthermore, the exceptions can't report what is stdout and what is
stderr as they are combined.

Partially revert 77a6de9453 ("utils/android: include stderr in adb_shell
output") and parse output and err independently. Return them combined
from adb_shell() to keep the functionality that 77a6de9453 was
implementing.
2020-03-27 17:25:28 +00:00
Javi Merino
7c79a040b7 utils/misc: Revert d4b0dedc2a
d4b0dedc2a ("utils/misc: add combined output option to
check_output") adds an option that combines stdout and stderr, but
their order is arbitrary (stdout may appear before or after
stderr). This leads to problems in adb_shell() when it tries to check
the error code. Now that adb_shell() doesn't use combined_output,
remove the option as there are no more users in devlib.

2020-03-27 17:25:28 +00:00
Marc Bonnici
779b0cbc77 utils/ssh: Only try SSH keys if no password is supplied.
By default paramiko attempts to search for SSH keys even when connecting
with a password. Ensure this is disabled to prevent issues where
non-valid keys are found on the host when connecting using password
authentication.
2020-03-25 18:09:15 +00:00
Marc Bonnici
b6cab6467d docs: Add LinuxTarget and LocalLinuxTarget to the documentation 2020-03-20 15:35:16 +00:00
Marc Bonnici
ec0a5884c0 docs: Update to use module directive
Update the documentation to indicate in which module each class is
located. This allows the documentation to be referenced from other
modules as well as enabling links to the source code directly from the
documentation.
2020-03-20 15:35:16 +00:00
Marc Bonnici
7f5e0f5b4d utils/version: Bump dev version
Bump the development version due to the change in SSH interface.
2020-03-06 17:34:22 +00:00
Douglas RAILLARD
7e682ed97d target: Check that the connection works cleanly upon connection
Check that executing the most basic command works without issues and without
producing stderr content. If that's not the case, raise a TargetStableError.
2020-03-06 17:33:04 +00:00
Douglas RAILLARD
62e24c5764 connections: Unify BackgroundCommand API and use paramiko for SSH
* Unify the behavior of background commands in connections.BackgroundCommand().
  This implements a subset of subprocess.Popen class, with a unified behavior
  across all connection types

* Implement the SSH connection using paramiko rather than pxssh.
2020-03-06 17:33:04 +00:00
Douglas RAILLARD
eb6fa93845 utils/misc: Add redirect_streams() helper
Update a command line to redirect standard streams as specified by the
parameters. This helper honors streams specified in the same way as
subprocess.Popen, using shell redirections as much as possible.
2020-03-06 17:33:04 +00:00
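The idea can be illustrated with a hypothetical simplification of such a helper (the real `redirect_streams()` signature and behaviour may differ):

```python
import subprocess

def redirect_streams(cmd, stdout, stderr):
    # Fold requested redirections into the command line itself so the
    # shell performs them, and return the (possibly simplified)
    # stdout/stderr values to pass on to subprocess.Popen.
    if stdout is subprocess.DEVNULL:
        cmd = '{} >/dev/null'.format(cmd)
        stdout = None
    if stderr is subprocess.DEVNULL:
        cmd = '{} 2>/dev/null'.format(cmd)
        stderr = None
    elif stderr is subprocess.STDOUT:
        cmd = '{} 2>&1'.format(cmd)   # must come after any stdout redirect
        stderr = None
    return stdout, stderr, cmd

out, err, cmd = redirect_streams('echo hi', subprocess.DEVNULL, subprocess.STDOUT)
```

Doing the redirection in the shell means it also applies to any subprocesses the command spawns, which a host-side pipe would not guarantee.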
Douglas RAILLARD
9d5d70564f target: Use tls_property() to manage a thread-local connection
This frees the connection from having to handle threading issues, since each
thread using the Target will have its own connection. The connection will be
garbage collected when the thread using it dies, avoiding connection leaks.
2020-03-06 17:33:04 +00:00
Douglas RAILLARD
922686a348 utils/misc: Add tls_property()
Similar to a regular property(), with the following differences:
* Values are memoized and are threadlocal

* The value returned by the property needs to be called (like a weakref) to get
  the actual value. This level of indirection is needed to allow methods to be
  implemented in the proxy object without clashing with the value's methods.

* If the above is too annoying, a "sub property" can be created with the regular
  property() behavior (and therefore without the additional methods) using
  tls_property.basic_property .
2020-03-06 17:33:04 +00:00
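A minimal sketch of such a thread-local, memoizing property follows. It assumes the semantics described above but omits the call-indirection proxy (values are returned directly), so it is not the real devlib helper:

```python
import threading

class tls_property:
    # Simplified thread-local memoizing property: the factory runs once
    # per (instance, thread) pair and the result is cached in a
    # threading.local stored on the instance.
    def __init__(self, factory):
        self.factory = factory

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        tls = obj.__dict__.setdefault('__tls_' + self.name,
                                      threading.local())
        try:
            return tls.value
        except AttributeError:
            tls.value = self.factory(obj)  # computed once per thread
            return tls.value

class Target:
    @tls_property
    def conn(self):
        return object()  # stand-in for a per-thread connection

t = Target()
same_thread = t.conn is t.conn          # memoized within one thread

result = []
worker = threading.Thread(target=lambda: result.append(t.conn))
worker.start()
worker.join()
cross_thread = result[0] is not t.conn  # each thread gets its own value
```

Because the value lives in a `threading.local`, it is dropped when its thread dies, which is what avoids the connection leaks mentioned above.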
Douglas RAILLARD
98e2e51d09 devlib.utils.misc: Use Popen.communicate(timeout=...) in check_output
Use the timeout parameter added in Python 3.3, which removes the need for the
timer thread and avoids some weird issues in preexec_fn, as it's now documented
to sometimes not work when threads are involved.
2020-03-06 17:33:04 +00:00
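The replacement pattern is the standard one from the `subprocess` documentation; no timer thread is involved:

```python
import subprocess

# Kick off a long-running child and bound how long we wait for it.
p = subprocess.Popen(['sleep', '5'], stdout=subprocess.PIPE)
try:
    out, _ = p.communicate(timeout=0.2)
except subprocess.TimeoutExpired:
    p.kill()                  # give up, then reap the child
    out, _ = p.communicate()
timed_out = p.returncode != 0
```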
Javi Merino
92e16ee873 instrument/daq: Add an explicit time column to the DAQ measurements
Add the monotonic clock time to the energy measurements to help
correlate the measurement with those of other collectors, like
FtraceCollector or LogcatCollector.
2020-03-02 14:48:46 +00:00
Javi Merino
72ded188fa instrument/daq: Convert reading rows from all files to a generator
Instead of calling _read_next_rows() before the while and at the end,
it's simpler to read the rows in a for loop and have _read_rows() be a
generator.
2020-03-02 14:48:46 +00:00
Javi Merino
dcab0b3718 instrument/daq: Check that self.tempdir has been set before calling os.path.isdir()
self.tempdir is initialized to None, and os.path.isdir() throws an
exception if you don't pass it a str.  This can happen if teardown()
is called before get_data(), which WA sometimes does.  Check that
self.tempdir has been initialized before calling os.path.isdir().
2020-03-02 14:48:46 +00:00
Vincent Donnefort
37a6b4f96d target: a valid sha1 must be concatenated with the kernel version
Some SoC vendors add several SHAs to the kernel version string. This is
problematic for the KernelVersion class, which might identify the wrong one.

Fixing this issue by matching the following "git describe" pattern:

  <version>.<major>.<minor>-<rc>-<commits>-g<sha1>

Where commits is the number of commits on top of the tag, which is now a
member of the class.

Prior to this patch:

>>> KernelVersion("4.14.111-00001-gd913f26_audio-00003-g3ab4335").sha1
3ab4335
>>> KernelVersion("4.14.111_audio-00003-g3ab4335").sha1
3ab4335

With the modified regex:

>>> KernelVersion("4.14.111-00001-gd913f26_audio-00003-g3ab4335").sha1
d913f26
>>> KernelVersion("4.14.111_audio-00003-g3ab4335").sha1
None
2020-02-28 13:10:31 +00:00
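A hedged reconstruction of the matching logic: only a sha1 preceded by a git-describe style `-<commits>-g` suffix directly after the version is accepted (the real KernelVersion regex differs in details):

```python
import re

# <version>.<major>.<minor>[-rc<rc>][-<commits>-g<sha1>]
pattern = re.compile(
    r'(?P<version>\d+)\.(?P<major>\d+)\.(?P<minor>\d+)'
    r'(?:-rc(?P<rc>\d+))?'
    r'(?:-(?P<commits>\d+)-g(?P<sha1>[0-9a-fA-F]{7,}))?'
)

def sha1(version_string):
    # Return the sha1 only when it follows the git-describe pattern;
    # vendor suffixes like "_audio-00003-g3ab4335" no longer match.
    m = pattern.match(version_string)
    return m.group('sha1') if m else None

first = sha1("4.14.111-00001-gd913f26_audio-00003-g3ab4335")
second = sha1("4.14.111_audio-00003-g3ab4335")
```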
Marc Bonnici
1ddbb75e74 utils/android: Fix parameters to adb_kill_server 2020-02-20 16:25:50 +00:00
Marc Bonnici
696dec9b91 utils/android: Ensure that adb_server is propagated to helper functions
Ensure that we use the correct `adb_server` in the adb helper functions.
2020-02-20 16:25:50 +00:00
Douglas RAILLARD
17374cf2b4 target: Update Target.modules from Target.install_modules()
Make sure the target.modules list stays up to date when a new module is
installed, since behaviors like devlib_cpu_frequency event injection rely on
content of target.modules.
2020-02-19 09:15:38 +00:00
Douglas RAILLARD
9661c6bff3 target: Handle non-existing /sys/devices/system/node
Some systems (32-bit ARM, it seems) don't have this file in sysfs. Assume one
node in that case.
2020-01-22 09:21:56 +00:00
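The fallback can be sketched as follows; `num_nodes` is an illustrative helper, not devlib API, and the sysfs layout is assumed:

```python
import os

def num_nodes(sysfs_node_dir='/sys/devices/system/node'):
    # Mirror the fallback described above: assume a single memory node
    # when sysfs does not expose the node directory at all.
    if not os.path.isdir(sysfs_node_dir):
        return 1
    # Count entries named node0, node1, ... and ignore e.g. 'power'.
    return sum(1 for entry in os.listdir(sysfs_node_dir)
               if entry.startswith('node') and entry[4:].isdigit())

nodes = num_nodes('/no/such/sysfs/path')  # falls back to 1
```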
Douglas RAILLARD
0aeb5bc409 target: Remove use of ls
Using "ls" in scripts is highly discouraged:
http://mywiki.wooledge.org/ParsingLs
2020-01-22 09:21:56 +00:00
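The portable alternative is to enumerate entries with `os.listdir()` or `glob` on the host side instead of parsing `ls` output, which breaks on filenames containing spaces or newlines. A small illustration:

```python
import os
import tempfile

# A filename that would confuse naive `ls` parsing.
d = tempfile.mkdtemp()
open(os.path.join(d, 'name with spaces'), 'w').close()

# os.listdir() yields one entry per file regardless of its name.
entries = sorted(os.listdir(d))
```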
Javi Merino
a5640502ac devlib/AndroidTarget: Allow passing format options to dump_logcat()
logcat has a modifier for its output format.  Add a logcat_format
parameter to dump_logcat() so that we can pass it on to logcat.
2020-01-21 09:37:07 +00:00
Douglas RAILLARD
6fe78b4d47 module/cpufreq: Sort list of frequencies
Ensure the order of frequencies is deterministic to have consistent output when
printing it or when using it to carry out some actions.
2020-01-15 11:36:55 +00:00
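Sorting must be numeric, not lexical: string sorting would put '1008000' before '600000'. A sketch with assumed example values:

```python
# Frequencies as they might be read from sysfs, in arbitrary order.
raw = {'600000', '1008000', '408000'}

# Convert to int before sorting so ordering is numeric and deterministic.
frequencies = sorted(int(f) for f in raw)
```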
Sergei Trofimov
5bda1c0eee host: add host attribute to LocalConnection
Add a host attribute (hard-coded to "localhost") to LocalConnection to
make it easier to seamlessly swap it out with ssh connection.
2020-01-09 16:59:13 +00:00
Ambroise Vincent
0465a75c56 devlib/trace/ftrace.py: Fix reset and stop states
A system with function_profile_enabled set to 1 prevents using
function_graph.

Using the nop tracer left the tracing files in a dirty state.
2020-01-07 14:14:38 +00:00
Marc Bonnici
795c0f233f Development version bump 2019-12-20 16:25:03 +00:00
39 changed files with 2906 additions and 757 deletions

View File

@@ -14,6 +14,16 @@ Installation
sudo -H pip install devlib
Dependencies
------------
``devlib`` should install all dependencies automatically; however, if you run
into issues please ensure you are using the latest version of pip.
On some systems additional steps may be required to install the dependency
``paramiko``; please consult the `module documentation <http://www.paramiko.org/installing.html>`_
for more information.
Usage
-----

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

BIN
devlib/bin/ppc64le/busybox Executable file → Normal file

Binary file not shown.

BIN
devlib/bin/x86/busybox Executable file

Binary file not shown.

Binary file not shown.

View File

@@ -20,6 +20,8 @@ from datetime import timedelta
from devlib.collector import (CollectorBase, CollectorOutput,
CollectorOutputEntry)
from devlib.target import KernelConfigTristate
from devlib.exception import TargetStableError
class KernelLogEntry(object):
@@ -152,6 +154,10 @@ class DmesgCollector(CollectorBase):
def __init__(self, target, level=LOG_LEVELS[-1], facility='kern'):
super(DmesgCollector, self).__init__(target)
if not target.is_rooted:
raise TargetStableError('Cannot collect dmesg on non-rooted target')
self.output_path = None
if level not in self.LOG_LEVELS:
@@ -167,6 +173,8 @@ class DmesgCollector(CollectorBase):
self.basic_dmesg = '--force-prefix' not in \
self.target.execute('dmesg -h', check_exit_code=False)
self.facility = facility
self.needs_root = bool(target.config.typed_config.get(
'CONFIG_SECURITY_DMESG_RESTRICT', KernelConfigTristate.NO))
self.reset()
@property
@@ -178,7 +186,7 @@ class DmesgCollector(CollectorBase):
def start(self):
self.reset()
# Empty the dmesg ring buffer. This requires root in all cases
self.target.execute('dmesg -c', as_root=True)
def stop(self):
@@ -195,7 +203,7 @@ class DmesgCollector(CollectorBase):
facility=self.facility,
)
self.dmesg_out = self.target.execute(cmd, as_root=self.needs_root)
def set_output(self, output_path):
self.output_path = output_path

View File

@@ -227,6 +227,8 @@ class FtraceCollector(CollectorBase):
self._set_buffer_size()
self.target.execute('{} reset'.format(self.target_binary),
as_root=True, timeout=TIMEOUT)
if self.functions:
self.target.write_value(self.function_profile_file, 0, verify=False)
self._reset_needed = False
def start(self):
@@ -290,7 +292,7 @@ class FtraceCollector(CollectorBase):
def stop(self):
# Disable kernel function profiling
if self.functions and self.tracer is None:
self.target.execute('echo 0 > {}'.format(self.function_profile_file),
as_root=True)
if 'cpufreq' in self.target.modules:
self.logger.debug('Trace CPUFreq frequencies')

View File

@@ -22,9 +22,10 @@ from devlib.utils.android import LogcatMonitor
class LogcatCollector(CollectorBase):
def __init__(self, target, regexps=None, logcat_format=None):
super(LogcatCollector, self).__init__(target)
self.regexps = regexps
self.logcat_format = logcat_format
self.output_path = None
self._collecting = False
self._prev_log = None
@@ -49,7 +50,7 @@ class LogcatCollector(CollectorBase):
"""
if self.output_path is None:
raise RuntimeError("Output path was not set.")
self._monitor = LogcatMonitor(self.target, self.regexps, logcat_format=self.logcat_format)
if self._prev_log:
# Append new data collection to previous collection
self._monitor.start(self._prev_log)

View File

@@ -24,8 +24,9 @@ from devlib.collector import (CollectorBase, CollectorOutput,
from devlib.utils.misc import ensure_file_directory_exists as _f
PERF_STAT_COMMAND_TEMPLATE = '{binary} {command} {options} {events} {sleep_cmd} > {outfile} 2>&1 '
PERF_REPORT_COMMAND_TEMPLATE = '{binary} report {options} -i {datafile} > {outfile} 2>&1 '
PERF_REPORT_SAMPLE_COMMAND_TEMPLATE = '{binary} report-sample {options} -i {datafile} > {outfile} '
PERF_RECORD_COMMAND_TEMPLATE = '{binary} record {options} {events} -o {outfile}'
PERF_DEFAULT_EVENTS = [
@@ -90,12 +91,16 @@ class PerfCollector(CollectorBase):
events=None,
optionstring=None,
report_options=None,
run_report_sample=False,
report_sample_options=None,
labels=None,
force_install=False):
super(PerfCollector, self).__init__(target)
self.force_install = force_install
self.labels = labels
self.report_options = report_options
self.run_report_sample = run_report_sample
self.report_sample_options = report_sample_options
self.output_path = None
# Validate parameters
@@ -138,14 +143,17 @@ class PerfCollector(CollectorBase):
self.target.remove(filepath)
filepath = self._get_target_file(label, 'rpt')
self.target.remove(filepath)
filepath = self._get_target_file(label, 'rptsamples')
self.target.remove(filepath)
def start(self):
for command in self.commands:
self.target.background(command, as_root=self.target.is_rooted)
def stop(self):
self.target.killall(self.perf_type, signal='SIGINT',
as_root=self.target.is_rooted)
if self.perf_type == "perf" and self.command == "stat":
# perf doesn't transmit the signal to its sleep call so handled here:
self.target.killall('sleep', as_root=self.target.is_rooted)
# NB: we hope that no other "important" sleep is on-going
@@ -164,6 +172,9 @@ class PerfCollector(CollectorBase):
self._wait_for_data_file_write(label, self.output_path)
path = self._pull_target_file_to_host(label, 'rpt', self.output_path)
output.append(CollectorOutputEntry(path, 'file'))
if self.run_report_sample:
report_samples_path = self._pull_target_file_to_host(label, 'rptsamples', self.output_path)
output.append(CollectorOutputEntry(report_samples_path, 'file'))
else:
path = self._pull_target_file_to_host(label, 'out', self.output_path)
output.append(CollectorOutputEntry(path, 'file'))
@@ -188,10 +199,12 @@ class PerfCollector(CollectorBase):
def _build_perf_stat_command(self, options, events, label):
event_string = ' '.join(['-e {}'.format(e) for e in events])
sleep_cmd = 'sleep 1000' if self.perf_type == 'perf' else ''
command = PERF_STAT_COMMAND_TEMPLATE.format(binary = self.binary,
command = self.command,
options = options or '',
events = event_string,
sleep_cmd = sleep_cmd,
outfile = self._get_target_file(label, 'out'))
return command
@@ -202,6 +215,13 @@ class PerfCollector(CollectorBase):
outfile=self._get_target_file(label, 'rpt'))
return command
def _build_perf_report_sample_command(self, label):
command = PERF_REPORT_SAMPLE_COMMAND_TEMPLATE.format(binary=self.binary,
options=self.report_sample_options or '',
datafile=self._get_target_file(label, 'data'),
outfile=self._get_target_file(label, 'rptsamples'))
return command
def _build_perf_record_command(self, options, label):
event_string = ' '.join(['-e {}'.format(e) for e in self.events])
command = PERF_RECORD_COMMAND_TEMPLATE.format(binary=self.binary,
@@ -234,9 +254,12 @@ class PerfCollector(CollectorBase):
data_file_finished_writing = True
report_command = self._build_perf_report_command(self.report_options, label)
self.target.execute(report_command)
if self.run_report_sample:
report_sample_command = self._build_perf_report_sample_command(label)
self.target.execute(report_sample_command)
def _validate_events(self, events):
available_events_string = self.target.execute('{} list | {} cat'.format(self.perf_type, self.target.busybox))
available_events = available_events_string.splitlines()
for available_event in available_events:
if available_event == '':

View File

@@ -33,7 +33,7 @@ class SerialTraceCollector(CollectorBase):
self.serial_port = serial_port
self.baudrate = baudrate
self.timeout = timeout
self.output_path = None
self._serial_target = None
self._conn = None
@@ -54,7 +54,7 @@ class SerialTraceCollector(CollectorBase):
if self.output_path is None:
raise RuntimeError("Output path was not set.")
self._outfile_fh = open(self.output_path, 'wb')
start_marker = "-------- Starting serial logging --------\n"
self._outfile_fh.write(start_marker.encode('utf-8'))

533
devlib/connection.py Normal file
View File

@@ -0,0 +1,533 @@
# Copyright 2019 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC, abstractmethod
from contextlib import contextmanager
from datetime import datetime
from functools import partial
from weakref import WeakSet
from shlex import quote
from time import monotonic
import os
import signal
import socket
import subprocess
import threading
import time
import logging
from devlib.utils.misc import InitCheckpoint
_KILL_TIMEOUT = 3
def _kill_pgid_cmd(pgid, sig):
return 'kill -{} -{}'.format(sig.value, pgid)
class ConnectionBase(InitCheckpoint):
"""
Base class for all connections.
"""
def __init__(self):
self._current_bg_cmds = WeakSet()
self._closed = False
self._close_lock = threading.Lock()
self.busybox = None
def cancel_running_command(self):
bg_cmds = set(self._current_bg_cmds)
for bg_cmd in bg_cmds:
bg_cmd.cancel()
@abstractmethod
def _close(self):
"""
Close the connection.
The public :meth:`close` method makes sure that :meth:`_close` will
only be called once, and will serialize accesses to it if it happens to
be called from multiple threads at once.
"""
def close(self):
# Locking the closing allows any thread to safely call close() as long
# as the connection can be closed from a thread that is not the one it
# started its life in.
with self._close_lock:
if not self._closed:
self._close()
self._closed = True
# Ideally, that should not be relied upon but that will improve the chances
# of the connection being properly cleaned up when it's not in use anymore.
def __del__(self):
# Since __del__ will be called if an exception is raised in __init__
# (e.g. we cannot connect), we only run close() when we are sure
# __init__ has completed successfully.
if self.initialized:
self.close()
class BackgroundCommand(ABC):
"""
Allows managing a running background command using a subset of the
:class:`subprocess.Popen` API.
Instances of this class can be used as context managers, with the same
semantic as :class:`subprocess.Popen`.
"""
@abstractmethod
def send_signal(self, sig):
"""
Send a POSIX signal to the background command's process group ID
(PGID).
:param signal: Signal to send.
:type signal: signal.Signals
"""
def kill(self):
"""
Send SIGKILL to the background command.
"""
self.send_signal(signal.SIGKILL)
def cancel(self, kill_timeout=_KILL_TIMEOUT):
"""
Try to gracefully terminate the process by sending ``SIGTERM``, then
waiting for ``kill_timeout`` to send ``SIGKILL``.
"""
if self.poll() is None:
self._cancel(kill_timeout=kill_timeout)
@abstractmethod
def _cancel(self, kill_timeout):
"""
Method to override in subclasses to implement :meth:`cancel`.
"""
pass
@abstractmethod
def wait(self):
"""
Block until the background command completes, and return its exit code.
"""
@abstractmethod
def poll(self):
"""
Return exit code if the command has exited, None otherwise.
"""
@property
@abstractmethod
def stdin(self):
"""
File-like object connected to the background command's stdin.
"""
@property
@abstractmethod
def stdout(self):
"""
File-like object connected to the background command's stdout.
"""
@property
@abstractmethod
def stderr(self):
"""
File-like object connected to the background command's stderr.
"""
@property
@abstractmethod
def pid(self):
"""
Process Group ID (PGID) of the background command.
Since the command is usually wrapped in shell processes for IO
redirections, sudo, etc., the PID cannot be assumed to be the actual PID
of the command passed by the user. It is guaranteed to be a PGID
instead, which means signals sent to it as such will target all
subprocesses involved in executing that command.
"""
@abstractmethod
def close(self):
"""
Close all opened streams and then wait for command completion.
:returns: Exit code of the command.
.. note:: If the command is writing to its stdout/stderr, it might be
blocked on that and die when the streams are closed.
"""
def __enter__(self):
return self
def __exit__(self, *args, **kwargs):
self.close()
class PopenBackgroundCommand(BackgroundCommand):
"""
:class:`subprocess.Popen`-based background command.
"""
def __init__(self, popen):
self.popen = popen
def send_signal(self, sig):
return os.killpg(self.popen.pid, sig)
@property
def stdin(self):
return self.popen.stdin
@property
def stdout(self):
return self.popen.stdout
@property
def stderr(self):
return self.popen.stderr
@property
def pid(self):
return self.popen.pid
def wait(self):
return self.popen.wait()
def poll(self):
return self.popen.poll()
def _cancel(self, kill_timeout):
popen = self.popen
os.killpg(os.getpgid(popen.pid), signal.SIGTERM)
try:
popen.wait(timeout=kill_timeout)
except subprocess.TimeoutExpired:
os.killpg(os.getpgid(popen.pid), signal.SIGKILL)
def close(self):
self.popen.__exit__(None, None, None)
return self.popen.returncode
def __enter__(self):
self.popen.__enter__()
return self
def __exit__(self, *args, **kwargs):
self.popen.__exit__(*args, **kwargs)
class ParamikoBackgroundCommand(BackgroundCommand):
"""
:mod:`paramiko`-based background command.
"""
def __init__(self, conn, chan, pid, as_root, stdin, stdout, stderr, redirect_thread):
self.chan = chan
self.as_root = as_root
self.conn = conn
self._pid = pid
self._stdin = stdin
self._stdout = stdout
self._stderr = stderr
self.redirect_thread = redirect_thread
def send_signal(self, sig):
# If the command has already completed, we don't want to send a signal
# to another process that might have gotten that PID in the meantime.
if self.poll() is not None:
return
# Use -PGID to target a process group rather than just the process
# itself
cmd = _kill_pgid_cmd(self.pid, sig)
self.conn.execute(cmd, as_root=self.as_root)
@property
def pid(self):
return self._pid
def wait(self):
return self.chan.recv_exit_status()
def poll(self):
if self.chan.exit_status_ready():
return self.wait()
else:
return None
def _cancel(self, kill_timeout):
self.send_signal(signal.SIGTERM)
# Check if the command terminated quickly
time.sleep(10e-3)
# Otherwise wait for the full timeout and kill it
if self.poll() is None:
time.sleep(kill_timeout)
self.send_signal(signal.SIGKILL)
self.wait()
@property
def stdin(self):
return self._stdin
@property
def stdout(self):
return self._stdout
@property
def stderr(self):
return self._stderr
def close(self):
for x in (self.stdin, self.stdout, self.stderr):
if x is not None:
x.close()
exit_code = self.wait()
thread = self.redirect_thread
if thread:
thread.join()
return exit_code
class AdbBackgroundCommand(BackgroundCommand):
"""
``adb``-based background command.
"""
def __init__(self, conn, adb_popen, pid, as_root):
self.conn = conn
self.as_root = as_root
self.adb_popen = adb_popen
self._pid = pid
def send_signal(self, sig):
self.conn.execute(
_kill_pgid_cmd(self.pid, sig),
as_root=self.as_root,
)
@property
def stdin(self):
return self.adb_popen.stdin
@property
def stdout(self):
return self.adb_popen.stdout
@property
def stderr(self):
return self.adb_popen.stderr
@property
def pid(self):
return self._pid
def wait(self):
return self.adb_popen.wait()
def poll(self):
return self.adb_popen.poll()
def _cancel(self, kill_timeout):
self.send_signal(signal.SIGTERM)
try:
self.adb_popen.wait(timeout=kill_timeout)
except subprocess.TimeoutExpired:
self.send_signal(signal.SIGKILL)
self.adb_popen.kill()
def close(self):
self.adb_popen.__exit__(None, None, None)
return self.adb_popen.returncode
def __enter__(self):
self.adb_popen.__enter__()
return self
def __exit__(self, *args, **kwargs):
self.adb_popen.__exit__(*args, **kwargs)
class TransferManagerBase(ABC):
def _pull_dest_size(self, dest):
if os.path.isdir(dest):
return sum(
os.stat(os.path.join(dirpath, f)).st_size
for dirpath, _, fnames in os.walk(dest)
for f in fnames
)
elif os.path.exists(dest):
return os.stat(dest).st_size
# The destination may not exist yet at the start of the transfer
return 0
def _push_dest_size(self, dest):
cmd = '{} du -s {}'.format(quote(self.conn.busybox), quote(dest))
out = self.conn.execute(cmd)
try:
return int(out.split()[0])
except ValueError:
return 0
def __init__(self, conn, poll_period, start_transfer_poll_delay, total_timeout):
self.conn = conn
self.poll_period = poll_period
self.total_timeout = total_timeout
self.start_transfer_poll_delay = start_transfer_poll_delay
self.logger = logging.getLogger('FileTransfer')
self.managing = threading.Event()
self.transfer_started = threading.Event()
self.transfer_completed = threading.Event()
self.transfer_aborted = threading.Event()
self.monitor_thread = None
self.sources = None
self.dest = None
self.direction = None
@abstractmethod
def _cancel(self):
pass
def cancel(self, reason=None):
msg = 'Cancelling file transfer {} -> {}'.format(self.sources, self.dest)
if reason is not None:
msg += ' due to \'{}\''.format(reason)
self.logger.warning(msg)
self.transfer_aborted.set()
self._cancel()
@abstractmethod
def isactive(self):
pass
@contextmanager
def manage(self, sources, dest, direction):
try:
self.sources, self.dest, self.direction = sources, dest, direction
m_thread = threading.Thread(target=self._monitor)
self.transfer_completed.clear()
self.transfer_aborted.clear()
self.transfer_started.set()
m_thread.start()
yield self
except BaseException:
self.cancel(reason='exception during transfer')
raise
finally:
self.transfer_completed.set()
self.transfer_started.set()
m_thread.join()
self.transfer_started.clear()
self.transfer_completed.clear()
self.transfer_aborted.clear()
def _monitor(self):
start_t = monotonic()
self.transfer_completed.wait(self.start_transfer_poll_delay)
while not self.transfer_completed.wait(self.poll_period):
if not self.isactive():
self.cancel(reason='transfer inactive')
elif monotonic() - start_t > self.total_timeout:
self.cancel(reason='transfer timed out')
class PopenTransferManager(TransferManagerBase):
def __init__(self, conn, poll_period=30, start_transfer_poll_delay=30, total_timeout=3600):
super().__init__(conn, poll_period, start_transfer_poll_delay, total_timeout)
self.transfer = None
self.last_sample = None
def _cancel(self):
if self.transfer:
self.transfer.cancel()
self.transfer = None
self.last_sample = None
def isactive(self):
size_fn = self._push_dest_size if self.direction == 'push' else self._pull_dest_size
curr_size = size_fn(self.dest)
self.logger.debug('Polled file transfer, destination size {}'.format(curr_size))
active = True if self.last_sample is None else curr_size > self.last_sample
self.last_sample = curr_size
return active
def set_transfer_and_wait(self, popen_bg_cmd):
self.transfer = popen_bg_cmd
self.last_sample = None
ret = self.transfer.wait()
if ret and not self.transfer_aborted.is_set():
raise subprocess.CalledProcessError(ret, self.transfer.popen.args)
elif self.transfer_aborted.is_set():
raise TimeoutError(self.transfer.popen.args)
class SSHTransferManager(TransferManagerBase):
def __init__(self, conn, poll_period=30, start_transfer_poll_delay=30, total_timeout=3600):
super().__init__(conn, poll_period, start_transfer_poll_delay, total_timeout)
self.transferer = None
self.progressed = False
self.transferred = None
self.to_transfer = None
def _cancel(self):
self.transferer.close()
def isactive(self):
progressed = self.progressed
self.progressed = False
msg = 'Polled transfer: {}% [{}B/{}B]'
pc = format((self.transferred / self.to_transfer) * 100, '.2f')
self.logger.debug(msg.format(pc, self.transferred, self.to_transfer))
return progressed
@contextmanager
def manage(self, sources, dest, direction, transferer):
with super().manage(sources, dest, direction):
try:
self.progressed = False
self.transferer = transferer # SFTPClient or SCPClient
yield self
except socket.error as e:
if self.transfer_aborted.is_set():
self.transfer_aborted.clear()
method = 'SCP' if self.conn.use_scp else 'SFTP'
raise TimeoutError('{} {}: {} -> {}'.format(method, self.direction, sources, self.dest))
else:
raise e
def progress_cb(self, *args):
if self.transfer_started.is_set():
self.progressed = True
if len(args) == 3: # For SCPClient callbacks
self.transferred = args[2]
self.to_transfer = args[1]
elif len(args) == 2: # For SFTPClient callbacks
self.transferred = args[0]
self.to_transfer = args[1]

View File

@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from glob import iglob
+import glob
 import os
 import signal
 import shutil
@@ -24,9 +24,12 @@ from pipes import quote
 from devlib.exception import TargetTransientError, TargetStableError
 from devlib.utils.misc import check_output
+from devlib.connection import ConnectionBase, PopenBackgroundCommand

 PACKAGE_BIN_DIRECTORY = os.path.join(os.path.dirname(__file__), 'bin')

 # pylint: disable=redefined-outer-name
 def kill_children(pid, signal=signal.SIGKILL):
     with open('/proc/{0}/task/{0}/children'.format(pid), 'r') as fd:
@@ -34,9 +37,11 @@ def kill_children(pid, signal=signal.SIGKILL):
         kill_children(cpid, signal)
         os.kill(cpid, signal)

-class LocalConnection(object):
+class LocalConnection(ConnectionBase):

     name = 'local'
+    host = 'localhost'

     @property
     def connected_as_root(self):
@@ -52,41 +57,55 @@ class LocalConnection(object):
     # pylint: disable=unused-argument
     def __init__(self, platform=None, keep_password=True, unrooted=False,
                  password=None, timeout=None):
+        super().__init__()
         self._connected_as_root = None
         self.logger = logging.getLogger('local_connection')
         self.keep_password = keep_password
         self.unrooted = unrooted
         self.password = password

-    def push(self, source, dest, timeout=None, as_root=False):  # pylint: disable=unused-argument
-        self.logger.debug('cp {} {}'.format(source, dest))
-        shutil.copy(source, dest)
-
-    def pull(self, source, dest, timeout=None, as_root=False):  # pylint: disable=unused-argument
-        self.logger.debug('cp {} {}'.format(source, dest))
-        if ('*' in source or '?' in source) and os.path.isdir(dest):
-            # Pull all files matching a wildcard expression
-            for each_source in iglob(source):
-                shutil.copy(each_source, dest)
-        else:
-            if os.path.isdir(source):
-                # Use distutils to allow copying into an existing directory structure.
-                copy_tree(source, dest)
-            else:
-                shutil.copy(source, dest)
+    def _copy_path(self, source, dest):
+        self.logger.debug('copying {} to {}'.format(source, dest))
+        if os.path.isdir(source):
+            # Behave similarly as cp, scp, adb push, etc. by creating a new
+            # folder instead of merging hierarchies
+            if os.path.exists(dest):
+                dest = os.path.join(dest, os.path.basename(os.path.normpath(source)))
+            # Use distutils copy_tree since it behaves the same as
+            # shutil.copytree except that it won't fail if some folders
+            # already exist.
+            #
+            # Mirror the behavior of all other targets which only copy the
+            # content without metadata
+            copy_tree(source, dest, preserve_mode=False, preserve_times=False)
+        else:
+            shutil.copy(source, dest)
+
+    def _copy_paths(self, sources, dest):
+        for source in sources:
+            self._copy_path(source, dest)
+
+    def push(self, sources, dest, timeout=None, as_root=False):  # pylint: disable=unused-argument
+        self._copy_paths(sources, dest)
+
+    def pull(self, sources, dest, timeout=None, as_root=False):  # pylint: disable=unused-argument
+        self._copy_paths(sources, dest)

     # pylint: disable=unused-argument
     def execute(self, command, timeout=None, check_exit_code=True,
                 as_root=False, strip_colors=True, will_succeed=False):
         self.logger.debug(command)
-        if as_root and not self.connected_as_root:
+        use_sudo = as_root and not self.connected_as_root
+        if use_sudo:
             if self.unrooted:
                 raise TargetStableError('unrooted')
             password = self._get_password()
-            command = 'echo {} | sudo -S -- sh -c '.format(quote(password)) + quote(command)
+            command = "echo {} | sudo -k -p ' ' -S -- sh -c {}".format(quote(password), quote(command))
         ignore = None if check_exit_code else 'all'
         try:
-            return check_output(command, shell=True, timeout=timeout, ignore=ignore)[0]
+            stdout, stderr = check_output(command, shell=True, timeout=timeout, ignore=ignore)
         except subprocess.CalledProcessError as e:
             message = 'Got exit code {}\nfrom: {}\nOUTPUT: {}'.format(
                 e.returncode, command, e.output)
@@ -95,15 +114,38 @@ class LocalConnection(object):
             else:
                 raise TargetStableError(message)

+        # Remove the one-character prompt of sudo -S -p
+        if use_sudo and stderr:
+            stderr = stderr[1:]
+        return stdout + stderr

     def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False):
         if as_root and not self.connected_as_root:
             if self.unrooted:
                 raise TargetStableError('unrooted')
             password = self._get_password()
-            command = 'echo {} | sudo -S '.format(quote(password)) + command
-        return subprocess.Popen(command, stdout=stdout, stderr=stderr, shell=True)
+            # The sudo prompt will add a space on stderr, but we cannot filter
+            # it out here
+            command = "echo {} | sudo -k -p ' ' -S -- sh -c {}".format(quote(password), quote(command))
+
+        # Make sure to get a new PGID so PopenBackgroundCommand() can kill
+        # all sub processes that could be started without troubles.
+        def preexec_fn():
+            os.setpgrp()
+
+        popen = subprocess.Popen(
+            command,
+            stdout=stdout,
+            stderr=stderr,
+            shell=True,
+            preexec_fn=preexec_fn,
+        )
+        bg_cmd = PopenBackgroundCommand(popen)
+        self._current_bg_cmds.add(bg_cmd)
+        return bg_cmd

-    def close(self):
+    def _close(self):
         pass

     def cancel_running_command(self):
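The new sudo invocation above is worth unpacking: `-k` drops any cached credentials so sudo always prints its prompt, `-p ' '` makes that prompt a single stripped-off character on stderr, `-S` reads the password from stdin, and both the password and the command go through `quote()` so the shell never interprets their contents. A minimal sketch of that construction, using `shlex.quote` (the modern home of `pipes.quote`) and a placeholder password:

```python
# Sketch of the wrapped sudo command line built in execute()/background()
# above. 'hunter2' and the cat command are placeholder values.
from shlex import quote  # pipes.quote in the original; same behaviour

def build_sudo_command(password, command):
    # -k: drop cached credentials so sudo always reads the password
    # -p ' ': one-character prompt that is easy to strip from stderr
    # -S: read the password from stdin
    # sh -c <command>: run the whole command line under a single shell
    return "echo {} | sudo -k -p ' ' -S -- sh -c {}".format(
        quote(password), quote(command))

cmd = build_sudo_command('hunter2', 'cat /sys/kernel/debug/sched_features')
```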


@@ -16,8 +16,10 @@
 import os
 import shutil
 import tempfile
-from itertools import chain
+import time
+from itertools import chain, zip_longest

+from devlib.host import PACKAGE_BIN_DIRECTORY
 from devlib.instrument import Instrument, MeasurementsCsv, CONTINUOUS
 from devlib.exception import HostError
 from devlib.utils.csvutil import csvwriter, create_reader
@@ -45,7 +47,8 @@ class DaqInstrument(Instrument):
                  dv_range=0.2,
                  sample_rate_hz=10000,
                  channel_map=(0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23),
-                 keep_raw=False
+                 keep_raw=False,
+                 time_as_clock_boottime=True
                  ):
         # pylint: disable=no-member
         super(DaqInstrument, self).__init__(target)
@@ -53,6 +56,7 @@ class DaqInstrument(Instrument):
         self._need_reset = True
         self._raw_files = []
         self.tempdir = None
+        self.target_boottime_clock_at_start = 0.0
         if DaqClient is None:
             raise HostError('Could not import "daqpower": {}'.format(import_error_mesg))
         if labels is None:
@@ -76,11 +80,30 @@ class DaqInstrument(Instrument):
                                    channel_map=channel_map,
                                    labels=labels)
         self.sample_rate_hz = sample_rate_hz
+        self.time_as_clock_boottime = time_as_clock_boottime

+        self.add_channel('Time', 'time')
         for label in labels:
             for kind in ['power', 'voltage']:
                 self.add_channel(label, kind)

+        if time_as_clock_boottime:
+            host_path = os.path.join(PACKAGE_BIN_DIRECTORY, self.target.abi,
+                                     'get_clock_boottime')
+            self.clock_boottime_cmd = self.target.install_if_needed(host_path,
+                                                                    search_system_binaries=False)
+
+    def calculate_boottime_offset(self):
+        time_before = time.time()
+        out = self.target.execute(self.clock_boottime_cmd)
+        time_after = time.time()
+
+        remote_clock_boottime = float(out)
+        propagation_delay = (time_after - time_before) / 2
+        boottime_at_end = remote_clock_boottime + propagation_delay
+
+        return time_after - boottime_at_end
+
     def reset(self, sites=None, kinds=None, channels=None):
         super(DaqInstrument, self).reset(sites, kinds, channels)
         self.daq_client.close()
@@ -90,9 +113,19 @@ class DaqInstrument(Instrument):

     def start(self):
         if self._need_reset:
-            self.reset()
+            # Preserve channel order
+            self.reset(channels=self.channels.keys())
+
+        if self.time_as_clock_boottime:
+            target_boottime_offset = self.calculate_boottime_offset()
+            time_start = time.time()
+
         self.daq_client.start()

+        if self.time_as_clock_boottime:
+            time_end = time.time()
+            self.target_boottime_clock_at_start = (time_start + time_end) / 2 - target_boottime_offset
+
     def stop(self):
         self.daq_client.stop()
         self._need_reset = True
@@ -118,32 +151,32 @@ class DaqInstrument(Instrument):
                         site_readers[site] = reader
                         file_handles.append(fh)
                     except KeyError:
-                        message = 'Could not get DAQ trace for {}; Obtained traces are in {}'
-                        raise HostError(message.format(site, self.tempdir))
+                        if not site.startswith("Time"):
+                            message = 'Could not get DAQ trace for {}; Obtained traces are in {}'
+                            raise HostError(message.format(site, self.tempdir))

             # The first row is the headers
-            channel_order = []
+            channel_order = ['Time_time']
             for site, reader in site_readers.items():
                 channel_order.extend(['{}_{}'.format(site, kind)
                                      for kind in next(reader)])

-            def _read_next_rows():
-                parts = []
-                for reader in site_readers.values():
-                    try:
-                        parts.extend(next(reader))
-                    except StopIteration:
-                        parts.extend([None, None])
-                return list(chain(parts))
+            def _read_rows():
+                row_iter = zip_longest(*site_readers.values(), fillvalue=(None, None))
+                for raw_row in row_iter:
+                    raw_row = list(chain.from_iterable(raw_row))
+                    raw_row.insert(0, _read_rows.row_time_s)
+                    yield raw_row
+                    _read_rows.row_time_s += 1.0 / self.sample_rate_hz
+
+            _read_rows.row_time_s = self.target_boottime_clock_at_start

             with csvwriter(outfile) as writer:
                 field_names = [c.label for c in self.active_channels]
                 writer.writerow(field_names)
-                raw_row = _read_next_rows()
-                while any(raw_row):
+                for raw_row in _read_rows():
                     row = [raw_row[channel_order.index(f)] for f in field_names]
                     writer.writerow(row)
-                    raw_row = _read_next_rows()

             return MeasurementsCsv(outfile, self.active_channels, self.sample_rate_hz)
         finally:
@@ -156,5 +189,5 @@ class DaqInstrument(Instrument):
     def teardown(self):
         self.daq_client.close()
         if not self.keep_raw:
-            if os.path.isdir(self.tempdir):
+            if self.tempdir and os.path.isdir(self.tempdir):
                 shutil.rmtree(self.tempdir)
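The offset estimation in `calculate_boottime_offset()` above uses the classic half-round-trip assumption: the remote `CLOCK_BOOTTIME` sample is taken, on average, halfway through the host-measured round trip, so the host-to-target clock offset is the host time at the end of the call minus the projected remote time at that same instant. A minimal self-contained sketch, with the remote call replaced by a stub returning a fabricated boottime value:

```python
# Sketch of the half-RTT clock-offset estimation used by
# calculate_boottime_offset() above. read_remote_boottime is a stand-in
# for executing get_clock_boottime on the target; 100.0 is a made-up
# CLOCK_BOOTTIME reading.
import time

def calculate_offset(read_remote_boottime):
    time_before = time.time()
    remote_clock_boottime = read_remote_boottime()
    time_after = time.time()
    # Assume the remote clock was sampled half-way through the round trip
    propagation_delay = (time_after - time_before) / 2
    boottime_at_end = remote_clock_boottime + propagation_delay
    return time_after - boottime_at_end

offset = calculate_offset(lambda: 100.0)
# host_time - offset now approximates the target's CLOCK_BOOTTIME
boottime_now = time.time() - offset
```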


@@ -212,7 +212,7 @@ class CpufreqModule(Module):
     @memoized
     def list_frequencies(self, cpu):
-        """Returns a list of frequencies supported by the cpu or an empty list
+        """Returns a sorted list of frequencies supported by the cpu or an empty list
         if none could be found."""
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
@@ -234,7 +234,7 @@ class CpufreqModule(Module):
                 raise
             available_frequencies = list(map(int, reversed([f for f, _ in zip(out_iter, out_iter)])))
-        return available_frequencies
+        return sorted(available_frequencies)

     @memoized
     def get_max_available_frequency(self, cpu):
@@ -301,7 +301,7 @@ class CpufreqModule(Module):
         except ValueError:
             raise ValueError('Frequency must be an integer; got: "{}"'.format(frequency))

-    def get_frequency(self, cpu):
+    def get_frequency(self, cpu, cpuinfo=False):
         """
         Returns the frequency currently set for the specified CPU.
@@ -309,12 +309,18 @@ class CpufreqModule(Module):
         try to read the current frequency and the following exception will be
         raised ::

+        :param cpuinfo: Read the value in the cpuinfo interface that reflects
+                        the actual running frequency.
+
         :raises: TargetStableError if for some reason the frequency could not be read.

         """
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
-        sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_cur_freq'.format(cpu)
+        sysfile = '/sys/devices/system/cpu/{}/cpufreq/{}'.format(
+            cpu,
+            'cpuinfo_cur_freq' if cpuinfo else 'scaling_cur_freq')
         return self.target.read_int(sysfile)

     def set_frequency(self, cpu, frequency, exact=True):
@@ -350,6 +356,10 @@ class CpufreqModule(Module):
                 raise TargetStableError('Can\'t set {} frequency; governor must be "userspace"'.format(cpu))
             sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_setspeed'.format(cpu)
             self.target.write_value(sysfile, value, verify=False)
+            cpuinfo = self.get_frequency(cpu, cpuinfo=True)
+            if cpuinfo != value:
+                self.logger.warning(
+                    'The cpufreq value has not been applied properly cpuinfo={} request={}'.format(cpuinfo, value))
         except ValueError:
             raise ValueError('Frequency must be an integer; got: "{}"'.format(frequency))
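The `zip(out_iter, out_iter)` expression in `list_frequencies` above is a pairwise-consumption idiom: zipping the *same* iterator with itself yields consecutive tokens as pairs, which lets the code pick every other token out of a flat stream. A minimal sketch with a fabricated token stream of alternating frequency/index values:

```python
# Sketch of the pairwise-consumption idiom used by list_frequencies()
# above. The token stream below is fabricated; the real one comes from
# parsing target sysfs output.
tokens = ['2208000', '0', '1804800', '1', '1401600', '2']

out_iter = iter(tokens)
# zip() pulls from the same iterator twice per tuple, pairing
# consecutive tokens: ('2208000', '0'), ('1804800', '1'), ...
available_frequencies = sorted(
    int(f) for f, _ in zip(out_iter, out_iter)
)
```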


@@ -15,6 +15,9 @@
 # pylint: disable=attribute-defined-outside-init
 from past.builtins import basestring

+from operator import attrgetter
+from pprint import pformat
+
 from devlib.module import Module
 from devlib.utils.types import integer, boolean
@@ -96,40 +99,35 @@ class Cpuidle(Module):
     def __init__(self, target):
         super(Cpuidle, self).__init__(target)
-        self._states = {}

         basepath = '/sys/devices/system/cpu/'
         values_tree = self.target.read_tree_values(basepath, depth=4, check_exit_code=False)
-        i = 0
-        cpu_id = 'cpu{}'.format(i)
-        while cpu_id in values_tree:
-            cpu_node = values_tree[cpu_id]
-            if 'cpuidle' in cpu_node:
-                idle_node = cpu_node['cpuidle']
-                self._states[cpu_id] = []
-                j = 0
-                state_id = 'state{}'.format(j)
-                while state_id in idle_node:
-                    state_node = idle_node[state_id]
-                    state = CpuidleState(
-                        self.target,
-                        index=j,
-                        path=self.target.path.join(basepath, cpu_id, 'cpuidle', state_id),
-                        name=state_node['name'],
-                        desc=state_node['desc'],
-                        power=int(state_node['power']),
-                        latency=int(state_node['latency']),
-                        residency=int(state_node['residency']) if 'residency' in state_node else None,
-                    )
-                    msg = 'Adding {} state {}: {} {}'
-                    self.logger.debug(msg.format(cpu_id, j, state.name, state.desc))
-                    self._states[cpu_id].append(state)
-                    j += 1
-                    state_id = 'state{}'.format(j)
-            i += 1
-            cpu_id = 'cpu{}'.format(i)
+        self._states = {
+            cpu_name: sorted(
+                (
+                    CpuidleState(
+                        self.target,
+                        # state_name is formatted as "state42"
+                        index=int(state_name[len('state'):]),
+                        path=self.target.path.join(basepath, cpu_name, 'cpuidle', state_name),
+                        name=state_node['name'],
+                        desc=state_node['desc'],
+                        power=int(state_node['power']),
+                        latency=int(state_node['latency']),
+                        residency=int(state_node['residency']) if 'residency' in state_node else None,
+                    )
+                    for state_name, state_node in cpu_node['cpuidle'].items()
+                    if state_name.startswith('state')
+                ),
+                key=attrgetter('index'),
+            )
+            for cpu_name, cpu_node in values_tree.items()
+            if cpu_name.startswith('cpu') and 'cpuidle' in cpu_node
+        }
+        self.logger.debug('Adding cpuidle states:\n{}'.format(pformat(self._states)))

     def get_states(self, cpu=0):
         if isinstance(cpu, int):
@@ -174,6 +172,6 @@ class Cpuidle(Module):
     def get_governor(self):
         path = self.target.path.join(self.root_path, 'current_governor_ro')
-        if not self.target.path.exists(path):
+        if not self.target.file_exists(path):
             path = self.target.path.join(self.root_path, 'current_governor')
         return self.target.read_value(path)


@@ -57,3 +57,23 @@ class HotplugModule(Module):
             return
         value = 1 if online else 0
         self.target.write_value(path, value)
+
+    def _get_path(self, path):
+        return self.target.path.join(self.base_path,
+                                     path)
+
+    def fail(self, cpu, state):
+        path = self._get_path('cpu{}/hotplug/fail'.format(cpu))
+        return self.target.write_value(path, state)
+
+    def get_state(self, cpu):
+        path = self._get_path('cpu{}/hotplug/state'.format(cpu))
+        return self.target.read_value(path)
+
+    def get_states(self):
+        path = self._get_path('hotplug/states')
+        states_string = self.target.read_value(path)
+        return dict(
+            map(str.strip, string.split(':', 1))
+            for string in states_string.strip().splitlines()
+        )
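The `get_states` addition above parses the kernel's `hotplug/states` sysfs file, where each line is an `<id>: <name>` pair describing one step of the CPU hotplug state machine; splitting on the first `:` only matters because state names can themselves contain colons. A sketch of that parsing against a fabricated snippet of the file:

```python
# Sketch of the parsing done by HotplugModule.get_states() above, fed
# with a fabricated /sys/devices/system/cpu/hotplug/states snippet.
states_string = """\
  0: offline
  1: threads:prepare
 37: smpboot/threads:online
"""

# split(':', 1) keeps colons inside state names intact;
# str.strip drops the column padding on both halves.
states = dict(
    map(str.strip, line.split(':', 1))
    for line in states_string.strip().splitlines()
)
```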


@@ -15,7 +15,6 @@
 import logging
 import re

-from enum import Enum
 from past.builtins import basestring
@@ -147,43 +146,44 @@ class SchedProcFSNode(object):
             self._dyn_attrs[key] = self._build_node(key, nodes[key])

-class DocInt(int):
-    # See https://stackoverflow.com/a/50473952/5096023
-    def __new__(cls, value, doc):
-        new = super(DocInt, cls).__new__(cls, value)
-        new.__doc__ = doc
-        return new
-
-class SchedDomainFlag(DocInt, Enum):
+class _SchedDomainFlag:
     """
-    Represents a sched domain flag
+    Backward-compatible emulation of the former :class:`enum.Enum` that will
+    work on recent kernels with dynamic sched domain flags name and no value
+    exposed.
     """
-    # pylint: disable=bad-whitespace
-    # Domain flags obtained from include/linux/sched/topology.h on v4.17
-    # https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux/+/v4.17/include/linux/sched/topology.h#20
-    SD_LOAD_BALANCE        = 0x0001, "Do load balancing on this domain"
-    SD_BALANCE_NEWIDLE     = 0x0002, "Balance when about to become idle"
-    SD_BALANCE_EXEC        = 0x0004, "Balance on exec"
-    SD_BALANCE_FORK        = 0x0008, "Balance on fork, clone"
-    SD_BALANCE_WAKE        = 0x0010, "Balance on wakeup"
-    SD_WAKE_AFFINE         = 0x0020, "Wake task to waking CPU"
-    SD_ASYM_CPUCAPACITY    = 0x0040, "Groups have different max cpu capacities"
-    SD_SHARE_CPUCAPACITY   = 0x0080, "Domain members share cpu capacity"
-    SD_SHARE_POWERDOMAIN   = 0x0100, "Domain members share power domain"
-    SD_SHARE_PKG_RESOURCES = 0x0200, "Domain members share cpu pkg resources"
-    SD_SERIALIZE           = 0x0400, "Only a single load balancing instance"
-    SD_ASYM_PACKING        = 0x0800, "Place busy groups earlier in the domain"
-    SD_PREFER_SIBLING      = 0x1000, "Prefer to place tasks in a sibling domain"
-    SD_OVERLAP             = 0x2000, "Sched_domains of this level overlap"
-    SD_NUMA                = 0x4000, "Cross-node balancing"
-    # Only defined in Android
-    # https://android.googlesource.com/kernel/common/+/android-4.14/include/linux/sched/topology.h#29
-    SD_SHARE_CAP_STATES    = 0x8000, "(Android only) Domain members share capacity state"

-    @classmethod
-    def check_version(cls, target, logger):
+    _INSTANCES = {}
+    """
+    Dictionary storing the instances so that they can be compared with ``is``
+    operator.
+    """
+
+    def __new__(cls, name, value, doc=None):
+        self = super().__new__(cls)
+        self.name = name
+        self._value = value
+        self.__doc__ = doc
+        return cls._INSTANCES.setdefault(self, self)
+
+    def __eq__(self, other):
+        # We *have to* check for "value" as well, otherwise it will be
+        # impossible to keep in the same set 2 instances with differing values.
+        return self.name == other.name and self._value == other._value
+
+    def __hash__(self):
+        return hash((self.name, self._value))
+
+    @property
+    def value(self):
+        value = self._value
+        if value is None:
+            raise AttributeError('The kernel does not expose the sched domain flag values')
+        else:
+            return value
+
+    @staticmethod
+    def check_version(target, logger):
         """
         Check the target and see if its kernel version matches our view of the world
         """
@@ -197,24 +197,111 @@ class SchedDomainFlag(DocInt, Enum):
                 "but target is running v{}".format(ref_parts, parts)
             )

     def __str__(self):
         return self.name

+    def __repr__(self):
+        return '<SchedDomainFlag: {}>'.format(self.name)
+
+
+class _SchedDomainFlagMeta(type):
+    """
+    Metaclass of :class:`SchedDomainFlag`.
+
+    Provides some level of emulation of :class:`enum.Enum` behavior for
+    backward compatibility.
+    """
+    @property
+    def _flags(self):
+        return [
+            attr
+            for name, attr in self.__dict__.items()
+            if name.startswith('SD_')
+        ]
+
+    def __getitem__(self, i):
+        return self._flags[i]
+
+    def __len__(self):
+        return len(self._flags)
+
+    # These would be provided by collections.abc.Sequence, but using it on a
+    # metaclass seems to have issues around __init_subclass__
+    def __iter__(self):
+        return iter(self._flags)
+
+    def __reversed__(self):
+        return reversed(self._flags)
+
+    def __contains__(self, x):
+        return x in self._flags
+
+    @property
+    def __members__(self):
+        return {flag.name: flag for flag in self._flags}
+
+
+class SchedDomainFlag(_SchedDomainFlag, metaclass=_SchedDomainFlagMeta):
+    """
+    Represents a sched domain flag.
+
+    .. note:: ``SD_*`` class attributes are deprecated, new code should never
+        test a given flag against one of these attributes with ``is`` (e.g.
+        ``x is SchedDomainFlag.SD_LOAD_BALANCE``). This is because the
+        ``SD_LOAD_BALANCE`` flag exists in two flavors that are not equal: one
+        with a value (the class attribute) and one without (dynamically created
+        when parsing flags for new kernels). Old code run on old kernels should
+        work fine though.
+    """
+    # pylint: disable=bad-whitespace
+    # Domain flags obtained from include/linux/sched/topology.h on v4.17
+    # https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux/+/v4.17/include/linux/sched/topology.h#20
+    SD_LOAD_BALANCE = _SchedDomainFlag("SD_LOAD_BALANCE", 0x0001, "Do load balancing on this domain")
+    SD_BALANCE_NEWIDLE = _SchedDomainFlag("SD_BALANCE_NEWIDLE", 0x0002, "Balance when about to become idle")
+    SD_BALANCE_EXEC = _SchedDomainFlag("SD_BALANCE_EXEC", 0x0004, "Balance on exec")
+    SD_BALANCE_FORK = _SchedDomainFlag("SD_BALANCE_FORK", 0x0008, "Balance on fork, clone")
+    SD_BALANCE_WAKE = _SchedDomainFlag("SD_BALANCE_WAKE", 0x0010, "Balance on wakeup")
+    SD_WAKE_AFFINE = _SchedDomainFlag("SD_WAKE_AFFINE", 0x0020, "Wake task to waking CPU")
+    SD_ASYM_CPUCAPACITY = _SchedDomainFlag("SD_ASYM_CPUCAPACITY", 0x0040, "Groups have different max cpu capacities")
+    SD_SHARE_CPUCAPACITY = _SchedDomainFlag("SD_SHARE_CPUCAPACITY", 0x0080, "Domain members share cpu capacity")
+    SD_SHARE_POWERDOMAIN = _SchedDomainFlag("SD_SHARE_POWERDOMAIN", 0x0100, "Domain members share power domain")
+    SD_SHARE_PKG_RESOURCES = _SchedDomainFlag("SD_SHARE_PKG_RESOURCES", 0x0200, "Domain members share cpu pkg resources")
+    SD_SERIALIZE = _SchedDomainFlag("SD_SERIALIZE", 0x0400, "Only a single load balancing instance")
+    SD_ASYM_PACKING = _SchedDomainFlag("SD_ASYM_PACKING", 0x0800, "Place busy groups earlier in the domain")
+    SD_PREFER_SIBLING = _SchedDomainFlag("SD_PREFER_SIBLING", 0x1000, "Prefer to place tasks in a sibling domain")
+    SD_OVERLAP = _SchedDomainFlag("SD_OVERLAP", 0x2000, "Sched_domains of this level overlap")
+    SD_NUMA = _SchedDomainFlag("SD_NUMA", 0x4000, "Cross-node balancing")
+    # Only defined in Android
+    # https://android.googlesource.com/kernel/common/+/android-4.14/include/linux/sched/topology.h#29
+    SD_SHARE_CAP_STATES = _SchedDomainFlag("SD_SHARE_CAP_STATES", 0x8000, "(Android only) Domain members share capacity state")
+

 class SchedDomain(SchedProcFSNode):
     """
     Represents a sched domain as seen through procfs
     """
     def __init__(self, nodes):
-        super(SchedDomain, self).__init__(nodes)
-        obj_flags = set()
-        for flag in list(SchedDomainFlag):
-            if self.flags & flag.value == flag.value:
-                obj_flags.add(flag)
-        self.flags = obj_flags
+        super().__init__(nodes)
+        flags = self.flags
+        # Recent kernels now have a space-separated list of flags instead of a
+        # packed bitfield
+        if isinstance(flags, str):
+            flags = {
+                _SchedDomainFlag(name=name, value=None)
+                for name in flags.split()
+            }
+        else:
+            def has_flag(flags, flag):
+                return flags & flag.value == flag.value
+
+            flags = {
+                flag
+                for flag in SchedDomainFlag
+                if has_flag(flags, flag)
+            }
+        self.flags = flags

 class SchedProcFSData(SchedProcFSNode):
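The `SchedDomain.__init__` change above handles the two formats procfs has used for sched domain flags: older kernels expose a packed integer bitfield, while recent kernels expose a space-separated list of flag names with no values at all. A minimal sketch of that dual-format parsing, using a plain dict of names to the v4.17 values in place of the flag class (only a subset of flags shown):

```python
# Sketch of the two flag formats handled by SchedDomain.__init__ above.
# The value table is a subset of the v4.17 constants quoted in the diff;
# using plain names instead of _SchedDomainFlag instances is a
# simplification for illustration.
SD_FLAG_VALUES = {
    'SD_LOAD_BALANCE': 0x0001,
    'SD_BALANCE_NEWIDLE': 0x0002,
    'SD_WAKE_AFFINE': 0x0020,
}

def parse_flags(flags):
    if isinstance(flags, str):
        # Recent kernels: space-separated names, no values exposed
        return set(flags.split())
    # Older kernels: packed bitfield
    return {
        name for name, value in SD_FLAG_VALUES.items()
        if flags & value == value
    }

old_style = parse_flags(0x0023)  # bitfield from an older kernel
new_style = parse_flags('SD_LOAD_BALANCE SD_BALANCE_NEWIDLE SD_WAKE_AFFINE')
```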


@@ -130,7 +130,7 @@ class VexpressBootModule(BootModule):
                              init_dtr=0) as tty:
             self.get_through_early_boot(tty)
             self.perform_boot_sequence(tty)
-            self.wait_for_android_prompt(tty)
+            self.wait_for_shell_prompt(tty)

     def perform_boot_sequence(self, tty):
         raise NotImplementedError()
@@ -159,8 +159,8 @@ class VexpressBootModule(BootModule):
             menu.wait(timeout=self.timeout)
         return menu

-    def wait_for_android_prompt(self, tty):
-        self.logger.debug('Waiting for the Android prompt.')
+    def wait_for_shell_prompt(self, tty):
+        self.logger.debug('Waiting for the shell prompt.')
         tty.expect(self.target.shell_prompt, timeout=self.timeout)
         # This delay is needed to allow the platform some time to finish
         # initializing; querying the ip address too early from connect() may


@@ -90,9 +90,6 @@ class VersatileExpressPlatform(Platform):
     def _init_android_target(self, target):
         if target.connection_settings.get('device') is None:
             addr = self._get_target_ip_address(target)
-            if sys.version_info[0] == 3:
-                # Convert bytes to string for Python3 compatibility
-                addr = addr.decode("utf-8")
             target.connection_settings['device'] = addr + ':5555'

     def _init_linux_target(self, target):
@@ -108,7 +105,7 @@ class VersatileExpressPlatform(Platform):
                              init_dtr=0) as tty:
             tty.sendline('su')  # this is, apparently, required to query network device
                                 # info by name on recent Juno builds...
-            self.logger.debug('Waiting for the Android shell prompt.')
+            self.logger.debug('Waiting for the shell prompt.')
             tty.expect(target.shell_prompt)

             self.logger.debug('Waiting for IP address...')
@@ -119,7 +116,7 @@ class VersatileExpressPlatform(Platform):
                     time.sleep(1)
                 try:
                     tty.expect(r'inet ([1-9]\d*.\d+.\d+.\d+)', timeout=10)
-                    return tty.match.group(1)
+                    return tty.match.group(1).decode('utf-8')
                 except pexpect.TIMEOUT:
                     pass  # We have our own timeout -- see below.
                 if (time.time() - wait_start_time) > self.ready_timeout:


@@ -15,7 +15,9 @@
import io import io
import base64 import base64
import functools
import gzip import gzip
import glob
import os import os
import re import re
import time import time
@@ -26,6 +28,7 @@ import sys
import tarfile import tarfile
import tempfile import tempfile
import threading import threading
import uuid
import xml.dom.minidom import xml.dom.minidom
import copy import copy
from collections import namedtuple, defaultdict from collections import namedtuple, defaultdict
@@ -47,25 +50,25 @@ from devlib.platform import Platform
from devlib.exception import (DevlibTransientError, TargetStableError, from devlib.exception import (DevlibTransientError, TargetStableError,
TargetNotRespondingError, TimeoutError, TargetNotRespondingError, TimeoutError,
TargetTransientError, KernelConfigKeyError, TargetTransientError, KernelConfigKeyError,
TargetError) # pylint: disable=redefined-builtin TargetError, HostError) # pylint: disable=redefined-builtin
from devlib.utils.ssh import SshConnection from devlib.utils.ssh import SshConnection
from devlib.utils.android import AdbConnection, AndroidProperties, LogcatMonitor, adb_command, adb_disconnect, INTENT_FLAGS from devlib.utils.android import AdbConnection, AndroidProperties, LogcatMonitor, adb_command, adb_disconnect, INTENT_FLAGS
from devlib.utils.misc import memoized, isiterable, convert_new_lines from devlib.utils.misc import memoized, isiterable, convert_new_lines
from devlib.utils.misc import commonprefix, merge_lists from devlib.utils.misc import commonprefix, merge_lists
from devlib.utils.misc import ABI_MAP, get_cpu_name, ranges_to_list from devlib.utils.misc import ABI_MAP, get_cpu_name, ranges_to_list
from devlib.utils.misc import batch_contextmanager from devlib.utils.misc import batch_contextmanager, tls_property, nullcontext
from devlib.utils.types import integer, boolean, bitmask, identifier, caseless_string, bytes_regex from devlib.utils.types import integer, boolean, bitmask, identifier, caseless_string, bytes_regex
FSTAB_ENTRY_REGEX = re.compile(r'(\S+) on (.+) type (\S+) \((\S+)\)') FSTAB_ENTRY_REGEX = re.compile(r'(\S+) on (.+) type (\S+) \((\S+)\)')
ANDROID_SCREEN_STATE_REGEX = re.compile('(?:mPowerState|mScreenOn|Display Power: state)=([0-9]+|true|false|ON|OFF)', ANDROID_SCREEN_STATE_REGEX = re.compile('(?:mPowerState|mScreenOn|Display Power: state)=([0-9]+|true|false|ON|OFF|DOZE)',
re.IGNORECASE) re.IGNORECASE)
ANDROID_SCREEN_RESOLUTION_REGEX = re.compile(r'cur=(?P<width>\d+)x(?P<height>\d+)') ANDROID_SCREEN_RESOLUTION_REGEX = re.compile(r'cur=(?P<width>\d+)x(?P<height>\d+)')
ANDROID_SCREEN_ROTATION_REGEX = re.compile(r'orientation=(?P<rotation>[0-3])') ANDROID_SCREEN_ROTATION_REGEX = re.compile(r'orientation=(?P<rotation>[0-3])')
DEFAULT_SHELL_PROMPT = re.compile(r'^.*(shell|root|juno)@?.*:[/~]\S* *[#$] ', DEFAULT_SHELL_PROMPT = re.compile(r'^.*(shell|root|juno)@?.*:[/~]\S* *[#$] ',
re.MULTILINE) re.MULTILINE)
KVERSION_REGEX = re.compile( KVERSION_REGEX = re.compile(
r'(?P<version>\d+)(\.(?P<major>\d+)(\.(?P<minor>\d+)(-rc(?P<rc>\d+))?)?)?(.*-g(?P<sha1>[0-9a-fA-F]{7,}))?' r'(?P<version>\d+)(\.(?P<major>\d+)(\.(?P<minor>\d+)(-rc(?P<rc>\d+))?)?)?(-(?P<commits>\d+)-g(?P<sha1>[0-9a-fA-F]{7,}))?'
) )
GOOGLE_DNS_SERVER_ADDRESS = '8.8.8.8' GOOGLE_DNS_SERVER_ADDRESS = '8.8.8.8'
@@ -132,6 +135,14 @@ class Target(object):
     def kernel_version(self):
         return KernelVersion(self.execute('{} uname -r -v'.format(quote(self.busybox))).strip())

+    @property
+    def hostid(self):
+        return int(self.execute('{} hostid'.format(self.busybox)).strip(), 16)
+
+    @property
+    def hostname(self):
+        return self.execute('{} hostname'.format(self.busybox)).strip()
+
     @property
     def os_version(self):  # pylint: disable=no-self-use
         return {}

@@ -167,10 +178,15 @@ class Target(object):
     @property
     @memoized
     def number_of_nodes(self):
-        num_nodes = 0
-        nodere = re.compile(r'^\s*node\d+\s*$')
-        output = self.execute('ls /sys/devices/system/node', as_root=self.is_rooted)
-        for entry in output.split():
-            if nodere.match(entry):
-                num_nodes += 1
-        return num_nodes
+        cmd = 'cd /sys/devices/system/node && {busybox} find . -maxdepth 1'.format(busybox=quote(self.busybox))
+        try:
+            output = self.execute(cmd, as_root=self.is_rooted)
+        except TargetStableError:
+            return 1
+        else:
+            nodere = re.compile(r'^\./node\d+\s*$')
+            num_nodes = 0
+            for entry in output.splitlines():
+                if nodere.match(entry):
+                    num_nodes += 1
+            return num_nodes
@@ -207,17 +223,7 @@ class Target(object):
     @memoized
     def page_size_kb(self):
         cmd = "cat /proc/self/smaps | {0} grep KernelPageSize | {0} head -n 1 | {0} awk '{{ print $2 }}'"
-        return int(self.execute(cmd.format(self.busybox)))
-
-    @property
-    def conn(self):
-        if self._connections:
-            tid = id(threading.current_thread())
-            if tid not in self._connections:
-                self._connections[tid] = self.get_connection()
-            return self._connections[tid]
-        else:
-            return None
+        return int(self.execute(cmd.format(self.busybox)) or 0)

     @property
     def shutils(self):
@@ -225,6 +231,13 @@ class Target(object):
             self._setup_shutils()
         return self._shutils

+    @tls_property
+    def _conn(self):
+        return self.get_connection()
+
+    # Add a basic property that does not require calling to get the value
+    conn = _conn.basic_property
+
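The per-thread connection replaces the old `_connections` dict keyed by thread id. A minimal sketch of the idea behind `tls_property` (a hypothetical simplified version, not devlib's actual implementation, which additionally records every created value so `disconnect()` can close them all):

```python
import threading

class tls_property:
    """Descriptor that computes and caches one value per thread.

    Simplified illustration of a thread-local lazy property.
    """
    def __init__(self, factory):
        self.factory = factory
        self.name = factory.__name__

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # One threading.local() per (instance, property): each thread sees
        # its own cached value, created lazily on first access.
        local = obj.__dict__.setdefault('_tls_' + self.name, threading.local())
        if not hasattr(local, 'value'):
            local.value = self.factory(obj)
        return local.value

class Target:
    def __init__(self):
        self.created = 0

    @tls_property
    def conn(self):
        # Each thread triggers exactly one "connection" creation
        self.created += 1
        return object()

t = Target()
c1 = t.conn
c2 = t.conn
assert c1 is c2  # cached within the same thread

results = []
th = threading.Thread(target=lambda: results.append(t.conn))
th.start()
th.join()
assert results[0] is not c1  # a different thread gets its own connection
```

This gives each worker thread an independent connection without callers having to manage thread ids themselves, which is why `connect()` can simply assign `self.conn`.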
     def __init__(self,
                  connection_settings=None,
                  platform=None,
@@ -237,6 +250,7 @@ class Target(object):
                  conn_cls=None,
                  is_container=False
                  ):
         self._is_rooted = None
         self.connection_settings = connection_settings or {}
         # Set self.platform: either it's given directly (by platform argument)
@@ -266,7 +280,6 @@ class Target(object):
         self._installed_binaries = {}
         self._installed_modules = {}
         self._cache = {}
-        self._connections = {}
         self._shutils = None
         self._file_transfer_cache = None
         self.busybox = None
@@ -285,23 +298,34 @@ class Target(object):

     def connect(self, timeout=None, check_boot_completed=True):
         self.platform.init_target_connection(self)
-        tid = id(threading.current_thread())
-        self._connections[tid] = self.get_connection(timeout=timeout)
+        # Forcefully set the thread-local value for the connection, with the
+        # timeout we want
+        self.conn = self.get_connection(timeout=timeout)
         if check_boot_completed:
             self.wait_boot_complete(timeout)
+        self.check_connection()
         self._resolve_paths()
         self.execute('mkdir -p {}'.format(quote(self.working_directory)))
         self.execute('mkdir -p {}'.format(quote(self.executables_directory)))
-        self.busybox = self.install(os.path.join(PACKAGE_BIN_DIRECTORY, self.abi, 'busybox'))
+        self.busybox = self.install(os.path.join(PACKAGE_BIN_DIRECTORY, self.abi, 'busybox'), timeout=30)
+        self.conn.busybox = self.busybox
         self.platform.update_from_target(self)
         self._update_modules('connected')
         if self.platform.big_core and self.load_default_modules:
             self._install_module(get_module('bl'))

+    def check_connection(self):
+        """
+        Check that the connection works without obvious issues.
+        """
+        out = self.execute('true', as_root=False)
+        if out.strip():
+            raise TargetStableError('The shell seems to not be functional and adds content to stderr: {}'.format(out))
+
     def disconnect(self):
-        for conn in self._connections.values():
+        connections = self._conn.get_all_values()
+        for conn in connections:
             conn.close()
-        self._connections = {}

     def get_connection(self, timeout=None):
         if self.conn_cls is None:
@@ -352,25 +376,137 @@ class Target(object):
     # file transfer

-    def push(self, source, dest, as_root=False, timeout=None):  # pylint: disable=arguments-differ
-        if not as_root:
-            self.conn.push(source, dest, timeout=timeout)
-        else:
-            device_tempfile = self.path.join(self._file_transfer_cache, source.lstrip(self.path.sep))
-            self.execute("mkdir -p {}".format(quote(self.path.dirname(device_tempfile))))
-            self.conn.push(source, device_tempfile, timeout=timeout)
-            self.execute("cp {} {}".format(quote(device_tempfile), quote(dest)), as_root=True)
-
-    def pull(self, source, dest, as_root=False, timeout=None):  # pylint: disable=arguments-differ
-        if not as_root:
-            self.conn.pull(source, dest, timeout=timeout)
-        else:
-            device_tempfile = self.path.join(self._file_transfer_cache, source.lstrip(self.path.sep))
-            self.execute("mkdir -p {}".format(quote(self.path.dirname(device_tempfile))))
-            self.execute("cp -r {} {}".format(quote(source), quote(device_tempfile)), as_root=True)
-            self.execute("chmod 0644 {}".format(quote(device_tempfile)), as_root=True)
-            self.conn.pull(device_tempfile, dest, timeout=timeout)
-            self.execute("rm -r {}".format(quote(device_tempfile)), as_root=True)
+    @contextmanager
+    def _xfer_cache_path(self, name):
+        """
+        Context manager to provide a unique path in the transfer cache with the
+        basename of the given name.
+        """
+        # Use a UUID to avoid race conditions on the target side
+        xfer_uuid = uuid.uuid4().hex
+        folder = self.path.join(self._file_transfer_cache, xfer_uuid)
+        # Make sure basename will work on folders too
+        name = os.path.normpath(name)
+        # Ensure the name is relative so that os.path.join() will actually
+        # join the paths rather than ignoring the first one.
+        name = './{}'.format(os.path.basename(name))
+
+        check_rm = False
+        try:
+            self.makedirs(folder)
+            # Don't check the exit code as the folder might not even exist
+            # before this point, if creating it failed
+            check_rm = True
+            yield self.path.join(folder, name)
+        finally:
+            self.execute('rm -rf -- {}'.format(quote(folder)), check_exit_code=check_rm)
+
+    def _prepare_xfer(self, action, sources, dest):
+        """
+        Check the sanity of sources and destination and prepare the ground for
+        transferring multiple sources.
+        """
+        if action == 'push':
+            src_excep = HostError
+            dst_excep = TargetStableError
+            dst_path_exists = self.file_exists
+            dst_is_dir = self.directory_exists
+            dst_mkdir = self.makedirs
+
+            for source in sources:
+                if not os.path.exists(source):
+                    raise HostError('No such file "{}"'.format(source))
+        else:
+            src_excep = TargetStableError
+            dst_excep = HostError
+            dst_path_exists = os.path.exists
+            dst_is_dir = os.path.isdir
+            dst_mkdir = functools.partial(os.makedirs, exist_ok=True)
+
+        if not sources:
+            raise src_excep('No file matching: {}'.format(source))
+        elif len(sources) > 1:
+            if dst_path_exists(dest):
+                if not dst_is_dir(dest):
+                    raise dst_excep('A folder dest is required for multiple matches but destination is a file: {}'.format(dest))
+            else:
+                dst_mkdir(dest)
+
+    def push(self, source, dest, as_root=False, timeout=None, globbing=False):  # pylint: disable=arguments-differ
+        sources = glob.glob(source) if globbing else [source]
+        self._prepare_xfer('push', sources, dest)
+
+        def do_push(sources, dest):
+            return self.conn.push(sources, dest, timeout=timeout)
+
+        if as_root:
+            for source in sources:
+                with self._xfer_cache_path(source) as device_tempfile:
+                    do_push([source], device_tempfile)
+                    self.execute("mv -f -- {} {}".format(quote(device_tempfile), quote(dest)), as_root=True)
+        else:
+            do_push(sources, dest)
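The root-owned transfer path now stages files through a per-transfer UUID directory rather than mirroring the source path under the cache. A host-side sketch of the same pattern (the real helper creates and removes the directory on the target via `self.execute()`):

```python
import os
import shutil
import tempfile
import uuid
from contextlib import contextmanager

@contextmanager
def xfer_cache_path(cache_root, name):
    # A fresh UUID directory per transfer avoids races between concurrent
    # transfers of files that share a basename.
    folder = os.path.join(cache_root, uuid.uuid4().hex)
    # basename() so that staging '/etc/hosts' yields '<uuid>/hosts'; the
    # leading './' keeps the name relative so os.path.join() joins the parts.
    name = './{}'.format(os.path.basename(os.path.normpath(name)))
    os.makedirs(folder)
    try:
        yield os.path.join(folder, name)
    finally:
        # Cleanup always runs, even if the transfer raised.
        shutil.rmtree(folder, ignore_errors=True)

cache = tempfile.mkdtemp()
with xfer_cache_path(cache, '/etc/hosts') as staged:
    with open(staged, 'w') as f:
        f.write('staged copy')
assert os.listdir(cache) == []  # the UUID folder is gone after the block
```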
+    def _expand_glob(self, pattern, **kwargs):
+        """
+        Expand the given path globbing pattern on the target using the shell
+        globbing.
+        """
+        # Since we split the results based on new lines, forbid them in the
+        # pattern
+        if '\n' in pattern:
+            raise ValueError(r'Newline character \n are not allowed in globbing patterns')
+
+        # If the pattern is in fact a plain filename, skip the expansion on the
+        # target to avoid an unnecessary command execution.
+        #
+        # fnmatch char list from: https://docs.python.org/3/library/fnmatch.html
+        special_chars = ['*', '?', '[', ']']
+        if not any(char in pattern for char in special_chars):
+            return [pattern]
+
+        # Characters to escape that are impacting parameter splitting, since we
+        # want the pattern to be given in one piece. Unfortunately, there is no
+        # fool-proof way of doing that without also escaping globbing special
+        # characters such as wildcard which would defeat the entire purpose of
+        # that function.
+        for c in [' ', "'", '"']:
+            pattern = pattern.replace(c, '\\' + c)
+
+        cmd = "exec printf '%s\n' {}".format(pattern)
+        # Make sure to use the same shell everywhere for the path globbing,
+        # ensuring consistent results no matter what is the default platform
+        # shell
+        cmd = '{} sh -c {} 2>/dev/null'.format(quote(self.busybox), quote(cmd))
+        # On some shells, match failure will make the command "return" a
+        # non-zero code, even though the command was not actually called
+        result = self.execute(cmd, strip_colors=False, check_exit_code=False, **kwargs)
+        paths = result.splitlines()
+        if not paths:
+            raise TargetStableError('No file matching: {}'.format(pattern))
+
+        return paths
+
+    def pull(self, source, dest, as_root=False, timeout=None, globbing=False):  # pylint: disable=arguments-differ
+        if globbing:
+            sources = self._expand_glob(source, as_root=as_root)
+        else:
+            sources = [source]
+
+        self._prepare_xfer('pull', sources, dest)
+
+        def do_pull(sources, dest):
+            self.conn.pull(sources, dest, timeout=timeout)
+
+        if as_root:
+            for source in sources:
+                with self._xfer_cache_path(source) as device_tempfile:
+                    self.execute("cp -r -- {} {}".format(quote(source), quote(device_tempfile)), as_root=True)
+                    self.execute("{} chmod 0644 -- {}".format(self.busybox, quote(device_tempfile)), as_root=True)
+                    do_pull([device_tempfile], dest)
+        else:
+            do_pull(sources, dest)
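`_expand_glob()` shells out so that expansion happens against the target's filesystem with a known shell. The same `printf '%s\n'` trick can be exercised on the host (a sketch under the assumption of a POSIX `sh`; the real method routes through busybox and `self.execute()`):

```python
import os
import subprocess
import tempfile

def expand_glob(pattern):
    # One match per output line, exactly as the target-side helper splits it.
    if '\n' in pattern:
        raise ValueError(r'Newline character \n are not allowed in globbing patterns')
    # Plain filename: skip the shell round-trip entirely.
    if not any(c in pattern for c in '*?[]'):
        return [pattern]
    # Escape word-splitting characters so the pattern stays one shell word,
    # while leaving the globbing metacharacters intact.
    for c in (' ', "'", '"'):
        pattern = pattern.replace(c, '\\' + c)
    cmd = "exec printf '%s\\n' {}".format(pattern)
    out = subprocess.check_output(['sh', '-c', cmd], text=True)
    return out.splitlines()

d = tempfile.mkdtemp()
for name in ('a.txt', 'b.txt', 'c.log'):
    open(os.path.join(d, name), 'w').close()

assert expand_glob('plain_name.txt') == ['plain_name.txt']
assert sorted(expand_glob(d + '/*.txt')) == [d + '/a.txt', d + '/b.txt']
```

If the pattern matches nothing, the shell leaves it unexpanded and `printf` echoes it back, which is why the real method re-checks the result rather than trusting the exit code.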
     def get_directory(self, source_dir, dest, as_root=False):
         """ Pull a directory from the device, after compressing dir """
@@ -383,11 +519,12 @@ class Target(object):
         tmpfile = os.path.join(dest, tar_file_name)

         # If root is required, use tmp location for tar creation.
-        if as_root:
-            tar_file_name = self.path.join(self._file_transfer_cache, tar_file_name)
+        tar_file_cm = self._xfer_cache_path if as_root else nullcontext

         # Does the folder exist?
         self.execute('ls -la {}'.format(quote(source_dir)), as_root=as_root)

+        with tar_file_cm(tar_file_name) as tar_file_name:
             # Try compressing the folder
             try:
                 self.execute('{} tar -cvf {} {}'.format(
@@ -401,16 +538,13 @@ class Target(object):
             os.mkdir(dest)
         self.pull(tar_file_name, tmpfile)
         # Decompress
-        f = tarfile.open(tmpfile, 'r')
-        f.extractall(outdir)
+        with tarfile.open(tmpfile, 'r') as f:
+            f.extractall(outdir)
         os.remove(tmpfile)
     # execution

-    def execute(self, command, timeout=None, check_exit_code=True,
-                as_root=False, strip_colors=True, will_succeed=False,
-                force_locale='C'):
-
+    def _prepare_cmd(self, command, force_locale):
         # Force the locale if necessary for more predictable output
         if force_locale:
             # Use an explicit export so that the command is allowed to be any
@@ -421,12 +555,26 @@ class Target(object):
         if self.executables_directory:
             command = "export PATH={}:$PATH && {}".format(quote(self.executables_directory), command)

+        return command
+
+    def execute(self, command, timeout=None, check_exit_code=True,
+                as_root=False, strip_colors=True, will_succeed=False,
+                force_locale='C'):
+
+        command = self._prepare_cmd(command, force_locale)
         return self.conn.execute(command, timeout=timeout,
                                  check_exit_code=check_exit_code, as_root=as_root,
                                  strip_colors=strip_colors, will_succeed=will_succeed)

-    def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False):
-        return self.conn.background(command, stdout, stderr, as_root)
+    def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False,
+                   force_locale='C', timeout=None):
+        command = self._prepare_cmd(command, force_locale)
+        bg_cmd = self.conn.background(command, stdout, stderr, as_root)
+        if timeout is not None:
+            timer = threading.Timer(timeout, function=bg_cmd.cancel)
+            timer.daemon = True
+            timer.start()
+        return bg_cmd
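The new `timeout` handling arms a daemon `threading.Timer` that cancels the command if it outlives its budget. The same pattern in isolation, using `Popen.kill` where devlib calls `bg_cmd.cancel()`:

```python
import subprocess
import threading

def background(command, timeout=None):
    # Launch without blocking; the daemon timer reaps it on timeout, and
    # being a daemon it never prevents interpreter shutdown.
    proc = subprocess.Popen(['sh', '-c', command],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if timeout is not None:
        timer = threading.Timer(timeout, proc.kill)
        timer.daemon = True
        timer.start()
    return proc

p = background('sleep 10', timeout=0.2)
p.wait()
assert p.returncode != 0  # killed by the timer, not a clean exit
```

Note that the timer fires even if the command has already finished; killing an exited process is harmless, so no bookkeeping is needed to cancel the timer on success.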
     def invoke(self, binary, args=None, in_directory=None, on_cpus=None,
                redirect_stderr=False, as_root=False, timeout=30):
@@ -538,7 +686,7 @@ class Target(object):
     def reset(self):
         try:
             self.execute('reboot', as_root=self.needs_su, timeout=2)
-        except (DevlibTransientError, subprocess.CalledProcessError):
+        except (TargetError, subprocess.CalledProcessError):
             # on some targets "reboot" doesn't return gracefully
             pass
         self.conn.connected_as_root = None
@@ -573,6 +721,9 @@ class Target(object):

     # files

+    def makedirs(self, path):
+        self.execute('mkdir -p {}'.format(quote(path)))
+
     def file_exists(self, filepath):
         command = 'if [ -e {} ]; then echo 1; else echo 0; fi'
         output = self.execute(command.format(quote(filepath)), as_root=self.is_rooted)
@@ -616,7 +767,7 @@ class Target(object):
         raise IOError('No usable temporary filename found')

     def remove(self, path, as_root=False):
-        self.execute('rm -rf {}'.format(quote(path)), as_root=as_root)
+        self.execute('rm -rf -- {}'.format(quote(path)), as_root=as_root)

     # misc

     def core_cpus(self, core):
@@ -901,16 +1052,19 @@ class Target(object):
             self.logger.warning(msg)

     def _install_module(self, mod, **params):
-        if mod.name not in self._installed_modules:
-            self.logger.debug('Installing module {}'.format(mod.name))
+        name = mod.name
+        if name not in self._installed_modules:
+            self.logger.debug('Installing module {}'.format(name))
             try:
                 mod.install(self, **params)
             except Exception as e:
-                self.logger.error('Module "{}" failed to install on target'.format(mod.name))
+                self.logger.error('Module "{}" failed to install on target'.format(name))
                 raise
-            self._installed_modules[mod.name] = mod
+            self._installed_modules[name] = mod
+            if name not in self.modules:
+                self.modules.append(name)
         else:
-            self.logger.debug('Module {} is already installed.'.format(mod.name))
+            self.logger.debug('Module {} is already installed.'.format(name))

     def _resolve_paths(self):
         raise NotImplementedError()
@@ -1029,16 +1183,20 @@ class LinuxTarget(Target):
         else:
             return []

-    def ps(self, **kwargs):
-        command = 'ps -eo user,pid,ppid,vsize,rss,wchan,pcpu,state,fname'
+    def ps(self, threads=False, **kwargs):
+        ps_flags = '-eo'
+        if threads:
+            ps_flags = '-eLo'
+        command = 'ps {} user,pid,tid,ppid,vsize,rss,wchan,pcpu,state,fname'.format(ps_flags)
+
         lines = iter(convert_new_lines(self.execute(command)).split('\n'))
         next(lines)  # header

         result = []
         for line in lines:
-            parts = re.split(r'\s+', line, maxsplit=8)
+            parts = re.split(r'\s+', line, maxsplit=9)
             if parts and parts != ['']:
-                result.append(PsEntry(*(parts[0:1] + list(map(int, parts[1:5])) + parts[5:])))
+                result.append(PsEntry(*(parts[0:1] + list(map(int, parts[1:6])) + parts[6:])))

         if not kwargs:
             return result
@@ -1138,7 +1296,11 @@ class AndroidTarget(Target):

     @property
     def adb_name(self):
-        return self.conn.device
+        return getattr(self.conn, 'device', None)
+
+    @property
+    def adb_server(self):
+        return getattr(self.conn, 'adb_server', None)

     @property
     @memoized
@@ -1281,18 +1443,32 @@ class AndroidTarget(Target):
                 result.append(entry.pid)
         return result

-    def ps(self, **kwargs):
-        lines = iter(convert_new_lines(self.execute('ps')).split('\n'))
+    def ps(self, threads=False, **kwargs):
+        maxsplit = 9 if threads else 8
+        command = 'ps'
+        if threads:
+            command = 'ps -AT'
+
+        lines = iter(convert_new_lines(self.execute(command)).split('\n'))
         next(lines)  # header
         result = []
         for line in lines:
-            parts = line.split(None, 8)
+            parts = line.split(None, maxsplit)
             if not parts:
                 continue
-            if len(parts) == 8:
+
+            wchan_missing = False
+            if len(parts) == maxsplit:
+                wchan_missing = True
+
+            if not threads:
+                # Duplicate PID into TID location.
+                parts.insert(2, parts[1])
+
+            if wchan_missing:
                 # wchan was blank; insert an empty field where it should be.
-                parts.insert(5, '')
-            result.append(PsEntry(*(parts[0:1] + list(map(int, parts[1:5])) + parts[5:])))
+                parts.insert(6, '')
+
+            result.append(PsEntry(*(parts[0:1] + list(map(int, parts[1:6])) + parts[6:])))
+
         if not kwargs:
             return result
         else:
@@ -1429,7 +1605,8 @@ class AndroidTarget(Target):
             flags.append('-g')  # Grant all runtime permissions
         self.logger.debug("Replace APK = {}, ADB flags = '{}'".format(replace, ' '.join(flags)))
         if isinstance(self.conn, AdbConnection):
-            return adb_command(self.adb_name, "install {} {}".format(' '.join(flags), quote(filepath)), timeout=timeout)
+            return adb_command(self.adb_name, "install {} {}".format(' '.join(flags), quote(filepath)),
+                               timeout=timeout, adb_server=self.adb_server)
         else:
             dev_path = self.get_workpath(filepath.rsplit(os.path.sep, 1)[-1])
             self.push(quote(filepath), dev_path, timeout=timeout)
@@ -1498,7 +1675,8 @@ class AndroidTarget(Target):

     def uninstall_package(self, package):
         if isinstance(self.conn, AdbConnection):
-            adb_command(self.adb_name, "uninstall {}".format(quote(package)), timeout=30)
+            adb_command(self.adb_name, "uninstall {}".format(quote(package)), timeout=30,
+                        adb_server=self.adb_server)
         else:
             self.execute("pm uninstall {}".format(quote(package)), timeout=30)

@@ -1507,15 +1685,18 @@ class AndroidTarget(Target):
         self._ensure_executables_directory_is_writable()
         self.remove(on_device_executable, as_root=self.needs_su)

-    def dump_logcat(self, filepath, filter=None, append=False, timeout=30):  # pylint: disable=redefined-builtin
+    def dump_logcat(self, filepath, filter=None, logcat_format=None, append=False,
+                    timeout=60):  # pylint: disable=redefined-builtin
         op = '>>' if append else '>'
         filtstr = ' -s {}'.format(quote(filter)) if filter else ''
+        formatstr = ' -v {}'.format(quote(logcat_format)) if logcat_format else ''
+        logcat_opts = '-d' + formatstr + filtstr
         if isinstance(self.conn, AdbConnection):
-            command = 'logcat -d{} {} {}'.format(filtstr, op, quote(filepath))
-            adb_command(self.adb_name, command, timeout=timeout)
+            command = 'logcat {} {} {}'.format(logcat_opts, op, quote(filepath))
+            adb_command(self.adb_name, command, timeout=timeout, adb_server=self.adb_server)
         else:
             dev_path = self.get_workpath('logcat')
-            command = 'logcat -d{} {} {}'.format(filtstr, op, quote(dev_path))
+            command = 'logcat {} {} {}'.format(logcat_opts, op, quote(dev_path))
             self.execute(command, timeout=timeout)
             self.pull(dev_path, filepath)
             self.remove(dev_path)
@@ -1523,7 +1704,7 @@ class AndroidTarget(Target):
     def clear_logcat(self):
         with self.clear_logcat_lock:
             if isinstance(self.conn, AdbConnection):
-                adb_command(self.adb_name, 'logcat -c', timeout=30)
+                adb_command(self.adb_name, 'logcat -c', timeout=30, adb_server=self.adb_server)
             else:
                 self.execute('logcat -c', timeout=30)
@@ -1540,17 +1721,32 @@ class AndroidTarget(Target):
         output = self.execute('dumpsys power')
         match = ANDROID_SCREEN_STATE_REGEX.search(output)
         if match:
+            if 'DOZE' in match.group(1).upper():
+                return True
             return boolean(match.group(1))
         else:
             raise TargetStableError('Could not establish screen state.')

-    def ensure_screen_is_on(self):
+    def ensure_screen_is_on(self, verify=True):
         if not self.is_screen_on():
             self.execute('input keyevent 26')
+        if verify and not self.is_screen_on():
+            raise TargetStableError('Display cannot be turned on.')

-    def ensure_screen_is_off(self):
-        if self.is_screen_on():
-            self.execute('input keyevent 26')
+    def ensure_screen_is_on_and_stays(self, verify=True, mode=7):
+        self.ensure_screen_is_on(verify=verify)
+        self.set_stay_on_mode(mode)
+
+    def ensure_screen_is_off(self, verify=True):
+        # Allow 2 attempts to help with cases of ambient display modes
+        # where the first attempt will switch the display fully on.
+        for _ in range(2):
+            if self.is_screen_on():
+                self.execute('input keyevent 26')
+                time.sleep(0.5)
+        if verify and self.is_screen_on():
+            msg = 'Display cannot be turned off. Is always on display enabled?'
+            raise TargetStableError(msg)

     def set_auto_brightness(self, auto_brightness):
         cmd = 'settings put system screen_brightness_mode {}'
@@ -1584,6 +1780,10 @@ class AndroidTarget(Target):
         cmd = 'settings get global airplane_mode_on'
         return boolean(self.execute(cmd).strip())

+    def get_stay_on_mode(self):
+        cmd = 'settings get global stay_on_while_plugged_in'
+        return int(self.execute(cmd).strip())
+
     def set_airplane_mode(self, mode):
         root_required = self.get_sdk_version() > 23
         if root_required and not self.is_rooted:
@@ -1629,6 +1829,18 @@ class AndroidTarget(Target):
         cmd = 'settings put system user_rotation {}'
         self.execute(cmd.format(rotation))

+    def set_stay_on_never(self):
+        self.set_stay_on_mode(0)
+
+    def set_stay_on_while_powered(self):
+        self.set_stay_on_mode(7)
+
+    def set_stay_on_mode(self, mode):
+        if not 0 <= mode <= 7:
+            raise ValueError('Screen stay on mode must be between 0 and 7')
+        cmd = 'settings put global stay_on_while_plugged_in {}'
+        self.execute(cmd.format(mode))
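The `mode` argument mirrors Android's `Settings.Global.STAY_ON_WHILE_PLUGGED_IN`, a bitmask of `BatteryManager` plug types: AC = 1, USB = 2, wireless = 4. So 7 keeps the screen on for any charger and 0 never does. A small decoding helper to illustrate (hypothetical, not part of devlib):

```python
# Plug-type bits per Android's BatteryManager constants.
AC, USB, WIRELESS = 1, 2, 4

def describe_stay_on_mode(mode):
    if not 0 <= mode <= 7:
        raise ValueError('Screen stay on mode must be between 0 and 7')
    if mode == 0:
        return 'never'
    sources = [name for bit, name in ((AC, 'AC'), (USB, 'USB'), (WIRELESS, 'wireless'))
               if mode & bit]
    return 'while on ' + '+'.join(sources)

assert describe_stay_on_mode(0) == 'never'
assert describe_stay_on_mode(7) == 'while on AC+USB+wireless'
```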
     def open_url(self, url, force_new=False):
         """
         Start a view activity by specifying an URL
@@ -1700,7 +1912,7 @@ class AndroidTarget(Target):
         self.write_value(self._charging_enabled_path, int(bool(enabled)))

 FstabEntry = namedtuple('FstabEntry', ['device', 'mount_point', 'fs_type', 'options', 'dump_freq', 'pass_num'])
-PsEntry = namedtuple('PsEntry', 'user pid ppid vsize rss wchan pc state name')
+PsEntry = namedtuple('PsEntry', 'user pid tid ppid vsize rss wchan pc state name')
 LsmodEntry = namedtuple('LsmodEntry', ['name', 'size', 'use_count', 'used_by'])

@@ -1795,6 +2007,8 @@ class KernelVersion(object):
     :type minor: int
     :ivar rc: Release candidate number (e.g. 3 for Linux 4.9-rc3). May be None.
     :type rc: int
+    :ivar commits: Number of additional commits on the branch. May be None.
+    :type commits: int
     :ivar sha1: Kernel git revision hash, if available (otherwise None)
     :type sha1: str

@@ -1819,6 +2033,7 @@ class KernelVersion(object):
         self.minor = None
         self.sha1 = None
         self.rc = None
+        self.commits = None
         match = KVERSION_REGEX.match(version_string)
         if match:
             groups = match.groupdict()
@@ -1828,6 +2043,8 @@ class KernelVersion(object):
                 self.minor = int(groups['minor'])
             if groups['rc'] is not None:
                 self.rc = int(groups['rc'])
+            if groups['commits'] is not None:
+                self.commits = int(groups['commits'])
             if groups['sha1'] is not None:
                 self.sha1 = match.group('sha1')
@@ -2205,7 +2422,10 @@ class ChromeOsTarget(LinuxTarget):

         # Pull out ssh connection settings
         ssh_conn_params = ['host', 'username', 'password', 'keyfile',
-                           'port', 'password_prompt', 'timeout', 'sudo_cmd']
+                           'port', 'timeout', 'sudo_cmd',
+                           'strict_host_check', 'use_scp',
+                           'total_timeout', 'poll_transfers',
+                           'start_transfer_poll_delay']
         self.ssh_connection_settings = {}
         for setting in ssh_conn_params:
             if connection_settings.get(setting, None):

@@ -19,31 +19,37 @@ Utility functions for working with Android devices through adb.
 """
 # pylint: disable=E1103
-import os
-import re
-import sys
-import time
+import glob
 import logging
-import tempfile
-import subprocess
-from collections import defaultdict
+import os
 import pexpect
-import xml.etree.ElementTree
+import re
+import subprocess
+import sys
+import tempfile
+import time
+import uuid
 import zipfile

+from collections import defaultdict
+from io import StringIO
+from lxml import etree
+
 try:
     from shlex import quote
 except ImportError:
     from pipes import quote

 from devlib.exception import TargetTransientError, TargetStableError, HostError
-from devlib.utils.misc import check_output, which, ABI_MAP
+from devlib.utils.misc import check_output, which, ABI_MAP, redirect_streams, get_subprocess
+from devlib.connection import ConnectionBase, AdbBackgroundCommand, PopenBackgroundCommand, PopenTransferManager

 logger = logging.getLogger('android')

 MAX_ATTEMPTS = 5
 AM_START_ERROR = re.compile(r"Error: Activity.*")
+AAPT_BADGING_OUTPUT = re.compile(r"no dump ((file)|(apk)) specified", re.IGNORECASE)

 # See:
 # http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels
@@ -91,6 +97,7 @@ android_home = None
 platform_tools = None
 adb = None
 aapt = None
+aapt_version = None
 fastboot = None

@@ -150,6 +157,10 @@ class ApkInfo(object):
         self.version_code = None
         self.native_code = None
         self.permissions = []
-        self.parse(path)
+        self._apk_path = None
+        self._activities = None
+        self._methods = None
+        if path:
+            self.parse(path)

     # pylint: disable=too-many-branches
@@ -195,8 +206,10 @@ class ApkInfo(object):
     @property
     def activities(self):
         if self._activities is None:
-            cmd = [aapt, 'dump', 'xmltree', self._apk_path,
-                   'AndroidManifest.xml']
+            cmd = [aapt, 'dump', 'xmltree', self._apk_path]
+            if aapt_version == 2:
+                cmd += ['--file']
+            cmd += ['AndroidManifest.xml']
             matched_activities = self.activity_regex.finditer(self._run(cmd))
             self._activities = [m.group('name') for m in matched_activities]
         return self._activities
@@ -204,21 +217,29 @@ class ApkInfo(object):
@property
def methods(self):
if self._methods is None:
# Only try to extract once
self._methods = []
with tempfile.TemporaryDirectory() as tmp_dir:
with zipfile.ZipFile(self._apk_path, 'r') as z:
try:
extracted = z.extract('classes.dex', tmp_dir)
except KeyError:
return []
dexdump = os.path.join(os.path.dirname(aapt), 'dexdump')
command = [dexdump, '-l', 'xml', extracted]
dump = self._run(command)
# Dexdump from build tools v30.0.X does not seem to produce
# valid xml from certain APKs so ignore errors and attempt to recover.
parser = etree.XMLParser(encoding='utf-8', recover=True)
xml_tree = etree.parse(StringIO(dump), parser)
package = next((i for i in xml_tree.iter('package')
if i.attrib['name'] == self.package), None)
self._methods = [(meth.attrib['name'], klass.attrib['name'])
for klass in package.iter('class')
for meth in klass.iter('method')] if package else []
return self._methods
def _run(self, command):
@@ -233,7 +254,7 @@ class ApkInfo(object):
return output

class AdbConnection(ConnectionBase):
# maintains the count of parallel active connections to a device, so that
# adb disconnect is not invoked until all connections are closed
@@ -259,45 +280,54 @@ class AdbConnection(object):
def connected_as_root(self, state):
self._connected_as_root[self.device] = state

# pylint: disable=unused-argument
def __init__(self, device=None, timeout=None, platform=None, adb_server=None,
adb_as_root=False, connection_attempts=MAX_ATTEMPTS,
poll_transfers=False,
start_transfer_poll_delay=30,
total_transfer_timeout=3600,
transfer_poll_period=30,):
super().__init__()
self.timeout = timeout if timeout is not None else self.default_timeout
if device is None:
device = adb_get_device(timeout=timeout, adb_server=adb_server)
self.device = device
self.adb_server = adb_server
self.adb_as_root = adb_as_root
self.poll_transfers = poll_transfers
if poll_transfers:
transfer_opts = {'start_transfer_poll_delay': start_transfer_poll_delay,
'total_timeout': total_transfer_timeout,
'poll_period': transfer_poll_period,
}
self.transfer_mgr = PopenTransferManager(self, **transfer_opts) if poll_transfers else None
if self.adb_as_root:
self.adb_root(enable=True)
adb_connect(self.device, adb_server=self.adb_server, attempts=connection_attempts)
AdbConnection.active_connections[self.device] += 1
self._setup_ls()
self._setup_su()
def push(self, sources, dest, timeout=None):
return self._push_pull('push', sources, dest, timeout)

def pull(self, sources, dest, timeout=None):
return self._push_pull('pull', sources, dest, timeout)

def _push_pull(self, action, sources, dest, timeout):
paths = sources + [dest]

# Quote twice to avoid expansion by host shell, then ADB globbing
do_quote = lambda x: quote(glob.escape(x))
paths = ' '.join(map(do_quote, paths))

command = "{} {}".format(action, paths)
if timeout or not self.poll_transfers:
adb_command(self.device, command, timeout=timeout, adb_server=self.adb_server)
else:
with self.transfer_mgr.manage(sources, dest, action):
bg_cmd = adb_command_background(self.device, command, adb_server=self.adb_server)
self.transfer_mgr.set_transfer_and_wait(bg_cmd)
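The double-quoting step above can be tried on its own; `adb_quote` is a hypothetical helper (not devlib API) illustrating how `glob.escape` plus `shlex.quote` protects a path from both the host shell and adb's own globbing:

```python
import glob
from shlex import quote

def adb_quote(path):
    # glob.escape() neutralises adb-side wildcard expansion ('*' -> '[*]'),
    # then shlex.quote() protects the result from the host shell.
    return quote(glob.escape(path))

print(adb_quote('/sdcard/My Files/*.log'))
```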
# pylint: disable=unused-argument
def execute(self, command, timeout=None, check_exit_code=False,
@@ -312,14 +342,26 @@ class AdbConnection(object):
raise

def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False):
bg_cmd = self._background(command, stdout, stderr, as_root)
self._current_bg_cmds.add(bg_cmd)
return bg_cmd

def _background(self, command, stdout, stderr, as_root):
adb_shell, pid = adb_background_shell(self, command, stdout, stderr, as_root)
bg_cmd = AdbBackgroundCommand(
conn=self,
adb_popen=adb_shell,
pid=pid,
as_root=as_root
)
return bg_cmd
def _close(self):
AdbConnection.active_connections[self.device] -= 1
if AdbConnection.active_connections[self.device] <= 0:
if self.adb_as_root:
self.adb_root(enable=False)
adb_disconnect(self.device, self.adb_server)
del AdbConnection.active_connections[self.device]

def cancel_running_command(self):
@@ -330,16 +372,16 @@ class AdbConnection(object):
def adb_root(self, enable=True):
cmd = 'root' if enable else 'unroot'
output = adb_command(self.device, cmd, timeout=30, adb_server=self.adb_server)
if 'cannot run as root in production builds' in output:
raise TargetStableError(output)
AdbConnection._connected_as_root[self.device] = enable

def wait_for_device(self, timeout=30):
adb_command(self.device, 'wait-for-device', timeout, self.adb_server)

def reboot_bootloader(self, timeout=30):
adb_command(self.device, 'reboot-bootloader', timeout, self.adb_server)
# Again, we need to handle boards where the default output format from ls is
# single column *and* boards where the default output is multi-column.
@@ -423,7 +465,7 @@ def adb_get_device(timeout=None, adb_server=None):
time.sleep(1)

def adb_connect(device, timeout=None, attempts=MAX_ATTEMPTS, adb_server=None):
_check_env()
tries = 0
output = None
@@ -436,11 +478,12 @@ def adb_connect(device, timeout=None, attempts=MAX_ATTEMPTS):
# adb connection may have gone "stale", resulting in adb blocking
# indefinitely when making calls to the device. To avoid this,
# always disconnect first.
adb_disconnect(device, adb_server)
adb_cmd = get_adb_command(None, 'connect', adb_server)
command = '{} {}'.format(adb_cmd, quote(device))
logger.debug(command)
output, _ = check_output(command, shell=True, timeout=timeout)
if _ping(device, adb_server):
break
time.sleep(10)
else:  # did not connect to the device
@@ -450,22 +493,23 @@ def adb_connect(device, timeout=None, attempts=MAX_ATTEMPTS):
raise HostError(message)

def adb_disconnect(device, adb_server=None):
_check_env()
if not device:
return
if ":" in device and device in adb_list_devices(adb_server):
adb_cmd = get_adb_command(None, 'disconnect', adb_server)
command = "{} {}".format(adb_cmd, device)
logger.debug(command)
retval = subprocess.call(command, stdout=open(os.devnull, 'wb'), shell=True)
if retval:
raise TargetTransientError('"{}" returned {}'.format(command, retval))

def _ping(device, adb_server=None):
_check_env()
adb_cmd = get_adb_command(device, 'shell', adb_server)
command = "{} {}".format(adb_cmd, quote('ls /data/local/tmp > /dev/null'))
logger.debug(command)
result = subprocess.call(command, stderr=subprocess.PIPE, shell=True)
if not result:  # pylint: disable=simplifiable-if-statement
@@ -496,7 +540,7 @@ def adb_shell(device, command, timeout=None, check_exit_code=False,
logger.debug(' '.join(quote(part) for part in parts))
try:
raw_output, error = check_output(parts, timeout, shell=False)
except subprocess.CalledProcessError as e:
raise TargetStableError(str(e))
@@ -516,8 +560,8 @@ def adb_shell(device, command, timeout=None, check_exit_code=False,
if exit_code.isdigit():
if int(exit_code):
message = ('Got exit code {}\nfrom target command: {}\n'
'OUTPUT: {}\nSTDERR: {}\n')
raise TargetStableError(message.format(exit_code, command, output, error))
elif re_search:
message = 'Could not start activity; got the following:\n{}'
raise TargetStableError(message.format(re_search[0]))
@@ -528,30 +572,50 @@ def adb_shell(device, command, timeout=None, check_exit_code=False,
else:
message = 'adb has returned early; did not get an exit code. '\
'Was kill-server invoked?\nOUTPUT:\n-----\n{}\n'\
'-----\nSTDERR:\n-----\n{}\n-----'
raise TargetTransientError(message.format(raw_output, error))
return output + error
def adb_background_shell(conn, command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
as_root=False):
"""Runs the specified command in a subprocess, returning the Popen object."""
device = conn.device
adb_server = conn.adb_server
_check_env()
stdout, stderr, command = redirect_streams(stdout, stderr, command)
if as_root:
command = 'echo {} | su'.format(quote(command))

# Attach a unique UUID to the command line so it can be looked for without
# any ambiguity with ps
uuid_ = uuid.uuid4().hex
uuid_var = 'BACKGROUND_COMMAND_UUID={}'.format(uuid_)
command = "{} sh -c {}".format(uuid_var, quote(command))

adb_cmd = get_adb_command(device, 'shell', adb_server)
full_command = '{} {}'.format(adb_cmd, quote(command))
logger.debug(full_command)
p = subprocess.Popen(full_command, stdout=stdout, stderr=stderr, shell=True)

# Out of band PID lookup, to avoid conflicting needs with stdout redirection
find_pid = '{} ps -A -o pid,args | grep {}'.format(conn.busybox, quote(uuid_var))
ps_out = conn.execute(find_pid)
pids = [
int(line.strip().split(' ', 1)[0])
for line in ps_out.splitlines()
]
# The line we are looking for is the first one, since it was started before
# any lookup command
pid = sorted(pids)[0]
return (p, pid)

def adb_kill_server(timeout=30, adb_server=None):
adb_command(None, 'kill-server', timeout, adb_server)
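The out-of-band PID selection above can be sketched host-side; `find_tagged_pid` is a hypothetical helper (not devlib API) applying the same rule — lowest PID whose command line carries the UUID marker, so the original command beats any later lookup processes — to a canned `ps` listing:

```python
def find_tagged_pid(ps_output, marker):
    # Collect every PID whose command line contains the marker, then pick
    # the lowest: the tagged command was started before any grep/ps helper.
    pids = [
        int(line.strip().split(' ', 1)[0])
        for line in ps_output.splitlines()
        if marker in line
    ]
    return sorted(pids)[0]

ps_out = (
    ' 1201 BACKGROUND_COMMAND_UUID=deadbeef sh -c sleep 60\n'
    ' 1305 grep BACKGROUND_COMMAND_UUID=deadbeef\n'
)
print(find_tagged_pid(ps_out, 'BACKGROUND_COMMAND_UUID=deadbeef'))
```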
def adb_list_devices(adb_server=None):
output = adb_command(None, 'devices', adb_server=adb_server)
@@ -571,12 +635,22 @@ def get_adb_command(device, command, adb_server=None):
device_string += ' -s {}'.format(device) if device else ''
return "adb{} {}".format(device_string, command)

def adb_command(device, command, timeout=None, adb_server=None):
full_command = get_adb_command(device, command, adb_server)
logger.debug(full_command)
output, _ = check_output(full_command, timeout, shell=True)
return output
def adb_command_background(device, command, adb_server=None):
full_command = get_adb_command(device, command, adb_server)
logger.debug(full_command)
proc = get_subprocess(full_command, shell=True)
cmd = PopenBackgroundCommand(proc)
return cmd
def grant_app_permissions(target, package):
"""
Grant an app all the permissions it may ask for
@@ -584,7 +658,7 @@ def grant_app_permissions(target, package):
dumpsys = target.execute('dumpsys package {}'.format(package))
permissions = re.search(
r'requested permissions:\s*(?P<permissions>(android.permission.+\s*)+)', dumpsys
)
if permissions is None:
return
@@ -604,8 +678,10 @@ class _AndroidEnvironment(object):
def __init__(self):
self.android_home = None
self.platform_tools = None
self.build_tools = None
self.adb = None
self.aapt = None
self.aapt_version = None
self.fastboot = None
@@ -631,28 +707,75 @@ def _initialize_without_android_home(env):
_init_common(env)
return env

def _init_common(env):
_discover_build_tools(env)
_discover_aapt(env)

def _discover_build_tools(env):
logger.debug('ANDROID_HOME: {}'.format(env.android_home))
build_tools_directory = os.path.join(env.android_home, 'build-tools')
if os.path.isdir(build_tools_directory):
env.build_tools = build_tools_directory
def _check_supported_aapt2(binary):
# At time of writing the version argument of aapt2 is not helpful as
# the output is only a placeholder that does not distinguish between versions
# with and without support for badging. Unfortunately aapt has been
# deprecated and fails to parse some valid apks so we will try to favour
# aapt2 if possible else will fall back to aapt.
# Try to execute the badging command and check if we get an expected error
# message as opposed to an unknown command error to determine if we have a
# suitable version.
cmd = '{} dump badging'.format(binary)
result = subprocess.run(cmd.encode('utf-8'), shell=True, stderr=subprocess.PIPE)
supported = bool(AAPT_BADGING_OUTPUT.search(result.stderr.decode('utf-8')))
msg = 'Found a {} aapt2 binary at: {}'
logger.debug(msg.format('supported' if supported else 'unsupported', binary))
return supported
def _discover_aapt(env):
if env.build_tools:
aapt_path = ''
aapt2_path = ''
versions = os.listdir(env.build_tools)
for version in reversed(sorted(versions)):
if not os.path.isfile(aapt2_path):
aapt2_path = os.path.join(env.build_tools, version, 'aapt2')
if not os.path.isfile(aapt_path):
aapt_path = os.path.join(env.build_tools, version, 'aapt')
aapt_version = 1
# Use latest available version for aapt/aapt2 but ensure at least one is valid.
if os.path.isfile(aapt2_path) or os.path.isfile(aapt_path):
break
# Use aapt2 only if present and we have a suitable version
if aapt2_path and _check_supported_aapt2(aapt2_path):
aapt_path = aapt2_path
aapt_version = 2
# Use the aapt version discovered from build tools.
if aapt_path:
logger.debug('Using {} for version {}'.format(aapt_path, version))
env.aapt = aapt_path
env.aapt_version = aapt_version
return
# Try detecting aapt2 and aapt from PATH
if not env.aapt:
aapt2_path = which('aapt2')
if _check_supported_aapt2(aapt2_path):
env.aapt = aapt2_path
env.aapt_version = 2
else:
env.aapt = which('aapt')
env.aapt_version = 1
if not env.aapt:
raise HostError('aapt/aapt2 not found. Please make sure it is available in PATH'
' or at least one Android platform is installed')
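The probing strategy behind `_check_supported_aapt2` — classify a binary by the error text it emits rather than by a version string — can be sketched generically. `probe_supports` is a hypothetical helper, demonstrated here against `python3` rather than `aapt2` (which may not be installed); the error regex is an assumption for this demo:

```python
import re
import subprocess

def probe_supports(binary, args, expected_error):
    # Run the candidate binary and look for a known error message on stderr:
    # getting the *expected* complaint (rather than "unknown command") tells
    # us the feature exists, mirroring the aapt2 'dump badging' probe above.
    result = subprocess.run([binary] + args,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return bool(re.search(expected_error, result.stderr.decode('utf-8', 'replace')))

print(probe_supports('python3', ['-c', 'import nonexistent_module_xyz'],
                     r'ModuleNotFoundError'))
```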
def _check_env():
global android_home, platform_tools, adb, aapt, aapt_version  # pylint: disable=W0603
if not android_home:
android_home = os.getenv('ANDROID_HOME')
if android_home:
@@ -663,6 +786,7 @@ def _check_env():
platform_tools = _env.platform_tools
adb = _env.adb
aapt = _env.aapt
aapt_version = _env.aapt_version
class LogcatMonitor(object):
"""
@@ -681,11 +805,12 @@ class LogcatMonitor(object):
def logfile(self):
return self._logfile

def __init__(self, target, regexps=None, logcat_format=None):
super(LogcatMonitor, self).__init__()
self.target = target
self._regexps = regexps
self._logcat_format = logcat_format
self._logcat = None
self._logfile = None
@@ -699,7 +824,7 @@ class LogcatMonitor(object):
if outfile:
self._logfile = open(outfile, 'w')
else:
self._logfile = tempfile.NamedTemporaryFile(mode='w')
self.target.clear_logcat()
@@ -717,12 +842,16 @@ class LogcatMonitor(object):
else:
logcat_cmd = '{} | grep {}'.format(logcat_cmd, quote(regexp))

if self._logcat_format:
logcat_cmd = "{} -v {}".format(logcat_cmd, quote(self._logcat_format))

logcat_cmd = get_adb_command(self.target.conn.device, logcat_cmd, self.target.adb_server)
logger.debug('logcat command ="{}"'.format(logcat_cmd))
self._logcat = pexpect.spawn(logcat_cmd, logfile=self._logfile, encoding='utf-8')

def stop(self):
self.flush_log()
self._logcat.terminate()
self._logfile.close()
@@ -730,6 +859,12 @@ class LogcatMonitor(object):
"""
Return the list of lines found by the monitor
"""
self.flush_log()
with open(self._logfile.name) as fh:
return [line for line in fh]

def flush_log(self):
# Unless we tell pexpect to 'expect' something, it won't read from
# logcat's buffer or write into our logfile. We'll need to force it to
# read any pending logcat output.
@@ -760,9 +895,6 @@ class LogcatMonitor(object):
# printed anything since pexpect last read from its buffer.
break

def clear_log(self):
with open(self._logfile.name, 'w') as _:
pass


@@ -18,7 +18,7 @@ import logging
from devlib.utils.types import numeric

GEM5STATS_FIELD_REGEX = re.compile(r"^(?P<key>[^- ]\S*) +(?P<value>[^#]+).+$")
GEM5STATS_DUMP_HEAD = '---------- Begin Simulation Statistics ----------'
GEM5STATS_DUMP_TAIL = '---------- End Simulation Statistics ----------'
GEM5STATS_ROI_NUMBER = 8


@@ -20,9 +20,10 @@ Miscellaneous functions that don't fit anywhere else.
""" """
from __future__ import division from __future__ import division
from contextlib import contextmanager from contextlib import contextmanager
from functools import partial, reduce from functools import partial, reduce, wraps
from itertools import groupby from itertools import groupby
from operator import itemgetter from operator import itemgetter
from weakref import WeakKeyDictionary, WeakSet
import ctypes import ctypes
import functools import functools
@@ -45,6 +46,11 @@ try:
except AttributeError: except AttributeError:
from contextlib2 import ExitStack from contextlib2 import ExitStack
try:
from shlex import quote
except ImportError:
from pipes import quote
from past.builtins import basestring from past.builtins import basestring
# pylint: disable=redefined-builtin # pylint: disable=redefined-builtin
@@ -136,9 +142,6 @@ def get_cpu_name(implementer, part, variant):
def preexec_function(): def preexec_function():
# Ignore the SIGINT signal by setting the handler to the standard
# signal handler SIG_IGN.
signal.signal(signal.SIGINT, signal.SIG_IGN)
# Change process group in case we have to kill the subprocess and all of # Change process group in case we have to kill the subprocess and all of
# its children later. # its children later.
# TODO: this is Unix-specific; would be good to find an OS-agnostic way # TODO: this is Unix-specific; would be good to find an OS-agnostic way
@@ -152,10 +155,22 @@ check_output_logger = logging.getLogger('check_output')
check_output_lock = threading.Lock()

def get_subprocess(command, **kwargs):
if 'stdout' in kwargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
with check_output_lock:
process = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
preexec_fn=preexec_function,
**kwargs)
return process

def check_subprocess_output(process, timeout=None, ignore=None, inputtext=None):
output = None
error = None
# pylint: disable=too-many-branches
if ignore is None:
ignore = []
@@ -164,49 +179,35 @@ def check_output(command, timeout=None, ignore=None, inputtext=None,
elif not isinstance(ignore, list) and ignore != 'all':
message = 'Invalid value for ignore parameter: "{}"; must be an int or a list'
raise ValueError(message.format(ignore))

try:
output, error = process.communicate(inputtext, timeout=timeout)
except subprocess.TimeoutExpired as e:
timeout_expired = e
else:
timeout_expired = None

# Currently errors=replace is needed as 0x8c throws an error
output = output.decode(sys.stdout.encoding or 'utf-8', "replace") if output else ''
error = error.decode(sys.stderr.encoding or 'utf-8', "replace") if error else ''

if timeout_expired:
raise TimeoutError(process.args, output='\n'.join([output, error]))

retcode = process.poll()
if retcode and ignore != 'all' and retcode not in ignore:
raise subprocess.CalledProcessError(retcode, process.args, output='\n'.join([output, error]))

return output, error

def check_output(command, timeout=None, ignore=None, inputtext=None, **kwargs):
"""This is a version of subprocess.check_output that adds a timeout parameter to kill
the subprocess if it does not return within the specified time."""
process = get_subprocess(command, **kwargs)
return check_subprocess_output(process, timeout=timeout, ignore=ignore, inputtext=inputtext)
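The new `communicate(timeout=...)` flow can be condensed into a standalone sketch; `run_with_timeout` is a hypothetical, simplified variant of the split above (no `ignore` handling, no shared lock, `errors='replace'` decoding kept):

```python
import subprocess

def run_with_timeout(command, timeout=None):
    # communicate() raises TimeoutExpired instead of relying on a manual
    # threading.Timer killing the process group, as the old code did.
    process = subprocess.Popen(command,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               stdin=subprocess.PIPE)
    try:
        output, error = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()
        raise
    output = output.decode('utf-8', 'replace') if output else ''
    error = error.decode('utf-8', 'replace') if error else ''
    retcode = process.poll()
    if retcode:
        raise subprocess.CalledProcessError(retcode, command,
                                            output='\n'.join([output, error]))
    return output, error

out, _ = run_with_timeout(['echo', 'hello'], timeout=5)
print(out.strip())
```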
def walk_modules(path):
"""
Given package name, return a list of all modules (including submodules, etc)
@@ -244,6 +245,32 @@ def walk_modules(path):
mods.append(submod)
return mods
def redirect_streams(stdout, stderr, command):
"""
Update a command to redirect a given stream to /dev/null if it's
``subprocess.DEVNULL``.
:return: A tuple (stdout, stderr, command) with stream set to ``subprocess.PIPE``
if the `stream` parameter was set to ``subprocess.DEVNULL``.
"""
def redirect(stream, redirection):
if stream == subprocess.DEVNULL:
suffix = '{}/dev/null'.format(redirection)
elif stream == subprocess.STDOUT:
suffix = '{}&1'.format(redirection)
# Indicate that there is nothing to monitor for stderr anymore
# since it's merged into stdout
stream = subprocess.DEVNULL
else:
suffix = ''
return (stream, suffix)
stdout, suffix1 = redirect(stdout, '>')
stderr, suffix2 = redirect(stderr, '2>')
command = 'sh -c {} {} {}'.format(quote(command), suffix1, suffix2)
return (stdout, stderr, command)
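A quick demonstration of what the redirection mapping produces; the function body is repeated here (same logic as above) so the snippet is self-contained:

```python
import subprocess
from shlex import quote

def redirect_streams(stdout, stderr, command):
    # DEVNULL becomes a shell-level '>/dev/null' redirection; STDOUT on
    # stderr becomes '2>&1' and stderr is then marked DEVNULL, since there
    # is nothing left to monitor once it is merged into stdout.
    def redirect(stream, redirection):
        if stream == subprocess.DEVNULL:
            suffix = '{}/dev/null'.format(redirection)
        elif stream == subprocess.STDOUT:
            suffix = '{}&1'.format(redirection)
            stream = subprocess.DEVNULL
        else:
            suffix = ''
        return (stream, suffix)

    stdout, suffix1 = redirect(stdout, '>')
    stderr, suffix2 = redirect(stderr, '2>')
    command = 'sh -c {} {} {}'.format(quote(command), suffix1, suffix2)
    return (stdout, stderr, command)

print(redirect_streams(subprocess.DEVNULL, subprocess.STDOUT, 'echo hi')[2])
```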
def ensure_directory_exists(dirpath):
"""A filter for directory paths to ensure they exist."""
@@ -468,7 +495,7 @@ def escape_spaces(text):
.. note:: :func:`pipes.quote` should be favored where possible.
"""
return text.replace(' ', '\\ ')

def getch(count=1):
@@ -718,3 +745,184 @@ def batch_contextmanager(f, kwargs_list):
for kwargs in kwargs_list:
stack.enter_context(f(**kwargs))
yield
@contextmanager
def nullcontext(enter_result=None):
"""
Backport of Python 3.7 ``contextlib.nullcontext``
This context manager does nothing, so it can be used as a default
placeholder for code that needs to select at runtime what context manager
to use.
:param enter_result: Object that will be bound to the target of the with
statement, or `None` if nothing is specified.
:type enter_result: object
"""
yield enter_result
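A minimal use of the backport: select a real context manager or the no-op one at runtime without branching at the call site. `locked_sum` is a hypothetical example; the `nullcontext` body is repeated so the snippet runs standalone:

```python
import threading
from contextlib import contextmanager

@contextmanager
def nullcontext(enter_result=None):
    # Same backport as above: do nothing, yield the given result.
    yield enter_result

def locked_sum(values, lock=None):
    # Use the caller's lock when given one, otherwise a do-nothing context.
    cm = lock if lock is not None else nullcontext()
    with cm:
        return sum(values)

print(locked_sum([1, 2, 3]))                    # without a lock
print(locked_sum([1, 2, 3], threading.Lock()))  # with a lock
```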
class tls_property:
"""
Use it like the `property` decorator, but the result will be memoized per
thread. When the owning thread dies, the values for that thread will be
destroyed.
In order to get the values, it's necessary to call the object
given by the property. This is necessary in order to be able to add methods
to that object, like :meth:`_BoundTLSProperty.get_all_values`.
Values can be set and deleted as well, which will be a thread-local set.
"""
@property
def name(self):
return self.factory.__name__
def __init__(self, factory):
self.factory = factory
# Lock accesses to shared WeakKeyDictionary and WeakSet
self.lock = threading.Lock()
def __get__(self, instance, owner=None):
return _BoundTLSProperty(self, instance, owner)
def _get_value(self, instance, owner):
tls, values = self._get_tls(instance)
try:
return tls.value
except AttributeError:
# Bind the method to `instance`
f = self.factory.__get__(instance, owner)
obj = f()
tls.value = obj
# Since that's a WeakSet, values will be removed automatically once
# the threading.local variable that holds them is destroyed
with self.lock:
values.add(obj)
return obj
def _get_all_values(self, instance, owner):
with self.lock:
# Grab a reference to all the objects at the time of the call by
# using a regular set
tls, values = self._get_tls(instance=instance)
return set(values)
def __set__(self, instance, value):
tls, values = self._get_tls(instance)
tls.value = value
with self.lock:
values.add(value)
def __delete__(self, instance):
tls, values = self._get_tls(instance)
with self.lock:
values.discard(tls.value)
del tls.value
def _get_tls(self, instance):
dct = instance.__dict__
name = self.name
try:
# Using instance.__dict__[self.name] is safe as
# getattr(instance, name) will return the property instead, as
# the property is a descriptor
tls = dct[name]
except KeyError:
with self.lock:
# Double check after taking the lock to avoid a race
if name not in dct:
tls = (threading.local(), WeakSet())
dct[name] = tls
return tls
@property
def basic_property(self):
"""
Return a basic property that can be used to access the TLS value
without having to call it first.
The drawback is that it's not possible to do anything other than
getting/setting/deleting.
"""
def getter(instance, owner=None):
prop = self.__get__(instance, owner)
return prop()
return property(getter, self.__set__, self.__delete__)
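The intended use of the descriptor can be illustrated with a condensed, stand-alone version of it (simplified sketch: no locking, no ``WeakSet`` bookkeeping, no ``get_all_values``; the names below are illustrative, not part of devlib's API):

```python
import threading

class tls_property_sketch:
    """Condensed version of the descriptor above: one memoized value per thread."""
    def __init__(self, factory):
        self.factory = factory
        self.name = factory.__name__

    def __get__(self, instance, owner=None):
        # One threading.local per (instance, property) pair
        tls = instance.__dict__.setdefault('__tls_' + self.name, threading.local())
        def get():
            try:
                return tls.value
            except AttributeError:
                tls.value = self.factory(instance)
                return tls.value
        return get

class Connection:
    @tls_property_sketch
    def channel(self):
        return object()  # e.g. one SSH channel per thread

conn = Connection()
assert conn.channel() is conn.channel()  # memoized within the calling thread

other = []
t = threading.Thread(target=lambda: other.append(conn.channel()))
t.start(); t.join()
assert other[0] is not conn.channel()    # each thread gets its own value
```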
class _BoundTLSProperty:
"""
Simple proxy object that can either be called to get the TLS value, or
provide other information through its methods.
"""
def __init__(self, tls_property, instance, owner):
self.tls_property = tls_property
self.instance = instance
self.owner = owner
def __call__(self):
return self.tls_property._get_value(
instance=self.instance,
owner=self.owner,
)
def get_all_values(self):
"""
Returns all the thread-local values currently in use in the process for
that property for that instance.
"""
return self.tls_property._get_all_values(
instance=self.instance,
owner=self.owner,
)
class InitCheckpointMeta(type):
"""
Metaclass providing an ``initialized`` boolean attribute on instances.
``initialized`` is set to ``True`` once the ``__init__`` constructor has
returned. It will deal cleanly with nested calls to ``super().__init__``.
"""
def __new__(metacls, name, bases, dct, **kwargs):
cls = super().__new__(metacls, name, bases, dct, **kwargs)
init_f = cls.__init__
@wraps(init_f)
def init_wrapper(self, *args, **kwargs):
self.initialized = False
# Track the nesting of super().__init__() calls to set initialized=True
# only when the outermost level has finished
try:
stack = self._init_stack
except AttributeError:
stack = []
self._init_stack = stack
stack.append(init_f)
try:
x = init_f(self, *args, **kwargs)
finally:
stack.pop()
if not stack:
self.initialized = True
del self._init_stack
return x
cls.__init__ = init_wrapper
return cls
class InitCheckpoint(metaclass=InitCheckpointMeta):
"""
Inherit from this class to set the :class:`InitCheckpointMeta` metaclass.
"""
pass
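The checkpoint behaviour can be demonstrated with a small example (the metaclass is reproduced below in condensed form so the snippet is self-contained; semantics match the definition above):

```python
from functools import wraps

class InitCheckpointMeta(type):
    # Condensed from the definition above
    def __new__(metacls, name, bases, dct, **kwargs):
        cls = super().__new__(metacls, name, bases, dct, **kwargs)
        init_f = cls.__init__
        @wraps(init_f)
        def init_wrapper(self, *args, **kwargs):
            self.initialized = False
            # Shared stack tracks nested super().__init__() calls
            stack = self.__dict__.setdefault('_init_stack', [])
            stack.append(init_f)
            try:
                res = init_f(self, *args, **kwargs)
            finally:
                stack.pop()
            if not stack:
                self.initialized = True
                del self._init_stack
            return res
        cls.__init__ = init_wrapper
        return cls

class Base(metaclass=InitCheckpointMeta):
    def __init__(self):
        assert not self.initialized  # nested __init__ does not flip the flag

class Derived(Base):
    def __init__(self):
        super().__init__()
        assert not self.initialized  # still inside the outermost __init__

d = Derived()
assert d.initialized
```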


@@ -14,6 +14,7 @@
#
import glob
import os
import stat
import logging
@@ -26,9 +27,19 @@ import socket
import sys
import time
import atexit
import contextlib
import weakref
import select
import copy
from pipes import quote
from future.utils import raise_from
from paramiko.client import SSHClient, AutoAddPolicy, RejectPolicy
import paramiko.ssh_exception
from scp import SCPClient
# By default paramiko is very verbose, including at the INFO level
logging.getLogger("paramiko").setLevel(logging.WARNING)
# pylint: disable=import-error,wrong-import-position,ungrouped-imports,wrong-import-order
import pexpect
from distutils.version import StrictVersion as V
@@ -42,8 +53,13 @@ from pexpect import EOF, TIMEOUT, spawn
from devlib.exception import (HostError, TargetStableError, TargetNotRespondingError,
TimeoutError, TargetTransientError)
from devlib.utils.misc import (which, strip_bash_colors, check_output,
sanitize_cmd_template, memoized, redirect_streams)
from devlib.utils.types import boolean
from devlib.connection import (ConnectionBase, ParamikoBackgroundCommand, PopenBackgroundCommand,
SSHTransferManager)
DEFAULT_SSH_SUDO_COMMAND = "sudo -k -p ' ' -S -- sh -c {}"
ssh = None
@@ -54,30 +70,113 @@ sshpass = None
logger = logging.getLogger('ssh')
gem5_logger = logging.getLogger('gem5-connection')
def ssh_get_shell(host,
@contextlib.contextmanager
def _handle_paramiko_exceptions(command=None):
try:
yield
except paramiko.ssh_exception.NoValidConnectionsError as e:
raise TargetNotRespondingError('Connection lost: {}'.format(e))
except paramiko.ssh_exception.AuthenticationException as e:
raise TargetStableError('Could not authenticate: {}'.format(e))
except paramiko.ssh_exception.BadAuthenticationType as e:
raise TargetStableError('Bad authentication type: {}'.format(e))
except paramiko.ssh_exception.BadHostKeyException as e:
raise TargetStableError('Bad host key: {}'.format(e))
except paramiko.ssh_exception.ChannelException as e:
raise TargetStableError('Could not open an SSH channel: {}'.format(e))
except paramiko.ssh_exception.PasswordRequiredException as e:
raise TargetStableError('Please unlock the private key file: {}'.format(e))
except paramiko.ssh_exception.ProxyCommandFailure as e:
raise TargetStableError('Proxy command failure: {}'.format(e))
except paramiko.ssh_exception.SSHException as e:
raise TargetTransientError('SSH logic error: {}'.format(e))
except socket.timeout:
raise TimeoutError(command, output=None)
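The translate-at-the-boundary pattern used here can be shown generically, without a live paramiko connection (the exception class below is a stand-in for devlib's, for illustration only):

```python
import contextlib

class TargetTransientError(Exception):
    """Stand-in for devlib's transient target error."""

@contextlib.contextmanager
def handle_backend_errors():
    # Map low-level backend errors onto the caller-facing exception
    # hierarchy, mirroring _handle_paramiko_exceptions above
    try:
        yield
    except ConnectionError as e:
        raise TargetTransientError('connection error: {}'.format(e)) from e

raised = False
try:
    with handle_backend_errors():
        raise ConnectionResetError('peer went away')
except TargetTransientError:
    raised = True
assert raised  # the backend error surfaced as the translated type
```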
def _read_paramiko_streams(stdout, stderr, select_timeout, callback, init, chunk_size=int(1e42)):
try:
return _read_paramiko_streams_internal(stdout, stderr, select_timeout, callback, init, chunk_size)
finally:
# Close the channel to make sure the remote process will receive
# SIGPIPE when writing on its streams. That could happen if the
# user closed the out_streams but the remote process has not
# finished yet.
assert stdout.channel is stderr.channel
stdout.channel.close()
def _read_paramiko_streams_internal(stdout, stderr, select_timeout, callback, init, chunk_size):
channel = stdout.channel
assert stdout.channel is stderr.channel
def read_channel(callback_state):
read_list, _, _ = select.select([channel], [], [], select_timeout)
for desc in read_list:
for ready, recv, name in (
(desc.recv_ready(), desc.recv, 'stdout'),
(desc.recv_stderr_ready(), desc.recv_stderr, 'stderr')
):
if ready:
chunk = recv(chunk_size)
if chunk:
try:
callback_state = callback(callback_state, name, chunk)
except Exception as e:
return (e, callback_state)
return (None, callback_state)
def read_all_channel(callback=None, callback_state=None):
for stream, name in ((stdout, 'stdout'), (stderr, 'stderr')):
try:
chunk = stream.read()
except Exception:
continue
if callback is not None and chunk:
callback_state = callback(callback_state, name, chunk)
return callback_state
callback_excep = None
try:
callback_state = init
while not channel.exit_status_ready():
callback_excep, callback_state = read_channel(callback_state)
if callback_excep is not None:
raise callback_excep
# Make sure to always empty the streams to unblock the remote process on
# the way to exit, in case something bad happened. For example, the
# callback could raise an exception to signal it does not want to do
# anything anymore, or reading from only one of the streams might have
# raised an exception, leaving the other one non-empty.
except Exception as e:
if callback_excep is None:
# Only call the callback if there was no exception originally, as
# we don't want to reenter it if it raised an exception
read_all_channel(callback, callback_state)
raise e
else:
# Finish emptying the buffers
callback_state = read_all_channel(callback, callback_state)
exit_code = channel.recv_exit_status()
return (callback_state, exit_code)
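The core of this draining loop, multiplexing two streams with ``select`` until both hit EOF, can be sketched with plain POSIX pipes (``select`` on pipe file descriptors requires a POSIX system):

```python
import os
import select

# Two pipes stand in for the channel's stdout and stderr
out_r, out_w = os.pipe()
err_r, err_w = os.pipe()
os.write(out_w, b'standard output')
os.write(err_w, b'standard error')
os.close(out_w)
os.close(err_w)

chunks = {out_r: b'', err_r: b''}
open_fds = {out_r, err_r}
select_timeout = 1
while open_fds:
    ready, _, _ = select.select(list(open_fds), [], [], select_timeout)
    for fd in ready:
        data = os.read(fd, 4096)
        if data:
            chunks[fd] += data     # hand the chunk to the callback, as above
        else:                      # EOF: the writing side closed this stream
            os.close(fd)
            open_fds.discard(fd)

assert chunks[out_r] == b'standard output'
assert chunks[err_r] == b'standard error'
```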
def telnet_get_shell(host,
username,
password=None,
port=None,
timeout=10,
original_prompt=None):
_check_env()
start_time = time.time()
while True:
conn = TelnetPxssh(original_prompt=original_prompt)
try:
conn.login(host, username, password, port=port, login_timeout=timeout)
break
except EOF:
@@ -157,10 +256,11 @@ def check_keyfile(keyfile):
return keyfile
class SshConnectionBase(ConnectionBase):
"""
Base class for SSH connections.
"""
default_password_prompt = '[sudo] password'
max_cancel_attempts = 5
default_timeout = 10
@property
@@ -170,60 +270,599 @@ class SshConnection(object):
@property
def connected_as_root(self):
if self._connected_as_root is None:
try:
result = self.execute('id', as_root=False)
except TargetStableError:
is_root = False
else:
is_root = 'uid=0(' in result
self._connected_as_root = is_root
return self._connected_as_root
@connected_as_root.setter
def connected_as_root(self, state):
self._connected_as_root = state
# pylint: disable=unused-argument,super-init-not-called
def __init__(self,
host,
username,
password=None,
keyfile=None,
port=None,
platform=None,
sudo_cmd=DEFAULT_SSH_SUDO_COMMAND,
strict_host_check=True,
):
super().__init__()
self._connected_as_root = None
self.host = host
self.username = username
self.password = password
self.keyfile = check_keyfile(keyfile) if keyfile else keyfile
self.port = port
self.sudo_cmd = sanitize_cmd_template(sudo_cmd)
self.platform = platform
self.strict_host_check = strict_host_check
self.options = {}
logger.debug('Logging in {}@{}'.format(username, host))
def fmt_remote_path(self, path):
return '{}@{}:{}'.format(self.username, self.host, path)
def push(self, sources, dest, timeout=30):
# Quote the destination as SCP would apply globbing too
dest = self.fmt_remote_path(quote(dest))
paths = sources + [dest]
return self._scp(paths, timeout)
def pull(self, sources, dest, timeout=30):
# First level of escaping for the remote shell
sources = ' '.join(map(quote, sources))
# All the sources are merged into one scp parameter
sources = self.fmt_remote_path(sources)
paths = [sources, dest]
self._scp(paths, timeout)
def _scp(self, paths, timeout=30):
# NOTE: the version of scp in Ubuntu 12.04 occasionally (and bizarrely)
# fails to connect to a device if port is explicitly specified using -P
# option, even if it is the default port, 22. To minimize this problem,
# only specify -P for scp if the port is *not* the default.
port_string = '-P {}'.format(quote(str(self.port))) if (self.port and self.port != 22) else ''
keyfile_string = '-i {}'.format(quote(self.keyfile)) if self.keyfile else ''
options = " ".join(["-o {}={}".format(key, val)
for key, val in self.options.items()])
paths = ' '.join(map(quote, paths))
command = '{} {} -r {} {} {}'.format(scp,
options,
keyfile_string,
port_string,
paths)
command_redacted = command
logger.debug(command)
if self.password:
command, command_redacted = _give_password(self.password, command)
try:
check_output(command, timeout=timeout, shell=True)
except subprocess.CalledProcessError as e:
raise_from(HostError("Failed to copy file with '{}'. Output:\n{}".format(
command_redacted, e.output)), None)
except TimeoutError as e:
raise TimeoutError(command_redacted, e.output)
def _get_default_options(self):
if self.strict_host_check:
options = {
'StrictHostKeyChecking': 'yes',
}
else:
options = {
'StrictHostKeyChecking': 'no',
'UserKnownHostsFile': '/dev/null',
}
return options
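The resulting dictionary is later rendered into ``-o Key=Value`` flags for the ``scp`` command line, using the same join as ``_scp()`` above; roughly:

```python
def render_ssh_options(options):
    # Same "-o {}={}" formatting that _scp() applies to self.options
    return ' '.join('-o {}={}'.format(key, val) for key, val in options.items())

assert render_ssh_options({'StrictHostKeyChecking': 'yes'}) == '-o StrictHostKeyChecking=yes'
print(render_ssh_options({
    'StrictHostKeyChecking': 'no',
    'UserKnownHostsFile': '/dev/null',
}))
# -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
```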
class SshConnection(SshConnectionBase):
# pylint: disable=unused-argument,super-init-not-called
def __init__(self,
host,
username,
password=None,
keyfile=None,
port=22,
timeout=None,
platform=None,
sudo_cmd=DEFAULT_SSH_SUDO_COMMAND,
strict_host_check=True,
use_scp=False,
poll_transfers=False,
start_transfer_poll_delay=30,
total_transfer_timeout=3600,
transfer_poll_period=30,
):
super().__init__(
host=host,
username=username,
password=password,
keyfile=keyfile,
port=port,
platform=platform,
sudo_cmd=sudo_cmd,
strict_host_check=strict_host_check,
)
self.timeout = timeout if timeout is not None else self.default_timeout
# Allow using scp for file transfer if sftp is not supported
self.use_scp = use_scp
self.poll_transfers = poll_transfers
if poll_transfers:
transfer_opts = {'start_transfer_poll_delay': start_transfer_poll_delay,
'total_timeout': total_transfer_timeout,
'poll_period': transfer_poll_period,
}
if self.use_scp:
logger.debug('Using SCP for file transfer')
_check_env()
self.options = self._get_default_options()
else:
logger.debug('Using SFTP for file transfer')
self.transfer_mgr = SSHTransferManager(self, **transfer_opts) if poll_transfers else None
self.client = self._make_client()
atexit.register(self.close)
# Use a marker in the output so that we will be able to differentiate
# target connection issues with "password needed".
# Also, sudo might not be installed at all on the target (but
# everything will work as long as we login as root). If sudo is still
# needed, it will explode when someone tries to use it. After all, the
# user might not be interested in being root at all.
self._sudo_needs_password = (
'NEED_PASSWORD' in
self.execute(
# sudo -n is broken on some versions on MacOSX, revisit that if
# someone ever cares
'sudo -n true || echo NEED_PASSWORD',
as_root=False,
check_exit_code=False,
)
)
def _make_client(self):
if self.strict_host_check:
policy = RejectPolicy
else:
policy = AutoAddPolicy
# Only try using SSH keys if we're not using a password
check_ssh_keys = self.password is None
with _handle_paramiko_exceptions():
client = SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(policy)
client.connect(
hostname=self.host,
port=self.port,
username=self.username,
password=self.password,
key_filename=self.keyfile,
timeout=self.timeout,
look_for_keys=check_ssh_keys,
allow_agent=check_ssh_keys
)
return client
def _make_channel(self):
with _handle_paramiko_exceptions():
transport = self.client.get_transport()
channel = transport.open_session()
return channel
def _get_progress_cb(self):
return self.transfer_mgr.progress_cb if self.transfer_mgr is not None else None
def _get_sftp(self, timeout):
sftp = self.client.open_sftp()
sftp.get_channel().settimeout(timeout)
return sftp
def _get_scp(self, timeout):
return SCPClient(self.client.get_transport(), socket_timeout=timeout, progress=self._get_progress_cb())
def _push_file(self, sftp, src, dst):
try:
sftp.put(src, dst, callback=self._get_progress_cb())
# Maybe the dst was a folder
except OSError as orig_excep:
# If dst was an existing folder, we add the src basename to create
# a new destination for the file as cp would do
new_dst = os.path.join(
dst,
os.path.basename(src),
)
logger.debug('Trying: {} -> {}'.format(src, new_dst))
try:
sftp.put(src, new_dst, callback=self._get_progress_cb())
# This still failed, which either means:
# * There are some missing folders in the dirnames
# * Something else SFTP-related is wrong
except OSError as e:
# Raise the original exception, as it is closer to what the
# user asked in the first place
raise orig_excep
@classmethod
def _path_exists(cls, sftp, path):
try:
sftp.lstat(path)
except FileNotFoundError:
return False
else:
return True
def _push_folder(self, sftp, src, dst):
# Behave like the "mv" command or adb push: a new folder is created
# inside the destination folder, rather than merging the trees, but
only if the destination already exists. Otherwise, it is used as-is as
# the new hierarchy name.
if self._path_exists(sftp, dst):
dst = os.path.join(
dst,
os.path.basename(os.path.normpath(src)),
)
return self._push_folder_internal(sftp, src, dst)
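The destination-naming rule — nest a new folder inside the destination only when it already exists, mimicking ``mv``/``adb push`` — boils down to a small path computation. A sketch, with the existence check passed in explicitly (the helper name is illustrative, not devlib API):

```python
import os

def resolve_push_dest(src, dst, dst_exists):
    # Mirrors _push_folder above: nest under dst only if dst already
    # exists; otherwise dst itself names the new hierarchy
    if dst_exists:
        return os.path.join(dst, os.path.basename(os.path.normpath(src)))
    return dst

assert resolve_push_dest('/tmp/data/', '/home/user/out', dst_exists=True) == '/home/user/out/data'
assert resolve_push_dest('/tmp/data', '/home/user/new', dst_exists=False) == '/home/user/new'
```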
def _push_folder_internal(self, sftp, src, dst):
# This might fail if the folder already exists
with contextlib.suppress(IOError):
sftp.mkdir(dst)
for entry in os.scandir(src):
name = entry.name
src_path = os.path.join(src, name)
dst_path = os.path.join(dst, name)
if entry.is_dir():
push = self._push_folder_internal
else:
push = self._push_file
push(sftp, src_path, dst_path)
def _push_path(self, sftp, src, dst):
logger.debug('Pushing via sftp: {} -> {}'.format(src, dst))
push = self._push_folder if os.path.isdir(src) else self._push_file
push(sftp, src, dst)
def _pull_file(self, sftp, src, dst):
# Pulling a file into a folder will use the source basename
if os.path.isdir(dst):
dst = os.path.join(
dst,
os.path.basename(src),
)
with contextlib.suppress(FileNotFoundError):
os.remove(dst)
sftp.get(src, dst, callback=self._get_progress_cb())
def _pull_folder(self, sftp, src, dst):
with contextlib.suppress(FileNotFoundError):
try:
shutil.rmtree(dst)
except OSError:
os.remove(dst)
os.makedirs(dst)
for fileattr in sftp.listdir_attr(src):
filename = fileattr.filename
src_path = os.path.join(src, filename)
dst_path = os.path.join(dst, filename)
if stat.S_ISDIR(fileattr.st_mode):
pull = self._pull_folder
else:
pull = self._pull_file
pull(sftp, src_path, dst_path)
def _pull_path(self, sftp, src, dst):
logger.debug('Pulling via sftp: {} -> {}'.format(src, dst))
try:
self._pull_file(sftp, src, dst)
except IOError:
# Maybe that was a directory, so retry as such
self._pull_folder(sftp, src, dst)
def push(self, sources, dest, timeout=None):
self._push_pull('push', sources, dest, timeout)
def pull(self, sources, dest, timeout=None):
self._push_pull('pull', sources, dest, timeout)
def _push_pull(self, action, sources, dest, timeout):
if action not in ['push', 'pull']:
raise ValueError("Action must be either `push` or `pull`")
# If timeout is set, or told not to poll
if timeout is not None or not self.poll_transfers:
if self.use_scp:
scp = self._get_scp(timeout)
scp_cmd = getattr(scp, 'put' if action == 'push' else 'get')
scp_msg = '{}ing via scp: {} -> {}'.format(action, sources, dest)
logger.debug(scp_msg.capitalize())
scp_cmd(sources, dest, recursive=True)
else:
sftp = self._get_sftp(timeout)
sftp_cmd = getattr(self, '_' + action + '_path')
with _handle_paramiko_exceptions():
for source in sources:
sftp_cmd(sftp, source, dest)
# No timeout, and polling is set
elif self.use_scp:
scp = self._get_scp(timeout)
scp_cmd = getattr(scp, 'put' if action == 'push' else 'get')
with _handle_paramiko_exceptions(), self.transfer_mgr.manage(sources, dest, action, scp):
scp_msg = '{}ing via scp: {} -> {}'.format(action, sources, dest)
logger.debug(scp_msg.capitalize())
scp_cmd(sources, dest, recursive=True)
else:
sftp = self._get_sftp(timeout)
sftp_cmd = getattr(self, '_' + action + '_path')
with _handle_paramiko_exceptions(), self.transfer_mgr.manage(sources, dest, action, sftp):
for source in sources:
sftp_cmd(sftp, source, dest)
def execute(self, command, timeout=None, check_exit_code=True,
as_root=False, strip_colors=True, will_succeed=False): #pylint: disable=unused-argument
if command == '':
return ''
try:
with _handle_paramiko_exceptions(command):
exit_code, output = self._execute(command, timeout, as_root, strip_colors)
except TargetStableError as e:
if will_succeed:
raise TargetTransientError(e)
else:
raise
else:
if check_exit_code and exit_code:
message = 'Got exit code {}\nfrom: {}\nOUTPUT: {}'
raise TargetStableError(message.format(exit_code, command, output))
return output
def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False):
with _handle_paramiko_exceptions(command):
bg_cmd = self._background(command, stdout, stderr, as_root)
self._current_bg_cmds.add(bg_cmd)
return bg_cmd
def _background(self, command, stdout, stderr, as_root):
stdout, stderr, command = redirect_streams(stdout, stderr, command)
command = "printf '%s\n' $$; exec sh -c {}".format(quote(command))
channel = self._make_channel()
def executor(cmd, timeout):
channel.exec_command(cmd)
# Reads are not buffered, so we get the data as soon as
# it arrives
return (
channel.makefile_stdin(),
channel.makefile(),
channel.makefile_stderr(),
)
stdin, stdout_in, stderr_in = self._execute_command(
command,
as_root=as_root,
log=False,
timeout=None,
executor=executor,
)
pid = int(stdout_in.readline())
def create_out_stream(stream_in, stream_out):
"""
Create a pair of file-like objects. The first one is used to read
data and the second one to write.
"""
if stream_out == subprocess.DEVNULL:
r, w = None, None
# When asked for a pipe, we just give the file-like object as the
# reading end and no writing end, since paramiko already writes to
# it
elif stream_out == subprocess.PIPE:
r, w = os.pipe()
r = os.fdopen(r, 'rb')
w = os.fdopen(w, 'wb')
# Turn a file descriptor into a file-like object
elif isinstance(stream_out, int) and stream_out >= 0:
r = os.fdopen(stream_out, 'rb')
w = os.fdopen(stream_out, 'wb')
# file-like object
else:
r = stream_out
w = stream_out
return (r, w)
out_streams = {
name: create_out_stream(stream_in, stream_out)
for stream_in, stream_out, name in (
(stdout_in, stdout, 'stdout'),
(stderr_in, stderr, 'stderr'),
)
}
def redirect_thread_f(stdout_in, stderr_in, out_streams, select_timeout):
def callback(out_streams, name, chunk):
try:
r, w = out_streams[name]
except KeyError:
return out_streams
try:
w.write(chunk)
# Write failed
except ValueError:
# Since that stream is now closed, stop trying to write to it
del out_streams[name]
# If that was the last open stream, we raise an
# exception so the thread can terminate.
if not out_streams:
raise
return out_streams
try:
_read_paramiko_streams(stdout_in, stderr_in, select_timeout, callback, copy.copy(out_streams))
# The streams closed while we were writing to it, the job is done here
except ValueError:
pass
# Make sure the writing ends are closed properly since we are not
# going to write anything anymore
for r, w in out_streams.values():
if r is not w and w is not None:
w.close()
# If there is anything we need to redirect to, spawn a thread taking
# care of that
select_timeout = 1
thread_out_streams = {
name: (r, w)
for name, (r, w) in out_streams.items()
if w is not None
}
redirect_thread = threading.Thread(
target=redirect_thread_f,
args=(stdout_in, stderr_in, thread_out_streams, select_timeout),
# The thread will die when the main thread dies
daemon=True,
)
redirect_thread.start()
return ParamikoBackgroundCommand(
conn=self,
as_root=as_root,
chan=channel,
pid=pid,
stdin=stdin,
# We give the reading end to the consumer of the data
stdout=out_streams['stdout'][0],
stderr=out_streams['stderr'][0],
redirect_thread=redirect_thread,
)
def _close(self):
logger.debug('Logging out {}@{}'.format(self.username, self.host))
with _handle_paramiko_exceptions():
bg_cmds = set(self._current_bg_cmds)
for bg_cmd in bg_cmds:
bg_cmd.close()
self.client.close()
def _execute_command(self, command, as_root, log, timeout, executor):
# As we're already root, there is no need to use sudo.
log_debug = logger.debug if log else lambda msg: None
use_sudo = as_root and not self.connected_as_root
if use_sudo:
if self._sudo_needs_password and not self.password:
raise TargetStableError('Attempt to use sudo but no password was specified')
command = self.sudo_cmd.format(quote(command))
log_debug(command)
streams = executor(command, timeout=timeout)
if self._sudo_needs_password:
stdin = streams[0]
stdin.write(self.password + '\n')
stdin.flush()
else:
log_debug(command)
streams = executor(command, timeout=timeout)
return streams
def _execute(self, command, timeout=None, as_root=False, strip_colors=True, log=True):
# Merge stderr into stdout since we are going without a TTY
command = '({}) 2>&1'.format(command)
stdin, stdout, stderr = self._execute_command(
command,
as_root=as_root,
log=log,
timeout=timeout,
executor=self.client.exec_command,
)
stdin.close()
# Empty the stdout buffer of the command, allowing it to carry on to
# completion
def callback(output_chunks, name, chunk):
output_chunks.append(chunk)
return output_chunks
select_timeout = 1
output_chunks, exit_code = _read_paramiko_streams(stdout, stderr, select_timeout, callback, [])
# Join in one go to avoid O(N^2) concatenation
output = b''.join(output_chunks)
if sys.version_info[0] == 3:
output = output.decode(sys.stdout.encoding or 'utf-8', 'replace')
if strip_colors:
output = strip_bash_colors(output)
return (exit_code, output)
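``strip_bash_colors`` (imported from ``devlib.utils.misc``) removes ANSI escape sequences from the decoded output; its effect is roughly the following (an approximation for illustration, not devlib's exact implementation):

```python
import re

def strip_bash_colors_sketch(text):
    # Drop ANSI CSI sequences such as "\x1b[1;31m" (colors, attributes)
    return re.sub(r'\x1b\[[0-9;]*[a-zA-Z]', '', text)

colored = '\x1b[1;31merror:\x1b[0m something failed'
assert strip_bash_colors_sketch(colored) == 'error: something failed'
```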
class TelnetConnection(SshConnectionBase):
default_password_prompt = '[sudo] password'
max_cancel_attempts = 5
# pylint: disable=unused-argument,super-init-not-called
def __init__(self,
host,
username,
password=None,
port=None,
timeout=None,
password_prompt=None,
original_prompt=None,
sudo_cmd="sudo -- sh -c {}",
strict_host_check=True,
platform=None):
super().__init__(
host=host,
username=username,
password=password,
keyfile=None,
port=port,
platform=platform,
sudo_cmd=sudo_cmd,
strict_host_check=strict_host_check,
)
self.options = self._get_default_options()
self.lock = threading.Lock()
self.password_prompt = password_prompt if password_prompt is not None else self.default_password_prompt
logger.debug('Logging in {}@{}'.format(username, host))
timeout = timeout if timeout is not None else self.default_timeout
self.conn = telnet_get_shell(host, username, password, port, timeout, original_prompt)
atexit.register(self.close)
def execute(self, command, timeout=None, check_exit_code=True,
as_root=False, strip_colors=True, will_succeed=False): #pylint: disable=unused-argument
@@ -282,7 +921,7 @@ class SshConnection(object):
except EOF:
raise TargetNotRespondingError('Connection lost.')
def _close(self):
logger.debug('Logging out {}@{}'.format(self.username, self.host))
try:
self.conn.logout()
@@ -344,33 +983,6 @@ class SshConnection(object):
pass
return False
def _scp(self, source, dest, timeout=30):
# NOTE: the version of scp in Ubuntu 12.04 occasionally (and bizarrely)
# fails to connect to a device if port is explicitly specified using -P
# option, even if it is the default port, 22. To minimize this problem,
# only specify -P for scp if the port is *not* the default.
port_string = '-P {}'.format(quote(str(self.port))) if (self.port and self.port != 22) else ''
keyfile_string = '-i {}'.format(quote(self.keyfile)) if self.keyfile else ''
options = " ".join(["-o {}={}".format(key,val)
for key,val in self.options.items()])
command = '{} {} -r {} {} {} {}'.format(scp,
options,
keyfile_string,
port_string,
quote(source),
quote(dest))
command_redacted = command
logger.debug(command)
if self.password:
command, command_redacted = _give_password(self.password, command)
try:
check_output(command, timeout=timeout, shell=True)
except subprocess.CalledProcessError as e:
raise_from(HostError("Failed to copy file with '{}'. Output:\n{}".format(
command_redacted, e.output)), None)
except TimeoutError as e:
raise TimeoutError(command_redacted, e.output)
def _sendline(self, command):
# Workaround for https://github.com/pexpect/pexpect/issues/552
if len(command) == self._get_window_size()[1] - self._get_prompt_length():
@@ -387,29 +999,6 @@ class SshConnection(object):
def _get_window_size(self):
return self.conn.getwinsize()
class TelnetConnection(SshConnection):
# pylint: disable=super-init-not-called
def __init__(self,
host,
username,
password=None,
port=None,
timeout=None,
password_prompt=None,
original_prompt=None,
platform=None):
self.host = host
self.username = username
self.password = password
self.port = port
self.keyfile = None
self.lock = threading.Lock()
self.password_prompt = password_prompt if password_prompt is not None else self.default_password_prompt
logger.debug('Logging in {}@{}'.format(username, host))
timeout = timeout if timeout is not None else self.default_timeout
self.conn = ssh_get_shell(host, username, password, None, port, timeout, True, original_prompt)
class Gem5Connection(TelnetConnection):
@@ -491,7 +1080,7 @@ class Gem5Connection(TelnetConnection):
.format(self.gem5_input_dir, indir))
self.gem5_input_dir = indir
def push(self, sources, dest, timeout=None):
"""
Push a file to the gem5 device using VirtIO
@@ -503,6 +1092,7 @@ class Gem5Connection(TelnetConnection):
# First check if the connection is set up to interact with gem5
self._check_ready()
for source in sources:
filename = os.path.basename(source)
logger.debug("Pushing {} to device.".format(source))
logger.debug("gem5interactdir: {}".format(self.gem5_interact_dir))
@@ -524,7 +1114,7 @@ class Gem5Connection(TelnetConnection):
self._gem5_shell("ls -al {}".format(quote(self.gem5_input_dir)))
logger.debug("Push complete.")
def pull(self, sources, dest, timeout=0): #pylint: disable=unused-argument
"""
Pull a file from the gem5 device using m5 writefile
@@ -536,6 +1126,7 @@ class Gem5Connection(TelnetConnection):
# First check if the connection is set up to interact with gem5
self._check_ready()
for source in sources:
result = self._gem5_shell("ls {}".format(source))
files = strip_bash_colors(result).split()
@@ -616,7 +1207,7 @@ class Gem5Connection(TelnetConnection):
'get this file'.format(redirection_file)) 'get this file'.format(redirection_file))
return output return output
def close(self): def _close(self):
""" """
Close and disconnect from the gem5 simulation. Additionally, we remove Close and disconnect from the gem5 simulation. Additionally, we remove
the temporary directory used to pass files into the simulation. the temporary directory used to pass files into the simulation.
@@ -805,7 +1396,7 @@ class Gem5Connection(TelnetConnection):
both of these. both of these.
""" """
gem5_logger.debug("Sending Sync") gem5_logger.debug("Sending Sync")
self.conn.send("echo \*\*sync\*\*\n") self.conn.send("echo \\*\\*sync\\*\\*\n")
self.conn.expect(r"\*\*sync\*\*", timeout=self.default_timeout) self.conn.expect(r"\*\*sync\*\*", timeout=self.default_timeout)
self.conn.expect([self.conn.UNIQUE_PROMPT, self.conn.PROMPT], timeout=self.default_timeout) self.conn.expect([self.conn.UNIQUE_PROMPT, self.conn.PROMPT], timeout=self.default_timeout)
self.conn.expect([self.conn.UNIQUE_PROMPT, self.conn.PROMPT], timeout=self.default_timeout) self.conn.expect([self.conn.UNIQUE_PROMPT, self.conn.PROMPT], timeout=self.default_timeout)
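The last hunk above changes `\*` to `\\*` in the string handed to `send()`. In a regular (non-raw) Python string, `\*` is an unrecognised escape sequence: it happens to yield a literal backslash plus asterisk today, but only with a deprecation warning, while `\\*` spells that character pair out explicitly. A quick check that the doubled form produces the intended bytes on the wire:

```python
# The doubled form spells out one literal backslash followed by '*'.
explicit = "echo \\*\\*sync\\*\\*\n"

# chr(92) is a backslash; build the expected wire string character by
# character to show what the remote shell actually receives.
bs = chr(92)
expected = "echo " + bs + "*" + bs + "*sync" + bs + "*" + bs + "*" + "\n"
assert explicit == expected
assert explicit.count(bs) == 4  # four literal backslashes
```

Since the deprecated `\*` spelling happened to produce the same two characters, the old code worked; the patch is a forward-compatibility fix rather than a behaviour change.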

View File

@@ -21,7 +21,7 @@ from subprocess import Popen, PIPE

 VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev'])

-version = VersionTuple(1, 2, 0, '')
+version = VersionTuple(1, 3, 1, '')

 def get_devlib_version():
@@ -33,8 +33,11 @@ def get_devlib_version():

 def get_commit():
+    try:
         p = Popen(['git', 'rev-parse', 'HEAD'], cwd=os.path.dirname(__file__),
                   stdout=PIPE, stderr=PIPE)
+    except FileNotFoundError:
+        return None
     std, _ = p.communicate()
     p.wait()
     if p.returncode:
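The two changes above (the version bump to 1.3.1 and the guarded `Popen` call) can be sketched together as a self-contained snippet; `format_version` is a hypothetical helper added here for illustration (devlib's real `get_devlib_version()` also folds in the commit hash):

```python
import os
from collections import namedtuple
from subprocess import Popen, PIPE

VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev'])
version = VersionTuple(1, 3, 1, '')

def format_version(v=version):
    # Render "major.minor.revision", appending the dev tag when present.
    s = '{}.{}.{}'.format(v.major, v.minor, v.revision)
    return s + '.' + v.dev if v.dev else s

def get_commit(repo_dir='.'):
    # As in the patch: if the `git` binary (or the directory) is missing,
    # return None instead of letting FileNotFoundError escape.
    try:
        p = Popen(['git', 'rev-parse', 'HEAD'], cwd=repo_dir,
                  stdout=PIPE, stderr=PIPE)
    except FileNotFoundError:
        return None
    std, _ = p.communicate()
    p.wait()
    if p.returncode:
        return None
    return std.decode('utf-8').strip()
```

The `try`/`except` matters for installs where devlib is present but `git` is not (e.g. a plain `pip install` on a minimal system), where the old code would crash on import-time version queries.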

View File

@@ -3,16 +3,17 @@ Connection

 A :class:`Connection` abstracts an actual physical connection to a device. The
 first connection is created when :func:`Target.connect` method is called. If a
-:class:`Target` is used in a multi-threaded environment, it will maintain a
-connection for each thread in which it is invoked. This allows the same target
-object to be used in parallel in multiple threads.
+:class:`~devlib.target.Target` is used in a multi-threaded environment, it will
+maintain a connection for each thread in which it is invoked. This allows
+the same target object to be used in parallel in multiple threads.

 :class:`Connection`\ s will be automatically created and managed by
-:class:`Target`\ s, so there is usually no reason to create one manually.
-Instead, configuration for a :class:`Connection` is passed as
-`connection_settings` parameter when creating a :class:`Target`. The connection
-to be used target is also specified on instantiation by `conn_cls` parameter,
-though all concrete :class:`Target` implementations will set an appropriate
+:class:`~devlib.target.Target`\ s, so there is usually no reason to create one
+manually. Instead, configuration for a :class:`Connection` is passed as
+`connection_settings` parameter when creating a
+:class:`~devlib.target.Target`. The connection to be used target is also
+specified on instantiation by `conn_cls` parameter, though all concrete
+:class:`~devlib.target.Target` implementations will set an appropriate
 default, so there is typically no need to specify this explicitly.

 :class:`Connection` classes are not a part of an inheritance hierarchy, i.e.
@@ -20,25 +21,25 @@ they do not derive from a common base. Instead, a :class:`Connection` is any
 class that implements the following methods.

-.. method:: push(self, source, dest, timeout=None)
+.. method:: push(self, sources, dest, timeout=None)

-   Transfer a file from the host machine to the connected device.
+   Transfer a list of files from the host machine to the connected device.

-   :param source: path of to the file on the host
-   :param dest: path of to the file on the connected device.
-   :param timeout: timeout (in seconds) for the transfer; if the transfer does
-       not complete within this period, an exception will be raised.
+   :param sources: list of paths on the host
+   :param dest: path to the file or folder on the connected device.
+   :param timeout: timeout (in seconds) for the transfer of each file; if the
+       transfer does not complete within this period, an exception will be
+       raised.

-.. method:: pull(self, source, dest, timeout=None)
+.. method:: pull(self, sources, dest, timeout=None)

-   Transfer a file, or files matching a glob pattern, from the connected device
-   to the host machine.
+   Transfer a list of files from the connected device to the host machine.

-   :param source: path of to the file on the connected device. If ``dest`` is a
-       directory, may be a glob pattern.
-   :param dest: path of to the file on the host
-   :param timeout: timeout (in seconds) for the transfer; if the transfer does
-       not complete within this period, an exception will be raised.
+   :param sources: list of paths on the connected device.
+   :param dest: path to the file or folder on the host
+   :param timeout: timeout (in seconds) for the transfer for each file; if the
+       transfer does not complete within this period, an exception will be
+       raised.

 .. method:: execute(self, command, timeout=None, check_exit_code=False, as_root=False, strip_colors=True, will_succeed=False)
@@ -58,7 +59,7 @@ class that implements the following methods.
    :param will_succeed: The command is assumed to always succeed, unless there is
        an issue in the environment like the loss of network connectivity. That
        will make the method always raise an instance of a subclass of
-       :class:`DevlibTransientError' when the command fails, instead of a
+       :class:`DevlibTransientError` when the command fails, instead of a
       :class:`DevlibStableError`.

 .. method:: background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False)
@@ -76,7 +77,7 @@ class that implements the following methods.

    .. note:: This **will block the connection** until the command completes.

-.. note:: The above methods are directly wrapped by :class:`Target` methods,
+.. note:: The above methods are directly wrapped by :class:`~devlib.target.Target` methods,
    however note that some of the defaults are different.

 .. method:: cancel_running_command(self)
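devlib 1.3 changed `push` and `pull` to take a list of `sources` rather than a single `source`, as the reworked signatures above show. Callers migrating from the old single-path API can adapt with a small shim; `normalize_sources` is a hypothetical helper for illustration, not part of devlib:

```python
def normalize_sources(sources):
    # Wrap a lone path in a list so it can be handed to the new
    # list-taking push()/pull(); pass real lists and tuples through.
    if isinstance(sources, (str, bytes)):
        return [sources]
    return list(sources)
```

A caller would then write `conn.push(normalize_sources(path_or_paths), dest)` regardless of whether one path or several were collected upstream.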
@@ -100,7 +101,12 @@ class that implements the following methods.
 Connection Types
 ----------------

-.. class:: AdbConnection(device=None, timeout=None, adb_server=None, adb_as_root=False)
+.. module:: devlib.utils.android
+
+.. class:: AdbConnection(device=None, timeout=None, adb_server=None, adb_as_root=False, connection_attempts=MAX_ATTEMPTS,\
+           poll_transfers=False, start_transfer_poll_delay=30, total_transfer_timeout=3600,\
+           transfer_poll_period=30)

    A connection to an android device via ``adb`` (Android Debug Bridge).
    ``adb`` is part of the Android SDK (though stand-alone versions are also
@@ -115,11 +121,35 @@ Connection Types
        is raised.
    :param adb_server: Allows specifying the address of the adb server to use.
    :param adb_as_root: Specify whether the adb server should be restarted in root mode.
+   :param connection_attempts: Specify how many connection attempts, 10 seconds
+                               apart, should be attempted to connect to the device.
+                               Defaults to 5.
+   :param poll_transfers: Specify whether file transfers should be polled. Polling
+                          monitors the progress of file transfers and periodically
+                          checks whether they have stalled, attempting to cancel
+                          the transfers prematurely if so.
+   :param start_transfer_poll_delay: If transfers are polled, specify the length of
+                                     time after a transfer has started before polling
+                                     should start.
+   :param total_transfer_timeout: If transfers are polled, specify the total amount of time
+                                  to elapse before the transfer is cancelled, regardless
+                                  of its activity.
+   :param transfer_poll_period: If transfers are polled, specify the period at which
+                                the transfers are sampled for activity. Too small values
+                                may cause the destination size to appear the same over
+                                one or more sample periods, causing improper transfer
+                                cancellation.

-.. class:: SshConnection(host, username, password=None, keyfile=None, port=None,\
-           timeout=None, password_prompt=None, \
-           sudo_cmd="sudo -- sh -c {}", options=None)
+.. module:: devlib.utils.ssh
+
+.. class:: SshConnection(host, username, password=None, keyfile=None, port=22,\
+           timeout=None, platform=None, \
+           sudo_cmd="sudo -- sh -c {}", strict_host_check=True, \
+           use_scp=False, poll_transfers=False, \
+           start_transfer_poll_delay=30, total_transfer_timeout=3600,\
+           transfer_poll_period=30)

    A connection to a device on the network over SSH.
@@ -127,6 +157,9 @@ Connection Types
    :param username: username for SSH login
    :param password: password for the SSH connection

+       .. note:: To connect to a system without a password this
+                 parameter should be set to an empty string otherwise
+                 ssh key authentication will be attempted.
+
        .. note:: In order to user password-based authentication,
                  ``sshpass`` utility must be installed on the
                  system.
@@ -141,12 +174,26 @@ Connection Types
    :param timeout: Timeout for the connection in seconds. If a connection
        cannot be established within this time, an error will be
        raised.
-   :param password_prompt: A string with the password prompt used by
-       ``sshpass``. Set this if your version of ``sshpass``
-       uses something other than ``"[sudo] password"``.
+   :param platform: Specify the platform to be used. The generic
+       :class:`~devlib.platform.Platform` class is used by default.
    :param sudo_cmd: Specify the format of the command used to grant sudo access.
-   :param options: A dictionary with extra ssh configuration options.
+   :param strict_host_check: Specify the ssh connection parameter ``StrictHostKeyChecking``,
+   :param use_scp: Use SCP for file transfers, defaults to SFTP.
+   :param poll_transfers: Specify whether file transfers should be polled. Polling
+                          monitors the progress of file transfers and periodically
+                          checks whether they have stalled, attempting to cancel
+                          the transfers prematurely if so.
+   :param start_transfer_poll_delay: If transfers are polled, specify the length of
+                                     time after a transfer has started before polling
+                                     should start.
+   :param total_transfer_timeout: If transfers are polled, specify the total amount of time
+                                  to elapse before the transfer is cancelled, regardless
+                                  of its activity.
+   :param transfer_poll_period: If transfers are polled, specify the period at which
+                                the transfers are sampled for activity. Too small values
+                                may cause the destination size to appear the same over
+                                one or more sample periods, causing improper transfer
+                                cancellation.
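The transfer-polling parameters documented above interact as follows: after `start_transfer_poll_delay`, the destination is sampled once per `transfer_poll_period`, and a transfer is cancelled either when `total_transfer_timeout` elapses or when the destination size stops growing across a sample period. A simplified, time-free model of that decision logic (illustrative only, not devlib's implementation):

```python
def transfer_watchdog(samples, total_timeout_polls):
    """Decide whether a polled transfer should be cancelled.

    `samples` holds the destination size observed at each poll period,
    oldest first; `total_timeout_polls` expresses the total timeout in
    poll periods to keep the model time-free.
    Returns 'timeout', 'stalled' or 'ok'.
    """
    if len(samples) >= total_timeout_polls:
        return 'timeout'          # total transfer timeout exceeded
    if len(samples) >= 2 and samples[-1] == samples[-2]:
        # Destination size unchanged over a whole poll period: treat the
        # transfer as stalled. This is also why a too-small poll period
        # is risky, as the caveat above notes.
        return 'stalled'
    return 'ok'
```

This makes the documented failure mode concrete: with a very short period, a slow but live transfer can show the same size in two consecutive samples and be cancelled as "stalled".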
 .. class:: TelnetConnection(host, username, password=None, port=None,\
            timeout=None, password_prompt=None,\
@@ -179,6 +226,7 @@ Connection Types
        connection to reduce the possibility of clashes).
        This parameter is ignored for SSH connections.

+.. module:: devlib.host

 .. class:: LocalConnection(keep_password=True, unrooted=False, password=None)
@@ -194,6 +242,9 @@ Connection Types
        prompting for it.

+.. module:: devlib.utils.ssh
+   :noindex:
+
 .. class:: Gem5Connection(platform, host=None, username=None, password=None,\
            timeout=None, password_prompt=None,\
            original_prompt=None)
@@ -202,7 +253,7 @@ Connection Types
    .. note:: Some of the following input parameters are optional and will be ignored during
              initialisation. They were kept to keep the analogy with a :class:`TelnetConnection`
-             (i.e. ``host``, `username``, ``password``, ``port``,
+             (i.e. ``host``, ``username``, ``password``, ``port``,
              ``password_prompt`` and ``original_promp``)
@@ -212,7 +263,7 @@ Connection Types
        will be ignored, the gem5 simulation needs to be
        on the same host the user is currently on, so if
        the host given as input parameter is not the
-       same as the actual host, a ``TargetStableError``
+       same as the actual host, a :class:`TargetStableError`
        will be raised to prevent confusion.

    :param username: Username in the simulated system

View File

@@ -1,7 +1,6 @@
 Derived Measurements
 =====================

-
 The ``DerivedMeasurements`` API provides a consistent way of performing post
 processing on a provided :class:`MeasurementCsv` file.
@@ -35,6 +34,8 @@ API
 Derived Measurements
 ~~~~~~~~~~~~~~~~~~~~

+.. module:: devlib.derived
+
 .. class:: DerivedMeasurements

    The ``DerivedMeasurements`` class provides an API for post-processing
@@ -102,17 +103,20 @@ Available Derived Measurements
 Energy
 ~~~~~~

+.. module:: devlib.derived.energy
+
 .. class:: DerivedEnergyMeasurements

-   The ``DerivedEnergyMeasurements`` class is used to calculate average power and
-   cumulative energy for each site if the required data is present.
-
-   The calculation of cumulative energy can occur in 3 ways. If a
-   ``site`` contains ``energy`` results, the first and last measurements are extracted
-   and the delta calculated. If not, a ``timestamp`` channel will be used to calculate
-   the energy from the power channel, failing back to using the sample rate attribute
-   of the :class:`MeasurementCsv` file if timestamps are not available. If neither
-   timestamps or a sample rate are available then an error will be raised.
+   The ``DerivedEnergyMeasurements`` class is used to calculate average power
+   and cumulative energy for each site if the required data is present.
+
+   The calculation of cumulative energy can occur in 3 ways. If a ``site``
+   contains ``energy`` results, the first and last measurements are extracted
+   and the delta calculated. If not, a ``timestamp`` channel will be used to
+   calculate the energy from the power channel, failing back to using the sample
+   rate attribute of the :class:`MeasurementCsv` file if timestamps are not
+   available. If neither timestamps or a sample rate are available then an error
+   will be raised.

 .. method:: DerivedEnergyMeasurements.process(measurement_csv)
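The fallback order described above (an ``energy`` channel delta first, then integrating power against timestamps, then the CSV's sample rate) can be sketched for the power-integration cases. This is a simplified model under stated assumptions (rectangular integration, watts and seconds), not devlib's implementation, which also handles per-site channels and CSV parsing:

```python
def cumulative_energy(power, timestamps=None, sample_rate=None):
    # Integrate a power trace (watts) into energy (joules), following
    # the fallback order described above: timestamps first, then the
    # sample rate attribute; otherwise there is nothing to go on.
    if timestamps is not None:
        # Weight each sample by the interval that precedes it.
        return sum(p * (t1 - t0)
                   for p, t0, t1 in zip(power[1:], timestamps, timestamps[1:]))
    if sample_rate is not None:
        # Uniform sampling: each sample covers 1/sample_rate seconds.
        return sum(power) / sample_rate
    raise ValueError('neither timestamps nor a sample rate are available')
```

For a constant 1 W trace sampled at 0, 1 and 2 seconds this yields 2 J, matching the intuitive power-times-duration result.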
@@ -128,6 +132,8 @@ Energy
 FPS / Rendering
 ~~~~~~~~~~~~~~~

+.. module:: devlib.derived.fps
+
 .. class:: DerivedGfxInfoStats(drop_threshold=5, suffix='-fps', filename=None, outdir=None)

    Produces FPS (frames-per-second) and other derived statistics from

View File

@@ -3,6 +3,8 @@
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.

+.. module:: devlib
+
 Welcome to devlib documentation
 ===============================

View File

@@ -5,9 +5,9 @@ Instrumentation

 The ``Instrument`` API provide a consistent way of collecting measurements from
 a target. Measurements are collected via an instance of a class derived from
-:class:`Instrument`. An ``Instrument`` allows collection of measurement from one
-or more channels. An ``Instrument`` may support ``INSTANTANEOUS`` or
-``CONTINUOUS`` collection, or both.
+:class:`~devlib.instrument.Instrument`. An ``Instrument`` allows collection of
+measurement from one or more channels. An ``Instrument`` may support
+``INSTANTANEOUS`` or ``CONTINUOUS`` collection, or both.

 Example
 -------
@@ -50,6 +50,8 @@ Android target.
 API
 ---

+.. module:: devlib.instrument
+
 Instrument
 ~~~~~~~~~~
@@ -122,14 +124,16 @@ Instrument
    Take a single measurement from ``active_channels``. Returns a list of
    :class:`Measurement` objects (one for each active channel).

-   .. note:: This method is only implemented by :class:`Instrument`\ s that
+   .. note:: This method is only implemented by
+             :class:`~devlib.instrument.Instrument`\ s that
              support ``INSTANTANEOUS`` measurement.

 .. method:: Instrument.start()

    Starts collecting measurements from ``active_channels``.

-   .. note:: This method is only implemented by :class:`Instrument`\ s that
+   .. note:: This method is only implemented by
+             :class:`~devlib.instrument.Instrument`\ s that
              support ``CONTINUOUS`` measurement.

 .. method:: Instrument.stop()
@@ -137,7 +141,8 @@ Instrument
    Stops collecting measurements from ``active_channels``. Must be called after
    :func:`start()`.

-   .. note:: This method is only implemented by :class:`Instrument`\ s that
+   .. note:: This method is only implemented by
+             :class:`~devlib.instrument.Instrument`\ s that
              support ``CONTINUOUS`` measurement.

 .. method:: Instrument.get_data(outfile)
@@ -148,9 +153,9 @@ Instrument
    ``<site>_<kind>`` (see :class:`InstrumentChannel`). The order of the columns
    will be the same as the order of channels in ``Instrument.active_channels``.

-   If reporting timestamps, one channel must have a ``site`` named ``"timestamp"``
-   and a ``kind`` of a :class:`MeasurmentType` of an appropriate time unit which will
-   be used, if appropriate, during any post processing.
+   If reporting timestamps, one channel must have a ``site`` named
+   ``"timestamp"`` and a ``kind`` of a :class:`MeasurmentType` of an appropriate
+   time unit which will be used, if appropriate, during any post processing.

    .. note:: Currently supported time units are seconds, milliseconds and
              microseconds, other units can also be used if an appropriate
@@ -160,7 +165,8 @@ Instrument
    that can be used to stream :class:`Measurement`\ s lists (similar to what is
    returned by ``take_measurement()``.

-   .. note:: This method is only implemented by :class:`Instrument`\ s that
+   .. note:: This method is only implemented by
+             :class:`~devlib.instrument.Instrument`\ s that
              support ``CONTINUOUS`` measurement.

 .. method:: Instrument.get_raw()
@@ -185,7 +191,8 @@ Instrument

    Sample rate of the instrument in Hz. Assumed to be the same for all channels.

-   .. note:: This attribute is only provided by :class:`Instrument`\ s that
+   .. note:: This attribute is only provided by
+             :class:`~devlib.instrument.Instrument`\ s that
              support ``CONTINUOUS`` measurement.
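The contract spelled out in the documentation above (``start``/``stop`` bracketing ``CONTINUOUS`` collection, ``get_data`` writing one ``<site>_<kind>`` column per entry of ``active_channels``) can be imitated with a toy stand-in. ``MockInstrument`` and its ``record()`` method are inventions for illustration, not devlib API:

```python
import csv

class MockInstrument:
    """Toy stand-in mirroring the documented Instrument contract:
    start()/stop() bracket CONTINUOUS collection and get_data()
    writes one column per channel, headed '<site>_<kind>'."""

    def __init__(self, channels):
        self.active_channels = channels   # list of (site, kind) pairs
        self._samples = []
        self._running = False

    def start(self):
        self._running = True

    def stop(self):
        self._running = False

    def record(self, *values):
        # A real Instrument receives samples from hardware; here the
        # caller injects them, and they only count while "running".
        if self._running:
            self._samples.append(values)

    def get_data(self, outfile):
        with open(outfile, 'w', newline='') as f:
            w = csv.writer(f)
            w.writerow('{}_{}'.format(site, kind)
                       for site, kind in self.active_channels)
            w.writerows(self._samples)
```

Note how the first channel can be the ``("timestamp", <time kind>)`` channel the documentation requires when timestamps are reported.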
 Instrument Channel
@@ -194,8 +201,8 @@ Instrument Channel

 .. class:: InstrumentChannel(name, site, measurement_type, \*\*attrs)

    An :class:`InstrumentChannel` describes a single type of measurement that may
-   be collected by an :class:`Instrument`. A channel is primarily defined by a
-   ``site`` and a ``measurement_type``.
+   be collected by an :class:`~devlib.instrument.Instrument`. A channel is
+   primarily defined by a ``site`` and a ``measurement_type``.

    A ``site`` indicates where on the target a measurement is collected from
    (e.g. a voltage rail or location of a sensor).
@@ -488,12 +495,13 @@ voltage (see previous figure), samples are retrieved at a frequency of

 where :math:`T_X` is the integration time for the :math:`X` voltage.

-As described below (:meth:`BaylibreAcmeInstrument.reset`), the integration
-times for the bus and shunt voltage can be set separately which allows a
-tradeoff of accuracy between signals. This is particularly useful as the shunt
-voltage returned by the INA226 has a higher resolution than the bus voltage
-(2.5 μV and 1.25 mV LSB, respectively) and therefore would benefit more from a
-longer integration time.
+As described below (:meth:`BaylibreAcmeInstrument.reset
+<devlib.instrument.baylibre_acme.BaylibreAcmeInstrument.reset>`), the
+integration times for the bus and shunt voltage can be set separately which
+allows a tradeoff of accuracy between signals. This is particularly useful as
+the shunt voltage returned by the INA226 has a higher resolution than the bus
+voltage (2.5 μV and 1.25 mV LSB, respectively) and therefore would benefit more
+from a longer integration time.

 As an illustration, consider the following sampled sine wave and notice how
 increasing the integration time (of the bus voltage in this case) "smoothes"
@@ -601,8 +609,9 @@ Buffer-based transactions

 Samples made available by the INA226 are retrieved by the BBB and stored in a
 buffer which is sent back to the host once it is full (see
-``buffer_samples_count`` in :meth:`BaylibreAcmeInstrument.setup` for setting
-its size). Therefore, the larger the buffer is, the longer it takes to be
+``buffer_samples_count`` in :meth:`BaylibreAcmeInstrument.setup
+<devlib.instrument.baylibre_acme.BaylibreAcmeInstrument.setup>` for setting its
+size). Therefore, the larger the buffer is, the longer it takes to be
 transmitted back but the less often it has to be transmitted. To illustrate
 this, consider the following graphs showing the time difference between
 successive samples in a retrieved signal when the size of the buffer changes:
@@ -624,6 +633,8 @@ given by `libiio (the Linux IIO interface)`_ however only the network-based one
 has been tested. For the other classes, please refer to the official IIO
 documentation for the meaning of their constructor parameters.

+.. module:: devlib.instrument.baylibre_acme
+
 .. class:: BaylibreAcmeInstrument(target=None, iio_context=None, use_base_iio_context=False, probe_names=None)

    Base class wrapper for the ACME instrument which itself is a wrapper for the

View File

@@ -1,11 +1,13 @@
+.. module:: devlib.module
+
 .. _modules:

 Modules
 =======

-Modules add additional functionality to the core :class:`Target` interface.
-Usually, it is support for specific subsystems on the target. Modules are
-instantiated as attributes of the :class:`Target` instance.
+Modules add additional functionality to the core :class:`~devlib.target.Target`
+interface. Usually, it is support for specific subsystems on the target. Modules
+are instantiated as attributes of the :class:`~devlib.target.Target` instance.

 hotplug
 -------
@@ -28,6 +30,8 @@ interface to this subsystem
    # Make sure all cpus are online
    target.hotplug.online_all()

+.. module:: devlib.module.cpufreq
+
 cpufreq
 -------
@@ -132,6 +136,9 @@ policies (governors). The ``devlib`` module exposes the following interface
       ``1`` or ``"cpu1"``).
    :param frequency: Frequency to set.
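With the Linux ``userspace`` governor, setting a CPU frequency ultimately comes down to a sysfs write. The sketch below illustrates the idea only, it is not devlib's implementation (the real cpufreq module works through the target connection and validates governors and available frequencies), but it accepts ``1`` or ``"cpu1"`` like the interface documented above:

```python
def set_frequency(cpu, frequency, sysfs_root='/sys/devices/system/cpu'):
    # Accept either an int (1) or a name ("cpu1"), as the devlib
    # interface above does.
    if not isinstance(cpu, str):
        cpu = 'cpu{}'.format(cpu)
    path = '{}/{}/cpufreq/scaling_setspeed'.format(sysfs_root, cpu)
    # On a real system this write requires root and an active
    # 'userspace' governor; kHz is the sysfs unit.
    with open(path, 'w') as f:
        f.write(str(frequency))
```

The `sysfs_root` parameter exists only so the sketch can be exercised against a scratch directory instead of a live `/sys`.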
+.. module:: devlib.module.cpuidle
+
 cpuidle
 -------
@@ -167,11 +174,15 @@ cpuidle
 You can also call ``enable()`` or ``disable()`` on :class:`CpuidleState` objects
 returned by get_state(s).

+.. module:: devlib.module.cgroups
+
 cgroups
 -------

 TODO

+.. module:: devlib.module.hwmon
+
 hwmon
 -----
@@ -187,8 +198,8 @@ Modules implement discrete, optional pieces of functionality ("optional" in the
 sense that the functionality may or may not be present on the target device, or
 that it may or may not be necessary for a particular application).

-Every module (ultimately) derives from :class:`Module` class. A module must
-define the following class attributes:
+Every module (ultimately) derives from :class:`devlib.module.Module` class. A
+module must define the following class attributes:

 :name: A unique name for the module. This cannot clash with any of the existing
    names and must be a valid Python identifier, but is otherwise free-form.
@@ -204,14 +215,16 @@ define the following class attributes:
       which case the module's ``name`` will be treated as its
       ``kind`` as well.

-:stage: This defines when the module will be installed into a :class:`Target`.
-   Currently, the following values are allowed:
+:stage: This defines when the module will be installed into a
+   :class:`~devlib.target.Target`. Currently, the following values are
+   allowed:

    :connected: The module is installed after a connection to the target has
                been established. This is the default.
-   :early: The module will be installed when a :class:`Target` is first
-           created. This should be used for modules that do not rely on a
-           live connection to the target.
+   :early: The module will be installed when a
+           :class:`~devlib.target.Target` is first created. This should be
+           used for modules that do not rely on a live connection to the
+           target.
    :setup: The module will be installed after initial setup of the device
            has been performed. This allows the module to utilize assets
            deployed during the setup stage for example 'Busybox'.
@@ -220,8 +233,8 @@ Additionally, a module must implement a static (or class) method :func:`probe`:

 .. method:: Module.probe(target)

-   This method takes a :class:`Target` instance and returns ``True`` if this
-   module is supported by that target, or ``False`` otherwise.
+   This method takes a :class:`~devlib.target.Target` instance and returns
+   ``True`` if this module is supported by that target, or ``False`` otherwise.

    .. note:: If the module ``stage`` is ``"early"``, this method cannot assume
              that a connection has been established (i.e. it can only access
@@ -231,9 +244,9 @@ Installation and invocation
 ***************************

 The default installation method will create an instance of a module (the
-:class:`Target` instance being the sole argument) and assign it to the target
-instance attribute named after the module's ``kind`` (or ``name`` if ``kind`` is
-``None``).
+:class:`~devlib.target.Target` instance being the sole argument) and assign it
+to the target instance attribute named after the module's ``kind`` (or
+``name`` if ``kind`` is ``None``).

 It is possible to change the installation procedure for a module by overriding
 the default :func:`install` method. The method must have the following
@@ -344,10 +357,11 @@ FlashModule
 Module Registration
 ~~~~~~~~~~~~~~~~~~~

-Modules are specified on :class:`Target` or :class:`Platform` creation by name.
-In order to find the class associated with the name, the module needs to be
-registered with ``devlib``. This is accomplished by passing the module class
-into :func:`register_module` method once it is defined.
+Modules are specified on :class:`~devlib.target.Target` or
+:class:`~devlib.platform.Platform` creation by name. In order to find the class
+associated with the name, the module needs to be registered with ``devlib``.
+This is accomplished by passing the module class into :func:`register_module`
+method once it is defined.

 .. note:: If you're wiring a module to be included as part of ``devlib`` code
          base, you can place the file with the module class under
@@ -1,25 +1,26 @@
Overview
========
A :class:`~devlib.target.Target` instance serves as the main interface to the target device.
There are currently four target interfaces:
- :class:`~devlib.target.LinuxTarget` for interacting with Linux devices over SSH.
- :class:`~devlib.target.AndroidTarget` for interacting with Android devices over adb.
- :class:`~devlib.target.ChromeOsTarget`: for interacting with ChromeOS devices
  over SSH, and their Android containers over adb.
- :class:`~devlib.target.LocalLinuxTarget`: for interacting with the local Linux host.
They all work in more-or-less the same way, with the major difference being in
how connection settings are specified; though there may also be a few APIs
specific to a particular target type (e.g. :class:`~devlib.target.AndroidTarget`
exposes methods for working with logcat).
Acquiring a Target
------------------
To create an interface to your device, you just need to instantiate one of the
:class:`~devlib.target.Target` derivatives listed above, and pass it the right
``connection_settings``. The code snippet below gives a typical example of
instantiating each of the three target types.
@@ -46,21 +47,22 @@ instantiating each of the three target types.
t3 = AndroidTarget(connection_settings={'device': '0123456789abcde'})
Instantiating a target may take a second or two as the remote device will be
queried to initialize :class:`~devlib.target.Target`'s internal state. If you
would like to create a :class:`~devlib.target.Target` instance but not
immediately connect to the remote device, you can pass the ``connect=False``
parameter. If you do that, you would have to then explicitly call
``t.connect()`` before you can interact with the device.
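As a concrete illustration, the following is a minimal sketch of deferred connection; it assumes ``devlib`` is installed, and the host name and credentials are placeholders:

```python
def make_target(host, username, password):
    """Build a LinuxTarget without connecting, then connect explicitly.

    All connection settings here are placeholder values.
    """
    from devlib import LinuxTarget  # deferred import so the sketch loads cleanly

    t = LinuxTarget(connect=False,
                    connection_settings={'host': host,
                                         'username': username,
                                         'password': password})
    # No connection exists yet; establish it once the device is reachable.
    t.connect()
    return t
```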
There are a few additional parameters you can pass in instantiation besides
``connection_settings``, but they are usually unnecessary. Please see
:class:`~devlib.target.Target` API documentation for more details.
Target Interface
----------------
This is a quick overview of the basic interface to the device. See
:class:`~devlib.target.Target` API documentation for the full list of supported
methods and more detailed documentation.
One-time Setup
~~~~~~~~~~~~~~
@@ -166,15 +168,16 @@ Process Control
# PsEntry records.
entries = t.ps()
# e.g. print virtual memory sizes of all running sshd processes:
print(', '.join(str(e.vsize) for e in entries if e.name == 'sshd'))
More...
~~~~~~~
As mentioned previously, the above is not intended to be exhaustive
documentation of the :class:`~devlib.target.Target` interface. Please refer to
the API documentation for the full list of attributes and methods and their
parameters.
Super User Privileges
---------------------
@@ -238,6 +241,8 @@ complete. Retrying it or bailing out is therefore a responsibility of the caller
The hierarchy is as follows:
.. module:: devlib.exception
- :class:`DevlibError`
- :class:`WorkerThreadError`
@@ -287,7 +292,7 @@ Modules
Additional functionality is exposed via modules. Modules are initialized as
attributes of a target instance. By default, ``hotplug``, ``cpufreq``,
``cpuidle``, ``cgroups`` and ``hwmon`` will attempt to load on target; additional
modules may be specified when creating a :class:`~devlib.target.Target` instance.
A module will probe the target for support before attempting to load. So if the
underlying platform does not support particular functionality (e.g. the kernel
@@ -1,14 +1,17 @@
.. module:: devlib.platform
.. _platform:
Platform
========
:class:`~devlib.platform.Platform`\ s describe the system underlying the OS.
They encapsulate hardware- and firmware-specific details. In most cases, the
generic :class:`~devlib.platform.Platform` class, which gets used if a
platform is not explicitly specified on :class:`~devlib.target.Target`
creation, will be sufficient. It will automatically query as much platform
information (such as CPU topology, hardware model, etc.) as it can if it was
not specified explicitly by the user.
.. class:: Platform(name=None, core_names=None, core_clusters=None,\
@@ -31,6 +34,7 @@ it was not specified explicitly by the user.
platform (e.g. for handling flashing, rebooting, etc). These
would be added to the Target's modules. (See :ref:`modules`\ ).
.. module:: devlib.platform.arm
Versatile Express
-----------------
@@ -38,8 +42,8 @@ Versatile Express
The generic platform may be extended to support hardware- or
infrastructure-specific functionality. Platforms exist for ARM
VersatileExpress-based :class:`Juno` and :class:`TC2` development boards. In
addition to the standard :class:`~devlib.platform.Platform` parameters above,
these platforms support additional configuration:
.. class:: VersatileExpressPlatform
@@ -116,43 +120,53 @@ support additional configuration:
Gem5 Simulation Platform
------------------------
By initialising a Gem5SimulationPlatform, devlib will start a gem5 simulation
(based upon the arguments the user provided) and then connect to it using
:class:`~devlib.utils.ssh.Gem5Connection`. Using the methods discussed above,
some methods of the :class:`~devlib.target.Target` will be altered slightly to
better suit gem5.
.. module:: devlib.platform.gem5
.. class:: Gem5SimulationPlatform(name, host_output_dir, gem5_bin, gem5_args, gem5_virtio, gem5_telnet_port=None)
During initialisation the gem5 simulation will be kicked off (based upon the
arguments provided by the user) and the telnet port used by the gem5
simulation will be intercepted and stored for use by the
:class:`~devlib.utils.ssh.Gem5Connection`.
:param name: Platform name
:param host_output_dir: Path on the host where the gem5 outputs will be
placed (e.g. stats file)
:param gem5_bin: gem5 binary
:param gem5_args: Arguments to be passed onto gem5 such as config file etc.
:param gem5_virtio: Arguments to be passed onto gem5 in terms of the virtIO
device used to transfer files between the host and the gem5 simulated
system.
:param gem5_telnet_port: Not yet in use as it would be used in future
implementations of devlib in which the user could use the platform to
pick up an existing and running simulation.
.. method:: Gem5SimulationPlatform.init_target_connection([target])
Based upon the OS defined in the :class:`~devlib.target.Target`, the type of
:class:`~devlib.utils.ssh.Gem5Connection` will be set
(:class:`~devlib.utils.ssh.AndroidGem5Connection` or
:class:`~devlib.utils.ssh.LinuxGem5Connection`).
.. method:: Gem5SimulationPlatform.update_from_target([target])
This method provides specific setup procedures for a gem5 simulation. First
of all, the m5 binary will be installed on the guest (if it is not present).
Secondly, three methods in the :class:`~devlib.target.Target` will be
monkey-patched:
- **reboot**: this is not supported in gem5
- **reset**: this is not supported in gem5
@@ -160,7 +174,7 @@ slightly to better suit gem5.
monkey-patched method will first try to
transfer the existing screencaps.
In case that does not work, it will fall back
to the original :class:`~devlib.target.Target` implementation
of :func:`capture_screen`.
Finally, it will call the parent implementation of :func:`update_from_target`.
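A minimal construction sketch follows; it assumes ``devlib`` is installed, and every path and gem5 argument below is a placeholder rather than a working value:

```python
def make_gem5_platform(output_dir):
    """Assemble a Gem5SimulationPlatform; paths/arguments are placeholders."""
    from devlib.platform.gem5 import Gem5SimulationPlatform  # needs devlib installed

    return Gem5SimulationPlatform(
        name='gem5',
        host_output_dir=output_dir,           # gem5 outputs (e.g. stats) land here
        gem5_bin='./build/ARM/gem5.fast',     # placeholder gem5 binary path
        gem5_args='configs/example/fs.py',    # placeholder config script
        gem5_virtio='--disk-image=img.iso',   # placeholder virtIO arguments
    )
```

The returned platform would then be passed as the ``platform`` argument when creating a ``Target``.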
@@ -1,57 +1,62 @@
.. module:: devlib.target
Target
======
.. class:: Target(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=None)
:class:`~devlib.target.Target` is the primary interface to the remote
device. All interactions with the device are performed via a
:class:`~devlib.target.Target` instance, either directly, or via its
modules or a wrapper interface (such as an
:class:`~devlib.instrument.Instrument`).
:param connection_settings: A ``dict`` that specifies how to connect to the
remote device. Its contents depend on the specific
:class:`~devlib.target.Target` type (see :ref:`connection-types`\ ).
:param platform: A :class:`~devlib.target.Target` defines interactions at
Operating System level. A :class:`~devlib.platform.Platform` describes
the underlying hardware (such as CPUs available). If a
:class:`~devlib.platform.Platform` instance is not specified on
:class:`~devlib.target.Target` creation, one will be created
automatically and it will dynamically probe the device to discover
as much about the underlying hardware as it can. See also
:ref:`platform`\ .
:param working_directory: This is the primary location for on-target file
system interactions performed by ``devlib``. This location *must* be readable
and writable directly (i.e. without sudo) by the connection's user
account. It may or may not allow execution. This location will be
created, if necessary, during :meth:`setup()`.
If not explicitly specified, this will be set to a default value
depending on the type of :class:`~devlib.target.Target`.
:param executables_directory: This is the location to which ``devlib`` will
install executable binaries (either during :meth:`setup()` or via an
explicit :meth:`install()` call). This location *must* support execution
(obviously). It should also be possible to write to this location,
possibly with elevated privileges (i.e. on a rooted Linux target, it
should be possible to write here with sudo, but not necessarily directly
by the connection's account). This location will be created, if
necessary, during :meth:`setup()`.
This location does *not* need to be the same as the system's executables
location. In fact, to prevent devlib from overwriting the system's
defaults, it is better if this is a separate location, if possible.
If not explicitly specified, this will be set to a default value
depending on the type of :class:`~devlib.target.Target`.
:param connect: Specifies whether a connection should be established to the
target. If this is set to ``False``, then :meth:`connect()` must be
explicitly called later on before the :class:`~devlib.target.Target`
instance can be used.
:param modules: a list of additional modules to be installed. Some modules
will try to install by default (if supported by the underlying target).
Current default modules are ``hotplug``, ``cpufreq``, ``cpuidle``,
``cgroups``, and ``hwmon`` (See :ref:`modules`\ ).
@@ -59,40 +64,40 @@ Target
:param load_default_modules: If set to ``False``, default modules listed
above will *not* attempt to load. This may be used to either speed up
target instantiation (probing for and initializing modules takes a bit
of time) or if there is an issue with one of the modules on a particular
device (the rest of the modules will then have to be explicitly
specified in the ``modules`` parameter).
:param shell_prompt: This is a regular expression that matches the shell
prompt on the target. This may be used by some modules that establish
auxiliary connections to a target over UART.
:param conn_cls: This is the type of connection that will be used to
communicate with the device.
.. attribute:: Target.core_names
This is a list containing names of CPU cores on the target, in the order in
which they are indexed by the kernel. This is obtained via the underlying
:class:`~devlib.platform.Platform`.
.. attribute:: Target.core_clusters
Some devices feature heterogeneous core configurations (such as ARM
big.LITTLE). This is a list that maps CPUs onto underlying clusters.
(Usually, but not always, clusters correspond to groups of CPUs with the same
name). This is obtained via the underlying :class:`~devlib.platform.Platform`.
.. attribute:: Target.big_core
This is the name of the cores that are the "big"s in an ARM big.LITTLE
configuration. This is obtained via the underlying :class:`~devlib.platform.Platform`.
.. attribute:: Target.little_core
This is the name of the cores that are the "little"s in an ARM big.LITTLE
configuration. This is obtained via the underlying :class:`~devlib.platform.Platform`.
.. attribute:: Target.is_connected
@@ -120,10 +125,21 @@ Target
This is a dict that contains a mapping of OS version elements to their
values. This mapping is OS-specific.
.. attribute:: Target.hostname
A string containing the hostname of the target.
.. attribute:: Target.hostid
A numerical id used to represent the identity of the target.
.. note:: Currently on 64-bit PowerPC devices this id will always be 0. This is
due to the included busybox binary being linked with musl.
.. attribute:: Target.system_id
A unique identifier for the system running on the target. This identifier is
intended to be unique for the combination of hardware, kernel, and file
system.
.. attribute:: Target.model
@@ -152,11 +168,11 @@ Target
The underlying connection object. This will be ``None`` if an active
connection does not exist (e.g. if ``connect=False`` was passed on
initialization and :meth:`connect()` has not been called).
.. note:: a :class:`~devlib.target.Target` will automatically create a
connection per thread. This will always be set to the connection
for the current thread.
.. method:: Target.connect([timeout])
@@ -176,19 +192,20 @@ Target
being executed.
This should *not* be used to establish an initial connection; use
:meth:`connect()` instead.
.. note:: :class:`~devlib.target.Target` will automatically create a connection
per thread, so you don't normally need to use this explicitly in
threaded code. This is generally useful if you want to perform a
blocking operation (e.g. using :meth:`background()`) while at the same
time doing something else in the same host-side thread.
.. method:: Target.setup([executables])
This will perform an initial one-time set up of a device for devlib
interaction. This involves deployment of tools relied on by the
:class:`~devlib.target.Target`, creation of working locations on the device,
etc.
Usually, it is enough to call this method once per new device, as its effects
will persist across reboots. However, it is safe to call this method multiple
@@ -212,25 +229,44 @@ Target
operations during the reboot process to detect if the reboot has failed and
the device has hung.
.. method:: Target.push(source, dest [, as_root, timeout, globbing])
Transfer a file from the host machine to the target device.
If transfer polling is supported (ADB connections and SSH connections),
``poll_transfers`` is set in the connection, and a timeout is not specified,
the push will be polled for activity. Inactive transfers will be
cancelled. (See :ref:`connection-types` for more information on polling).
:param source: path on the host
:param dest: path on the target
:param as_root: whether root is required. Defaults to false.
:param timeout: timeout (in seconds) for the transfer; if the transfer does
not complete within this period, an exception will be raised. Leave unset
to utilise transfer polling if enabled.
:param globbing: If ``True``, the ``source`` is interpreted as a globbing
pattern instead of being taken as-is. If the pattern has multiple
matches, ``dest`` must be a folder (or will be created as such if it
does not exist yet).
.. method:: Target.pull(source, dest [, as_root, timeout, globbing])
Transfer a file from the target device to the host machine.
If transfer polling is supported (ADB connections and SSH connections),
``poll_transfers`` is set in the connection, and a timeout is not specified,
the pull will be polled for activity. Inactive transfers will be
cancelled. (See :ref:`connection-types` for more information on polling).
:param source: path on the target
:param dest: path on the host
:param as_root: whether root is required. Defaults to false.
:param timeout: timeout (in seconds) for the transfer; if the transfer does
not complete within this period, an exception will be raised.
:param globbing: If ``True``, the ``source`` is interpreted as a globbing
pattern instead of being taken as-is. If the pattern has multiple
matches, ``dest`` must be a folder (or will be created as such if it
does not exist yet).
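For example, here is a hedged sketch of a globbing pull and a simple push; it assumes an already-connected devlib ``Target``, and the trace file pattern is purely illustrative:

```python
import os

def fetch_traces(target, dest_dir):
    """Pull every file matching a pattern from the target into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    # With globbing=True the source is a pattern, so dest must be a directory.
    target.pull('/sys/kernel/debug/tracing/trace*', dest_dir,
                globbing=True, as_root=True)

def deploy_payload(target, host_path):
    """Push a single host file into the target's working directory."""
    target.push(host_path, target.working_directory)
```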
.. method:: Target.execute(command [, timeout [, check_exit_code [, as_root [, strip_colors [, will_succeed [, force_locale]]]]]])
@@ -256,7 +292,7 @@ Target
command to get predictable output that can be more safely parsed.
If ``None``, no locale is prepended.
.. method:: Target.background(command [, stdout [, stderr [, as_root [, force_locale [, timeout]]]]])
Execute the command on the target, invoking it via subprocess on the host.
This will return a :class:`subprocess.Popen` instance for the command.
@@ -268,6 +304,12 @@ Target
this may be used to redirect it to an alternative file handle.
:param as_root: The command will be executed as root. This will fail on
unrooted targets.
:param force_locale: Prepend ``LC_ALL=<force_locale>`` in front of the
command to get predictable output that can be more safely parsed.
If ``None``, no locale is prepended.
:param timeout: Timeout (in seconds) for the execution of the command. When
the timeout expires, :meth:`BackgroundCommand.cancel` is executed to
terminate the command.
.. note:: This **will block the connection** until the command completes.
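A sketch of starting and reaping a background command follows; it assumes an already-connected devlib ``Target`` and an arbitrary shell command:

```python
def timed_background(target, command, timeout=30):
    """Start `command` in the background, then block until it finishes.

    If the command outlives `timeout` seconds, devlib cancels it.
    """
    bg = target.background(command, timeout=timeout)
    # communicate() blocks the connection until the command completes.
    stdout, stderr = bg.communicate()
    return stdout
```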
@@ -281,31 +323,31 @@ Target
a string. a string.
:param in_directory: execute the binary in the specified directory. This must :param in_directory: execute the binary in the specified directory. This must
be an absolute path. be an absolute path.
:param on_cpus: taskset the binary to these CPUs. This may be a single
``int`` (in which case, it will be interpreted as the mask), a list of
``ints``, in which case this will be interpreted as the list of cpus,
or a string, which will be interpreted as a comma-separated list of cpu
ranges, e.g. ``"0,4-7"``.
:param as_root: Specify whether the command should be run as root
:param timeout: If this is specified and invocation does not terminate within this number
of seconds, an exception will be raised.
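The three accepted forms of ``on_cpus`` all reduce to a taskset-style affinity mask. A hypothetical helper (not part of devlib's API) showing how each form could map to a mask:

```python
def cpus_to_mask(on_cpus):
    # Illustrates the documented 'on_cpus' forms:
    # int -> already a mask; list of ints -> one bit per CPU;
    # str -> comma-separated ranges such as "0,4-7".
    if isinstance(on_cpus, int):
        return on_cpus
    if isinstance(on_cpus, str):
        cpus = []
        for part in on_cpus.split(','):
            if '-' in part:
                start, end = part.split('-')
                cpus.extend(range(int(start), int(end) + 1))
            else:
                cpus.append(int(part))
        on_cpus = cpus
    mask = 0
    for cpu in on_cpus:
        mask |= 1 << cpu
    return mask

print('%#x' % cpus_to_mask("0,4-7"))
```

For example, ``"0,4-7"`` selects CPUs 0 and 4 through 7, i.e. mask ``0xf1``.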
.. method:: Target.background_invoke(binary [, args [, in_directory [, on_cpus [, as_root ]]]])
Execute the specified binary on target (must already be installed) as a
background task, under the specified conditions and return the
:class:`subprocess.Popen` instance for the command.
:param binary: binary to execute. Must be present and executable on the device.
:param args: arguments to be passed to the binary. These can be either a list or
a string.
:param in_directory: execute the binary in the specified directory. This must
be an absolute path.
:param on_cpus: taskset the binary to these CPUs. This may be a single
``int`` (in which case, it will be interpreted as the mask), a list of
``ints``, in which case this will be interpreted as the list of cpus,
or a string, which will be interpreted as a comma-separated list of cpu
ranges, e.g. ``"0,4-7"``.
:param as_root: Specify whether the command should be run as root
.. method:: Target.kick_off(command [, as_root])
@@ -361,7 +403,7 @@ Target
multiple files at once, leaving them in their original state on exit. If one
write fails, all the already-performed writes will be reverted as well.
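The revert-on-exit semantics described above can be sketched on the host with ordinary files (the context-manager name is illustrative, not devlib's): each original value is saved before being overwritten, and the saved values are restored in reverse order on exit, which also rolls back the writes already performed if a later write raises.

```python
import contextlib

@contextlib.contextmanager
def revertable_writes(writes):
    # Sketch of the documented semantics using host-side files:
    # apply all writes, restore original contents on exit, and roll
    # back already-performed writes if any single write fails.
    saved = []
    try:
        for path, value in writes.items():
            with open(path) as f:
                saved.append((path, f.read()))
            with open(path, 'w') as f:
                f.write(value)
        yield
    finally:
        for path, old in reversed(saved):
            with open(path, 'w') as f:
                f.write(old)
```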
.. method:: Target.read_tree_values(path, depth=1, dictcls=dict [, tar [, decode_unicode [, strip_null_char ]]])
Read values of all sysfs (or similar) file nodes under ``path``, traversing
up to the maximum depth ``depth``.
@@ -386,7 +428,7 @@ Target
:param decode_unicode: decode the content of tar-ed files as utf-8
:param strip_null_char: remove null chars from utf-8 decoded files
.. method:: Target.read_tree_values_flat(path, depth=1)
Read values of all sysfs (or similar) file nodes under ``path``, traversing
up to the maximum depth ``depth``.
@@ -430,6 +472,10 @@ Target
Return a list of :class:`PsEntry` instances for all running processes on the
system.
.. method:: Target.makedirs(self, path)
Create a directory at the given path and all its ancestors if needed.
.. method:: Target.file_exists(self, filepath)
Returns ``True`` if the specified path exists on the target and ``False``
@@ -553,16 +599,35 @@ Target
Installs an additional module to the target after the initial setup has been
performed.
Linux Target
------------
.. class:: LinuxTarget(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=SshConnection, is_container=False,)
:class:`LinuxTarget` is a subclass of :class:`~devlib.target.Target`
with customisations specific to a device running Linux.
Local Linux Target
------------------
.. class:: LocalLinuxTarget(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=SshConnection, is_container=False,)
:class:`LocalLinuxTarget` is a subclass of
:class:`~devlib.target.LinuxTarget` with customisations specific to using
the host machine running Linux as the target.
Android Target
---------------
.. class:: AndroidTarget(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=AdbConnection, package_data_directory="/data/data")
:class:`AndroidTarget` is a subclass of :class:`~devlib.target.Target` with
additional features specific to a device running Android.
:param package_data_directory: This is the location of the data stored for
installed Android packages on the device.
.. method:: AndroidTarget.set_rotation(rotation)
@@ -635,25 +700,57 @@ Android Target
Returns ``True`` if the target's auto brightness is currently
enabled and ``False`` otherwise.
.. method:: AndroidTarget.set_stay_on_never()
Sets the stay-on mode to ``0``, where the screen will turn off
as standard after the timeout.
.. method:: AndroidTarget.set_stay_on_while_powered()
Sets the stay-on mode to ``7``, where the screen will stay on
while the device is charging.
.. method:: AndroidTarget.set_stay_on_mode(mode)
Sets the stay-on mode to the specified number between ``0`` and
``7`` (inclusive).
.. method:: AndroidTarget.get_stay_on_mode()
Returns an integer between ``0`` and ``7`` representing the current
stay-on mode of the device.
.. method:: AndroidTarget.ensure_screen_is_off(verify=True)
Checks if the device's screen is on and if so turns it off.
If ``verify`` is set to ``True`` then a ``TargetStableError``
will be raised if the display cannot be turned off, e.g. if
always-on mode is enabled.
.. method:: AndroidTarget.ensure_screen_is_on(verify=True)
Checks if the device's screen is off and if so turns it on.
If ``verify`` is set to ``True`` then a ``TargetStableError``
will be raised if the display cannot be turned on.
.. method:: AndroidTarget.ensure_screen_is_on_and_stays(verify=True, mode=7)
Calls ``AndroidTarget.ensure_screen_is_on(verify)`` then additionally
sets the screen stay on mode to ``mode``.
.. method:: AndroidTarget.is_screen_on()
Returns ``True`` if the target's screen is currently on and ``False``
otherwise. If the display is in a "Doze" mode or similar always-on state,
this will return ``True``.
.. method:: AndroidTarget.wait_for_device(timeout=30)
Returns when the device becomes available within the given timeout,
otherwise raises a ``TimeoutError``.
.. method:: AndroidTarget.reboot_bootloader(timeout=30)
Attempts to reboot the target into its bootloader.
.. method:: AndroidTarget.homescreen()
@@ -687,9 +784,9 @@ ChromeOS Target
:class:`ChromeOsTarget` if the device supports Android, otherwise only the
:class:`LinuxTarget` methods will be available.
:param working_directory: This is the location of the working directory to
be used for the Linux target container. If not specified will default to
``"/mnt/stateful_partition/devlib-target"``.
:param android_working_directory: This is the location of the working
directory to be used for the Android container. If not specified it will


@@ -69,9 +69,13 @@ for root, dirs, files in os.walk(devlib_dir):
filepaths = [os.path.join(root, f) for f in files]
data_files[package_name].extend([os.path.relpath(f, package_dir) for f in filepaths])
with open("README.rst", "r") as fh:
long_description = fh.read()
params = dict(
name='devlib',
description='A library for interacting with and instrumentation of remote devices.',
long_description=long_description,
version=__version__,
packages=packages,
package_data=data_files,
@@ -82,6 +86,8 @@ params = dict(
'python-dateutil', # converting between UTC and local time.
'pexpect>=3.3', # Send/recieve to/from device
'pyserial', # Serial port interface
'paramiko', # SSH connection
'scp', # SSH connection file transfers
'wrapt', # Basic for construction of decorator functions
'future', # Python 2-3 compatibility
'enum34;python_version<"3.4"', # Enums for Python < 3.4
@@ -90,6 +96,7 @@ params = dict(
'numpy; python_version>="3"',
'pandas<=0.24.2; python_version<"3"',
'pandas; python_version>"3"',
'lxml', # More robust xml parsing
],
extras_require={
'daq': ['daqpower>=2'],


@@ -0,0 +1,6 @@
CFLAGS=-Wall --pedantic-errors -O2 -static
all: get_clock_boottime
get_clock_boottime: get_clock_boottime.c
$(CC) $(CFLAGS) $^ -o $@


@@ -0,0 +1,18 @@
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void) {
int ret;
struct timespec tp;
ret = clock_gettime(CLOCK_BOOTTIME, &tp);
if (ret) {
perror("clock_gettime()");
return EXIT_FAILURE;
}
/* zero-pad nanoseconds so that e.g. 1.5 ms prints as 0.001500000 */
printf("%ld.%09ld\n", (long)tp.tv_sec, tp.tv_nsec);
return EXIT_SUCCESS;
}
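On hosts with Python 3.7+, the same clock that this helper binary reads is exposed directly by the standard library (Linux-specific, hence the attribute guard), which can be handy for cross-checking the binary's output:

```python
import time

# CLOCK_BOOTTIME counts time since boot, including time spent
# suspended; it is Linux-specific, so guard the attribute lookup.
if hasattr(time, 'CLOCK_BOOTTIME'):
    print('%.9f' % time.clock_gettime(time.CLOCK_BOOTTIME))
```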