mirror of https://github.com/ARM-software/devlib.git synced 2025-03-14 05:47:51 +00:00

173 Commits

Author SHA1 Message Date
Marc Bonnici
5425f4afff version: bump dev version
Bump the dev version to enable WA dependency for trace-cmd
2025-03-01 16:48:23 -06:00
Marc Bonnici
fa0d099707 ssh: Fix incorrect method name 2025-03-01 16:48:23 -06:00
Marc Bonnici
f2e81a8b5b host: Fix incorrect import
Fix import of `shutils` to `shutil`
2025-03-01 16:48:23 -06:00
Douglas Raillard
c88a5dbb8b devlib: Replace Target.tempfile() by Target.make_temp()
Replace as many uses of tempfile() by make_temp() as possible, as the
latter provides more reliable resource control by way of a context
manager. This also paves the way to having a single point in devlib
where temporary files are created, simplifying maintenance.
2025-03-01 16:39:07 -06:00
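The difference between the two styles can be sketched with a minimal stand-in; the method names mirror devlib's API but the bodies are illustrative only:

```python
import contextlib
import uuid

class Target:
    """Minimal stand-in showing why make_temp() is preferable to
    tempfile(); illustrative, not devlib's implementation."""

    def tempfile(self, prefix=None, suffix=None):
        # The caller is responsible for deleting the returned path.
        parts = [p for p in (prefix, uuid.uuid4().hex, suffix) if p]
        return '/tmp/' + '-'.join(parts)

    @contextlib.contextmanager
    def make_temp(self, prefix=None, suffix=None):
        # The context manager guarantees cleanup when the block exits.
        path = self.tempfile(prefix=prefix, suffix=suffix)
        try:
            yield path
        finally:
            pass  # a real implementation would remove the remote path here

target = Target()
with target.make_temp(prefix='devlib') as path:
    created = path
```

Note how no prefix or suffix is added when none is asked for, matching the aligned parameter behavior described above.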
Douglas Raillard
1da260b897 target: Align Target.tempfile() with Target.make_temp()
Align the parameters between the 2 methods:
* Use "None" as default value
* Do not add suffix or prefix if not asked for.
* Separate components with "-" instead of "_"
2025-03-01 16:39:07 -06:00
Douglas Raillard
3e1c928db3 target: Make Target.tempfile() use Target.tmp_directory 2025-03-01 16:39:07 -06:00
Douglas Raillard
e402fc7544 target: Make Target.make_temp() use Target.tmp_directory
Also avoid a "None" prefix when no prefix is asked for, and set None as
the default prefix value.

Remove the "devlib-test" default value as make_temp() has nothing to do
with tests.
2025-03-01 16:39:07 -06:00
Douglas Raillard
1ac461ad77 target: Rework temp folder management
* Make use of Target.make_temp() in Target._xfer_cache_path() to
  deduplicate temp folder creation code

* Introduce Target.tmp_directory attribute for the root of temporary
  files.

* Make Target.tempfile() use Target.tmp_directory

* Rework the Target._resolve_paths() implementations:
    * Target.tmp_directory is set to a default returned by "mktemp -d".
      This way, if "mktemp -d" works out of the box, all devlib
      temporary files will be located in the expected location for
      that operating system.

    * Target._resolve_paths() must set Target.working_directory and that
      is checked with an assert. Other directories have defaults based
      on this if _resolve_paths() does not set them.
2025-03-01 16:39:07 -06:00
Douglas Raillard
e551b46207 ftrace: Add write-to-disk mode
Allow using trace-cmd record to continuously dump the trace to disk,
which overcomes the buffer size limitations when recording for
extended periods of time.
2025-03-01 16:39:07 -06:00
Douglas Raillard
9ec36e9040 connection: Support all signals in BackgroundCommand.send_signal()
Support sending any signal to background commands, instead of only
properly supporting SIGKILL/SIGTERM/SIGQUIT.

The main issue is figuring out what PID to send the signal to, as the
devlib API allows running a whole snippet of shell script that typically
is wrapped under many layers of sh -c and sudo calls. In order to lift
the ambiguity, the user has access to a "devlib-signal-target" command
that points devlib at what process should be the target of signals:

    # Run a "setup" command, then the main command that will receive the
    # signals
    cmd = 'echo setup; devlib-signal-target echo hello world'
    with target.background(cmd) as bg:
        bg.communicate()

The devlib-signal-target script can only be used once per background
command, so that it is never ambiguous what process is targeted, and so
that the Python code can cache the target PID.  Subsequent invocations
of devlib-signal-target will fail.
2025-03-01 16:39:07 -06:00
Douglas Raillard
eb9e0c9870 collector/ftrace: Make emitting cpu_frequency_devlib and extra idle events dependent on the configured events
Make cpu_frequency_devlib dependent on whether the "cpu_frequency" event
has been selected rather than dependent on the cpufreq devlib module
being loaded on the target.

The old behavior became particularly problematic with the lazy loading
of modules. However, it was never a reliable way of knowing if the user
was interested in the frequency or not.

Apply a similar mechanism for the extra idle state transitions, which
are only emitted if the "cpu_idle" event is selected.
2025-03-01 16:21:19 -06:00
Douglas Raillard
ae8149077c collector/ftrace: Remove cpu_frequency_devlib event in FtraceCollector.stop()
Emitting the current frequency of all CPUs in the stop() hook is not
useful as the current frequency should already be known from the trace.
Either the event is emitted every time the frequency changes and the
up-to-date information is available, or the frequency never changes
(e.g. userspace governor) and the frequencies will be known from
emitting cpu_frequency_devlib in start().
2025-03-01 16:21:19 -06:00
Douglas Raillard
1b6c8069bd target: Asyncify Target._prepare_xfer()
_prepare_xfer() deals with all the paths resulting from glob expansion,
so it can benefit from async capabilities in order to process multiple
files concurrently.

Convert the internals to async/await so that map_concurrently() can be
put to use.
2025-03-01 16:11:45 -06:00
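The concurrency gain can be sketched with plain asyncio; map_concurrently() here is a stand-in for devlib's helper, not its actual implementation:

```python
import asyncio

async def process_path(path):
    # Placeholder for the per-file work (stat, chmod, copy, ...).
    await asyncio.sleep(0)
    return path.upper()

async def map_concurrently(f, xs):
    # Apply f to every item concurrently, preserving input order.
    return await asyncio.gather(*map(f, xs))

results = asyncio.run(map_concurrently(process_path, ['a', 'b', 'c']))
```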
Douglas Raillard
4431932e0d target: Reduce the number of commands involved in push/pull
* Combine cp and chmod for pull
* Make both push and pull use concurrent async code
2025-03-01 16:11:45 -06:00
Douglas Raillard
8af9f1a328 target: Use busybox for file transfer operations
Ensure we use the busybox command in operations involved in file
transfers.
2025-03-01 16:11:45 -06:00
Douglas Raillard
1efcfed63f target: Copy symlinks as files when pulling
When pulling a file from the target, copy all paths as files and follow
symlinks if necessary. That fixes issues related to chmod not working on
symlinks and generally allows getting any path.

If we want to one day preserve symlinks in some capacities, we can
always add an option to Target.pull() to do so.
2025-03-01 16:11:45 -06:00
Douglas Raillard
df1b5ef4a2 ssh: Fix folder pull on SSH connection
Paramiko seems to have had a slight change in behavior that broke
devlib: to save a remote command execution, we attempt to pull any path
as a file first, and then as a folder if the former failed.

This is now broken as paramiko will create an empty destination file
when trying to pull as a file. When we attempt again to pull as folder,
the destination exists already (empty file) and we raise an exception.

To fix that, make sure we clean up any leftovers of the file pull
attempt before trying again.
2025-03-01 16:11:45 -06:00
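The cleanup-then-retry pattern can be sketched as follows; the transport callables are stand-ins for the paramiko-backed file and folder pulls:

```python
import os
import tempfile

def pull(get_file, get_folder, src, dst):
    # Try pulling as a file first; on failure, remove any partial
    # artifact (paramiko may leave an empty destination file) before
    # retrying as a folder.
    try:
        get_file(src, dst)
    except IsADirectoryError:
        if os.path.exists(dst):
            os.remove(dst)
        get_folder(src, dst)

def fake_get_file(src, dst):
    open(dst, 'w').close()          # leave an empty file behind ...
    raise IsADirectoryError(src)    # ... then fail, as paramiko does

pulled = []
def fake_get_folder(src, dst):
    # A folder pull would fail here if dst still existed as a file.
    assert not os.path.exists(dst)
    pulled.append(src)

with tempfile.TemporaryDirectory() as d:
    pull(fake_get_file, fake_get_folder, '/remote/dir', os.path.join(d, 'out'))
```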
Douglas Raillard
facd251edb collector/dmesg: Fix dmesg variant detection
Check for all the CLI options we are going to use when deciding whether
to use the system's dmesg or the one we ship via busybox.
2025-02-10 14:44:06 -06:00
Douglas Raillard
a3765cc27d target: Remove duplicated disconnection logic
The logic in Target.disconnect() appears to have been duplicated by
error. While _probably_ harmless, this is at least confusing, and since
this happens outside of the lock, this may actually be a real problem.
2025-02-10 14:32:48 -06:00
Douglas Raillard
20e5bcd2c7 utils/android: Restore adb root state when disconnecting
The current behavior is to issue "adb unroot" if the device needed to be
rooted upon connection. This breaks use of nested Targets, which LISA
requires as some target interaction needs to happen in a subprocess.

Fix that by restoring the same adb root state that there was when
creating the connection, rather than blindly unrooting the device upon
disconnection.
2025-02-10 14:30:35 -06:00
Douglas Raillard
f60fa59ac1 collector/ftrace: Handle missing kprobe_events file
Deal cleanly with kernels that are compiled without kprobe events.
2025-02-10 14:18:49 -06:00
Douglas Raillard
499ea4753c target: Check command output sanity
Check that no element in the chain adds any unwanted content to stdout
or stderr when running a command. This is especially important as PAM
modules can just write arbitrary messages to stdout when using sudo,
such as password expiry notification. There unfortunately seems to be no
way of silencing it, but we can at least catch it before it triggers
errors down the line.
2024-10-08 17:36:26 -05:00
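One way to implement such a sanity check is a canary command whose exact output is known in advance; the function and token names here are illustrative, not devlib's:

```python
import subprocess

def check_output_sanity(run_cmd):
    # Run a canary command with known output; anything extra on stdout
    # (e.g. a PAM password-expiry message injected by sudo) is caught
    # early instead of corrupting later command results.
    token = 'DEVLIB-CANARY'
    out = run_cmd('echo ' + token)
    extra = out.replace(token + '\n', '', 1)
    if extra.strip():
        raise ValueError('unexpected output in command chain: %r' % extra)

def run_cmd(cmd):
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

check_output_sanity(run_cmd)      # a sane shell passes

def noisy_run_cmd(cmd):
    # Simulate a PAM module writing to stdout before the command output.
    return 'password will expire in 3 days\nDEVLIB-CANARY\n'

try:
    check_output_sanity(noisy_run_cmd)
    caught = False
except ValueError:
    caught = True
```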
Douglas Raillard
dabee29350 devlib: Remove sudo prompt
Since the prompt is added to stdout, remove the one-space-prompt that
currently corrupts stdout when a command is run with sudo.

That non-empty prompt was added as Windows Subsystem for Linux version 1
(WSL1) has a broken sudo implementation that chokes on an empty prompt.
Considering this is not a platform that is normally supported by devlib,
we re-introduce the empty prompt.
2024-10-08 17:36:26 -05:00
Douglas Raillard
6a6d9f30dd collector/ftrace: Fix FtraceCollector.kprobe_events attr name
self.kprobe_events is actually a path to a file, so it should be
suffixed with _file like all the others.
2024-09-30 18:31:36 -05:00
Douglas Raillard
e927e2f2cd collector/dmesg: Allow not raising on dmesg output parsing failure
Some drivers emit broken multiline dmesg output (with some Windows-style
newlines ...). In order to parse the rest of the content, allow not
raising on such input.
2024-09-30 18:31:02 -05:00
Metin Kaya
d4d9c92ae9 ftrace: Preserve kprobe events during trace-cmd reset
FtraceCollector.reset() executes 'trace-cmd reset ..' command which
clears all kprobes. This breaks tracing existing kprobe events (if any).
Thus, save kprobe events before trace-cmd reset and restore them after
the reset operation.

For context, I want to trace an ordinary function in the kernel (e.g.,
"echo 'p do_sys_open' > /sys/kernel/tracing/kprobe_events"). However,
FtraceCollector.reset() destroys kprobes, too. Preserving existing
kprobes allows me to use FtraceCollector class as is.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-09-25 11:56:14 -05:00
Douglas Raillard
8773c10424 utils/asyn: Ensure _AsyncPolymorphicFunction is not detected as a coroutine function
inspect.iscoroutinefunction() currently detects
_AsyncPolymorphicFunction() as a coroutine function, since it inspects
x.__code__.co_flags to make that determination. Because we delegate
attribute access to the async function, _AsyncPolymorphicFunction()
appears to be a coroutine function even though it is not.

Fix that by directing __code__ attribute access to __call__'s code.
2024-09-12 18:04:13 -05:00
Douglas Raillard
b6da67d12f utils/asyn: Ensure that we propagate docstrings in asyncf() 2024-09-12 18:04:13 -05:00
Douglas Raillard
fb4e155696 utils/asyn: Replace nest_asyncio with greenlet
Provide an implementation of re-entrant asyncio.run() that is less
brittle than what greenback provides (e.g. no use of ctypes to poke
extension types).

The general idea of the implementation consists in treating the executed
coroutine as a generator, then turning that generator into a generator
implemented using greenlet. This allows a nested function to make the
top-level parent yield values on its behalf, as if every call was
annotated with "yield from".
2024-09-12 17:59:19 -05:00
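The core idea (driving a coroutine like a generator) can be illustrated without greenlet: a coroutine that only awaits other coroutines never yields a real future, so a single send() completes it. This is a sketch of the principle, not the re-entrant implementation itself:

```python
async def add(a, b):
    return a + b

async def add_three(a, b, c):
    # Awaiting another coroutine runs it inline, as if annotated with
    # "yield from"; no event loop is needed for this chain.
    return await add(a, b) + c

def minimal_run(coro):
    # A coroutine is a generator driven by send(); its return value
    # arrives wrapped in StopIteration.
    try:
        coro.send(None)
    except StopIteration as stop:
        return stop.value
    raise RuntimeError('coroutine awaited a real future')

result = minimal_run(add_three(1, 2, 3))
```

The greenlet-based implementation generalizes this by letting nested calls yield real futures on the top-level coroutine's behalf.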
Douglas Raillard
b2e19d333b utils/asyn: Factor out the calls to asyncio.run
Prepare for providing our own implementation of asyncio.run() to work
without nest_asyncio package.
2024-09-12 17:59:19 -05:00
Douglas Raillard
165b87f248 target: Allow reuse of a connection once the owning thread is terminated
Once a thread exits, the connection instance it was using can be
returned to the pool so it can be reused by another thread.

Since there is no per-thread equivalent to atexit, this is achieved by
returning the connection to the pool after every top-level method call
that uses it directly, so the object the user can get by accessing
Target.conn can change after each call to a Target method.
2024-09-12 17:59:19 -05:00
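The borrow/return scheme can be sketched with a tiny pool; the names are illustrative and the real pool tracks per-thread ownership:

```python
import threading

class ConnectionPool:
    # A connection is borrowed for the duration of one top-level call
    # and returned afterwards, so it can be reused by another thread
    # once the original thread is gone.
    def __init__(self, make_conn):
        self._make_conn = make_conn
        self._pool = []
        self._lock = threading.Lock()

    def borrow(self):
        with self._lock:
            return self._pool.pop() if self._pool else self._make_conn()

    def give_back(self, conn):
        with self._lock:
            self._pool.append(conn)

created = []

def make_conn():
    conn = object()
    created.append(conn)
    return conn

pool = ConnectionPool(make_conn)

def top_level_call():
    conn = pool.borrow()
    try:
        pass  # use the connection here
    finally:
        pool.give_back(conn)

worker = threading.Thread(target=top_level_call)
worker.start()
worker.join()
top_level_call()   # reuses the connection created in the worker thread
```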
Douglas Raillard
1d6a007bad tests: Add tests for nested async support 2024-09-12 17:59:19 -05:00
Douglas Raillard
796b9fc1ef utils/asyn: Fix memoized_method.__set_name__
Set the "_name" attribute rather than trying to set the "name" read-only
property.
2024-09-12 17:59:19 -05:00
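A minimal descriptor sketch of the fix, assuming "name" is exposed as a read-only property backed by "_name" (illustrative, not devlib's full memoization logic):

```python
class memoized_method:
    def __init__(self, func):
        self.func = func
        self._name = None

    def __set_name__(self, owner, name):
        self._name = name       # the fix: not "self.name = name"

    @property
    def name(self):
        # Read-only: assigning to it would raise AttributeError.
        return self._name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return self.func.__get__(obj, objtype)

class Example:
    @memoized_method
    def method(self):
        return 'called'

descriptor_name = Example.__dict__['method'].name
result = Example().method()
```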
Metin Kaya
54a5732c61 tools/setup_host.sh: Remove unused package cpu-checker
cpu-checker was planned to detect availability of KVM acceleration in
QEMU by running the kvm-ok command. However, the implementation diverged
from the plan and made cpu-checker redundant. Thus, remove it from the apt
package list.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-08-06 09:19:06 -05:00
Metin Kaya
bbdd2ab67c target: Properly propagate ADB port information
Some ADB servers may use a non-standard port number. Hence, add 'adb_port'
property to AndroidTarget class and pass port number down to
adb_command().

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-07-11 18:55:20 -05:00
Douglas Raillard
38d4796e41 utils/ssh: Allow passing known_hosts path via strict_host_check value
Allow passing a known_hosts file path to strict_host_check.
2024-07-11 18:54:15 -05:00
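The extended semantics can be sketched with a small helper (the helper name is hypothetical): a string is treated as a known_hosts path, a plain truthy value falls back to the default file, and a falsy value disables checking:

```python
import os

def resolve_known_hosts(strict_host_check):
    # Sketch of the extended strict_host_check semantics.
    if isinstance(strict_host_check, str):
        return True, strict_host_check                       # custom file
    elif strict_host_check:
        return True, os.path.expanduser('~/.ssh/known_hosts')  # default
    else:
        return False, None                                   # no checking

strict, path = resolve_known_hosts('/tmp/my_known_hosts')
```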
Metin Kaya
de84a08bf8 tools/docker: Fixup test config file name
Apparently commit

492d42dddb63 ("target: tests: Address review comments on PR#667")

erroneously renamed target_configs.yaml to target_configs.yml.
Rename it to test_config.yml.

Also address 2 Docker warnings related to environment variables while we
are here.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-06-26 15:27:07 -05:00
Metin Kaya
b7d7b46626 tools/setup_host.sh: Rename install_base.sh to setup_host.sh
install_base.sh is a left-over from LISA/install_base.sh. The scope of
the script in question is different from (and can potentially diverge
further than) its root in LISA. Hence, give it a (hopefully) more
descriptive name.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-06-26 15:27:07 -05:00
Metin Kaya
b485484850 tools/android: Make install_base.sh modular for LISA integration
Both devlib and LISA utilize the install_base.sh script, but they
install different packages and support different input arguments. Also
support a custom ANDROID_HOME environment variable in order to let LISA
(or any user who wants to install the Android SDK/tools elsewhere)
choose the install location.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-06-26 15:27:07 -05:00
Metin Kaya
d8a09e895c tools/android: Remove emulator skins
Apparently skins are just nice to have. Also, devlib uses emulated
devices from the command line (no GUI), so skins are unnecessary. Removing
skins will also reduce the disparity in install_base.sh scripts of LISA
and devlib.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-06-26 15:27:07 -05:00
Metin Kaya
1c52f13e50 tools/android: Clone install_android_platform_tools() from LISA
Make sure devlib/install_base.sh has complete Android SDK support. This
will be the first step of removing duplicate Android SDK installation
functions from LISA/install_base.sh.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-06-26 15:27:07 -05:00
Metin Kaya
14905fb515 tools/android: Make cleanups in install_base.sh
Just a house-keeping patch to do some trivial improvements:
- Move global variables to the beginning of the script
- Eliminate redundant echo commands
- Tidy up the system package list
- Drop superfluous ';'

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-06-26 15:27:07 -05:00
Douglas Raillard
d3ca49f245 utils/misc: Fix AttributeError in tls_property
Do not assume the value of the property was set before it is deleted.
2024-06-20 16:13:57 -05:00
Douglas Raillard
3e45a2298e collector/dmesg: Handle non-matching regex
Raise an exception allowing diagnosis when a dmesg line does not match
the regex it is supposed to, rather than the cryptic groups()
AttributeError on None.
2024-06-13 16:17:10 -05:00
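Combined with the strict-mode toggle from the earlier dmesg commit, the parsing fix looks roughly like this; the regex is illustrative of a classic timestamped dmesg line, not the one devlib actually uses:

```python
import re

_LINE_RE = re.compile(r'\[\s*(?P<timestamp>[0-9.]+)\] (?P<msg>.*)')

def parse_line(line, strict=True):
    match = _LINE_RE.match(line)
    if match is None:
        # Raise something diagnosable instead of letting a later
        # match.groups() fail with a cryptic AttributeError on None.
        if strict:
            raise ValueError('dmesg line did not match: %r' % line)
        return None          # non-strict mode skips broken lines
    return match.group('timestamp'), match.group('msg')

entry = parse_line('[    1.234567] Booting Linux')
skipped = parse_line('broken\r\nline', strict=False)
```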
Douglas Raillard
3c37bf3de1 target: Make Target.make_temp() async-compatible 2024-06-13 16:10:04 -05:00
Douglas Raillard
52281051b2 target: Run Target.disconnect() upon process termination
Use atexit handler to run Target.disconnect() when the process is about
to exit. This avoids running it with a half torn down namespace, with
ensuing exceptions and non-clean disconnect.
2024-06-12 16:32:21 -05:00
Douglas Raillard
7714acc897 target: Make Target.disconnect() steal current connections
Ensure the connections that Target.disconnect() closes do not stay
around in case the Target object is later reused.
2024-06-12 16:32:21 -05:00
Douglas Raillard
f5f06122f3 target: Provide context manager API for Target
Allow cleanly disconnecting the Target object, so that we don't get
garbage output from __del__ later on when half of the namespace has
already disappeared.
2024-06-12 16:32:21 -05:00
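The context-manager protocol on Target can be sketched as follows; this is illustrative, showing only the deterministic-disconnect shape rather than devlib's actual code:

```python
class Target:
    def __init__(self):
        self.connected = True

    def disconnect(self):
        self.connected = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Disconnect deterministically on block exit instead of relying
        # on __del__ during interpreter teardown.
        self.disconnect()

with Target() as target:
    inside = target.connected
after = target.connected
```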
Sebastian Goscik
c9b539f722 Validate cgroups_run_into has taken effect
On some systems this seems to have no effect, leaving the executed shell
in the root cgroup. Previously, this function would still execute, and
the end user would think the desired process had run in the cgroup when
in fact it had not.
2024-06-12 16:23:35 -05:00
Douglas Raillard
a28c6d7ce0 utils/android: Use subprocess.DEVNULL where appropriate 2024-06-12 16:03:19 -05:00
Douglas Raillard
b8292b1f2b utils/android: Log error in _ping()
Log any error happening in the adb command run by _ping() so it can be
diagnosed. Also fix a possible deadlock by not using subprocess.PIPE
along with subprocess.call(), as the documentation recommends against it.
2024-06-12 16:03:19 -05:00
Stephen Paulger
94f1812ab2 Create LICENSE 2024-06-12 15:59:29 -05:00
Metin Kaya
71d1663b2d tools/android: Address review comments on PR#668
PR#668: https://github.com/ARM-software/devlib/pull/668

- Fix mixed tab-space white-spacing issues
- s/CMDLINE_VERSION/ANDROID_CMDLINE_VERSION/ to be more precise
- s/set_host_arch/get_android_sdk_host_arch/ because the global variable
  for Android host architecture is removed now

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-05-28 19:08:40 -05:00
Metin Kaya
492d42dddb target: tests: Address review comments on PR#667
PR#667: https://github.com/ARM-software/devlib/pull/667

- Implement a test module initializer with a tear down method in
  test/test_target.py
- Make various cleanups in test/test_target.py
- Improve structure of test/test_config.yml (previously
  target_configs.yaml)
- Make docstrings Sphinx compatible
- Make ``TargetRunner`` and its subclasses private
- Cleanup tests/test_target.py
- Replace print()'s with appropriate logging calls
- Implement ``NOPTargetRunner`` class for simplifying tests
- Improve Python v3.7 compatibility
- Relax host machine type checking
- Escape user input strings

and more..

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-05-28 19:08:05 -05:00
Douglas Raillard
7276097d4e target: Make default modules list empty
Default modules are a recurrent source of errors as they fail to
initialize (cgroups particularly) on any recent target. This leads to
error in basically any workload-automation setup on Android 12 and
above targets.

Since modules can now be lazily loaded upon Target attribute access,
there is no reason to preload those anymore.
2024-04-24 10:04:09 -05:00
Douglas Raillard
6939e5660e target: Cleanup and lazily initialize modules
Cleanup the module loading code and enable lazy initialization of modules:

* Target.modules is now a read-only property that is always a list of
  strings (no longer a mix of strings and dicts with the module name as
  key and a params dict as value).
  Target._modules dict stores parameters for each module that was asked
  for.

* Target.__init__() now makes thorough validation of the modules list it
  is given:
    * Specifying the same module multiple times is only allowed if they
      are specified with the same parameters. If a module is specified
      both with and without parameters, the parameters take precedence
      and the conflict is resolved.
    * Only one module of each "kind" can be present in the list.

* Module subclasses gained a class attribute "attr_name" that computes
  their "attribute name", i.e. the name under which they are expected to
  be looked up on a Target instance.

* Modules are now automatically registered by simple virtue of
  inheriting from Module and defining a name, wherever the source
  resides. They do not have to be located in devlib.modules anymore.
  This allows 3rd party module providers to very easily add new ones.

* Modules are accessible as Target attribute as:
    * Their "kind" if they specified one
    * Their "name" (always)

    This allows the consumer to either rely on a generic API (via the
    "kind") or to expect a specific module (via the "name").

* Accessing a module on Target will lazily load it even if it was not
  selected using Target(modules=...):
    * If the module parameters were specified in Target(modules=...) or
      via platform modules, they will be applied automatically.
    * Otherwise, no parameter is passed.
    * If no module can be found with that name, the list of
      Target.modules will be searched for a module matching the given
      kind. The first one to be found will be used.

* Modules specified in Target(modules=...) are still loaded eagerly when
  their stage is reached, just as before. We could easily make
  those lazily loaded too if we wanted.

* Specifying Target(modules={'foo': None}) will make the "foo" module
  unloadable. This can be used to prevent lazy loading a specific
  module.
2024-04-24 10:04:09 -05:00
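The registration and lazy-loading mechanism can be sketched as below; devlib's real registry also handles kinds, parameters, and load stages, so this is a simplified illustration:

```python
class Module:
    # Modules self-register on subclassing, wherever they are defined.
    name = None
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.name:
            Module.registry[cls.name] = cls

class Target:
    def __getattr__(self, attr):
        # Lazily instantiate a registered module on first access and
        # cache it as a regular instance attribute.
        try:
            cls = Module.registry[attr]
        except KeyError:
            raise AttributeError(attr)
        module = cls()
        setattr(self, attr, module)
        return module

class CpufreqModule(Module):
    name = 'cpufreq'

target = Target()
loaded = target.cpufreq            # triggers lazy loading
cached = target.cpufreq is loaded  # second access hits the cache
```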
Brendan Jackman
ce02f8695f Add missing import 2024-04-18 14:01:11 -05:00
Metin Kaya
b5f311feff tools/docker: Add Docker image support for devlib
Introduce a Dockerfile in order to create Docker image for devlib and
``run_tests.sh`` script to test Android, Linux, LocalLinux, and QEMU
targets on the Docker image.

The Dockerfile forks from ``Ubuntu-22.04``, installs required system
packages, checks out ``master`` branch of devlib, installs devlib,
creates Android virtual devices via ``tools/android/install_base.sh``,
and builds QEMU images for the aarch64 and x86_64 architectures.

Note that Android command line tools version, buildroot and devlib
branches can be customized via environment variables.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-04-01 14:02:38 -05:00
Metin Kaya
233f76d03a test_target.py: Allow specifying connection timeout for Android targets
The default connection timeout (30 secs) may be insufficient for some
test setups or in some conditions. Thus, support specifying the timeout
parameter in the target configuration file.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-04-01 14:02:38 -05:00
Metin Kaya
ac4f581f4b target: tests: Add support for testing ChromeOS targets
We can mimic a ChromeOS target by combining a QEMU guest (for the Linux
bindings of the ``ChromeOsTarget`` class) with an Android virtual desktop
(for the Android bits of ``ChromeOsTarget``).

Note that the Android bindings of the ``ChromeOsTarget`` class also
require the existence of the ``/opt/google/containers/android`` folder on the Linux
guest.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-04-01 14:02:38 -05:00
Metin Kaya
c6bd736c82 target: Address pylint issues in ChromeOsTarget class
Also clean a mutable default value (``modules=[]`` in ``ChromeOsTarget``
class).

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-04-01 14:02:38 -05:00
Douglas Raillard
28b30649f1 utils/ssh: Fix atexit.register() SshConnection leak
SshConnection registers an atexit handler so the connection is closed
upon exiting the process if it has not been done before. However, the
handler keeps a reference on the connection, which means it _will_ stay
alive. If lots of short-lived connections are created (which can happen
when using e.g. ThreadPoolExecutor), they will simply stay around and
leak.

Fix that by using a weak reference (WeakMethod) to register in the
atexit handler, with a callback to unregister it when the object is
deallocated.
2024-03-28 20:04:38 -05:00
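The weak-reference pattern described here can be sketched as follows; the class is a minimal stand-in for devlib's SshConnection:

```python
import atexit
import weakref

class SshConnection:
    def __init__(self):
        self.closed = False
        # The atexit handler only holds a WeakMethod, so it cannot keep
        # short-lived connections alive.
        ref = weakref.WeakMethod(self.close)

        def handler():
            close = ref()
            if close is not None:
                close()

        atexit.register(handler)
        # Drop the atexit entry once the connection is garbage-collected.
        weakref.finalize(self, atexit.unregister, handler)
        self._ref = ref

    def close(self):
        self.closed = True

conn = SshConnection()
probe = conn._ref
del conn                    # CPython frees the object immediately here
leaked = probe() is not None
```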
Sebastian Goscik
5817866ad0 Fixed issue where non-consecutive list resulted in incorrect ranges
For example, if `[3,1,2]` was provided, it would result in `3,1-2`, but after writing this to a sysfs file, it would read back as `1-3`.
2024-03-28 20:04:12 -05:00
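The fix amounts to sorting before collapsing consecutive runs; a minimal sketch (the helper name is illustrative):

```python
def list_to_ranges(values):
    # Sort (and dedupe) first so a non-consecutive input such as
    # [3, 1, 2] collapses to "1-3" rather than "3,1-2".
    values = sorted(set(values))
    ranges = []
    start = prev = values[0]
    for value in values[1:]:
        if value != prev + 1:
            ranges.append((start, prev))
            start = value
        prev = value
    ranges.append((start, prev))
    return ','.join(str(a) if a == b else '%d-%d' % (a, b)
                    for a, b in ranges)

fixed = list_to_ranges([3, 1, 2])
```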
Ola Olsson
8247ac91e7 Add option not to validate PMU counters.
The validation call can take a long time for targets where PLLs have
been clocked down, such as FPGAs.
2024-03-28 20:03:53 -05:00
Metin Kaya
228baeb317 target: Implement target runner classes
Add support for launching emulated targets on QEMU. The base class
``TargetRunner`` has groundwork for target runners like
``QEMUTargetRunner``.

``TargetRunner`` is a context manager which starts the runner process (e.g.,
QEMU), makes sure the target is accessible over SSH (if
``connect=True``), and terminates the runner process once it's done.

The other newly introduced ``QEMUTargetRunner`` class:
- performs sanity checks to ensure QEMU executable, kernel, and initrd
  images exist,
- builds QEMU parameters properly,
- creates ``Target`` object,
- and lets ``TargetRunner`` manage the QEMU instance.

Also add a new test case in ``tests/test_target.py`` to ensure devlib
can run a QEMU target and execute some basic commands on it.

While we are in the neighborhood, fix a typo in ``Target.setup()``.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-03-20 12:16:12 +00:00
Metin Kaya
1431bebd80 tools/buildroot: Add support for generating Linux target system images
Integrate buildroot into devlib in order to ease building kernel and
root filesystem images via 'generate-kernel-initrd.sh' helper script.

As its name suggests, the script builds a kernel image which also
includes an initial RAM disk, per the default config files located under
configs/<arch>/.

Provide config files for buildroot and Linux kernel as well as a
post-build.sh script which tweaks (e.g., allowing root login on SSH)
target's root filesystem.

doc/tools.rst talks about details of kernel and rootfs configuration.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-03-20 12:16:12 +00:00
Metin Kaya
dd84dc7e38 tests/test_target: Test more targets
Test Android and Linux targets as well in addition to LocalLinux target.
In order to keep basic verification easy, list the complete set of test
targets in tests/target_configs.yaml.example and keep the default
configuration file for targets simple.

Also:
- Create a test folder on target's working directory.
- Remove all devlib artefacts after execution of the test.
- Add logs to show progress of operations.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-03-20 12:16:12 +00:00
Metin Kaya
295f1269ed target: Introduce make_temp() for creating temp file/folder on target
``Target.make_temp()`` employs ``mktemp`` command to create a temporary
file or folder.

This method will be used in unit tests.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-03-20 12:16:12 +00:00
Metin Kaya
84c0935fb2 utils/ssh: Try to free up resources during client creation
SshConnection._make_client() may throw exceptions for several reasons
(e.g., the target is not ready yet). The client should be closed if that
is the case. Otherwise, Python unittest-like tools report resource
warnings for 'unclosed socket', etc.

Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-03-20 12:16:12 +00:00
Metin Kaya
598c0c1d3c tools/android: Add support for creating Android virtual devices
Introduce the ``tools/android/install_base.sh`` [1] script to install
Android command line tools, including the necessary platforms and
system images for Linux, and to create Android Virtual Devices (AVDs)
for Pixel 6 with Android v12 & v14 as well as an Android virtual
*desktop* device (v13) for ChromeOS tests.

[1] Forked from https://github.com/ARM-software/lisa/blob/main/install_base.sh

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-03-20 11:56:49 +00:00
Metin Kaya
a1718c3700 tests/test_target: Read target connection settings from a YAML file
This will be useful in automating CI tests without modifying the source
code.

Replace unittest with pytest in order to make parameter passing to test
functions easier.

Move target configuration reading and target object creation outside of
the test function, because we will run the test function for new
targets and may want to add new test functions.

While we are here, also fix pylint issues.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-02-23 12:48:44 -08:00
Metin Kaya
b5715b6560 utils/misc: Move load_struct_from_yaml() from WA to devlib
This is copied from WA (workload-automation/wa/utils/misc.py).
Hence, another PR [1] was published that removes the implementation from WA.

OTOH, this patch uses ``ruamel`` instead of ``yaml`` because of the
latter's design issues.

And also this patch fixes pylint issues in ``load_struct_from_yaml()``.

[1] https://github.com/ARM-software/workload-automation/pull/1248

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-02-23 12:48:44 -08:00
Metin Kaya
39dfa7ef72 utils/android: Add debug log about connection settings
While we are there, also fix a trivial pylint issue regarding string
format.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-02-23 12:48:44 -08:00
Metin Kaya
a83fe52382 test_target: Add copyright statement
Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-23 06:51:59 -08:00
Metin Kaya
613b4fabba ChromeOsTarget: Fix building SSH connection parameter list
Probing like ``if d.get(elem, None)`` ignores dict entries whose
values are falsy.

Here is a sample test code:
```python
connection_settings={'host': '127.0.0.1',
                     'port': 8022,
                     'username': 'root',
                     'password': 'root',
                     'strict_host_check': False}

ssh_conn_params = ['host', 'username', 'password', 'port', 'strict_host_check']

print(f'connection_settings={connection_settings}')

ssh_connection_settings = {}
for setting in ssh_conn_params:
    if connection_settings.get(setting, None):
        print(f'1. setting "{setting}" to "{connection_settings[setting]}"...')
        ssh_connection_settings[setting] = connection_settings[setting]
    else:
        print(f'1. "{setting}" is None!')

ssh_connection_settings = {}
for setting in ssh_conn_params:
    if setting in connection_settings:
        print(f'2. setting "{setting}" to "{connection_settings[setting]}"...')
        ssh_connection_settings[setting] = connection_settings[setting]
    else:
        print(f'2. "{setting}" is None!')
```

And its output:
```
connection_settings={'host': '127.0.0.1', 'port': 8022, 'username': 'root', 'password': 'root', 'strict_host_check': False}

1. setting "host" to "127.0.0.1"...
1. setting "username" to "root"...
1. setting "password" to "root"...
1. setting "port" to "8022"...
1. "strict_host_check" is None!

2. setting "host" to "127.0.0.1"...
2. setting "username" to "root"...
2. setting "password" to "root"...
2. setting "port" to "8022"...
2. setting "strict_host_check" to "False"...
```

Also fix a typo in a log message.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-23 06:51:59 -08:00
Metin Kaya
be7e73db16 utils/ssh: Load host keys only if strict_host_check is true
Loading host keys breaks setting up the SSH connection (paramiko throws
a BadHostKeyException) if the issuer does not want/need strict key
matching.

One use case for ignoring strict_host_check is automating virtual guests
(e.g., over QEMU). The issuer may want to skip loading host keys and
start with a blank list of known host keys.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-23 06:51:59 -08:00
Metin Kaya
e334f8816c target: Customize as_root parameter of *write_value()
Not all command executions (or write operations in this specific case)
require being root. So, allow write_value() and the dependent
revertable_write_value() to support non-root execution by introducing an
optional 'as_root' parameter whose default is True to preserve the
current behavior of the aforementioned methods.

Meanwhile, update the copyright year of the touched file, too.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-23 06:51:59 -08:00
Metin Kaya
38d8053f2f devlib: Remove unused imports
Also import 'warnings' before 'wrapt' module to address a pylint
warning.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-23 06:51:59 -08:00
Metin Kaya
7ccdea6b8e devlib/init: Resolve pylint issues
This increases the pylint score of __init__.py to 10/10.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-23 06:51:59 -08:00
Metin Kaya
cb36347dfe doc/connection: Fix typo Telenet
It should be *telnet* instead.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-23 06:51:59 -08:00
Luis Machado
c60737c78e [Android] Fix use-before-initialization during initialization of ApkInfo
I noticed the following errors during invocation of uibench/uibenchjanktests:

     job:     Initializing job wk1 (uibench) [1]
  signal:         Sending before-workload-initialized from wk1 (uibench) [1]
     apk:         Resolving package on host system
resolver:         Resolving <<Workload uibench>'s apk 14>
resolver:         Trying user.get
  signal:         Sending error-logged from <ErrorSignalHandler (DEBUG)>
  signal:         Disconnecting <bound method Executor._error_signalled_callback of executor> from error-logged(<class 'louie.sender.Any'>)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/signal.py", line 324, in wrap
  signal:             yield
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/job.py", line 97, in initialize
  signal:             self.workload.initialize(context)
  signal:           File "/repos/lisa/external/workload-automation/wa/utils/exec_control.py", line 83, in wrapper
  signal:             return method(*args, **kwargs)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/workload.py", line 305, in initialize
  signal:             self.apk.initialize(context)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/workload.py", line 717, in initialize
  signal:             self.resolve_package(context)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/workload.py", line 734, in resolve_package
  signal:             self.resolve_package_from_host(context)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/workload.py", line 774, in resolve_package_from_host
  signal:             apk_file = context.get_resource(ApkFile(self.owner,
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/execution.py", line 197, in get_resource
  signal:             result = self.resolver.get(resource, strict)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/resource.py", line 268, in get
  signal:             result = source(resource)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/getters.py", line 139, in get
  signal:             return get_from_location(directory, resource)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/getters.py", line 106, in get_from_location
  signal:             return get_generic_resource(resource, files)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/getters.py", line 63, in get_generic_resource
  signal:             if resource.match(f):
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/resource.py", line 165, in match
  signal:             uiauto_matches = uiauto_test_matches(path, self.uiauto)
  signal:           File "/repos/lisa/external/workload-automation/wa/framework/resource.py", line 335, in uiauto_test_matches
  signal:             info = get_cacheable_apk_info(path)
  signal:           File "/repos/lisa/external/workload-automation/wa/utils/android.py", line 192, in get_cacheable_apk_info
  signal:             info = ApkInfo(path)
  signal:           File "/repos/lisa/external/workload-automation/wa/utils/android.py", line 116, in __init__
  signal:             super().__init__(path)
  signal:           File "/repos/lisa/external/devlib/devlib/utils/android.py", line 152, in __init__
  signal:             self.parse(path)
  signal:           File "/repos/lisa/external/devlib/devlib/utils/android.py", line 159, in parse
  signal:             output = self._run([self._aapt, 'dump', 'badging', apk_path])
  signal:
  signal:         Sending error-logged from <ErrorSignalHandler (DEBUG)>
  signal:         AttributeError('ApkInfo' object has no attribute '_aapt')
  signal:         Sending after-workload-initialized from wk1 (uibench) [1]
  signal: Sending error-logged from <ErrorSignalHandler (DEBUG)>
  runner: Skipping remaining jobs due to "'ApkInfo' object has no attribute '_aapt'".

This is due to the fact we might call self.parse in ApkInfo::__init__, if the
path variable is set to a non-empty value, but the initialization of both
self._aapt and self._aapt_version is after this call.

Fix this by moving the initialization of both variables before the call to
self.parse.
2024-01-17 09:39:43 -08:00
Douglas Raillard
f60e341d6e target: Fix read_sysctl()
Add a leading "/" so the path is absolute.
2024-01-16 13:21:10 -08:00
Douglas Raillard
46219ace04 android: Fix typo in ApkInfo
Change self.aapt into self._aapt
2024-01-16 13:20:33 -08:00
Elif Topuz
4589b4698e target: Fix typo
Changed the target variable to self because target is not defined in the file.
2024-01-15 13:54:39 -08:00
Douglas Raillard
56746fdb33 ssh: Fix tools detection
Fix inadequate use of module-level __getattr__ (it is not used by the
global variable lookup path). Instead, detect all tools lazily in the
same fashion as with _AndroidEnv()
2024-01-15 13:47:23 -08:00
Douglas Raillard
c347861db4 android: Ensure we use the detected fastboot
Use fastboot as detected by _AndroidEnvironment instead of whatever
binary is in PATH.
2024-01-15 13:47:23 -08:00
Douglas Raillard
3f9ce8ba73 android: Fix tool detections
Module-level __getattr__ is not called on the global variable lookup
path, rendering it useless for what we want to do here.

Instead, use the _AndroidEnvironment class and make it lazy so that we
will not raise an exception by just importing the module.
2024-01-15 13:47:23 -08:00
Douglas Raillard
f30fb0b3fd utils/ssh: Ensure the detected sshpass is used
Since we detect the sshpass tool using which(), ensure that the code
uses that instead of just relying on PATH.
2024-01-10 11:22:54 -08:00
Douglas Raillard
c39d40c6f8 utils/ssh: Remove _check_env()
Replace _check_env() by lazily initialized global var.
2024-01-10 11:22:54 -08:00
Douglas Raillard
926aee1833 utils/android: Remove PATH manipulation
Android tools detection was manipulating os.environ['PATH'] which has
an impact beyond devlib (and even beyond the current process as it will
be inherited by any child).

Remove that hack and instead use global variables to get adb and
fastboot paths. These tools are now detected by _AndroidEnvironment()
like the others.
2024-01-10 11:22:54 -08:00
Douglas Raillard
19c51547d1 utils/android: Cleanup android tool detection
* Use lazy global var init using module-level __getattr__() and remove
  all the _check_env() calls.

* Cleanup the code by removing unnecessary statefulness. While doing so,
  prune paths that can never happen.

* Encapsulate all the logic in _AndroidEnvironment() instead of mutating
  it using standalone functions.

* Set "adb" and "fastboot" global variables to None as fastboot was
  always set to None, and adb was set to None on the path with
  ANDROID_HOME env var set.
2024-01-10 11:22:54 -08:00
Douglas Raillard
52485fbaa5 setup.py: Re-add "future" PyPI package
Re-add the "future" PyPI package since it actually contains the "past"
Python package that devlib still uses.
2024-01-09 12:07:57 -08:00
Douglas Raillard
416e8ac40f devlib: Remove Python 2 dead code
Remove code that was used for Python 2 only.
2024-01-09 12:07:57 -08:00
Douglas Raillard
ea4eccf95d setup.py: Remove use of "imp" module
Python 3.12 removed the "imp" module, so replace its use in devlib.
2024-01-09 12:07:57 -08:00
Marc Bonnici
b8bf2abf3b AndroidTarget: Skip ungrantable Android permission
Don't throw an error if attempting to grant a permission that
is not manageable.
2024-01-09 12:06:59 -08:00
Douglas Raillard
9f71c818c4 android: Add adb_port connection setting
Allow specifying the port of the adb server in use.
2024-01-09 12:06:26 -08:00
Douglas Raillard
0579a814f1 android: Add a retry logic for background command PID detection
PID detection can sometimes fail for unknown reasons. Maybe there is a
latency between the process being started and "ps" being able to see it
that can sometimes be high enough that we look for the process before
it's exposed.

In order to remedy that, add retry logic to avoid plain failures.
2024-01-09 12:06:26 -08:00
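The retry idea above can be sketched generically: re-run a lookup until it stops raising, up to a bounded number of attempts. Attempt counts, delays, and the flaky lookup below are illustrative, not devlib's actual values or code.

```python
import time

def retry(func, attempts=5, delay=0.05):
    """Call func until it succeeds, re-raising after the last attempt."""
    for i in range(attempts):
        try:
            return func()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

calls = []
def flaky_pid_lookup():
    # Simulates 'ps' not seeing a freshly started process right away.
    calls.append(None)
    if len(calls) < 3:
        raise RuntimeError('process not visible yet')
    return 4242

print(retry(flaky_pid_lookup))  # 4242, after two failed attempts
```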
Douglas Raillard
900531b417 android: Fix background command PID detection
Close the race between the background command and the detection of its
PID by freezing it while we detect the PID, then resuming it.
2024-01-09 12:06:26 -08:00
Metin Kaya
14b4e2069b target: Add helper function to check Android screen's locking state
Introduce is_screen_locked() which returns true if device screen is
locked and false otherwise.

This will be useful to automate unlocking the screen [1].

Also fix a typo in is_screen_on()'s documentation.

[1] https://github.com/ARM-software/workload-automation/pull/1246

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-09 07:39:56 -08:00
Metin Kaya
07294251c8 target: Handle dozing case in checking Android screen state
is_screen_on() should also check if the screen is in 'Dozing' state. If
the device is dozing, then is_screen_on() should return false.

Without this patch, is_screen_on() throws 'Could not establish screen
state' exception if the device is idling (screen is completely off).

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-09 07:39:56 -08:00
Metin Kaya
2f48b84e6b target: Fix indentation of a misaligned line
Apparently the line has an extra leading space.

Signed-off-by: Metin Kaya <metin.kaya@arm.com>
2024-01-09 07:39:56 -08:00
Elif Topuz
5a1eb4a778 UIBenchJankTests:modification to support Android 12/14 versions
The dex file search is modified to collect all the available methods under the package name. Tested with other benchmarks (Geekbench, PCMark, Jankbench on Android 12) as well.
2023-12-12 12:18:53 -08:00
Douglas Raillard
d7d1deedda collector/dmesg: Query sysctl kernel.dmesg_restrict
Query sysctl instead of checking CONFIG_SECURITY_DMESG_RESTRICT as that
option only provides a default value for the sysctl parameter.

Fixes https://github.com/ARM-software/devlib/issues/653
2023-11-06 08:57:13 -08:00
Douglas Raillard
18d2a343c7 target: Add Target.read_sysctl()
Add a getter to query sysctl values.
2023-11-06 08:57:13 -08:00
Douglas Raillard
5104002f1a target: Update kernel version parsing for Android GKI kernels
Android GKI kernels have versions such as:
5.15.110-android14-11-ga6d7915820a0-ab10726252

Update the parsing regex to include:
* gki_abi: 10726252 in this example
* android_version: 14 in this example

This also allows parsing the git sha1 correctly, which otherwise is
broken on a version like that.

Fixes https://github.com/ARM-software/devlib/issues/654
2023-11-06 08:54:44 -08:00
Morten Rasmussen
90973cac08 devlib: Make add_trip_point and add_thermal_zone private
Adding thermal zones and trip points are only done at thermal module
initialization. There is no need for these functions to be public.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2023-10-11 16:30:16 -07:00
Morten Rasmussen
403a0faf93 devlib: Add ThermalZone type and policy support to thermal module
The thermal module currently only reads thermal zone ids and allows
temperature reading. The mandatory thermal zone 'type' describes
what the zone is and is therefore quite useful information. This
commit also adds support for reading the current thermal zone
policy and available policies along with a few other properties.

This commit also adds async support to the thermal module.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2023-10-11 16:30:16 -07:00
Christian Loehle
9199d8884e ftrace: Do not read-verify buffer_size_kb value
The sysfs documentation mentions that the value written
to buffer_size_kb ftrace field may be rounded up.
So skip the verify loop on this field.

The case we are worried about (a requested buffer
size that the target cannot fulfill) is caught anyway,
as the sysfs write returns an error that we handle.

Signed-off-by: Christian Loehle <christian.loehle@arm.com>
2023-09-21 08:54:50 -07:00
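The write-then-verify-with-exceptions idea can be sketched as follows. The dict and the rounding helper stand in for tracefs and the kernel; the field names are real tracefs knobs, but everything else is illustrative.

```python
# Knobs are written and read back to verify, except fields the kernel
# documents as adjusted on write (e.g. buffer_size_kb may be rounded up).
NO_VERIFY = {'buffer_size_kb'}

def kernel_adjust(name, value):
    # Stand-in for the kernel rounding buffer_size_kb up.
    return value + 8 if name == 'buffer_size_kb' else value

def write_knob(fs, name, value):
    fs[name] = kernel_adjust(name, value)
    if name not in NO_VERIFY and fs[name] != value:
        raise ValueError(f'{name} did not stick')

fs = {}
write_knob(fs, 'tracing_on', 1)         # verified normally
write_knob(fs, 'buffer_size_kb', 7000)  # rounded up, verification skipped
print(fs['buffer_size_kb'])             # 7008
```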
Douglas Raillard
14bb86efad collector/perfetto: Use busybox cat
Use busybox cat instead of system's cat.
2023-09-12 17:04:26 -05:00
Douglas Raillard
1c0223556f utils/ssh: Fix SSHTransferHandle when using SCP
Using SSHConnection(use_scp=True) led to an exception:

    UnboundLocalError: local variable 'handle' referenced before assignment

This is caused by a (false) cyclic dependency between the initialization
of SSHTransferHandle and the creation of the SCPClient. We can fix that by
adding a level of indirection to tie both objects together.
2023-09-12 17:02:09 -05:00
Kajetan Puchalski
9b15807c17 collector: Add PerfettoCollector
Add a Collector for accessing Google's Perfetto tracing infrastructure.
The Collector takes a path to an on-device config file, starts tracing
in the background using the perfetto binary and then stops by killing
the tracing process.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-09-06 16:48:56 -05:00
Kajetan Puchalski
86fcc11ae1 target: Add is_running()
Add the "is_running" function that can be used to check if a given
process is running on the target device. It will return True if a
process matching the name is found and False otherwise.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-09-06 16:48:56 -05:00
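A rough sketch of how such a check can work by scanning `ps` output; the column layout and process names here are illustrative, not what devlib actually parses on a given target.

```python
def is_running(ps_output, name):
    """Return True if a process whose command matches name appears in ps output."""
    return any(name in line.split()[-1]          # last column: COMMAND
               for line in ps_output.splitlines()[1:])  # skip the header

ps = """PID   USER  COMMAND
  1    root  /sbin/init
 42    root  perfetto
"""
print(is_running(ps, 'perfetto'), is_running(ps, 'tracebox'))  # True False
```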
Douglas Raillard
b5aa065f7b bin: Update busybox
Update busybox to version 1.36.1 with defconfig and uniformly built on
Alpine v3.18, statically linked to musl libc.

Binaries were built using lisa-build-asset from LISA project:

    lisa-build-asset busybox --native-build
2023-08-29 19:18:31 -05:00
Douglas Raillard
35e7288149 utils/android: Use LC_ALL for adb commands
Ensures that adb commands are executed with english locale since we
sometimes match on the output.
2023-08-29 16:55:21 -05:00
Kajetan Puchalski
6b09571859 ftrace: Separate top_buffer_size from buffer_size
Since we now set the top buffer size to be the same as the devlib buffer
size, this effectively halves the maximum available buffer size that can
be set while using devlib. Whatever size is passed as `buffer_size` will
be allocated twice, even if the top buffer is hardly used at all.

This commit separates them into `buffer_size` and `top_buffer_size`. If
the latter is not passed, the behaviour will not change compared to now.

Fixes: e0c53d09990b5501e493d048a5dce067d8990281
Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-15 17:47:10 -05:00
Douglas Raillard
1730f69461 target: Avoid intermittent error when installing binary
Installing busybox sometimes fails with:

    cp: /data/local/tmp/bin/busybox: Text file busy

This happens when trying to modify a binary file while a process is
still running (e.g. unclean previous disconnection).

Fix that by using the -f option, which will remove the destination file
first and retry the copy in case of failure.
2023-08-09 16:39:37 -05:00
Douglas Raillard
cf4d3b5f4c collector/dmesg: Avoid unnecessary dmesg command
Only run the minimal amount of commands, as executing a command can be
costly.

In the sequence reset() -> start(), we only need to get the output of
dmesg upon start() to know what part of the log will be ignored
(everything before the call to start()). There is no need to perform
that upon reset() since the sequence:

    reset() -> start() -> stop() -> start() -> stop()
               \______1________/    \______2________/

is anyway equivalent to:

    reset() -> start() -> stop()
               \______2________/

So reset() can essentially be a no-op and the actual reset logic lives
in start().
2023-08-09 16:39:21 -05:00
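The equivalence above means the collector only needs a marker taken at start(); a toy sketch of the idea, with a plain list standing in for the live dmesg buffer (this is not devlib's DmesgCollector API):

```python
class LogCollector:
    def __init__(self, log):
        self._log = log        # stand-in for the live dmesg buffer
        self._start_len = 0

    def reset(self):
        pass                   # no-op: start() does the real work

    def start(self):
        # Snapshot the boundary: everything before this point is ignored.
        self._start_len = len(self._log)

    def stop(self):
        return self._log[self._start_len:]

log = ['boot msg']
c = LogCollector(log)
c.reset()
c.start()
log.append('event A')
print(c.stop())  # ['event A']
```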
Douglas Raillard
eb2c7e488b devlib/utils/serial_port: Avoid use of deprecated disutils 2023-08-09 16:39:08 -05:00
Douglas Raillard
306fd0624c devlib/utils/ssh: Avoid using deprecated distutils 2023-08-09 16:39:08 -05:00
Douglas Raillard
fe28e086c2 devlib/host: Remove use of deprecated distutils 2023-08-09 16:39:08 -05:00
Kajetan Puchalski
59ff6100d8 utils.rendering: Fix activity matching
Change the SurfaceFlingerFrameCollector to match activities by prefix
instead of looking for an exact match. This allows accounting for
activities with variable suffixes.
Raise an error if more than one activity matches the provided view.
Show a warning if no activities match the provided view in order to
avoid silently failing.

Suggested-by: Andriani Mappoura <andriani.mappoura@arm.com>
Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-08-09 16:27:26 -05:00
Kajetan Puchalski
be988bb42b target: Expose Android external storage app dir
FEATURE

Add a convenience property for AndroidTarget to expose Android's
external storage app directory path.
This path is used for some applications (such as Unity games) to
store persistent application data instead of '/data/data'.

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2023-05-30 17:39:46 -05:00
Douglas Raillard
ac0c39e31a connection: Make BackgroundCommand.wait() return non-None
The lack of a return statement in wait() made it return None instead of
the exit code. Add the appropriate return statement in wait() and other
functions to ensure the return value is not lost.
2023-05-17 10:24:18 -05:00
Marc Bonnici
e6323fc8bf connection/bg_cmd: fix missing use of signal
The signal parameter was being ignored, with the KILL signal always
being sent instead.
2023-05-17 10:24:18 -05:00
Marc Bonnici
7e2399055b connection: update kill command format
The kill applet in the current busybox executable does not support
the `--` syntax therefore remove from the template command.
2023-05-17 10:24:18 -05:00
Douglas Raillard
ddaa2f1621 connection: Rework TransferManager
* Split TransferManager and TransferHandle:
    * TransferManager deals with the generic monitoring. To abort a
      transfer, it simply cancels the transfer and raises an exception
      from manage().
    * TransferHandle provides a way for the manager to query the state
      of the transfer and cancel it. It is backend-specific.

* Remove most of the state in TransferManager, along with the associated
  background command leak etc

* Use a daemonic monitor thread to behave as expected on interpreter
  shutdown.

* Ensure a transfer manager _always_ exists. When no management is
  desired, a noop object is used. This avoids using a None sentinel,
  which is invariably mishandled by some code leading to crashes.

* Try to merge more paths in the code to uncover as many issues as
  possible in testing.

* Fix percentage for SSHTransferHandle (transferred / (remaining +
  transferred) instead of transferred / remaining)

* Rename total_timeout TransferManager parameter and attribute to
  total_transfer_timeout to match the connection name parameter.
2023-05-05 15:58:20 -05:00
Douglas Raillard
1c5412be2f connection: Remove dead code 2023-05-05 15:58:20 -05:00
Douglas Raillard
e0b1176757 connection: Cleanup TransferManager callback interface
Implement a sane interface avoiding variable positional arguments.
2023-05-05 15:58:20 -05:00
Douglas Raillard
45aebdaca9 connection: Ensure we don't leak too many BackgroundCommand
Make BackgroundCommand.__init__() poll all current BackgroundCommands on
the associated connection so they deregister themselves if they are
completed.

This ensures that a BackgroundCommand-heavy application that also does
not close them properly will not accumulate useless instances forever
and leak associated resources like Popen objects.
2023-04-29 13:46:56 -05:00
Douglas Raillard
1239fd922e connection: Make BackgroundCommand deregister itself
Instead of loosely tracking the current BackgroundCommand for a
connection in _current_bg_cmds WeakSet attribute, use a normal set and
make the BackgroundCommand deregister itself upon termination.

This allows canceling any outstanding BackgroundCommand when the
connection is closed. Currently, destroying a BackgroundCommand will not
cancel the command but devlib will simply loose track of it, and some
threads will likely fail in the background if they try to use the now
broken connection.
2023-04-29 13:46:56 -05:00
Douglas Raillard
069d2322f1 connection: Add BackgroundCommand.__init__(conn)
Add a constructor to BackgroundCommand so that the command knows the
connection it's tied to.
2023-04-29 13:46:56 -05:00
Douglas Raillard
7bdd6a0ade connection: Terminate background commands on close()
Ensure all background commands are terminated before we close the
connection.
2023-04-29 13:46:56 -05:00
Douglas Raillard
27fb0453a3 target: Fix and generalize Target.kick_off()
kick_off() implementation had a number of issue:
* Target specific implementation making testing more difficult.
* Not wrapping the command in sh -c led to blocking behavior if the
  command had multiple parts, e.g. "sleep 42; echo foo"
* nohup sometimes writes to stdout, breaking return code parsing in
  adb_shell().

These issues are fixed by a generic implementation of kick_off() that
simply delegates to Target.background().

Fixes https://github.com/ARM-software/devlib/issues/623
2023-04-29 13:46:46 -05:00
Douglas Raillard
9e0300b9f2 shutils: Fix broken redirections
Redirecting all output to /dev/null needs >/dev/null 2>&1 .

Fix cases where 2>&1 /dev/null was used, and also remove &> that is not
POSIX.
2023-04-29 13:46:46 -05:00
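A quick shell illustration of the difference (the function name is made up): `2>&1 /dev/null` duplicates stderr onto stdout and then passes `/dev/null` as a mere argument, so nothing is silenced, while the POSIX-correct order redirects stdout first and then stderr onto it.

```shell
# Correct POSIX form: stdout to /dev/null first, then stderr onto stdout.
noisy() { echo out; echo err >&2; }
result=$(noisy >/dev/null 2>&1; echo silenced)
echo "$result"
```

Running this prints only `silenced`; both streams of `noisy` are discarded. The `&>` shorthand does the same thing but is a bash extension, not POSIX.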
Douglas Raillard
e0c53d0999 ftrace: Set top-level buffer size
trace-cmd start -B devlib -b 42 will set the buffer size for the
"devlib" ftrace instance but will not set the buffer size of the
top-level buffer.

Unfortunately, some events still end up in the top-level buffer
regardless of any configuration such as the "print" event. This can lead
to lost events because the buffer size was too small.

Avoid that by using the buffer size for both top-level and devlib's
instance.
2023-04-18 18:12:26 -05:00
Douglas Raillard
0a910071f8 utils/android: Fix adb_root() exceptions
Ensure adb_root() always raises AdbRootError so that the caller can
catch it reliably. This is especially important since adb_root() failing
is ignored and simply triggers a fallback on using `su`. Android
production builds refuse adb root nowadays, so it's important that adb
root failures are handled well.
2023-04-06 11:05:23 -05:00
Douglas Raillard
4b13ee79eb ftrace: Avoid repeated available events query
FtraceCollector.available_events is not memoized anymore as the set of
events supported by the target can change dynamically (e.g. loading a
kernel module).

This means that calling self.available_events is somewhat expensive, so
avoid doing it in a loop. Instead, save the events in a variable and
reuse it in the function to save a substantial amount of time.
2023-04-06 11:05:04 -05:00
Douglas Raillard
fade6b4247 ftrace: Fix use of named buffer
trace-cmd extract needs -B devlib to be passed, otherwise an empty
buffer will be extracted.
2023-03-13 14:10:17 -05:00
Douglas Raillard
3d2cdd99c5 ftrace: Use named ftrace buffer
Use a buffer named "devlib" instead of using the top-level default
buffer. That will improve interop with other tools trying to collect a
trace at the same time as devlib.
2023-02-03 12:04:51 +00:00
Ibrahim Hassan
e012b175c6 module/cgroups2: Added utilisation of the 'LinuxTarget' interface.
Replaced all references to devlib.Target with devlib.LinuxTarget to
correctly utilise the appropriate interface. Also added relevant
functionality to correctly create the root directories of the CGroup
hierarchies, handling any error that occurs in that case.
2023-02-03 12:04:34 +00:00
Ibrahim Hassan
5ea63490a9 module/cgroups2: Replaced references to 'lisa' to 'devlib' 2023-02-03 12:04:34 +00:00
Ibrahim Hassan
d7b38e471d module/cgroups2: Add new CGroups management module
Handles both V1 and V2 CGroups transparently with an API matching
CGroup V2 semantics.

Also handles the CGroup delegation API provided by systemd.
2023-02-03 12:04:34 +00:00
Marc Bonnici
7f778e767d target: Ensure max_async is used during connect method
The value for `max_async` when creating a target was being ignored
if a connection was not established as part of the __init__ method.
Save this value for use via `connect` if called directly.
2023-02-03 12:04:01 +00:00
Douglas Raillard
93ada9762d devlib: Remove "future"
Remove the "future" dependency as devlib does not support Python 2
anymore.

Also remove the "from __future__ import division" as this is the default
in Python 3.
2023-01-19 11:38:11 +00:00
setrofim
111aa327ce Import quote() form shlex rather than pipes
pipes module is deprecated since 3.11, and quote() has been available in
shlex since 3.3.
2022-11-24 10:55:22 +00:00
setrofim
cc3498d315 Mitigate CVE-2007-4559
Prevent potential directory path traversal attacks (see
https://www.trellix.com/en-us/about/newsroom/stories/research/tarfile-exploiting-the-world.html)
2022-11-18 11:57:41 +00:00
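A common mitigation for this class of tarfile issue is to validate member paths before extraction, refusing any member that would resolve outside the destination directory. This is a generic sketch of the check, not necessarily the exact code devlib uses; the archive built below exists only to demonstrate the rejection.

```python
import io
import os
import tarfile

def safe_extract(tar, dest):
    """Refuse members that would escape dest (directory traversal)."""
    dest = os.path.abspath(dest)
    for member in tar.getmembers():
        target = os.path.abspath(os.path.join(dest, member.name))
        if os.path.commonpath([dest, target]) != dest:
            raise RuntimeError(f'blocked path traversal: {member.name}')
    tar.extractall(dest)

# Build an in-memory archive containing a traversal attempt.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    info = tarfile.TarInfo('../evil.txt')
    data = b'x'
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

try:
    with tarfile.open(fileobj=buf) as tar:
        safe_extract(tar, '/tmp/extract-demo')
except RuntimeError as e:
    print(e)  # blocked path traversal: ../evil.txt
```

Recent Python versions also offer `tarfile` extraction filters that perform equivalent checks.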
Douglas Raillard
678822f9e4 utils/misc: Cleanup check_output()
* Remove check_output_lock as the issue has been fixed in Python 3.4
* Use the Popen process as a context manager. Technically,
  Popen.communicate() already achieves this, but the context manager
  ensures cleanup happens even if an exception is raised at any point.
2022-08-22 09:32:11 +01:00
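Using Popen as a context manager looks like this; the command is an arbitrary example. The `with` block guarantees the stdout/stderr pipes are closed and the process is reaped even if `communicate()` raises partway through.

```python
import subprocess

with subprocess.Popen(['echo', 'hello'],
                      stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE,
                      text=True) as proc:
    # communicate() reads both streams and waits for the process to exit;
    # on exception, __exit__ still closes the pipes and waits.
    out, err = proc.communicate(timeout=10)

print(out.strip())  # hello
```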
Douglas Raillard
be734140b3 utils/android: Make AdbConnection.active_connections thread safe
Add a lock to serialize access to the dictionary.
2022-08-22 09:32:11 +01:00
Douglas Raillard
b988e245d9 utils/android: Fix AdbConnection.adb_root()
adb_root() restarts the server, leading to aborting commands run by
other connections and also apparently leading to command hanging in some
situations.

Therefore, only allow adb_root() when there is only one connection
active. That should not be a big issue as this is typically called by
the first connection ever made when the Target is created.
2022-08-22 09:32:11 +01:00
Douglas Raillard
b7ef2dc2e0 devlib.target: Fix AndroidTarget unpickle
Fix __setstate__ to call super().__setstate__ in order to handle the
generic part of the deserialization.
2022-08-17 10:53:43 +01:00
Douglas Raillard
492284f46d module/cpufreq: Fix typo
Fix per-cpu/global cpufreq governor tunable setting by replacing a
"pass" into a "continue".

Also name some futures to improve error reporting.
2022-08-09 14:02:17 +01:00
Douglas Raillard
fefdf29ed8 utils/asyn: Add memoize_method() decorator
Add a memoize_method decorator that works for async methods. It will not
leak memory since the memoization cache is held in the instance
__dict__, and it does not rely on hacks to hash unhashable data.
2022-07-28 14:40:15 +01:00
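The commit above can be illustrated with a minimal sketch of such a decorator. This simplified version handles positional, hashable arguments only and keys the cache off the method name; devlib's real helper is more general.

```python
import asyncio
import functools

def memoize_method(f):
    # The cache lives in the instance __dict__, so it is garbage-collected
    # together with the instance and never leaks across instances.
    @functools.wraps(f)
    async def wrapper(self, *args):
        cache = self.__dict__.setdefault(f.__name__ + '_cache', {})
        if args not in cache:
            cache[args] = await f(self, *args)
        return cache[args]
    return wrapper

class Target:
    calls = 0

    @memoize_method
    async def read_value(self, path):
        self.calls += 1
        return f'value of {path}'

async def main():
    t = Target()
    a = await t.read_value('/sys/x')
    b = await t.read_value('/sys/x')
    print(a == b, t.calls)  # True 1

asyncio.run(main())
```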
Douglas Raillard
0ea9c73ec0 module/cpufreq: Fix async use_governor()
use_governor() was trying to concurrently set both per-cpu and global
tunables for each governor, which led to a write conflict.

Split the work into the per-governor global tunables and the per-cpu
tunables, and do all that concurrently. Each task is therefore
responsible for a distinct set of files and all is well.

Also remove @memoized on async functions. It will be reintroduced in a
later commit when there is a safe alternative for async functions.
2022-07-28 14:40:15 +01:00
Douglas Raillard
2c4b16f280 devlib: Use async Target API
Make use of the new async API to speedup other parts of devlib.
2022-07-28 14:40:15 +01:00
Douglas Raillard
18ab9f80b0 target: Expose Target(max_async=50) parameter
Allow the user to set a maximum number of concurrent connections used to
dispatch non-blocking commands when using the async API.
2022-07-28 14:40:15 +01:00
Douglas Raillard
92f58e4e7a target: Enable async methods
Add async variants of Target methods.
2022-07-28 14:40:15 +01:00
Douglas Raillard
bdf8b88ac7 utils/async: Add new utils.async module
Home for async-related utilities.
2022-07-28 14:40:15 +01:00
Douglas Raillard
1da174a438 setup.py: Require Python >= 3.7
Require Python >= 3.7 in order to have access to a fully fledged asyncio
module.
2022-07-28 14:40:15 +01:00
Douglas Raillard
3c9804a45b setup.py: cleanup dependencies in setup.py
Remove dependencies that are ruled out due to the current Python minimal
version requirement.
2022-07-28 14:40:15 +01:00
Douglas Raillard
3fe105ffb7 target: Make __getstate__ more future-proof
Remove all the tls_property from the state, as they will be recreated
automatically.
2022-07-28 14:40:15 +01:00
Douglas Raillard
9bd76fd8af target: Fix Target.get_connection()'s busybox
The connection returned by Target.get_connection() does not have its
.busybox attribute initialized. This is expected for the first
connection, but connections created for new threads should have busybox
set.
2022-07-28 14:40:15 +01:00
Douglas Raillard
ef9384d161 utils.misc: Make nullcontext work with asyncio
Implement __aenter__ and __aexit__ on nullcontext so it can be used as
an asynchronous context manager.
2022-07-28 14:40:15 +01:00
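The dual-protocol idea can be sketched with a class that implements both the synchronous and asynchronous context manager interfaces; this is an illustration of the pattern, not devlib's exact class (and since Python 3.10 the stdlib `contextlib.nullcontext` supports `async with` natively).

```python
import asyncio

class NullContext:
    """Works with both 'with' and 'async with'; does nothing on exit."""
    def __init__(self, enter_result=None):
        self.enter_result = enter_result

    def __enter__(self):
        return self.enter_result

    def __exit__(self, *exc):
        return False           # never suppress exceptions

    async def __aenter__(self):
        return self.enter_result

    async def __aexit__(self, *exc):
        return False

with NullContext(1) as x:
    print(x)  # 1

async def main():
    async with NullContext(2) as y:
        print(y)  # 2

asyncio.run(main())
```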
Kajetan Puchalski
ff2268b715 module/cpuidle: Add listing & setting governors
Add support for listing the currently available idle governors and
setting the currently used one through sysfs.
2022-07-19 09:33:36 +01:00
Kajetan Puchalski
5042f474c2 module/cgroups: Skip disabled cgroup controllers
Currently the cgroups module will pull all available controllers from
/proc/cgroups and then try to mount them, including the disabled ones.
This will result in the entire mount failing.

Lines in /proc/cgroups ending in 0 correspond to disabled controllers.
Filtering those out solves the issue.
2022-07-19 09:33:23 +01:00
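The filtering described above can be sketched as follows. The sample text is illustrative but follows the documented /proc/cgroups column layout (subsys_name, hierarchy, num_cgroups, enabled), where a 0 in the last column marks a disabled controller.

```python
sample = """\
#subsys_name\thierarchy\tnum_cgroups\tenabled
cpuset\t3\t1\t1
cpu\t4\t50\t1
rdma\t11\t1\t0
"""

def enabled_controllers(text):
    out = []
    for line in text.splitlines():
        if line.startswith('#'):
            continue                     # skip the header line
        name, _hier, _num, enabled = line.split()
        if enabled == '1':               # 0 means the controller is disabled
            out.append(name)
    return out

print(enabled_controllers(sample))  # ['cpuset', 'cpu']
```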
Marc Bonnici
a585426924 android: Don't error if ADB is already running as root
With recent versions of adb, adb root can fail if the
daemon is already running as root.

Check the raised error message for this case and avoid
raising an error in this scenario.
2022-06-22 11:23:15 +01:00
Marc Bonnici
1196e336a5 version: bump minor version number
Re-bump the minor version to prepare for dropping
Python < 3.7 support.
2022-05-24 17:50:01 +01:00
Marc Bonnici
f525374fbb version: perform additional revision release
Revert the minor version number to allow release of additional
revision release to fix some bugs that made it into the previous
release.
2022-05-24 17:50:01 +01:00
Douglas Raillard
42e62aed57 target: Fix AndroidTarget pickling
Avoid pickling the "clear_logcat_lock". Instead, discard the attribute
and re-initialize it anew.
2022-05-24 10:39:31 +01:00
Douglas Raillard
f5cfcafb08 shutils: Remove shebang
Since shutils should be run using busybox shell anyway, remove the
shebang.
2022-05-24 10:37:17 +01:00
Douglas Raillard
7853d2c85c target: Run shutils.in in busybox
Ensure shutils.in runs in a busybox shell.
2022-05-24 10:37:17 +01:00
Douglas Raillard
a9fcc75f60 collector/dmesg: Fix dmesg_out property
When no entry has been recorded by the collector, return an empty string
rather than returning the full dmesg log.

Also fix get_data(), which would try to add None + '\n' if the
dmesg_out property returned None.
2022-05-18 15:21:18 +01:00
Douglas Raillard
cd8720b901 module/cgroups: Fix move_tasks()/move_all_tasks_to()
Both move_all_tasks_to() and move_tasks() take a list of grep patterns
to exclude.

It turned out that move_all_tasks_to() was calling move_tasks() with a
string instead of a list, leading to broken quoting.

Fix that by passing the pattern list to move_tasks() and let
move_tasks() add the "-e" option in front of it. Also add a
DeprecationWarning in move_tasks() if someone passes a string instead of
an iterable of strings.
2022-05-17 19:04:29 +01:00
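The string-vs-list pitfall and the DeprecationWarning can be sketched like this. The function shape is illustrative, not the module's real signature: a bare string passed where an iterable of patterns is expected would otherwise be iterated character by character, producing broken `-e` options.

```python
import warnings

def exclude_args(exclude=()):
    """Build grep '-e PATTERN' arguments from an iterable of patterns."""
    if isinstance(exclude, str):
        warnings.warn('pass an iterable of patterns, not a string',
                      DeprecationWarning)
        exclude = [exclude]   # treat the string as a single pattern
    return [arg for pat in exclude for arg in ('-e', pat)]

print(exclude_args(['foo', 'bar']))  # ['-e', 'foo', '-e', 'bar']
```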
Marc Bonnici
03569fb01f version: Bump minor version number
This next release will drop support for Python < 3.7
therefore bump to a dev tag of the next minor version.
2022-04-29 19:38:50 +01:00
77 changed files with 11399 additions and 1757 deletions

LICENSE Normal file

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 Arm Ltd.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,4 +1,4 @@
-# Copyright 2018 ARM Limited
+# Copyright 2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -13,9 +13,25 @@
 # limitations under the License.
 #
-from devlib.target import Target, LinuxTarget, AndroidTarget, LocalLinuxTarget, ChromeOsTarget
-from devlib.host import PACKAGE_BIN_DIRECTORY
-from devlib.exception import DevlibError, DevlibTransientError, DevlibStableError, TargetError, TargetTransientError, TargetStableError, TargetNotRespondingError, HostError
+'''
+Initializations for devlib module
+'''
+from devlib.target import (
+    Target, LinuxTarget, AndroidTarget, LocalLinuxTarget,
+    ChromeOsTarget,
+)
+from devlib.host import (
+    PACKAGE_BIN_DIRECTORY,
+    LocalConnection,
+)
+from devlib.exception import (
+    DevlibError, DevlibTransientError, DevlibStableError,
+    TargetError, TargetTransientError, TargetStableError,
+    TargetNotRespondingError, HostError,
+)
 from devlib.module import Module, HardRestModule, BootModule, FlashModule
 from devlib.module import get_module, register_module
@@ -46,15 +62,14 @@ from devlib.derived.energy import DerivedEnergyMeasurements
 from devlib.derived.fps import DerivedGfxInfoStats, DerivedSurfaceFlingerStats
 from devlib.collector.ftrace import FtraceCollector
+from devlib.collector.perfetto import PerfettoCollector
 from devlib.collector.perf import PerfCollector
 from devlib.collector.serial_trace import SerialTraceCollector
 from devlib.collector.dmesg import DmesgCollector
 from devlib.collector.logcat import LogcatCollector
-from devlib.host import LocalConnection
 from devlib.utils.android import AdbConnection
 from devlib.utils.ssh import SshConnection, TelnetConnection, Gem5Connection
 from devlib.utils.version import (get_devlib_version as __get_devlib_version,
                                   get_commit as __get_commit)
@@ -63,6 +78,6 @@ __version__ = __get_devlib_version()
 __commit = __get_commit()
 if __commit:
-    __full_version__ = '{}+{}'.format(__version__, __commit)
+    __full_version__ = f'{__version__}+{__commit}'
 else:
     __full_version__ = __version__

284
devlib/_target_runner.py Normal file

@@ -0,0 +1,284 @@
# Copyright 2024 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Target runner and related classes are implemented here.
"""
import logging
import os
import time
from platform import machine
from devlib.exception import (TargetStableError, HostError)
from devlib.target import LinuxTarget
from devlib.utils.misc import get_subprocess, which
from devlib.utils.ssh import SshConnection
class TargetRunner:
"""
A generic class for interacting with target runners.
It mainly aims to provide framework support for QEMU-like target runners
(e.g., :class:`QEMUTargetRunner`).
:param target: Specifies type of target per :class:`Target` based classes.
:type target: Target
"""
def __init__(self,
target):
self.target = target
self.logger = logging.getLogger(self.__class__.__name__)
def __enter__(self):
return self
def __exit__(self, *_):
pass
class SubprocessTargetRunner(TargetRunner):
"""
Class for providing subprocess support to the target runners.
:param runner_cmd: The command to start runner process (e.g.,
``qemu-system-aarch64 -kernel Image -append "console=ttyAMA0" ...``).
:type runner_cmd: list(str)
:param target: Specifies type of target per :class:`Target` based classes.
:type target: Target
:param connect: Specifies whether :class:`TargetRunner` should try to connect
to the target after launching it, defaults to True.
:type connect: bool or None
:param boot_timeout: Timeout in seconds for the target to become ready for
SSH access, defaults to 60.
:type boot_timeout: int or None
:raises HostError: if it cannot execute runner command successfully.
:raises TargetStableError: if Target is inaccessible.
"""
def __init__(self,
runner_cmd,
target,
connect=True,
boot_timeout=60):
super().__init__(target=target)
self.boot_timeout = boot_timeout
self.logger.info('runner_cmd: %s', runner_cmd)
try:
self.runner_process = get_subprocess(runner_cmd)
except Exception as ex:
raise HostError(f'Error while running "{runner_cmd}": {ex}') from ex
if connect:
self.wait_boot_complete()
def __enter__(self):
return self
def __exit__(self, *_):
"""
Exit routine for contextmanager.
Ensure ``SubprocessTargetRunner.runner_process`` is terminated on exit.
"""
self.terminate()
def wait_boot_complete(self):
"""
Wait for the target OS to finish booting and become accessible over SSH in at most
``SubprocessTargetRunner.boot_timeout`` seconds.
:raises TargetStableError: In case of timeout.
"""
start_time = time.time()
elapsed = 0
while self.boot_timeout >= elapsed:
try:
self.target.connect(timeout=self.boot_timeout - elapsed)
self.logger.debug('Target is ready.')
return
# pylint: disable=broad-except
except Exception as ex:
self.logger.info('Cannot connect target: %s', ex)
time.sleep(1)
elapsed = time.time() - start_time
self.terminate()
raise TargetStableError(f'Target is inaccessible for {self.boot_timeout} seconds!')
def terminate(self):
"""
Terminate ``SubprocessTargetRunner.runner_process``.
"""
self.logger.debug('Killing target runner...')
self.runner_process.kill()
self.runner_process.__exit__(None, None, None)
class NOPTargetRunner(TargetRunner):
"""
Class for implementing a target runner which does nothing except providing the ``.target`` attribute.
:param target: Specifies type of target per :class:`Target` based classes.
:type target: Target
"""
def __init__(self, target):
super().__init__(target=target)
def __enter__(self):
return self
def __exit__(self, *_):
pass
def terminate(self):
"""
Nothing to terminate for NOP target runners.
Defined to be compliant with other runners (e.g., ``SubprocessTargetRunner``).
"""
class QEMUTargetRunner(SubprocessTargetRunner):
"""
Class for preparing necessary groundwork for launching a guest OS on QEMU.
:param qemu_settings: A dictionary which has QEMU related parameters. The full list
of QEMU parameters is below:
* ``kernel_image``: This is the location of kernel image (e.g., ``Image``) which
will be used as target's kernel.
* ``arch``: Architecture type. Defaults to ``aarch64``.
* ``cpu_type``: CPU type for QEMU, defaults to ``cortex-a72``. This parameter
is valid for Arm architectures only.
* ``initrd_image``: This points to the location of initrd image (e.g.,
``rootfs.cpio.xz``) which will be used as target's root filesystem if kernel
does not include one already.
* ``mem_size``: Size of guest memory in MiB.
* ``num_cores``: Number of CPU cores. Guest will have ``2`` cores by default.
* ``num_threads``: Number of CPU threads. Set to ``2`` by default.
* ``cmdline``: Kernel command line parameters. By default this only specifies
the console device (i.e., ``console=ttyAMA0``), which is valid for Arm
architectures. May be changed to ``ttyS0`` for x86 platforms.
* ``enable_kvm``: Specifies if KVM will be used as accelerator in QEMU or not.
Enabled by default if host architecture matches with target's for improving
QEMU performance.
:type qemu_settings: Dict
:param connection_settings: the dictionary to store connection settings
of ``Target.connection_settings``, defaults to None.
:type connection_settings: Dict or None
:param make_target: Lambda function for creating :class:`Target` based object.
:type make_target: func or None
:Variable positional arguments: Forwarded to :class:`TargetRunner`.
:raises FileNotFoundError: if QEMU executable, kernel or initrd image cannot be found.
"""
def __init__(self,
qemu_settings,
connection_settings=None,
make_target=LinuxTarget,
**args):
self.connection_settings = {
'host': '127.0.0.1',
'port': 8022,
'username': 'root',
'password': 'root',
'strict_host_check': False,
}
self.connection_settings = {**self.connection_settings, **(connection_settings or {})}
qemu_args = {
'arch': 'aarch64',
'cpu_type': 'cortex-a72',
'mem_size': 512,
'num_cores': 2,
'num_threads': 2,
'cmdline': 'console=ttyAMA0',
'enable_kvm': True,
}
qemu_args = {**qemu_args, **qemu_settings}
qemu_executable = f'qemu-system-{qemu_args["arch"]}'
qemu_path = which(qemu_executable)
if qemu_path is None:
raise FileNotFoundError(f'Cannot find {qemu_executable} executable!')
if qemu_args.get("kernel_image"):
if not os.path.exists(qemu_args["kernel_image"]):
raise FileNotFoundError(f'{qemu_args["kernel_image"]} does not exist!')
else:
raise KeyError('qemu_settings must have kernel_image!')
qemu_cmd = [qemu_path,
'-kernel', qemu_args["kernel_image"],
'-append', f"'{qemu_args['cmdline']}'",
'-m', str(qemu_args["mem_size"]),
'-smp', f'cores={qemu_args["num_cores"]},threads={qemu_args["num_threads"]}',
'-netdev', f'user,id=net0,hostfwd=tcp::{self.connection_settings["port"]}-:22',
'-device', 'virtio-net-pci,netdev=net0',
'--nographic',
]
if qemu_args.get("initrd_image"):
if not os.path.exists(qemu_args["initrd_image"]):
raise FileNotFoundError(f'{qemu_args["initrd_image"]} does not exist!')
qemu_cmd.extend(['-initrd', qemu_args["initrd_image"]])
if qemu_args["enable_kvm"]:
# Enable KVM accelerator if host and guest architectures match.
# Comparison is done based on x86 for the sake of simplicity.
if (qemu_args['arch'].startswith('x86') and machine().startswith('x86')) or (
qemu_args['arch'] == machine()):
qemu_cmd.append('--enable-kvm')
# qemu-system-x86_64 does not support -machine virt as of now.
if not qemu_args['arch'].startswith('x86'):
qemu_cmd.extend(['-machine', 'virt', '-cpu', qemu_args["cpu_type"]])
target = make_target(connect=False,
conn_cls=SshConnection,
connection_settings=self.connection_settings)
super().__init__(runner_cmd=qemu_cmd,
target=target,
**args)

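`QEMUTargetRunner` above layers user-supplied settings over defaults with a dict merge, where later keys win. The idiom can be exercised in isolation; the helper name below is illustrative, and the default values are the connection defaults shown in the file:

```python
def merge_settings(defaults, overrides):
    """Merge two settings dicts; keys in overrides replace default entries."""
    # "overrides or {}" keeps the call safe when overrides is None.
    return {**defaults, **(overrides or {})}

connection_defaults = {
    'host': '127.0.0.1',
    'port': 8022,
    'username': 'root',
    'password': 'root',
    'strict_host_check': False,
}

# Override only the port; every other default is preserved.
settings = merge_settings(connection_defaults, {'port': 2222})
```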

@@ -0,0 +1,604 @@
Sources of busybox available at:
Git commit: 1a64f6a20aaf6ea4dbba68bbfa8cc1ab7e5c57c4
Git repository: git://git.busybox.net/busybox
Build host info:
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.3
PRETTY_NAME="Alpine Linux v3.18"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Build recipe:
export ARCH=arm64
export LISA_ARCH_ASSETS=/lisa/_assets/binaries/arm64
export LISA_HOME=''
#! /bin/bash
ALPINE_VERSION=v3.18
ALPINE_BUILD_DEPENDENCIES=(bash gcc make musl-dev linux-headers git)
download() {
git clone git://git.busybox.net/busybox --branch 1_36_stable --depth=1
git -C busybox checkout 1_36_1
}
build() {
cd busybox
make defconfig
# We need to generate a defconfig then remove the config, then set them to
# the value we want, as there is no make olddefconfig to fixup an edited
# config.
cat .config | grep -v '\bCONFIG_MODPROBE_SMALL\b' | grep -v '\bCONFIG_STATIC\b' > myconfig
echo "CONFIG_STATIC=y" >> myconfig
# MODPROBE_SMALL=y breaks the return code of insmod. Instead of forwarding
# the value from the kernel mod init function, it just returns 1.
echo "CONFIG_MODPROBE_SMALL=n" >> myconfig
cp myconfig .config
make -j 4 "CROSS_COMPILE=$CROSS_COMPILE"
}
install() {
cp -v busybox/busybox "$LISA_ARCH_ASSETS/busybox"
source "$LISA_HOME/tools/recipes/utils.sh"
install_readme busybox busybox LICENSE
}
The sources were distributed under the following licence (content of busybox/LICENSE):
--- A note on GPL versions
BusyBox is distributed under version 2 of the General Public License (included
in its entirety, below). Version 2 is the only version of this license which
this version of BusyBox (or modified versions derived from this one) may be
distributed under.
------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.
The sources were compiled with musl-libc (content of COPYRIGHT):
musl as a whole is licensed under the following standard MIT license:
----------------------------------------------------------------------
Copyright © 2005-2020 Rich Felker, et al.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
Authors/contributors include:
A. Wilcox
Ada Worcester
Alex Dowad
Alex Suykov
Alexander Monakov
Andre McCurdy
Andrew Kelley
Anthony G. Basile
Aric Belsito
Arvid Picciani
Bartosz Brachaczek
Benjamin Peterson
Bobby Bingham
Boris Brezillon
Brent Cook
Chris Spiegel
Clément Vasseur
Daniel Micay
Daniel Sabogal
Daurnimator
David Carlier
David Edelsohn
Denys Vlasenko
Dmitry Ivanov
Dmitry V. Levin
Drew DeVault
Emil Renner Berthing
Fangrui Song
Felix Fietkau
Felix Janda
Gianluca Anzolin
Hauke Mehrtens
He X
Hiltjo Posthuma
Isaac Dunham
Jaydeep Patil
Jens Gustedt
Jeremy Huntwork
Jo-Philipp Wich
Joakim Sindholt
John Spencer
Julien Ramseier
Justin Cormack
Kaarle Ritvanen
Khem Raj
Kylie McClain
Leah Neukirchen
Luca Barbato
Luka Perkov
M Farkas-Dyck (Strake)
Mahesh Bodapati
Markus Wichmann
Masanori Ogino
Michael Clark
Michael Forney
Mikhail Kremnyov
Natanael Copa
Nicholas J. Kain
orc
Pascal Cuoq
Patrick Oppenlander
Petr Hosek
Petr Skocik
Pierre Carrier
Reini Urban
Rich Felker
Richard Pennington
Ryan Fairfax
Samuel Holland
Segev Finer
Shiz
sin
Solar Designer
Stefan Kristiansson
Stefan O'Rear
Szabolcs Nagy
Timo Teräs
Trutz Behn
Valentin Ochs
Will Dietz
William Haddon
William Pitcock
Portions of this software are derived from third-party works licensed
under terms compatible with the above MIT license:
The TRE regular expression implementation (src/regex/reg* and
src/regex/tre*) is Copyright © 2001-2008 Ville Laurikari and licensed
under a 2-clause BSD license (license text in the source files). The
included version has been heavily modified by Rich Felker in 2012, in
the interests of size, simplicity, and namespace cleanliness.
Much of the math library code (src/math/* and src/complex/*) is
Copyright © 1993,2004 Sun Microsystems or
Copyright © 2003-2011 David Schultz or
Copyright © 2003-2009 Steven G. Kargl or
Copyright © 2003-2009 Bruce D. Evans or
Copyright © 2008 Stephen L. Moshier or
Copyright © 2017-2018 Arm Limited
and labelled as such in comments in the individual source files. All
have been licensed under extremely permissive terms.
The ARM memcpy code (src/string/arm/memcpy.S) is Copyright © 2008
The Android Open Source Project and is licensed under a two-clause BSD
license. It was taken from Bionic libc, used on Android.
The AArch64 memcpy and memset code (src/string/aarch64/*) are
Copyright © 1999-2019, Arm Limited.
The implementation of DES for crypt (src/crypt/crypt_des.c) is
Copyright © 1994 David Burren. It is licensed under a BSD license.
The implementation of blowfish crypt (src/crypt/crypt_blowfish.c) was
originally written by Solar Designer and placed into the public
domain. The code also comes with a fallback permissive license for use
in jurisdictions that may not recognize the public domain.
The smoothsort implementation (src/stdlib/qsort.c) is Copyright © 2011
Valentin Ochs and is licensed under an MIT-style license.
The x86_64 port was written by Nicholas J. Kain and is licensed under
the standard MIT terms.
The mips and microblaze ports were originally written by Richard
Pennington for use in the ellcc project. The original code was adapted
by Rich Felker for build system and code conventions during upstream
integration. It is licensed under the standard MIT terms.
The mips64 port was contributed by Imagination Technologies and is
licensed under the standard MIT terms.
The powerpc port was also originally written by Richard Pennington,
and later supplemented and integrated by John Spencer. It is licensed
under the standard MIT terms.
All other files which have no copyright comments are original works
produced specifically for use as part of this library, written either
by Rich Felker, the main author of the library, or by one or more
contributors listed above. Details on authorship of individual files
can be found in the git version control history of the project. The
omission of copyright and license comments in each file is in the
interest of source tree size.
In addition, permission is hereby granted for all public header files
(include/* and arch/*/bits/*) and crt files intended to be linked into
applications (crt/*, ldso/dlstart.c, and arch/*/crt_arch.h) to omit
the copyright notice and permission notice otherwise required by the
license, and to use these files without any requirement of
attribution. These files include substantial contributions from:
Bobby Bingham
John Spencer
Nicholas J. Kain
Rich Felker
Richard Pennington
Stefan Kristiansson
Szabolcs Nagy
all of whom have explicitly granted such permission.
This file previously contained text expressing a belief that most of
the files covered by the above exception were sufficiently trivial not
to be subject to copyright, resulting in confusion over whether it
negated the permissions granted in the license. In the spirit of
permissive licensing, and of not having licensing issues being an
obstacle to adoption, that text has been removed.
Sources of busybox available at:
Git commit: 1a64f6a20aaf6ea4dbba68bbfa8cc1ab7e5c57c4
Git repository: git://git.busybox.net/busybox
Build host info:
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.3
PRETTY_NAME="Alpine Linux v3.18"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Build recipe:
export ARCH=armeabi
export LISA_ARCH_ASSETS=/lisa/_assets/binaries/armeabi
export LISA_HOME=''
#! /bin/bash
ALPINE_VERSION=v3.18
ALPINE_BUILD_DEPENDENCIES=(bash gcc make musl-dev linux-headers git)
download() {
git clone git://git.busybox.net/busybox --branch 1_36_stable --depth=1
git -C busybox checkout 1_36_1
}
build() {
cd busybox
make defconfig
# We need to generate a defconfig then remove the config, then set them to
# the value we want, as there is no make olddefconfig to fixup an edited
# config.
cat .config | grep -v '\bCONFIG_MODPROBE_SMALL\b' | grep -v '\bCONFIG_STATIC\b' > myconfig
echo "CONFIG_STATIC=y" >> myconfig
# MODPROBE_SMALL=y breaks the return code of insmod. Instead of forwarding
# the value from the kernel mod init function, it just returns 1.
echo "CONFIG_MODPROBE_SMALL=n" >> myconfig
cp myconfig .config
make -j 4 "CROSS_COMPILE=$CROSS_COMPILE"
}
install() {
cp -v busybox/busybox "$LISA_ARCH_ASSETS/busybox"
source "$LISA_HOME/tools/recipes/utils.sh"
install_readme busybox busybox LICENSE
}
The sources were distributed under the following licence (content of busybox/LICENSE):
--- A note on GPL versions
BusyBox is distributed under version 2 of the General Public License (included
in its entirety, below). Version 2 is the only version of this license which
this version of BusyBox (or modified versions derived from this one) may be
distributed under.
------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.
The sources were compiled with musl-libc (content of COPYRIGHT):
musl as a whole is licensed under the following standard MIT license:
----------------------------------------------------------------------
Copyright © 2005-2020 Rich Felker, et al.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
Authors/contributors include:
A. Wilcox
Ada Worcester
Alex Dowad
Alex Suykov
Alexander Monakov
Andre McCurdy
Andrew Kelley
Anthony G. Basile
Aric Belsito
Arvid Picciani
Bartosz Brachaczek
Benjamin Peterson
Bobby Bingham
Boris Brezillon
Brent Cook
Chris Spiegel
Clément Vasseur
Daniel Micay
Daniel Sabogal
Daurnimator
David Carlier
David Edelsohn
Denys Vlasenko
Dmitry Ivanov
Dmitry V. Levin
Drew DeVault
Emil Renner Berthing
Fangrui Song
Felix Fietkau
Felix Janda
Gianluca Anzolin
Hauke Mehrtens
He X
Hiltjo Posthuma
Isaac Dunham
Jaydeep Patil
Jens Gustedt
Jeremy Huntwork
Jo-Philipp Wich
Joakim Sindholt
John Spencer
Julien Ramseier
Justin Cormack
Kaarle Ritvanen
Khem Raj
Kylie McClain
Leah Neukirchen
Luca Barbato
Luka Perkov
M Farkas-Dyck (Strake)
Mahesh Bodapati
Markus Wichmann
Masanori Ogino
Michael Clark
Michael Forney
Mikhail Kremnyov
Natanael Copa
Nicholas J. Kain
orc
Pascal Cuoq
Patrick Oppenlander
Petr Hosek
Petr Skocik
Pierre Carrier
Reini Urban
Rich Felker
Richard Pennington
Ryan Fairfax
Samuel Holland
Segev Finer
Shiz
sin
Solar Designer
Stefan Kristiansson
Stefan O'Rear
Szabolcs Nagy
Timo Teräs
Trutz Behn
Valentin Ochs
Will Dietz
William Haddon
William Pitcock
Portions of this software are derived from third-party works licensed
under terms compatible with the above MIT license:
The TRE regular expression implementation (src/regex/reg* and
src/regex/tre*) is Copyright © 2001-2008 Ville Laurikari and licensed
under a 2-clause BSD license (license text in the source files). The
included version has been heavily modified by Rich Felker in 2012, in
the interests of size, simplicity, and namespace cleanliness.
Much of the math library code (src/math/* and src/complex/*) is
Copyright © 1993,2004 Sun Microsystems or
Copyright © 2003-2011 David Schultz or
Copyright © 2003-2009 Steven G. Kargl or
Copyright © 2003-2009 Bruce D. Evans or
Copyright © 2008 Stephen L. Moshier or
Copyright © 2017-2018 Arm Limited
and labelled as such in comments in the individual source files. All
have been licensed under extremely permissive terms.
The ARM memcpy code (src/string/arm/memcpy.S) is Copyright © 2008
The Android Open Source Project and is licensed under a two-clause BSD
license. It was taken from Bionic libc, used on Android.
The AArch64 memcpy and memset code (src/string/aarch64/*) are
Copyright © 1999-2019, Arm Limited.
The implementation of DES for crypt (src/crypt/crypt_des.c) is
Copyright © 1994 David Burren. It is licensed under a BSD license.
The implementation of blowfish crypt (src/crypt/crypt_blowfish.c) was
originally written by Solar Designer and placed into the public
domain. The code also comes with a fallback permissive license for use
in jurisdictions that may not recognize the public domain.
The smoothsort implementation (src/stdlib/qsort.c) is Copyright © 2011
Valentin Ochs and is licensed under an MIT-style license.
The x86_64 port was written by Nicholas J. Kain and is licensed under
the standard MIT terms.
The mips and microblaze ports were originally written by Richard
Pennington for use in the ellcc project. The original code was adapted
by Rich Felker for build system and code conventions during upstream
integration. It is licensed under the standard MIT terms.
The mips64 port was contributed by Imagination Technologies and is
licensed under the standard MIT terms.
The powerpc port was also originally written by Richard Pennington,
and later supplemented and integrated by John Spencer. It is licensed
under the standard MIT terms.
All other files which have no copyright comments are original works
produced specifically for use as part of this library, written either
by Rich Felker, the main author of the library, or by one or more
contibutors listed above. Details on authorship of individual files
can be found in the git version control history of the project. The
omission of copyright and license comments in each file is in the
interest of source tree size.
In addition, permission is hereby granted for all public header files
(include/* and arch/*/bits/*) and crt files intended to be linked into
applications (crt/*, ldso/dlstart.c, and arch/*/crt_arch.h) to omit
the copyright notice and permission notice otherwise required by the
license, and to use these files without any requirement of
attribution. These files include substantial contributions from:
Bobby Bingham
John Spencer
Nicholas J. Kain
Rich Felker
Richard Pennington
Stefan Kristiansson
Szabolcs Nagy
all of whom have explicitly granted such permission.
This file previously contained text expressing a belief that most of
the files covered by the above exception were sufficiently trivial not
to be subject to copyright, resulting in confusion over whether it
negated the permissions granted in the license. In the spirit of
permissive licensing, and of not having licensing issues being an
obstacle to adoption, that text has been removed.

Sources of busybox available at:
Git commit: 1a64f6a20aaf6ea4dbba68bbfa8cc1ab7e5c57c4
Git repository: git://git.busybox.net/busybox
Build host info:
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.3
PRETTY_NAME="Alpine Linux v3.18"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Build recipe:
export ARCH=ppc64le
export LISA_ARCH_ASSETS=/lisa/_assets/binaries/ppc64le
export LISA_HOME=''
#! /bin/bash
ALPINE_VERSION=v3.18
ALPINE_BUILD_DEPENDENCIES=(bash gcc make musl-dev linux-headers git)
download() {
git clone git://git.busybox.net/busybox --branch 1_36_stable --depth=1
git -C busybox checkout 1_36_1
}
build() {
cd busybox
make defconfig
# We need to generate a defconfig then remove the config, then set them to
# the value we want, as there is no make olddefconfig to fixup an edited
# config.
cat .config | grep -v '\bCONFIG_MODPROBE_SMALL\b' | grep -v '\bCONFIG_STATIC\b' > myconfig
echo "CONFIG_STATIC=y" >> myconfig
# MODPROBE_SMALL=y breaks the return code of insmod. Instead of forwarding
# the value from the kernel mod init function, it just returns 1.
echo "CONFIG_MODPROBE_SMALL=n" >> myconfig
cp myconfig .config
make -j 4 "CROSS_COMPILE=$CROSS_COMPILE"
}
install() {
cp -v busybox/busybox "$LISA_ARCH_ASSETS/busybox"
source "$LISA_HOME/tools/recipes/utils.sh"
install_readme busybox busybox LICENSE
}
The sources were distributed under the following licence (content of busybox/LICENSE):
--- A note on GPL versions
BusyBox is distributed under version 2 of the General Public License (included
in its entirety, below). Version 2 is the only version of this license which
this version of BusyBox (or modified versions derived from this one) may be
distributed under.
------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.
The sources were compiled with musl-libc (content of COPYRIGHT):
musl as a whole is licensed under the following standard MIT license:
----------------------------------------------------------------------
Copyright © 2005-2020 Rich Felker, et al.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
Authors/contributors include:
A. Wilcox
Ada Worcester
Alex Dowad
Alex Suykov
Alexander Monakov
Andre McCurdy
Andrew Kelley
Anthony G. Basile
Aric Belsito
Arvid Picciani
Bartosz Brachaczek
Benjamin Peterson
Bobby Bingham
Boris Brezillon
Brent Cook
Chris Spiegel
Clément Vasseur
Daniel Micay
Daniel Sabogal
Daurnimator
David Carlier
David Edelsohn
Denys Vlasenko
Dmitry Ivanov
Dmitry V. Levin
Drew DeVault
Emil Renner Berthing
Fangrui Song
Felix Fietkau
Felix Janda
Gianluca Anzolin
Hauke Mehrtens
He X
Hiltjo Posthuma
Isaac Dunham
Jaydeep Patil
Jens Gustedt
Jeremy Huntwork
Jo-Philipp Wich
Joakim Sindholt
John Spencer
Julien Ramseier
Justin Cormack
Kaarle Ritvanen
Khem Raj
Kylie McClain
Leah Neukirchen
Luca Barbato
Luka Perkov
M Farkas-Dyck (Strake)
Mahesh Bodapati
Markus Wichmann
Masanori Ogino
Michael Clark
Michael Forney
Mikhail Kremnyov
Natanael Copa
Nicholas J. Kain
orc
Pascal Cuoq
Patrick Oppenlander
Petr Hosek
Petr Skocik
Pierre Carrier
Reini Urban
Rich Felker
Richard Pennington
Ryan Fairfax
Samuel Holland
Segev Finer
Shiz
sin
Solar Designer
Stefan Kristiansson
Stefan O'Rear
Szabolcs Nagy
Timo Teräs
Trutz Behn
Valentin Ochs
Will Dietz
William Haddon
William Pitcock
Portions of this software are derived from third-party works licensed
under terms compatible with the above MIT license:
The TRE regular expression implementation (src/regex/reg* and
src/regex/tre*) is Copyright © 2001-2008 Ville Laurikari and licensed
under a 2-clause BSD license (license text in the source files). The
included version has been heavily modified by Rich Felker in 2012, in
the interests of size, simplicity, and namespace cleanliness.
Much of the math library code (src/math/* and src/complex/*) is
Copyright © 1993,2004 Sun Microsystems or
Copyright © 2003-2011 David Schultz or
Copyright © 2003-2009 Steven G. Kargl or
Copyright © 2003-2009 Bruce D. Evans or
Copyright © 2008 Stephen L. Moshier or
Copyright © 2017-2018 Arm Limited
and labelled as such in comments in the individual source files. All
have been licensed under extremely permissive terms.
The ARM memcpy code (src/string/arm/memcpy.S) is Copyright © 2008
The Android Open Source Project and is licensed under a two-clause BSD
license. It was taken from Bionic libc, used on Android.
The AArch64 memcpy and memset code (src/string/aarch64/*) are
Copyright © 1999-2019, Arm Limited.
The implementation of DES for crypt (src/crypt/crypt_des.c) is
Copyright © 1994 David Burren. It is licensed under a BSD license.
The implementation of blowfish crypt (src/crypt/crypt_blowfish.c) was
originally written by Solar Designer and placed into the public
domain. The code also comes with a fallback permissive license for use
in jurisdictions that may not recognize the public domain.
The smoothsort implementation (src/stdlib/qsort.c) is Copyright © 2011
Valentin Ochs and is licensed under an MIT-style license.
The x86_64 port was written by Nicholas J. Kain and is licensed under
the standard MIT terms.
The mips and microblaze ports were originally written by Richard
Pennington for use in the ellcc project. The original code was adapted
by Rich Felker for build system and code conventions during upstream
integration. It is licensed under the standard MIT terms.
The mips64 port was contributed by Imagination Technologies and is
licensed under the standard MIT terms.
The powerpc port was also originally written by Richard Pennington,
and later supplemented and integrated by John Spencer. It is licensed
under the standard MIT terms.
All other files which have no copyright comments are original works
produced specifically for use as part of this library, written either
by Rich Felker, the main author of the library, or by one or more
contributors listed above. Details on authorship of individual files
can be found in the git version control history of the project. The
omission of copyright and license comments in each file is in the
interest of source tree size.
In addition, permission is hereby granted for all public header files
(include/* and arch/*/bits/*) and crt files intended to be linked into
applications (crt/*, ldso/dlstart.c, and arch/*/crt_arch.h) to omit
the copyright notice and permission notice otherwise required by the
license, and to use these files without any requirement of
attribution. These files include substantial contributions from:
Bobby Bingham
John Spencer
Nicholas J. Kain
Rich Felker
Richard Pennington
Stefan Kristiansson
Szabolcs Nagy
all of whom have explicitly granted such permission.
This file previously contained text expressing a belief that most of
the files covered by the above exception were sufficiently trivial not
to be subject to copyright, resulting in confusion over whether it
negated the permissions granted in the license. In the spirit of
permissive licensing, and of not having licensing issues being an
obstacle to adoption, that text has been removed.

Binary file not shown.


@ -0,0 +1,20 @@
(
# If there is no data dir, it means we are not running as a background
# command so we just do nothing
if [ -e "$_DEVLIB_BG_CMD_DATA_DIR" ]; then
pid_file="$_DEVLIB_BG_CMD_DATA_DIR/pid"
# Atomically check if the PID file already exists and make the write
# fail if it already does. This way we don't have any race condition
# with the Python API, as there is either no PID or the same PID for
# the duration of the command
set -o noclobber
if ! printf "%u\n" $$ > "$pid_file"; then
echo "$0 was already called for this command" >&2
exit 1
fi
fi
) || exit $?
# Use exec so that the PID of the command we run is the same as the current $$
# PID that we just registered
exec "$@"
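The `noclobber` trick above is the standard POSIX-shell way to get an atomic test-and-create: the redirection either creates the file or fails, with no window in between. A minimal sketch of the pattern, using a hypothetical `pidfile` path rather than devlib's `_DEVLIB_BG_CMD_DATA_DIR`:

```shell
# Demonstration of atomic PID-file creation with noclobber.
# 'tmpdir' and 'pidfile' are throwaway paths for this sketch only.
tmpdir=$(mktemp -d)
pidfile="$tmpdir/pid"

set -o noclobber
# First write succeeds: the file does not exist yet.
if printf '%u\n' $$ > "$pidfile"; then
    first=ok
fi
# Second write fails: noclobber refuses to truncate an existing file,
# so a duplicate registration is detected atomically.
if ! printf '%u\n' 12345 > "$pidfile" 2>/dev/null; then
    second=refused
fi
set +o noclobber
```

The subsequent `exec "$@"` matters for the same reason: it replaces the shell with the command without forking, so the PID written to the file is the PID of the command itself.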


@ -1,5 +1,3 @@
#!__DEVLIB_SHELL__
CMD=$1
shift
@ -156,14 +154,23 @@ cgroups_run_into() {
# Move this shell into that control group
echo $$ > $CGPATH/cgroup.procs
echo "Moving task into root CGroup ($CGPATH)"
# Check the move actually worked
$GREP -E "$$" $CGPATH/cgroup.procs >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "ERROR: Process was not moved into $CGP"
exit 1
fi
done
if [ $? -ne 0 ]; then
exit 1
fi
# Execution under specified CGroup
else
# Check if the required CGroup exists
$FIND $CGMOUNT -type d -mindepth 1 | \
$GREP -E "^$CGMOUNT/devlib_cgh[0-9]{1,2}$CGP" >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "ERROR: could not find any $CGP cgroup under $CGMOUNT"
exit 1
@ -175,8 +182,16 @@ cgroups_run_into() {
# Move this shell into that control group
echo $$ > $CGPATH/cgroup.procs
echo "Moving task into $CGPATH"
# Check the move actually worked
$GREP -E "$$" $CGPATH/cgroup.procs >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "ERROR: Process was not moved into $CGP"
exit 1
fi
done
if [ $? -ne 0 ]; then
exit 1
fi
fi
# Execute the command
@ -347,7 +362,7 @@ _command_not_found() {
exit 1
}
# Check the command exists
type "$CMD" >/dev/null 2>&1 || _command_not_found
"$CMD" "$@"
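The move-and-verify step the diff adds can be sketched in isolation. This is a hedged illustration, not devlib's code: `move_into_cgroup` is a hypothetical helper, and on a real system the path would live under the cgroup mount (e.g. `/sys/fs/cgroup/...`). It uses `grep -qx` rather than the diff's `grep -E "$$"` so the PID cannot match as a substring of a longer PID:

```shell
# Sketch: write the current shell's PID into a cgroup's cgroup.procs
# and verify the kernel (or, here, the file) actually recorded it.
move_into_cgroup() {
    CGPATH=$1
    echo $$ > "$CGPATH/cgroup.procs" || return 1
    # -x matches the whole line, so PID 123 cannot match inside 1234.
    grep -qx "$$" "$CGPATH/cgroup.procs"
}
```

Against a real cgroup directory the kernel interprets the write as a task migration; the read-back check is what the commit adds, since a failed write can otherwise go unnoticed.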


@ -0,0 +1,604 @@
Sources of busybox available at:
Git commit: 1a64f6a20aaf6ea4dbba68bbfa8cc1ab7e5c57c4
Git repository: git://git.busybox.net/busybox
Build host info:
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.3
PRETTY_NAME="Alpine Linux v3.18"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Build recipe:
export ARCH=x86
export LISA_ARCH_ASSETS=/lisa/_assets/binaries/x86
export LISA_HOME=''
#! /bin/bash
ALPINE_VERSION=v3.18
ALPINE_BUILD_DEPENDENCIES=(bash gcc make musl-dev linux-headers git)
download() {
git clone git://git.busybox.net/busybox --branch 1_36_stable --depth=1
git -C busybox checkout 1_36_1
}
build() {
cd busybox
make defconfig
# We need to generate a defconfig, then remove the config options and
# set them to the values we want, as there is no "make olddefconfig" to
# fix up an edited config.
cat .config | grep -v '\bCONFIG_MODPROBE_SMALL\b' | grep -v '\bCONFIG_STATIC\b' > myconfig
echo "CONFIG_STATIC=y" >> myconfig
# MODPROBE_SMALL=y breaks the return code of insmod. Instead of forwarding
# the value from the kernel mod init function, it just returns 1.
echo "CONFIG_MODPROBE_SMALL=n" >> myconfig
cp myconfig .config
make -j 4 "CROSS_COMPILE=$CROSS_COMPILE"
}
install() {
cp -v busybox/busybox "$LISA_ARCH_ASSETS/busybox"
source "$LISA_HOME/tools/recipes/utils.sh"
install_readme busybox busybox LICENSE
}
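The config-editing step in `build()` follows a common kconfig pattern: filter out the options you want to override, then append the forced values so the last occurrence wins. A self-contained sketch on a throwaway file (the option names are taken from the recipe above; the file paths are hypothetical):

```shell
# Sketch of the filter-then-append config edit used in the recipe.
cfg=$(mktemp)
printf '%s\n' 'CONFIG_STATIC=n' 'CONFIG_MODPROBE_SMALL=y' 'CONFIG_OTHER=y' > "$cfg"

# Drop any existing lines mentioning the options we want to control...
grep -v '\bCONFIG_MODPROBE_SMALL\b' "$cfg" | grep -v '\bCONFIG_STATIC\b' > "$cfg.new"

# ...then append the values we actually want.
{
    echo 'CONFIG_STATIC=y'
    echo 'CONFIG_MODPROBE_SMALL=n'
} >> "$cfg.new"
```

Unrelated options (here `CONFIG_OTHER`) pass through untouched, which is the point of filtering rather than regenerating the whole config.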
The sources were distributed under the following licence (content of busybox/LICENSE):
--- A note on GPL versions
BusyBox is distributed under version 2 of the General Public License (included
in its entirety, below). Version 2 is the only version of this license which
this version of BusyBox (or modified versions derived from this one) may be
distributed under.
------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.
The sources were compiled with musl-libc (content of COPYRIGHT):
musl as a whole is licensed under the following standard MIT license:
----------------------------------------------------------------------
Copyright © 2005-2020 Rich Felker, et al.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
Authors/contributors include:
A. Wilcox
Ada Worcester
Alex Dowad
Alex Suykov
Alexander Monakov
Andre McCurdy
Andrew Kelley
Anthony G. Basile
Aric Belsito
Arvid Picciani
Bartosz Brachaczek
Benjamin Peterson
Bobby Bingham
Boris Brezillon
Brent Cook
Chris Spiegel
Clément Vasseur
Daniel Micay
Daniel Sabogal
Daurnimator
David Carlier
David Edelsohn
Denys Vlasenko
Dmitry Ivanov
Dmitry V. Levin
Drew DeVault
Emil Renner Berthing
Fangrui Song
Felix Fietkau
Felix Janda
Gianluca Anzolin
Hauke Mehrtens
He X
Hiltjo Posthuma
Isaac Dunham
Jaydeep Patil
Jens Gustedt
Jeremy Huntwork
Jo-Philipp Wich
Joakim Sindholt
John Spencer
Julien Ramseier
Justin Cormack
Kaarle Ritvanen
Khem Raj
Kylie McClain
Leah Neukirchen
Luca Barbato
Luka Perkov
M Farkas-Dyck (Strake)
Mahesh Bodapati
Markus Wichmann
Masanori Ogino
Michael Clark
Michael Forney
Mikhail Kremnyov
Natanael Copa
Nicholas J. Kain
orc
Pascal Cuoq
Patrick Oppenlander
Petr Hosek
Petr Skocik
Pierre Carrier
Reini Urban
Rich Felker
Richard Pennington
Ryan Fairfax
Samuel Holland
Segev Finer
Shiz
sin
Solar Designer
Stefan Kristiansson
Stefan O'Rear
Szabolcs Nagy
Timo Teräs
Trutz Behn
Valentin Ochs
Will Dietz
William Haddon
William Pitcock
Portions of this software are derived from third-party works licensed
under terms compatible with the above MIT license:
The TRE regular expression implementation (src/regex/reg* and
src/regex/tre*) is Copyright © 2001-2008 Ville Laurikari and licensed
under a 2-clause BSD license (license text in the source files). The
included version has been heavily modified by Rich Felker in 2012, in
the interests of size, simplicity, and namespace cleanliness.
Much of the math library code (src/math/* and src/complex/*) is
Copyright © 1993,2004 Sun Microsystems or
Copyright © 2003-2011 David Schultz or
Copyright © 2003-2009 Steven G. Kargl or
Copyright © 2003-2009 Bruce D. Evans or
Copyright © 2008 Stephen L. Moshier or
Copyright © 2017-2018 Arm Limited
and labelled as such in comments in the individual source files. All
have been licensed under extremely permissive terms.
The ARM memcpy code (src/string/arm/memcpy.S) is Copyright © 2008
The Android Open Source Project and is licensed under a two-clause BSD
license. It was taken from Bionic libc, used on Android.
The AArch64 memcpy and memset code (src/string/aarch64/*) are
Copyright © 1999-2019, Arm Limited.
The implementation of DES for crypt (src/crypt/crypt_des.c) is
Copyright © 1994 David Burren. It is licensed under a BSD license.
The implementation of blowfish crypt (src/crypt/crypt_blowfish.c) was
originally written by Solar Designer and placed into the public
domain. The code also comes with a fallback permissive license for use
in jurisdictions that may not recognize the public domain.
The smoothsort implementation (src/stdlib/qsort.c) is Copyright © 2011
Valentin Ochs and is licensed under an MIT-style license.
The x86_64 port was written by Nicholas J. Kain and is licensed under
the standard MIT terms.
The mips and microblaze ports were originally written by Richard
Pennington for use in the ellcc project. The original code was adapted
by Rich Felker for build system and code conventions during upstream
integration. It is licensed under the standard MIT terms.
The mips64 port was contributed by Imagination Technologies and is
licensed under the standard MIT terms.
The powerpc port was also originally written by Richard Pennington,
and later supplemented and integrated by John Spencer. It is licensed
under the standard MIT terms.
All other files which have no copyright comments are original works
produced specifically for use as part of this library, written either
by Rich Felker, the main author of the library, or by one or more
contributors listed above. Details on authorship of individual files
can be found in the git version control history of the project. The
omission of copyright and license comments in each file is in the
interest of source tree size.
In addition, permission is hereby granted for all public header files
(include/* and arch/*/bits/*) and crt files intended to be linked into
applications (crt/*, ldso/dlstart.c, and arch/*/crt_arch.h) to omit
the copyright notice and permission notice otherwise required by the
license, and to use these files without any requirement of
attribution. These files include substantial contributions from:
Bobby Bingham
John Spencer
Nicholas J. Kain
Rich Felker
Richard Pennington
Stefan Kristiansson
Szabolcs Nagy
all of whom have explicitly granted such permission.
This file previously contained text expressing a belief that most of
the files covered by the above exception were sufficiently trivial not
to be subject to copyright, resulting in confusion over whether it
negated the permissions granted in the license. In the spirit of
permissive licensing, and of not having licensing issues being an
obstacle to adoption, that text has been removed.

Binary file not shown.


@ -0,0 +1,604 @@
Sources of busybox available at:
Git commit: 1a64f6a20aaf6ea4dbba68bbfa8cc1ab7e5c57c4
Git repository: git://git.busybox.net/busybox
Build host info:
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.3
PRETTY_NAME="Alpine Linux v3.18"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Build recipe:
export ARCH=x86_64
export LISA_ARCH_ASSETS=/lisa/_assets/binaries/x86_64
export LISA_HOME=''
#! /bin/bash
ALPINE_VERSION=v3.18
ALPINE_BUILD_DEPENDENCIES=(bash gcc make musl-dev linux-headers git)
download() {
git clone git://git.busybox.net/busybox --branch 1_36_stable --depth=1
git -C busybox checkout 1_36_1
}
build() {
cd busybox
make defconfig
# We need to generate a defconfig, then remove the config options and
# set them to the values we want, as there is no "make olddefconfig" to
# fix up an edited config.
cat .config | grep -v '\bCONFIG_MODPROBE_SMALL\b' | grep -v '\bCONFIG_STATIC\b' > myconfig
echo "CONFIG_STATIC=y" >> myconfig
# MODPROBE_SMALL=y breaks the return code of insmod. Instead of forwarding
# the value from the kernel mod init function, it just returns 1.
echo "CONFIG_MODPROBE_SMALL=n" >> myconfig
cp myconfig .config
make -j 4 "CROSS_COMPILE=$CROSS_COMPILE"
}
install() {
cp -v busybox/busybox "$LISA_ARCH_ASSETS/busybox"
source "$LISA_HOME/tools/recipes/utils.sh"
install_readme busybox busybox LICENSE
}
The sources were distributed under the following licence (content of busybox/LICENSE):
--- A note on GPL versions
BusyBox is distributed under version 2 of the General Public License (included
in its entirety, below). Version 2 is the only version of this license which
this version of BusyBox (or modified versions derived from this one) may be
distributed under.
------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.
The sources were compiled with musl-libc (content of COPYRIGHT):
musl as a whole is licensed under the following standard MIT license:
----------------------------------------------------------------------
Copyright © 2005-2020 Rich Felker, et al.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
Authors/contributors include:
A. Wilcox
Ada Worcester
Alex Dowad
Alex Suykov
Alexander Monakov
Andre McCurdy
Andrew Kelley
Anthony G. Basile
Aric Belsito
Arvid Picciani
Bartosz Brachaczek
Benjamin Peterson
Bobby Bingham
Boris Brezillon
Brent Cook
Chris Spiegel
Clément Vasseur
Daniel Micay
Daniel Sabogal
Daurnimator
David Carlier
David Edelsohn
Denys Vlasenko
Dmitry Ivanov
Dmitry V. Levin
Drew DeVault
Emil Renner Berthing
Fangrui Song
Felix Fietkau
Felix Janda
Gianluca Anzolin
Hauke Mehrtens
He X
Hiltjo Posthuma
Isaac Dunham
Jaydeep Patil
Jens Gustedt
Jeremy Huntwork
Jo-Philipp Wich
Joakim Sindholt
John Spencer
Julien Ramseier
Justin Cormack
Kaarle Ritvanen
Khem Raj
Kylie McClain
Leah Neukirchen
Luca Barbato
Luka Perkov
M Farkas-Dyck (Strake)
Mahesh Bodapati
Markus Wichmann
Masanori Ogino
Michael Clark
Michael Forney
Mikhail Kremnyov
Natanael Copa
Nicholas J. Kain
orc
Pascal Cuoq
Patrick Oppenlander
Petr Hosek
Petr Skocik
Pierre Carrier
Reini Urban
Rich Felker
Richard Pennington
Ryan Fairfax
Samuel Holland
Segev Finer
Shiz
sin
Solar Designer
Stefan Kristiansson
Stefan O'Rear
Szabolcs Nagy
Timo Teräs
Trutz Behn
Valentin Ochs
Will Dietz
William Haddon
William Pitcock
Portions of this software are derived from third-party works licensed
under terms compatible with the above MIT license:
The TRE regular expression implementation (src/regex/reg* and
src/regex/tre*) is Copyright © 2001-2008 Ville Laurikari and licensed
under a 2-clause BSD license (license text in the source files). The
included version has been heavily modified by Rich Felker in 2012, in
the interests of size, simplicity, and namespace cleanliness.
Much of the math library code (src/math/* and src/complex/*) is
Copyright © 1993,2004 Sun Microsystems or
Copyright © 2003-2011 David Schultz or
Copyright © 2003-2009 Steven G. Kargl or
Copyright © 2003-2009 Bruce D. Evans or
Copyright © 2008 Stephen L. Moshier or
Copyright © 2017-2018 Arm Limited
and labelled as such in comments in the individual source files. All
have been licensed under extremely permissive terms.
The ARM memcpy code (src/string/arm/memcpy.S) is Copyright © 2008
The Android Open Source Project and is licensed under a two-clause BSD
license. It was taken from Bionic libc, used on Android.
The AArch64 memcpy and memset code (src/string/aarch64/*) are
Copyright © 1999-2019, Arm Limited.
The implementation of DES for crypt (src/crypt/crypt_des.c) is
Copyright © 1994 David Burren. It is licensed under a BSD license.
The implementation of blowfish crypt (src/crypt/crypt_blowfish.c) was
originally written by Solar Designer and placed into the public
domain. The code also comes with a fallback permissive license for use
in jurisdictions that may not recognize the public domain.
The smoothsort implementation (src/stdlib/qsort.c) is Copyright © 2011
Valentin Ochs and is licensed under an MIT-style license.
The x86_64 port was written by Nicholas J. Kain and is licensed under
the standard MIT terms.
The mips and microblaze ports were originally written by Richard
Pennington for use in the ellcc project. The original code was adapted
by Rich Felker for build system and code conventions during upstream
integration. It is licensed under the standard MIT terms.
The mips64 port was contributed by Imagination Technologies and is
licensed under the standard MIT terms.
The powerpc port was also originally written by Richard Pennington,
and later supplemented and integrated by John Spencer. It is licensed
under the standard MIT terms.
All other files which have no copyright comments are original works
produced specifically for use as part of this library, written either
by Rich Felker, the main author of the library, or by one or more
contibutors listed above. Details on authorship of individual files
can be found in the git version control history of the project. The
omission of copyright and license comments in each file is in the
interest of source tree size.
In addition, permission is hereby granted for all public header files
(include/* and arch/*/bits/*) and crt files intended to be linked into
applications (crt/*, ldso/dlstart.c, and arch/*/crt_arch.h) to omit
the copyright notice and permission notice otherwise required by the
license, and to use these files without any requirement of
attribution. These files include substantial contributions from:
Bobby Bingham
John Spencer
Nicholas J. Kain
Rich Felker
Richard Pennington
Stefan Kristiansson
Szabolcs Nagy
all of whom have explicitly granted such permission.
This file previously contained text expressing a belief that most of
the files covered by the above exception were sufficiently trivial not
to be subject to copyright, resulting in confusion over whether it
negated the permissions granted in the license. In the spirit of
permissive licensing, and of not having licensing issues being an
obstacle to adoption, that text has been removed.

Binary file not shown.

View File

@ -1,4 +1,4 @@
# Copyright 2019 ARM Limited # Copyright 2024 ARM Limited
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
@ -13,18 +13,20 @@
# limitations under the License. # limitations under the License.
# #
from __future__ import division
import re import re
from itertools import takewhile from itertools import takewhile
from datetime import timedelta from datetime import timedelta
import logging
from devlib.collector import (CollectorBase, CollectorOutput, from devlib.collector import (CollectorBase, CollectorOutput,
CollectorOutputEntry) CollectorOutputEntry)
from devlib.target import KernelConfigTristate
from devlib.exception import TargetStableError from devlib.exception import TargetStableError
from devlib.utils.misc import memoized from devlib.utils.misc import memoized
_LOGGER = logging.getLogger('dmesg')
class KernelLogEntry(object): class KernelLogEntry(object):
""" """
Entry of the kernel ring buffer. Entry of the kernel ring buffer.
@ -72,7 +74,7 @@ class KernelLogEntry(object):
def parse_raw_level(line): def parse_raw_level(line):
match = cls._RAW_LEVEL_REGEX.match(line) match = cls._RAW_LEVEL_REGEX.match(line)
if not match: if not match:
raise ValueError('dmesg entry format not recognized: {}'.format(line)) raise ValueError(f'dmesg entry format not recognized: {line}')
level, remainder = match.groups() level, remainder = match.groups()
levels = DmesgCollector.LOG_LEVELS levels = DmesgCollector.LOG_LEVELS
# BusyBox dmesg can output numbers that need to wrap around # BusyBox dmesg can output numbers that need to wrap around
@ -81,11 +83,15 @@ class KernelLogEntry(object):
def parse_pretty_level(line): def parse_pretty_level(line):
match = cls._PRETTY_LEVEL_REGEX.match(line) match = cls._PRETTY_LEVEL_REGEX.match(line)
if not match:
raise ValueError(f'dmesg entry pretty format not recognized: {line}')
facility, level, remainder = match.groups() facility, level, remainder = match.groups()
return facility, level, remainder return facility, level, remainder
def parse_timestamp_msg(line): def parse_timestamp_msg(line):
match = cls._TIMESTAMP_MSG_REGEX.match(line) match = cls._TIMESTAMP_MSG_REGEX.match(line)
if not match:
raise ValueError(f'dmesg entry timestamp format not recognized: {line}')
timestamp, msg = match.groups() timestamp, msg = match.groups()
timestamp = timedelta(seconds=float(timestamp.strip())) timestamp = timedelta(seconds=float(timestamp.strip()))
return timestamp, msg return timestamp, msg
@ -110,17 +116,35 @@ class KernelLogEntry(object):
) )
@classmethod @classmethod
def from_dmesg_output(cls, dmesg_out): def from_dmesg_output(cls, dmesg_out, error=None):
""" """
Return a generator of :class:`KernelLogEntry` for each line of the Return a generator of :class:`KernelLogEntry` for each line of the
output of dmesg command. output of dmesg command.
:param error: If ``"raise"`` or ``None``, an exception will be raised
if a parsing error occurs. If ``"warn"``, it will be logged at
WARNING level. If ``"ignore"``, it will be ignored. If a callable
is passed, the exception will be passed to it.
:type error: str or None or typing.Callable[[BaseException], None]
.. note:: The same restrictions on the dmesg output format as for .. note:: The same restrictions on the dmesg output format as for
:meth:`from_str` apply. :meth:`from_str` apply.
""" """
for i, line in enumerate(dmesg_out.splitlines()): for i, line in enumerate(dmesg_out.splitlines()):
if line.strip(): if line.strip():
yield cls.from_str(line, line_nr=i) try:
yield cls.from_str(line, line_nr=i)
except Exception as e:
if error in (None, 'raise'):
raise e
elif error == 'warn':
_LOGGER.warn(f'error while parsing line "{line!r}": {e}')
elif error == 'ignore':
pass
elif callable(error):
error(e)
else:
raise ValueError(f'Unknown error handling strategy: {error}')
def __str__(self): def __str__(self):
facility = self.facility + ': ' if self.facility else '' facility = self.facility + ': ' if self.facility else ''
@ -165,7 +189,7 @@ class DmesgCollector(CollectorBase):
"debug", # debug-level messages "debug", # debug-level messages
] ]
def __init__(self, target, level=LOG_LEVELS[-1], facility='kern', empty_buffer=False): def __init__(self, target, level=LOG_LEVELS[-1], facility='kern', empty_buffer=False, parse_error=None):
super(DmesgCollector, self).__init__(target) super(DmesgCollector, self).__init__(target)
if not target.is_rooted: if not target.is_rooted:
@ -179,19 +203,29 @@ class DmesgCollector(CollectorBase):
)) ))
self.level = level self.level = level
# Check if dmesg is the BusyBox one, or the one from util-linux in a # Check if we have a dmesg from a recent util-linux build, rather than
# recent version. # e.g. busybox's dmesg or the one shipped on some Android versions
# Note: BusyBox dmesg does not support -h, but will still print the # (toybox). Note: BusyBox dmesg does not support -h, but will still
# help with an exit code of 1 # print the help with an exit code of 1
self.basic_dmesg = '--force-prefix' not in \ help_ = self.target.execute('dmesg -h', check_exit_code=False)
self.target.execute('dmesg -h', check_exit_code=False) self.basic_dmesg = not all(
opt in help_
for opt in ('--facility', '--force-prefix', '--decode', '--level')
)
self.facility = facility self.facility = facility
self.needs_root = bool(target.config.typed_config.get( try:
'CONFIG_SECURITY_DMESG_RESTRICT', KernelConfigTristate.NO)) needs_root = target.read_sysctl('kernel.dmesg_restrict')
except ValueError:
needs_root = True
else:
needs_root = bool(int(needs_root))
self.needs_root = needs_root
self._begin_timestamp = None self._begin_timestamp = None
self.empty_buffer = empty_buffer self.empty_buffer = empty_buffer
self._dmesg_out = None self._dmesg_out = None
self._parse_error = parse_error
@property @property
def dmesg_out(self): def dmesg_out(self):
@@ -202,18 +236,22 @@ class DmesgCollector(CollectorBase):
         try:
             entry = self.entries[0]
         except IndexError:
-            i = 0
+            return ''
         else:
             i = entry.line_nr
         return '\n'.join(out.splitlines()[i:])

     @property
     def entries(self):
-        return self._get_entries(self._dmesg_out, self._begin_timestamp)
+        return self._get_entries(
+            self._dmesg_out,
+            self._begin_timestamp,
+            error=self._parse_error,
+        )

     @memoized
-    def _get_entries(self, dmesg_out, timestamp):
-        entries = KernelLogEntry.from_dmesg_output(dmesg_out)
+    def _get_entries(self, dmesg_out, timestamp, error):
+        entries = KernelLogEntry.from_dmesg_output(dmesg_out, error=error)
         entries = list(entries)
         if timestamp is None:
             return entries
@@ -235,27 +273,7 @@ class DmesgCollector(CollectorBase):
             if entry.timestamp > timestamp
         ]

-    def reset(self):
-        # If the buffer is emptied on start(), it does not matter as we will
-        # not end up with entries dating from before start()
-        if self.empty_buffer:
-            # Empty the dmesg ring buffer. This requires root in all cases
-            self.target.execute('dmesg -c', as_root=True)
-        else:
-            self.stop()
-            try:
-                entry = self.entries[-1]
-            except IndexError:
-                pass
-            else:
-                self._begin_timestamp = entry.timestamp
-
-        self._dmesg_out = None
-
-    def start(self):
-        self.reset()
-
-    def stop(self):
+    def _get_output(self):
         levels_list = list(takewhile(
             lambda level: level != self.level,
             self.LOG_LEVELS
@@ -271,6 +289,27 @@ class DmesgCollector(CollectorBase):
         self._dmesg_out = self.target.execute(cmd, as_root=self.needs_root)

+    def reset(self):
+        self._dmesg_out = None
+
+    def start(self):
+        # If the buffer is emptied on start(), it does not matter as we will
+        # not end up with entries dating from before start()
+        if self.empty_buffer:
+            # Empty the dmesg ring buffer. This requires root in all cases
+            self.target.execute('dmesg -c', as_root=True)
+        else:
+            self._get_output()
+            try:
+                entry = self.entries[-1]
+            except IndexError:
+                pass
+            else:
+                self._begin_timestamp = entry.timestamp
+
+    def stop(self):
+        self._get_output()
+
     def set_output(self, output_path):
         self.output_path = output_path
@@ -278,5 +317,5 @@ class DmesgCollector(CollectorBase):
         if self.output_path is None:
             raise RuntimeError("Output path was not set.")
         with open(self.output_path, 'wt') as f:
-            f.write(self.dmesg_out + '\n')
+            f.write((self.dmesg_out or '') + '\n')
         return CollectorOutput([CollectorOutputEntry(self.output_path, 'file')])
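The reworked `start()` above records the timestamp of the last pre-existing entry instead of emptying the buffer, so `entries` later filters on it. The filtering rule itself can be sketched in isolation; `Entry` and `entries_after` are illustrative stand-ins, not devlib API:

```python
from datetime import timedelta

class Entry:
    # Minimal stand-in for a KernelLogEntry: only the timestamp matters here.
    def __init__(self, timestamp):
        self.timestamp = timestamp

def entries_after(entries, timestamp):
    # Mirrors the _get_entries() logic in the diff: with no begin timestamp
    # all entries are kept, otherwise only strictly newer ones survive.
    if timestamp is None:
        return list(entries)
    return [entry for entry in entries if entry.timestamp > timestamp]

entries = [Entry(timedelta(seconds=s)) for s in (1, 5, 9)]
print(len(entries_after(entries, timedelta(seconds=4))))  # → 2
```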

View File

@@ -13,7 +13,6 @@
 # limitations under the License.
 #

-from __future__ import division
 import os
 import json
 import time
@@ -21,13 +20,15 @@ import re
 import subprocess
 import sys
 import contextlib
-from pipes import quote
+from shlex import quote
+import signal

 from devlib.collector import (CollectorBase, CollectorOutput,
                               CollectorOutputEntry)
 from devlib.host import PACKAGE_BIN_DIRECTORY
 from devlib.exception import TargetStableError, HostError
 from devlib.utils.misc import check_output, which, memoized
+from devlib.utils.asyn import asyncf

 TRACE_MARKER_START = 'TRACE_MARKER_START'
@@ -60,6 +61,7 @@ class FtraceCollector(CollectorBase):
                  tracer=None,
                  trace_children_functions=False,
                  buffer_size=None,
+                 top_buffer_size=None,
                  buffer_size_step=1000,
                  tracing_path=None,
                  automark=True,
@@ -70,6 +72,7 @@ class FtraceCollector(CollectorBase):
                  report_on_target=False,
                  trace_clock='local',
                  saved_cmdlines_nr=4096,
+                 mode='write-to-memory',
                  ):
         super(FtraceCollector, self).__init__(target)
         self.events = events if events is not None else DEFAULT_EVENTS
@@ -77,6 +80,7 @@ class FtraceCollector(CollectorBase):
         self.tracer = tracer
         self.trace_children_functions = trace_children_functions
         self.buffer_size = buffer_size
+        self.top_buffer_size = top_buffer_size
         self.tracing_path = self._resolve_tracing_path(target, tracing_path)
         self.automark = automark
         self.autoreport = autoreport
@@ -91,11 +95,12 @@ class FtraceCollector(CollectorBase):
         self.host_binary = None
         self.start_time = None
         self.stop_time = None
-        self.event_string = None
         self.function_string = None
         self.trace_clock = trace_clock
         self.saved_cmdlines_nr = saved_cmdlines_nr
         self._reset_needed = True
+        self.mode = mode
+        self._bg_cmd = None

         # pylint: disable=bad-whitespace
         # Setup tracing paths
@@ -105,7 +110,8 @@ class FtraceCollector(CollectorBase):
         self.function_profile_file = self.target.path.join(self.tracing_path, 'function_profile_enabled')
         self.marker_file = self.target.path.join(self.tracing_path, 'trace_marker')
         self.ftrace_filter_file = self.target.path.join(self.tracing_path, 'set_ftrace_filter')
         self.available_tracers_file = self.target.path.join(self.tracing_path, 'available_tracers')
+        self.kprobe_events_file = self.target.path.join(self.tracing_path, 'kprobe_events')

         self.host_binary = which('trace-cmd')
         self.kernelshark = which('kernelshark')
@@ -137,10 +143,11 @@ class FtraceCollector(CollectorBase):
             for _event in events
         )

+        available_events = self.available_events
         unavailable_events = [
             event
             for event in self.events
-            if not event_is_in_list(event, self.available_events)
+            if not event_is_in_list(event, available_events)
         ]
         if unavailable_events:
             message = 'Events not available for tracing: {}'.format(
@@ -191,7 +198,11 @@ class FtraceCollector(CollectorBase):
         elif self.tracer == 'function_graph':
             self.function_string = _build_graph_functions(selected_functions, trace_children_functions)

-        self.event_string = _build_trace_events(selected_events)
+        self._selected_events = selected_events
+
+    @property
+    def event_string(self):
+        return _build_trace_events(self._selected_events)

     @classmethod
     def _resolve_tracing_path(cls, target, path):
@@ -237,13 +248,57 @@ class FtraceCollector(CollectorBase):
         return self.target.read_value(self.available_functions_file).splitlines()

     def reset(self):
-        self.target.execute('{} reset'.format(self.target_binary),
+        # Save kprobe events
+        try:
+            kprobe_events = self.target.read_value(self.kprobe_events_file)
+        except TargetStableError:
+            kprobe_events = None
+
+        self.target.execute('{} reset -B devlib'.format(self.target_binary),
                             as_root=True, timeout=TIMEOUT)
+
+        # trace-cmd start will not set the top-level buffer size if passed -B
+        # parameter, but unfortunately some events still end up there (e.g.
+        # print event). So we still need to set that size, otherwise the buffer
+        # might be too small and some event lost.
+        top_buffer_size = self.top_buffer_size if self.top_buffer_size else self.buffer_size
+        if top_buffer_size:
+            self.target.write_value(
+                self.target.path.join(self.tracing_path, 'buffer_size_kb'),
+                top_buffer_size, verify=False
+            )
+
         if self.functions:
             self.target.write_value(self.function_profile_file, 0, verify=False)
+
+        # Restore kprobe events
+        if kprobe_events:
+            self.target.write_value(self.kprobe_events_file, kprobe_events)
+
         self._reset_needed = False

-    def start(self):
+    def _trace_frequencies(self):
+        if 'cpu_frequency' in self._selected_events:
+            self.logger.debug('Trace CPUFreq frequencies')
+            try:
+                mod = self.target.cpufreq
+            except TargetStableError as e:
+                self.logger.error(f'Could not trace CPUFreq frequencies as the cpufreq module cannot be loaded: {e}')
+            else:
+                mod.trace_frequencies()
+
+    def _trace_idle(self):
+        if 'cpu_idle' in self._selected_events:
+            self.logger.debug('Trace CPUIdle states')
+            try:
+                mod = self.target.cpuidle
+            except TargetStableError as e:
+                self.logger.error(f'Could not trace CPUIdle states as the cpuidle module cannot be loaded: {e}')
+            else:
+                mod.perturb_cpus()
+
+    @asyncf
+    async def start(self):
         self.start_time = time.time()
         if self._reset_needed:
             self.reset()
@@ -260,36 +315,52 @@ class FtraceCollector(CollectorBase):
         with contextlib.suppress(TargetStableError):
             self.target.write_value('/proc/sys/kernel/kptr_restrict', 0)

-        self.target.execute(
-            '{} start {buffer_size} {cmdlines_size} {clock} {events} {tracer} {functions}'.format(
-                self.target_binary,
-                events=self.event_string,
-                tracer=tracer_string,
-                functions=tracecmd_functions,
-                buffer_size='-b {}'.format(self.buffer_size) if self.buffer_size is not None else '',
-                clock='-C {}'.format(self.trace_clock) if self.trace_clock else '',
-                cmdlines_size='--cmdlines-size {}'.format(self.saved_cmdlines_nr) if self.saved_cmdlines_nr is not None else '',
-            ),
-            as_root=True,
+        params = '-B devlib {buffer_size} {cmdlines_size} {clock} {events} {tracer} {functions}'.format(
+            events=self.event_string,
+            tracer=tracer_string,
+            functions=tracecmd_functions,
+            buffer_size='-b {}'.format(self.buffer_size) if self.buffer_size is not None else '',
+            clock='-C {}'.format(self.trace_clock) if self.trace_clock else '',
+            cmdlines_size='--cmdlines-size {}'.format(self.saved_cmdlines_nr) if self.saved_cmdlines_nr is not None else '',
         )
+
+        mode = self.mode
+        if mode == 'write-to-disk':
+            bg_cmd = self.target.background(
+                # cd into the working_directory first to workaround this issue:
+                # https://lore.kernel.org/linux-trace-devel/20240119162743.1a107fa9@gandalf.local.home/
+                f'cd {self.target.working_directory} && devlib-signal-target {self.target_binary} record -o {quote(self.target_output_file)} {params}',
+                as_root=True,
+            )
+            assert self._bg_cmd is None
+            self._bg_cmd = bg_cmd.__enter__()
+        elif mode == 'write-to-memory':
+            self.target.execute(
+                f'{self.target_binary} start {params}',
+                as_root=True,
+            )
+        else:
+            raise ValueError(f'Unknown mode {mode}')
+
         if self.automark:
             self.mark_start()
-        if 'cpufreq' in self.target.modules:
-            self.logger.debug('Trace CPUFreq frequencies')
-            self.target.cpufreq.trace_frequencies()
-        if 'cpuidle' in self.target.modules:
-            self.logger.debug('Trace CPUIdle states')
-            self.target.cpuidle.perturb_cpus()
+
+        self._trace_frequencies()
+        self._trace_idle()
+
         # Enable kernel function profiling
         if self.functions and self.tracer is None:
-            self.target.execute('echo nop > {}'.format(self.current_tracer_file),
-                                as_root=True)
-            self.target.execute('echo 0 > {}'.format(self.function_profile_file),
-                                as_root=True)
-            self.target.execute('echo {} > {}'.format(self.function_string, self.ftrace_filter_file),
-                                as_root=True)
-            self.target.execute('echo 1 > {}'.format(self.function_profile_file),
-                                as_root=True)
+            target = self.target
+            await target.async_manager.concurrently(
+                execute.asyn('echo nop > {}'.format(self.current_tracer_file),
+                             as_root=True),
+                execute.asyn('echo 0 > {}'.format(self.function_profile_file),
+                             as_root=True),
+                execute.asyn('echo {} > {}'.format(self.function_string, self.ftrace_filter_file),
+                             as_root=True),
+                execute.asyn('echo 1 > {}'.format(self.function_profile_file),
+                             as_root=True),
+            )

     def stop(self):
@@ -297,14 +368,24 @@ class FtraceCollector(CollectorBase):
         if self.functions and self.tracer is None:
             self.target.execute('echo 0 > {}'.format(self.function_profile_file),
                                 as_root=True)
-        if 'cpufreq' in self.target.modules:
-            self.logger.debug('Trace CPUFreq frequencies')
-            self.target.cpufreq.trace_frequencies()
         self.stop_time = time.time()
         if self.automark:
             self.mark_stop()
-        self.target.execute('{} stop'.format(self.target_binary),
-                            timeout=TIMEOUT, as_root=True)
+
+        mode = self.mode
+        if mode == 'write-to-disk':
+            bg_cmd = self._bg_cmd
+            self._bg_cmd = None
+            assert bg_cmd is not None
+            bg_cmd.send_signal(signal.SIGINT)
+            bg_cmd.communicate()
+            bg_cmd.__exit__(None, None, None)
+        elif mode == 'write-to-memory':
+            self.target.execute('{} stop -B devlib'.format(self.target_binary),
+                                timeout=TIMEOUT, as_root=True)
+        else:
+            raise ValueError(f'Unknown mode {mode}')
         self._reset_needed = True

     def set_output(self, output_path):
def set_output(self, output_path): def set_output(self, output_path):
@ -315,9 +396,18 @@ class FtraceCollector(CollectorBase):
def get_data(self): def get_data(self):
if self.output_path is None: if self.output_path is None:
raise RuntimeError("Output path was not set.") raise RuntimeError("Output path was not set.")
self.target.execute('{0} extract -o {1}; chmod 666 {1}'.format(self.target_binary,
self.target_output_file), busybox = quote(self.target.busybox)
timeout=TIMEOUT, as_root=True)
mode = self.mode
if mode == 'write-to-disk':
# Interrupting trace-cmd record will make it create the file
pass
elif mode == 'write-to-memory':
cmd = f'{self.target_binary} extract -B devlib -o {self.target_output_file} && {busybox} chmod 666 {self.target_output_file}'
self.target.execute(cmd, timeout=TIMEOUT, as_root=True)
else:
raise ValueError(f'Unknown mode {mode}')
# The size of trace.dat will depend on how long trace-cmd was running. # The size of trace.dat will depend on how long trace-cmd was running.
# Therefore timout for the pull command must also be adjusted # Therefore timout for the pull command must also be adjusted
@@ -389,8 +479,7 @@ class FtraceCollector(CollectorBase):
         self.logger.debug(command)
         process = subprocess.Popen(command, stderr=subprocess.PIPE, shell=True)
         _, error = process.communicate()
-        if sys.version_info[0] == 3:
-            error = error.decode(sys.stdout.encoding or 'utf-8', 'replace')
+        error = error.decode(sys.stdout.encoding or 'utf-8', 'replace')
         if process.returncode:
             raise TargetStableError('trace-cmd returned non-zero exit code {}'.format(process.returncode))
         if error:
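The new `mode` parameter above selects between two trace-cmd workflows: `write-to-memory` uses `trace-cmd start`/`extract` against the in-kernel ring buffer, while `write-to-disk` streams directly to a file with `trace-cmd record`. A minimal sketch of that dispatch (function and argument names are illustrative, not devlib API):

```python
def build_trace_cmd(mode, binary, params, output_file):
    # Mirrors the mode checks in start()/get_data(): each mode maps to a
    # different trace-cmd subcommand, and anything else is rejected early.
    if mode == 'write-to-disk':
        return f'{binary} record -o {output_file} {params}'
    elif mode == 'write-to-memory':
        return f'{binary} start {params}'
    else:
        raise ValueError(f'Unknown mode {mode}')

print(build_trace_cmd('write-to-memory', 'trace-cmd', '-B devlib', 'trace.dat'))
# → trace-cmd start -B devlib
```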

View File

@@ -1,4 +1,4 @@
-# Copyright 2018 ARM Limited
+# Copyright 2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -14,7 +14,6 @@
 #

 import os
-import shutil

 from devlib.collector import (CollectorBase, CollectorOutput,
                               CollectorOutputEntry)

View File

@@ -59,7 +59,7 @@ class PerfCollector(CollectorBase):
     mispredicted. They form a basis for profiling applications to trace dynamic
     control flow and identify hotspots.

-    pref accepts options and events. If no option is given the default '-a' is
+    Perf accepts options and events. If no option is given the default '-a' is
     used. For events, the default events are migrations and cs for perf and raw-cpu-cycles,
     raw-l1-dcache, raw-l1-dcache-refill, raw-instructions-retired. They both can
     be specified in the config file.
@@ -94,7 +94,8 @@ class PerfCollector(CollectorBase):
                  run_report_sample=False,
                  report_sample_options=None,
                  labels=None,
-                 force_install=False):
+                 force_install=False,
+                 validate_events=True):
         super(PerfCollector, self).__init__(target)
         self.force_install = force_install
         self.labels = labels
@@ -102,6 +103,7 @@ class PerfCollector(CollectorBase):
         self.run_report_sample = run_report_sample
         self.report_sample_options = report_sample_options
         self.output_path = None
+        self.validate_events = validate_events

         # Validate parameters
         if isinstance(optionstring, list):
@@ -135,7 +137,8 @@ class PerfCollector(CollectorBase):
         if self.force_install or not self.binary:
             self.binary = self._deploy_perf()

-        self._validate_events(self.events)
+        if self.validate_events:
+            self._validate_events(self.events)

         self.commands = self._build_commands()

View File

@@ -0,0 +1,119 @@
+# Copyright 2023 ARM Limited
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import os
+import subprocess
+from shlex import quote
+
+from devlib.host import PACKAGE_BIN_DIRECTORY
+from devlib.collector import (CollectorBase, CollectorOutput,
+                              CollectorOutputEntry)
+from devlib.exception import TargetStableError, HostError
+
+OUTPUT_PERFETTO_TRACE = 'devlib-trace.perfetto-trace'
+
+
+class PerfettoCollector(CollectorBase):
+    """
+    Perfetto is a production-grade open-source stack for performance instrumentation
+    and trace analysis developed by Google. It offers services and libraries for
+    recording system-level and app-level traces, native + java heap profiling,
+    a library for analyzing traces using SQL and a web-based UI to visualize and
+    explore multi-GB traces.
+
+    This collector takes a path to a perfetto config file saved on disk and passes
+    it directly to the tool.
+
+    On Android platforms Perfetto is included in the framework starting with Android 9.
+    On Android 8 and below, follow the Linux instructions below to build and include
+    the standalone tracebox binary.
+
+    On Linux platforms, either traced (Perfetto tracing daemon) needs to be running
+    in the background or the tracebox binary needs to be built from source and placed
+    in the Package Bin directory. The build instructions can be found here:
+    https://perfetto.dev/docs/contributing/build-instructions
+    After building, the 'tracebox' binary should be copied to devlib/bin/<arch>/.
+
+    It is also possible to force using the prebuilt tracebox binary on platforms which
+    already have traced running using the force_tracebox collector parameter.
+
+    For more information consult the official documentation:
+    https://perfetto.dev/docs/
+    """
+    def __init__(self, target, config=None, force_tracebox=False):
+        super().__init__(target)
+        self.bg_cmd = None
+        self.config = config
+        self.target_binary = 'perfetto'
+        target_output_path = self.target.working_directory
+
+        install_tracebox = force_tracebox or (target.os in ['linux', 'android'] and not target.is_running('traced'))
+
+        # Install Perfetto through tracebox
+        if install_tracebox:
+            self.target_binary = 'tracebox'
+            if not self.target.get_installed(self.target_binary):
+                host_executable = os.path.join(PACKAGE_BIN_DIRECTORY,
+                                               self.target.abi, self.target_binary)
+                if not os.path.exists(host_executable):
+                    raise HostError("{} not found on the host".format(self.target_binary))
+                self.target.install(host_executable)
+        # Use Android's built-in Perfetto
+        elif target.os == 'android':
+            os_version = target.os_version['release']
+            if int(os_version) >= 9:
+                # Android requires built-in Perfetto to write to this directory
+                target_output_path = '/data/misc/perfetto-traces'
+                # Android 9 and 10 require traced to be enabled manually
+                if int(os_version) <= 10:
+                    target.execute('setprop persist.traced.enable 1')
+
+        self.target_output_file = target.path.join(target_output_path, OUTPUT_PERFETTO_TRACE)
+
+    def start(self):
+        cmd = "{} cat {} | {} --txt -c - -o {}".format(
+            quote(self.target.busybox), quote(self.config), quote(self.target_binary), quote(self.target_output_file)
+        )
+        # start tracing
+        if self.bg_cmd is None:
+            self.bg_cmd = self.target.background(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+        else:
+            raise TargetStableError('Perfetto collector is not re-entrant')
+
+    def stop(self):
+        # stop tracing
+        self.bg_cmd.cancel()
+        self.bg_cmd = None
+
+    def set_output(self, output_path):
+        if os.path.isdir(output_path):
+            output_path = os.path.join(output_path, os.path.basename(self.target_output_file))
+        self.output_path = output_path
+
+    def get_data(self):
+        if self.output_path is None:
+            raise RuntimeError("Output path was not set.")
+        if not self.target.file_exists(self.target_output_file):
+            raise RuntimeError("Output file not found on the device")
+        self.target.pull(self.target_output_file, self.output_path)
+
+        output = CollectorOutput()
+        if not os.path.isfile(self.output_path):
+            self.logger.warning('Perfetto trace not pulled from device.')
+        else:
+            output.append(CollectorOutputEntry(self.output_path, 'file'))
+        return output
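`PerfettoCollector.start()` builds a shell pipeline that feeds the text config to perfetto on stdin (`--txt -c -`) and writes the trace to the chosen output file. The command construction can be pulled out and checked on its own; `perfetto_command` is an illustrative helper name, not part of the collector:

```python
from shlex import quote

def perfetto_command(busybox, config, binary, output_file):
    # Same pipeline shape as PerfettoCollector.start(): cat the on-disk text
    # config into the tracing binary, which reads it from stdin ('-c -').
    return "{} cat {} | {} --txt -c - -o {}".format(
        quote(busybox), quote(config), quote(binary), quote(output_file)
    )

print(perfetto_command('busybox', '/tmp/cfg.pbtxt', 'perfetto',
                       '/data/misc/perfetto-traces/devlib-trace.perfetto-trace'))
```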

View File

@@ -1,4 +1,4 @@
-# Copyright 2018 ARM Limited
+# Copyright 2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -13,8 +13,6 @@
 # limitations under the License.
 #

-import shutil
-from tempfile import NamedTemporaryFile

 from pexpect.exceptions import TIMEOUT

 from devlib.collector import (CollectorBase, CollectorOutput,

View File

@@ -1,4 +1,4 @@
-# Copyright 2018 ARM Limited
+# Copyright 2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -16,9 +16,6 @@
 import os
 import subprocess
-from shutil import copyfile
-from tempfile import NamedTemporaryFile

 from devlib.collector import (CollectorBase, CollectorOutput,
                               CollectorOutputEntry)
 from devlib.exception import TargetStableError, HostError

View File

@@ -1,4 +1,4 @@
-# Copyright 2019 ARM Limited
+# Copyright 2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -14,15 +14,11 @@
 #

 from abc import ABC, abstractmethod
-from contextlib import contextmanager
-from datetime import datetime
-from functools import partial
-from weakref import WeakSet
+from contextlib import contextmanager, nullcontext
 from shlex import quote
-from time import monotonic
 import os
+from pathlib import Path
 import signal
-import socket
 import subprocess
 import threading
 import time
@@ -30,14 +26,11 @@ import logging
 import select
 import fcntl

-from devlib.utils.misc import InitCheckpoint
+from devlib.utils.misc import InitCheckpoint, memoized

 _KILL_TIMEOUT = 3


-def _kill_pgid_cmd(pgid, sig, busybox):
-    return '{} kill -{} -- -{}'.format(busybox, sig.value, pgid)
-
-
 def _popen_communicate(bg, popen, input, timeout):
     try:
         stdout, stderr = popen.communicate(input=input, timeout=timeout)
@@ -61,11 +54,26 @@ class ConnectionBase(InitCheckpoint):
     """
     Base class for all connections.
     """
-    def __init__(self):
-        self._current_bg_cmds = WeakSet()
+    def __init__(
+        self,
+        poll_transfers=False,
+        start_transfer_poll_delay=30,
+        total_transfer_timeout=3600,
+        transfer_poll_period=30,
+    ):
+        self._current_bg_cmds = set()
         self._closed = False
         self._close_lock = threading.Lock()
         self.busybox = None
+        self.logger = logging.getLogger('Connection')
+
+        self.transfer_manager = TransferManager(
+            self,
+            start_transfer_poll_delay=start_transfer_poll_delay,
+            total_transfer_timeout=total_transfer_timeout,
+            transfer_poll_period=transfer_poll_period,
+        ) if poll_transfers else NoopTransferManager()

     def cancel_running_command(self):
         bg_cmds = set(self._current_bg_cmds)
@@ -83,11 +91,21 @@ class ConnectionBase(InitCheckpoint):
         """

     def close(self):
+        def finish_bg():
+            bg_cmds = set(self._current_bg_cmds)
+            n = len(bg_cmds)
+            if n:
+                self.logger.debug(f'Canceling {n} background commands before closing connection')
+            for bg_cmd in bg_cmds:
+                bg_cmd.cancel()

         # Locking the closing allows any thread to safely call close() as long
         # as the connection can be closed from a thread that is not the one it
         # started its life in.
         with self._close_lock:
             if not self._closed:
+                finish_bg()
                 self._close()
                 self._closed = True
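The `close()` change above guarantees that every background command still registered on the connection is cancelled exactly once, under a lock, before the connection shuts down. A self-contained miniature of that registry pattern (`MiniConnection` and `FakeBgCmd` are illustrative stand-ins, not devlib classes):

```python
import threading

class MiniConnection:
    # Sketch of ConnectionBase's close() logic: commands register themselves
    # in a set, and close() cancels whatever is still registered, once.
    def __init__(self):
        self._current_bg_cmds = set()
        self._closed = False
        self._close_lock = threading.Lock()

    def close(self):
        with self._close_lock:
            if not self._closed:
                # Iterate over a copy so cancel() may deregister safely.
                for bg_cmd in set(self._current_bg_cmds):
                    bg_cmd.cancel()
                self._closed = True

class FakeBgCmd:
    def __init__(self, conn):
        self.cancelled = False
        conn._current_bg_cmds.add(self)

    def cancel(self):
        self.cancelled = True

conn = MiniConnection()
cmd = FakeBgCmd(conn)
conn.close()
print(cmd.cancelled)  # → True
```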
@@ -109,7 +127,87 @@ class BackgroundCommand(ABC):
     Instances of this class can be used as context managers, with the same
     semantic as :class:`subprocess.Popen`.
     """
-    @abstractmethod
+    def __init__(self, conn, data_dir, cmd, as_root):
+        self.conn = conn
+        self._data_dir = data_dir
+        self.as_root = as_root
+        self.cmd = cmd
+
+        # Poll currently opened background commands on that connection to make
+        # them deregister themselves if they are completed. This avoids
+        # accumulating terminated commands and therefore leaking associated
+        # resources if the user is not careful and does not use the context
+        # manager API.
+        for bg_cmd in set(conn._current_bg_cmds):
+            try:
+                bg_cmd.poll()
+            # We don't want anything to fail here because of another command
+            except Exception:
+                pass
+
+        conn._current_bg_cmds.add(self)
+
+    @classmethod
+    def from_factory(cls, conn, cmd, as_root, make_init_kwargs):
+        cmd, data_dir = cls._with_data_dir(conn, cmd)
+        return cls(
+            conn=conn,
+            data_dir=data_dir,
+            cmd=cmd,
+            as_root=as_root,
+            **make_init_kwargs(cmd),
+        )
+
+    def _deregister(self):
+        try:
+            self.conn._current_bg_cmds.remove(self)
+        except KeyError:
+            pass
+
+    @property
+    def _pid_file(self):
+        return str(Path(self._data_dir, 'pid'))
+
+    @property
+    @memoized
+    def _targeted_pid(self):
+        """
+        PID of the process pointed at by ``devlib-signal-target`` command.
+        """
+        path = quote(self._pid_file)
+        busybox = quote(self.conn.busybox)
+
+        def execute(cmd):
+            return self.conn.execute(cmd, as_root=self.as_root)
+
+        while self.poll() is None:
+            try:
+                pid = execute(f'{busybox} cat {path}')
+            except subprocess.CalledProcessError:
+                time.sleep(0.01)
+            else:
+                if pid.endswith('\n'):
+                    return int(pid.strip())
+                else:
+                    # We got a partial write in the PID file
+                    continue
+
+        raise ValueError(f'The background commmand did not use devlib-signal-target wrapper to designate which command should be the target of signals')
+
+    @classmethod
+    def _with_data_dir(cls, conn, cmd):
+        busybox = quote(conn.busybox)
+        data_dir = conn.execute(f'{busybox} mktemp -d').strip()
+        cmd = f'_DEVLIB_BG_CMD_DATA_DIR={data_dir} exec {busybox} sh -c {quote(cmd)}'
+        return cmd, data_dir
+
+    def _cleanup_data_dir(self):
+        path = quote(self._data_dir)
+        busybox = quote(self.conn.busybox)
+        cmd = f'{busybox} rm -r {path} || true'
+        self.conn.execute(cmd, as_root=self.as_root)
+
     def send_signal(self, sig):
         """
         Send a POSIX signal to the background command's process group ID
@@ -119,6 +217,32 @@ class BackgroundCommand(ABC):
         :type signal: signal.Signals
         """
+        def execute(cmd):
+            return self.conn.execute(cmd, as_root=self.as_root)
+
+        def send(sig):
+            busybox = quote(self.conn.busybox)
+            # If the command has already completed, we don't want to send a
+            # signal to another process that might have gotten that PID in the
+            # meantime.
+            if self.poll() is None:
+                if sig in (signal.SIGTERM, signal.SIGQUIT, signal.SIGKILL):
+                    # Use -PGID to target a process group rather than just the
+                    # process itself. This will work in any condition and will
+                    # not require cooperation from the command.
+                    execute(f'{busybox} kill -{sig.value} -{self.pid}')
+                else:
+                    # Other signals require cooperation from the shell command
+                    # so that it points to a specific process using
+                    # devlib-signal-target
+                    pid = self._targeted_pid
+                    execute(f'{busybox} kill -{sig.value} {pid}')
+        try:
+            return send(sig)
+        finally:
+            # Deregister if the command has finished
+            self.poll()
+
     def kill(self):
         """
         Send SIGKILL to the background command.
@@ -130,8 +254,11 @@ class BackgroundCommand(ABC):
         Try to gracefully terminate the process by sending ``SIGTERM``, then
         waiting for ``kill_timeout`` to send ``SIGKILL``.
         """
-        if self.poll() is None:
-            self._cancel(kill_timeout=kill_timeout)
+        try:
+            if self.poll() is None:
+                return self._cancel(kill_timeout=kill_timeout)
+        finally:
+            self._deregister()

     @abstractmethod
     def _cancel(self, kill_timeout):
@@ -141,10 +268,17 @@ class BackgroundCommand(ABC):
         pass

     @abstractmethod
+    def _wait(self):
+        pass
+
     def wait(self):
         """
         Block until the background command completes, and return its exit code.
         """
+        try:
+            return self._wait()
+        finally:
+            self._deregister()

     def communicate(self, input=b'', timeout=None):
         """
@@ -162,10 +296,17 @@ class BackgroundCommand(ABC):
         pass

     @abstractmethod
+    def _poll(self):
+        pass
+
     def poll(self):
         """
         Return exit code if the command has exited, None otherwise.
         """
+        retcode = self._poll()
+        if retcode is not None:
+            self._deregister()
+        return retcode

     @property
     @abstractmethod
@@ -202,6 +343,9 @@ class BackgroundCommand(ABC):
         """

     @abstractmethod
+    def _close(self):
+        pass
+
     def close(self):
         """
         Close all opened streams and then wait for command completion.
@@ -211,6 +355,11 @@ class BackgroundCommand(ABC):
         .. note:: If the command is writing to its stdout/stderr, it might be
             blocked on that and die when the streams are closed.
         """
+        try:
+            return self._close()
+        finally:
+            self._deregister()
+            self._cleanup_data_dir()

     def __enter__(self):
         return self
@@ -224,12 +373,15 @@ class PopenBackgroundCommand(BackgroundCommand):
     :class:`subprocess.Popen`-based background command.
     """
 
-    def __init__(self, popen):
+    def __init__(self, conn, data_dir, cmd, as_root, popen):
+        super().__init__(
+            conn=conn,
+            data_dir=data_dir,
+            cmd=cmd,
+            as_root=as_root,
+        )
         self.popen = popen
 
-    def send_signal(self, sig):
-        return os.killpg(self.popen.pid, sig)
-
     @property
     def stdin(self):
         return self.popen.stdin
@@ -246,13 +398,13 @@ class PopenBackgroundCommand(BackgroundCommand):
     def pid(self):
         return self.popen.pid
 
-    def wait(self):
+    def _wait(self):
         return self.popen.wait()
 
     def _communicate(self, input, timeout):
         return _popen_communicate(self, self.popen, input, timeout)
 
-    def poll(self):
+    def _poll(self):
         return self.popen.poll()
 
     def _cancel(self, kill_timeout):
@@ -263,48 +415,40 @@ class PopenBackgroundCommand(BackgroundCommand):
         except subprocess.TimeoutExpired:
             os.killpg(os.getpgid(popen.pid), signal.SIGKILL)
 
-    def close(self):
+    def _close(self):
         self.popen.__exit__(None, None, None)
         return self.popen.returncode
 
     def __enter__(self):
+        super().__enter__()
         self.popen.__enter__()
         return self
 
-    def __exit__(self, *args, **kwargs):
-        self.popen.__exit__(*args, **kwargs)
-
 
 class ParamikoBackgroundCommand(BackgroundCommand):
     """
     :mod:`paramiko`-based background command.
     """
-    def __init__(self, conn, chan, pid, as_root, cmd, stdin, stdout, stderr, redirect_thread):
+    def __init__(self, conn, data_dir, cmd, as_root, chan, pid, stdin, stdout, stderr, redirect_thread):
+        super().__init__(
+            conn=conn,
+            data_dir=data_dir,
+            cmd=cmd,
+            as_root=as_root,
+        )
         self.chan = chan
-        self.as_root = as_root
-        self.conn = conn
         self._pid = pid
         self._stdin = stdin
         self._stdout = stdout
         self._stderr = stderr
         self.redirect_thread = redirect_thread
-        self.cmd = cmd
-
-    def send_signal(self, sig):
-        # If the command has already completed, we don't want to send a signal
-        # to another process that might have gotten that PID in the meantime.
-        if self.poll() is not None:
-            return
-        # Use -PGID to target a process group rather than just the process
-        # itself
-        cmd = _kill_pgid_cmd(self.pid, sig, self.conn.busybox)
-        self.conn.execute(cmd, as_root=self.as_root)
 
     @property
     def pid(self):
         return self._pid
 
-    def wait(self):
+    def _wait(self):
         status = self.chan.recv_exit_status()
         # Ensure that the redirection thread is finished copying the content
         # from paramiko to the pipe.
@@ -339,13 +483,13 @@ class ParamikoBackgroundCommand(BackgroundCommand):
                 b''.join(out[stderr])
             )
 
-        start = monotonic()
+        start = time.monotonic()
         while ret is None:
             # Even if ret is not None anymore, we need to drain the streams
             ret = self.poll()
-            if timeout is not None and ret is None and monotonic() - start >= timeout:
+            if timeout is not None and ret is None and time.monotonic() - start >= timeout:
                 self.cancel()
                 _stdout, _stderr = create_out()
                 raise subprocess.TimeoutExpired(self.cmd, timeout, _stdout, _stderr)
@@ -390,7 +534,7 @@ class ParamikoBackgroundCommand(BackgroundCommand):
         else:
             return (_stdout, _stderr)
 
-    def poll(self):
+    def _poll(self):
         # Wait for the redirection thread to finish, otherwise we would
         # indicate the caller that the command is finished and that the streams
         # are safe to drain, but actually the redirection thread is not
@@ -424,7 +568,7 @@ class ParamikoBackgroundCommand(BackgroundCommand):
     def stderr(self):
         return self._stderr
 
-    def close(self):
+    def _close(self):
         for x in (self.stdin, self.stdout, self.stderr):
             if x is not None:
                 x.close()
@@ -442,18 +586,16 @@ class AdbBackgroundCommand(BackgroundCommand):
     ``adb``-based background command.
     """
 
-    def __init__(self, conn, adb_popen, pid, as_root):
-        self.conn = conn
-        self.as_root = as_root
+    def __init__(self, conn, data_dir, cmd, as_root, adb_popen, pid):
+        super().__init__(
+            conn=conn,
+            data_dir=data_dir,
+            cmd=cmd,
+            as_root=as_root,
+        )
         self.adb_popen = adb_popen
         self._pid = pid
 
-    def send_signal(self, sig):
-        self.conn.execute(
-            _kill_pgid_cmd(self.pid, sig, self.conn.busybox),
-            as_root=self.as_root,
-        )
-
     @property
     def stdin(self):
         return self.adb_popen.stdin
@@ -470,14 +612,13 @@ class AdbBackgroundCommand(BackgroundCommand):
     def pid(self):
         return self._pid
 
-    def wait(self):
+    def _wait(self):
         return self.adb_popen.wait()
 
     def _communicate(self, input, timeout):
         return _popen_communicate(self, self.adb_popen, input, timeout)
 
-    def poll(self):
+    def _poll(self):
         return self.adb_popen.poll()
 
     def _cancel(self, kill_timeout):
@@ -488,21 +629,99 @@ class AdbBackgroundCommand(BackgroundCommand):
             self.send_signal(signal.SIGKILL)
             self.adb_popen.kill()
 
-    def close(self):
+    def _close(self):
         self.adb_popen.__exit__(None, None, None)
         return self.adb_popen.returncode
 
     def __enter__(self):
+        super().__enter__()
         self.adb_popen.__enter__()
         return self
 
-    def __exit__(self, *args, **kwargs):
-        self.adb_popen.__exit__(*args, **kwargs)
+
+class TransferManager:
+    def __init__(self, conn, transfer_poll_period=30, start_transfer_poll_delay=30, total_transfer_timeout=3600):
+        self.conn = conn
+        self.transfer_poll_period = transfer_poll_period
+        self.total_transfer_timeout = total_transfer_timeout
+        self.start_transfer_poll_delay = start_transfer_poll_delay
+        self.logger = logging.getLogger('FileTransfer')
+
+    @contextmanager
+    def manage(self, sources, dest, direction, handle):
+        excep = None
+        stop_thread = threading.Event()
+
+        def monitor():
+            nonlocal excep
+
+            def cancel(reason):
+                self.logger.warning(
+                    f'Cancelling file transfer {sources} -> {dest} due to: {reason}'
+                )
+                handle.cancel()
+
+            start_t = time.monotonic()
+
+            stop_thread.wait(self.start_transfer_poll_delay)
+            while not stop_thread.wait(self.transfer_poll_period):
+                if not handle.isactive():
+                    cancel(reason='transfer inactive')
+                elif time.monotonic() - start_t > self.total_transfer_timeout:
+                    cancel(reason='transfer timed out')
+                    excep = TimeoutError(f'{direction}: {sources} -> {dest}')
+
+        m_thread = threading.Thread(target=monitor, daemon=True)
+        try:
+            m_thread.start()
+            yield self
+        finally:
+            stop_thread.set()
+            m_thread.join()
+            if excep is not None:
+                raise excep
-class TransferManagerBase(ABC):
+class NoopTransferManager:
+    def manage(self, *args, **kwargs):
+        return nullcontext(self)
+
+
+class TransferHandleBase(ABC):
+    def __init__(self, manager):
+        self.manager = manager
+
+    @property
+    def logger(self):
+        return self.manager.logger
+
+    @abstractmethod
+    def isactive(self):
+        pass
+
+    @abstractmethod
+    def cancel(self):
+        pass
+
+
+class PopenTransferHandle(TransferHandleBase):
+    def __init__(self, popen, dest, direction, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+
+        if direction == 'push':
+            sample_size = self._push_dest_size
+        elif direction == 'pull':
+            sample_size = self._pull_dest_size
+        else:
+            raise ValueError(f'Unknown direction: {direction}')
+
+        self.sample_size = lambda: sample_size(dest)
+        self.popen = popen
+        self.last_sample = 0
 
-    def _pull_dest_size(self, dest):
+    @staticmethod
+    def _pull_dest_size(dest):
         if os.path.isdir(dest):
             return sum(
                 os.stat(os.path.join(dirpath, f)).st_size
@@ -511,155 +730,60 @@ class TransferManagerBase(ABC):
             )
         else:
             return os.stat(dest).st_size
+        return 0
 
     def _push_dest_size(self, dest):
-        cmd = '{} du -s {}'.format(quote(self.conn.busybox), quote(dest))
-        out = self.conn.execute(cmd)
-        try:
-            return int(out.split()[0])
-        except ValueError:
-            return 0
+        conn = self.manager.conn
+        cmd = '{} du -s -- {}'.format(quote(conn.busybox), quote(dest))
+        out = conn.execute(cmd)
+        return int(out.split()[0])
 
-    def __init__(self, conn, poll_period, start_transfer_poll_delay, total_timeout):
-        self.conn = conn
-        self.poll_period = poll_period
-        self.total_timeout = total_timeout
-        self.start_transfer_poll_delay = start_transfer_poll_delay
-        self.logger = logging.getLogger('FileTransfer')
-        self.managing = threading.Event()
-        self.transfer_started = threading.Event()
-        self.transfer_completed = threading.Event()
-        self.transfer_aborted = threading.Event()
-        self.monitor_thread = None
-        self.sources = None
-        self.dest = None
-        self.direction = None
-
-    @abstractmethod
-    def _cancel(self):
-        pass
-
-    def cancel(self, reason=None):
-        msg = 'Cancelling file transfer {} -> {}'.format(self.sources, self.dest)
-        if reason is not None:
-            msg += ' due to \'{}\''.format(reason)
-        self.logger.warning(msg)
-        self.transfer_aborted.set()
-        self._cancel()
-
-    @abstractmethod
-    def isactive(self):
-        pass
-
-    @contextmanager
-    def manage(self, sources, dest, direction):
-        try:
-            self.sources, self.dest, self.direction = sources, dest, direction
-            m_thread = threading.Thread(target=self._monitor)
-            self.transfer_completed.clear()
-            self.transfer_aborted.clear()
-            self.transfer_started.set()
-            m_thread.start()
-            yield self
-        except BaseException:
-            self.cancel(reason='exception during transfer')
-            raise
-        finally:
-            self.transfer_completed.set()
-            self.transfer_started.set()
-            m_thread.join()
-            self.transfer_started.clear()
-            self.transfer_completed.clear()
-            self.transfer_aborted.clear()
-
-    def _monitor(self):
-        start_t = monotonic()
-        self.transfer_completed.wait(self.start_transfer_poll_delay)
-        while not self.transfer_completed.wait(self.poll_period):
-            if not self.isactive():
-                self.cancel(reason='transfer inactive')
-            elif monotonic() - start_t > self.total_timeout:
-                self.cancel(reason='transfer timed out')
-
-
-class PopenTransferManager(TransferManagerBase):
-
-    def __init__(self, conn, poll_period=30, start_transfer_poll_delay=30, total_timeout=3600):
-        super().__init__(conn, poll_period, start_transfer_poll_delay, total_timeout)
-        self.transfer = None
-        self.last_sample = None
-
-    def _cancel(self):
-        if self.transfer:
-            self.transfer.cancel()
-            self.transfer = None
-            self.last_sample = None
+    def cancel(self):
+        self.popen.terminate()
 
     def isactive(self):
-        size_fn = self._push_dest_size if self.direction == 'push' else self._pull_dest_size
-        curr_size = size_fn(self.dest)
-        self.logger.debug('Polled file transfer, destination size {}'.format(curr_size))
-        active = True if self.last_sample is None else curr_size > self.last_sample
-        self.last_sample = curr_size
-        return active
-
-    def set_transfer_and_wait(self, popen_bg_cmd):
-        self.transfer = popen_bg_cmd
-        self.last_sample = None
-        ret = self.transfer.wait()
-
-        if ret and not self.transfer_aborted.is_set():
-            raise subprocess.CalledProcessError(ret, self.transfer.popen.args)
-        elif self.transfer_aborted.is_set():
-            raise TimeoutError(self.transfer.popen.args)
+        try:
+            curr_size = self.sample_size()
+        except Exception as e:
+            self.logger.debug(f'File size polling failed: {e}')
+            return True
+        else:
+            self.logger.debug(f'Polled file transfer, destination size: {curr_size}')
+            if curr_size:
+                active = curr_size > self.last_sample
+                self.last_sample = curr_size
+                return active
+            # If the file is empty it will never grow in size, so we assume
+            # everything is going well.
+            else:
+                return True
-class SSHTransferManager(TransferManagerBase):
-
-    def __init__(self, conn, poll_period=30, start_transfer_poll_delay=30, total_timeout=3600):
-        super().__init__(conn, poll_period, start_transfer_poll_delay, total_timeout)
-        self.transferer = None
+class SSHTransferHandle(TransferHandleBase):
+
+    def __init__(self, handle, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        # SFTPClient or SSHClient
+        self.handle = handle
+
         self.progressed = False
-        self.transferred = None
-        self.to_transfer = None
+        self.transferred = 0
+        self.to_transfer = 0
 
-    def _cancel(self):
-        self.transferer.close()
+    def cancel(self):
+        self.handle.close()
 
     def isactive(self):
         progressed = self.progressed
-        self.progressed = False
-        msg = 'Polled transfer: {}% [{}B/{}B]'
-        pc = format((self.transferred / self.to_transfer) * 100, '.2f')
-        self.logger.debug(msg.format(pc, self.transferred, self.to_transfer))
+        if progressed:
+            self.progressed = False
+            pc = (self.transferred / self.to_transfer) * 100
+            self.logger.debug(
+                f'Polled transfer: {pc:.2f}% [{self.transferred}B/{self.to_transfer}B]'
+            )
         return progressed
 
-    @contextmanager
-    def manage(self, sources, dest, direction, transferer):
-        with super().manage(sources, dest, direction):
-            try:
-                self.progressed = False
-                self.transferer = transferer  # SFTPClient or SCPClient
-                yield self
-            except socket.error as e:
-                if self.transfer_aborted.is_set():
-                    self.transfer_aborted.clear()
-                    method = 'SCP' if self.conn.use_scp else 'SFTP'
-                    raise TimeoutError('{} {}: {} -> {}'.format(method, self.direction, sources, self.dest))
-                else:
-                    raise e
-
-    def progress_cb(self, *args):
-        if self.transfer_started.is_set():
-            self.progressed = True
-        if len(args) == 3:  # For SCPClient callbacks
-            self.transferred = args[2]
-            self.to_transfer = args[1]
-        elif len(args) == 2:  # For SFTPClient callbacks
-            self.transferred = args[0]
-            self.to_transfer = args[1]
+    def progress_cb(self, transferred, to_transfer):
+        self.progressed = True
+        self.transferred = transferred
+        self.to_transfer = to_transfer

View File

@@ -12,7 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from __future__ import division
 from collections import defaultdict
 
 from devlib.derived import DerivedMeasurements, DerivedMetric

View File

@@ -13,7 +13,6 @@
 # limitations under the License.
 #
-from __future__ import division
 import os
 
 try:

View File

@@ -180,3 +180,11 @@ def get_traceback(exc=None):
     traceback.print_tb(tb, file=sio)
     del tb  # needs to be done explicitly see: http://docs.python.org/2/library/sys.html#sys.exc_info
     return sio.getvalue()
+
+
+class AdbRootError(TargetStableError):
+    """
+    Exception raised when it is not safe to use ``adb root`` or ``adb unroot``
+    because other connections are known to be active, and changing rootness
+    requires restarting the server.
+    """

View File

@@ -1,4 +1,4 @@
-# Copyright 2015-2017 ARM Limited
+# Copyright 2015-2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,24 +12,42 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import glob
 import os
 import signal
 import shutil
 import subprocess
 import logging
-from distutils.dir_util import copy_tree
+import sys
 from getpass import getpass
-from pipes import quote
+from shlex import quote
 
 from devlib.exception import (
-    TargetTransientError, TargetStableError,
-    TargetTransientCalledProcessError, TargetStableCalledProcessError
+    TargetStableError, TargetTransientCalledProcessError, TargetStableCalledProcessError
 )
 from devlib.utils.misc import check_output
 from devlib.connection import ConnectionBase, PopenBackgroundCommand
 
 
+if sys.version_info >= (3, 8):
+    def copy_tree(src, dst):
+        from shutil import copy, copytree
+        copytree(
+            src,
+            dst,
+            # dirs_exist_ok=True only exists in Python >= 3.8
+            dirs_exist_ok=True,
+            # Do not copy creation and modification time to behave like other
+            # targets.
+            copy_function=copy
+        )
+else:
+    def copy_tree(src, dst):
+        from distutils.dir_util import copy_tree
+        # Mirror the behavior of all other targets which only copy the
+        # content without metadata
+        copy_tree(src, dst, preserve_mode=False, preserve_times=False)
+
+
 PACKAGE_BIN_DIRECTORY = os.path.join(os.path.dirname(__file__), 'bin')
PACKAGE_BIN_DIRECTORY = os.path.join(os.path.dirname(__file__), 'bin') PACKAGE_BIN_DIRECTORY = os.path.join(os.path.dirname(__file__), 'bin')
@@ -71,13 +89,7 @@ class LocalConnection(ConnectionBase):
     def _copy_path(self, source, dest):
         self.logger.debug('copying {} to {}'.format(source, dest))
         if os.path.isdir(source):
-            # Use distutils copy_tree since it behaves the same as
-            # shutils.copytree except that it won't fail if some folders
-            # already exist.
-            #
-            # Mirror the behavior of all other targets which only copy the
-            # content without metadata
-            copy_tree(source, dest, preserve_mode=False, preserve_times=False)
+            copy_tree(source, dest)
         else:
             shutil.copy(source, dest)
@@ -100,7 +112,9 @@ class LocalConnection(ConnectionBase):
         if self.unrooted:
             raise TargetStableError('unrooted')
         password = self._get_password()
-        command = "echo {} | sudo -k -p ' ' -S -- sh -c {}".format(quote(password), quote(command))
+        # Empty prompt with -p '' to avoid adding a leading space to the
+        # output.
+        command = "echo {} | sudo -k -p '' -S -- sh -c {}".format(quote(password), quote(command))
 
         ignore = None if check_exit_code else 'all'
         try:
             stdout, stderr = check_output(command, shell=True, timeout=timeout, ignore=ignore)
@@ -124,26 +138,34 @@ class LocalConnection(ConnectionBase):
         if self.unrooted:
             raise TargetStableError('unrooted')
         password = self._get_password()
-        # The sudo prompt will add a space on stderr, but we cannot filter
-        # it out here
-        command = "echo {} | sudo -k -p ' ' -S -- sh -c {}".format(quote(password), quote(command))
+        # Empty prompt with -p '' to avoid adding a leading space to the
+        # output.
+        command = "echo {} | sudo -k -p '' -S -- sh -c {}".format(quote(password), quote(command))
 
         # Make sure to get a new PGID so PopenBackgroundCommand() can kill
         # all sub processes that could be started without troubles.
         def preexec_fn():
             os.setpgrp()
 
-        popen = subprocess.Popen(
-            command,
-            stdout=stdout,
-            stderr=stderr,
-            stdin=subprocess.PIPE,
-            shell=True,
-            preexec_fn=preexec_fn,
+        def make_init_kwargs(command):
+            popen = subprocess.Popen(
+                command,
+                stdout=stdout,
+                stderr=stderr,
+                stdin=subprocess.PIPE,
+                shell=True,
+                preexec_fn=preexec_fn,
+            )
+            return dict(
+                popen=popen,
+            )
+
+        return PopenBackgroundCommand.from_factory(
+            conn=self,
+            cmd=command,
+            as_root=as_root,
+            make_init_kwargs=make_init_kwargs,
         )
-        bg_cmd = PopenBackgroundCommand(popen)
-        self._current_bg_cmds.add(bg_cmd)
-        return bg_cmd
 
     def _close(self):
         pass

View File

@@ -12,7 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from __future__ import division
 import logging
 import collections

View File

@@ -14,7 +14,6 @@
 #
 #pylint: disable=attribute-defined-outside-init
-from __future__ import division
 import os
 import sys
 import time
@@ -23,7 +22,7 @@ import shlex
 from fcntl import fcntl, F_GETFL, F_SETFL
 from string import Template
 from subprocess import Popen, PIPE, STDOUT
-from pipes import quote
+from shlex import quote
 
 from devlib import Instrument, CONTINUOUS, MeasurementsCsv
 from devlib.exception import HostError
@@ -117,10 +116,7 @@ class AcmeCapeInstrument(Instrument):
             msg = 'Could not terminate iio-capture:\n{}'
             raise HostError(msg.format(output))
         if self.process.returncode != 15: # iio-capture exits with 15 when killed
-            if sys.version_info[0] == 3:
-                output += self.process.stdout.read().decode(sys.stdout.encoding or 'utf-8', 'replace')
-            else:
-                output += self.process.stdout.read()
+            output += self.process.stdout.read().decode(sys.stdout.encoding or 'utf-8', 'replace')
             self.logger.info('ACME instrument encountered an error, '
                              'you may want to try rebooting the ACME device:\n'
                              '  ssh root@{} reboot'.format(self.host))

View File

@@ -30,14 +30,13 @@
 # pylint: disable=W0613,E1101,access-member-before-definition,attribute-defined-outside-init
-from __future__ import division
 import os
-import subprocess
-import signal
-from pipes import quote
-import tempfile
 import shutil
+import signal
+import tempfile
+import subprocess
+from shlex import quote
 
 from devlib.instrument import Instrument, CONTINUOUS, MeasurementsCsv
 from devlib.exception import HostError

View File

@@ -12,14 +12,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from __future__ import division
 import os
 import signal
 import tempfile
 import struct
 import subprocess
 import sys
-from pipes import quote
+from shlex import quote
 
 from devlib.instrument import Instrument, CONTINUOUS, MeasurementsCsv
 from devlib.exception import HostError
@@ -87,9 +86,8 @@ class EnergyProbeInstrument(Instrument):
         self.process.poll()
         if self.process.returncode is not None:
             stdout, stderr = self.process.communicate()
-            if sys.version_info[0] == 3:
-                stdout = stdout.decode(sys.stdout.encoding or 'utf-8', 'replace')
-                stderr = stderr.decode(sys.stdout.encoding or 'utf-8', 'replace')
+            stdout = stdout.decode(sys.stdout.encoding or 'utf-8', 'replace')
+            stderr = stderr.decode(sys.stdout.encoding or 'utf-8', 'replace')
             raise HostError(
                 'Energy Probe: Caiman exited unexpectedly with exit code {}.\n'
                 'stdout:\n{}\nstderr:\n{}'.format(self.process.returncode,

View File

@@ -13,7 +13,6 @@
 # limitations under the License.
 #
-from __future__ import division
 import os
 
 from devlib.instrument import (Instrument, CONTINUOUS,

View File

@@ -12,8 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
-from __future__ import division
 
 from devlib.platform.gem5 import Gem5SimulationPlatform
 from devlib.instrument import Instrument, CONTINUOUS, MeasurementsCsv
 from devlib.exception import TargetStableError

View File

@@ -12,7 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from __future__ import division
 import re
 
 from devlib.instrument import Instrument, Measurement, INSTANTANEOUS

View File

@@ -100,9 +100,8 @@ class MonsoonInstrument(Instrument):
         process.poll()
         if process.returncode is not None:
             stdout, stderr = process.communicate()
-            if sys.version_info[0] == 3:
-                stdout = stdout.encode(sys.stdout.encoding or 'utf-8')
-                stderr = stderr.encode(sys.stdout.encoding or 'utf-8')
+            stdout = stdout.encode(sys.stdout.encoding or 'utf-8')
+            stderr = stderr.encode(sys.stdout.encoding or 'utf-8')
             raise HostError(
                 'Monsoon script exited unexpectedly with exit code {}.\n'
                 'stdout:\n{}\nstderr:\n{}'.format(process.returncode,

View File

@@ -18,8 +18,7 @@ import re
 import tempfile
 from datetime import datetime
 from collections import defaultdict
-
-from future.moves.itertools import zip_longest
+from itertools import zip_longest
 
 from devlib.instrument import Instrument, MeasurementsCsv, CONTINUOUS
 from devlib.exception import TargetStableError, HostError

View File

@@ -15,16 +15,30 @@
 import logging
 from inspect import isclass
 
-from past.builtins import basestring
-
-from devlib.utils.misc import walk_modules
+from devlib.exception import TargetStableError
 from devlib.utils.types import identifier
+from devlib.utils.misc import walk_modules
+
+
+_module_registry = {}
+
+def register_module(mod):
+    if not issubclass(mod, Module):
+        raise ValueError('A module must subclass devlib.Module')
+
+    if mod.name is None:
+        raise ValueError('A module must define a name')
+
+    try:
+        existing = _module_registry[mod.name]
+    except KeyError:
+        pass
+    else:
+        if existing is not mod:
+            raise ValueError(f'Module "{mod.name}" already exists')
+
+    _module_registry[mod.name] = mod
 
 
-__module_cache = {}
-
-
-class Module(object):
+class Module:
 
     name = None
     kind = None
@@ -48,22 +62,48 @@ class Module(object):
     @classmethod
     def install(cls, target, **params):
-        if cls.kind is not None:
-            attr_name = identifier(cls.kind)
-        else:
-            attr_name = identifier(cls.name)
-        if hasattr(target, attr_name):
-            existing_module = getattr(target, attr_name)
-            existing_name = getattr(existing_module, 'name', str(existing_module))
-            message = 'Attempting to install module "{}" which already exists (new: {}, existing: {})'
-            raise ValueError(message.format(attr_name, cls.name, existing_name))
-        setattr(target, attr_name, cls(target, **params))
+        attr_name = cls.attr_name
+        installed = target._installed_modules
+
+        try:
+            mod = installed[attr_name]
+        except KeyError:
+            mod = cls(target, **params)
+
+            mod.logger.debug(f'Installing module {cls.name}')
+
+            if mod.probe(target):
+                for name in (
+                    attr_name,
+                    identifier(cls.name),
+                    identifier(cls.kind) if cls.kind else None,
+                ):
+                    if name is not None:
+                        installed[name] = mod
+                target._modules[cls.name] = params
+                return mod
+            else:
+                raise TargetStableError(f'Module "{cls.name}" is not supported by the target')
+        else:
+            raise ValueError(
+                f'Attempting to install module "{cls.name}" but a module is already installed as attribute "{attr_name}": {mod}'
+            )
 
     def __init__(self, target):
         self.target = target
         self.logger = logging.getLogger(self.name)
 
+    def __init_subclass__(cls, *args, **kwargs):
+        super().__init_subclass__(*args, **kwargs)
+
+        attr_name = cls.kind or cls.name
+        cls.attr_name = identifier(attr_name) if attr_name else None
+
+        if cls.name is not None:
+            register_module(cls)
+
 
 class HardRestModule(Module):
     kind = 'hard_reset'
@@ -96,32 +136,25 @@ class FlashModule(Module):
 
 def get_module(mod):
-    if not __module_cache:
-        __load_cache()
-
-    if isinstance(mod, basestring):
+    def from_registry(mod):
         try:
-            return __module_cache[mod]
+            return _module_registry[mod]
         except KeyError:
             raise ValueError('Module "{}" does not exist'.format(mod))
+
+    if isinstance(mod, str):
+        try:
+            return from_registry(mod)
+        except ValueError:
+            # If the lookup failed, we may have simply not imported Modules
+            # from the devlib.module package. The former module loading
+            # implementation was also pre-importing modules, so we need to
+            # replicate that behavior since users are currently not expected to
+            # have imported the module prior to trying to use it.
+            walk_modules('devlib.module')
+            return from_registry(mod)
     elif issubclass(mod, Module):
         return mod
     else:
         raise ValueError('Not a valid module: {}'.format(mod))
-
-
-def register_module(mod):
-    if not issubclass(mod, Module):
-        raise ValueError('A module must subclass devlib.Module')
-    if mod.name is None:
-        raise ValueError('A module must define a name')
-    if mod.name in __module_cache:
-        raise ValueError('Module {} already exists'.format(mod.name))
-    __module_cache[mod.name] = mod
-
-
-def __load_cache():
-    for module in walk_modules('devlib.module'):
-        for obj in vars(module).values():
-            if isclass(obj) and issubclass(obj, Module) and obj.name:
-                register_module(obj)

View File

@@ -22,7 +22,7 @@ import tempfile
 from devlib.module import FlashModule
 from devlib.exception import HostError
 from devlib.utils.android import fastboot_flash_partition, fastboot_command
-from devlib.utils.misc import merge_dicts
+from devlib.utils.misc import merge_dicts, safe_extract
 
 
 class FastbootFlashModule(FlashModule):
@@ -86,7 +86,7 @@ class FastbootFlashModule(FlashModule):
         self._validate_image_bundle(image_bundle)
         extract_dir = tempfile.mkdtemp()
         with tarfile.open(image_bundle) as tar:
-            tar.extractall(path=extract_dir)
+            safe_extract(tar, path=extract_dir)
             files = [tf.name for tf in tar.getmembers()]
             if self.partitions_file_name not in files:
                 extract_dir = os.path.join(extract_dir, files[0])


@@ -17,11 +17,14 @@ import logging
 import re
 from collections import namedtuple
 from shlex import quote
+import itertools
+import warnings

 from devlib.module import Module
 from devlib.exception import TargetStableError
 from devlib.utils.misc import list_to_ranges, isiterable
 from devlib.utils.types import boolean
+from devlib.utils.asyn import asyncf, run

 class Controller(object):
@@ -53,7 +56,8 @@ class Controller(object):
         self.mount_point = None
         self._cgroups = {}

-    def mount(self, target, mount_root):
+    @asyncf
+    async def mount(self, target, mount_root):
         mounted = target.list_file_systems()
         if self.mount_name in [e.device for e in mounted]:
@@ -66,16 +70,16 @@ class Controller(object):
         else:
             # Mount the controller if not already in use
             self.mount_point = target.path.join(mount_root, self.mount_name)
-            target.execute('mkdir -p {} 2>/dev/null'\
+            await target.execute.asyn('mkdir -p {} 2>/dev/null'\
                     .format(self.mount_point), as_root=True)
-            target.execute('mount -t cgroup -o {} {} {}'\
+            await target.execute.asyn('mount -t cgroup -o {} {} {}'\
                     .format(','.join(self.clist),
                             self.mount_name,
                             self.mount_point),
                     as_root=True)

         # Check if this controller uses "noprefix" option
-        output = target.execute('mount | grep "{} "'.format(self.mount_name))
+        output = await target.execute.asyn('mount | grep "{} "'.format(self.mount_name))
         if 'noprefix' in output:
             self._noprefix = True
             # self.logger.debug('Controller %s using "noprefix" option',
@@ -123,9 +127,20 @@ class Controller(object):
         return cgroups

     def move_tasks(self, source, dest, exclude=None):
+        if isinstance(exclude, str):
+            warnings.warn("Controller.move_tasks() needs a _list_ of exclude patterns, not a string", DeprecationWarning)
+            exclude = [exclude]
+
         if exclude is None:
             exclude = []
+
+        exclude = ' '.join(
+            itertools.chain.from_iterable(
+                ('-e', quote(pattern))
+                for pattern in exclude
+            )
+        )
+
         srcg = self.cgroup(source)
         dstg = self.cgroup(dest)
@@ -133,7 +148,7 @@ class Controller(object):
             'cgroups_tasks_move {src} {dst} {exclude}'.format(
                 src=quote(srcg.directory),
                 dst=quote(dstg.directory),
-                exclude=' '.join(map(quote, exclude))
+                exclude=exclude,
             ),
             as_root=True,
         )
@@ -165,18 +180,11 @@ class Controller(object):
         self.logger.debug('Moving all tasks into %s', dest)

         # Build list of tasks to exclude
-        grep_filters = ' '.join(
-            '-e {}'.format(comm)
-            for comm in exclude
-        )
-        self.logger.debug(' using grep filter: %s', grep_filters)
-        if grep_filters != '':
-            self.logger.debug(' excluding tasks which name matches:')
-            self.logger.debug('   %s', ', '.join(exclude))
+        self.logger.debug(' using grep filter: %s', exclude)

         for cgroup in self.list_all():
             if cgroup != dest:
-                self.move_tasks(cgroup, dest, grep_filters)
+                self.move_tasks(cgroup, dest, exclude)

     # pylint: disable=too-many-locals
     def tasks(self, cgroup,
@@ -388,11 +396,12 @@ class CgroupsModule(Module):
         # Initialize controllers
         self.logger.info('Available controllers:')
         self.controllers = {}
-        for ss in subsys:
+
+        async def register_controller(ss):
             hid = ss.hierarchy
             controller = Controller(ss.name, hid, hierarchy[hid])
             try:
-                controller.mount(self.target, self.cgroup_root)
+                await controller.mount.asyn(self.target, self.cgroup_root)
             except TargetStableError:
                 message = 'Failed to mount "{}" controller'
                 raise TargetStableError(message.format(controller.kind))
@@ -400,12 +409,20 @@ class CgroupsModule(Module):
                              controller.mount_point)
             self.controllers[ss.name] = controller

+        run(
+            target.async_manager.map_concurrently(
+                register_controller,
+                subsys,
+            )
+        )
+
     def list_subsystems(self):
         subsystems = []
         for line in self.target.execute('{} cat /proc/cgroups'\
                 .format(self.target.busybox), as_root=self.target.is_rooted).splitlines()[1:]:
             line = line.strip()
-            if not line or line.startswith('#'):
+            if not line or line.startswith('#') or line.endswith('0'):
                 continue
             name, hierarchy, num_cgroups, enabled = line.split()
             subsystems.append(CgroupSubsystemEntry(name,
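The reworked `move_tasks()` above renders the exclude list into quoted `grep -e` arguments up front, using `itertools.chain.from_iterable` and `shlex.quote`. That rendering step in isolation (the helper name is chosen here for illustration):

```python
import itertools
from shlex import quote

def grep_exclude_args(patterns):
    # Render each pattern as a ('-e', quoted-pattern) pair, then flatten
    # the pairs into one string that can be spliced safely into a shell
    # command line, exactly as move_tasks() builds its exclude string.
    return ' '.join(
        itertools.chain.from_iterable(
            ('-e', quote(p))
            for p in patterns
        )
    )
```

Quoting each pattern individually means patterns containing spaces or shell metacharacters survive the round-trip through the shell unchanged.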

devlib/module/cgroups2.py (new file, 1976 lines): diff suppressed because it is too large


@@ -1,4 +1,4 @@
-# Copyright 2014-2018 ARM Limited
+# Copyright 2014-2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,11 +12,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from contextlib import contextmanager
 from devlib.module import Module
 from devlib.exception import TargetStableError
 from devlib.utils.misc import memoized
+import devlib.utils.asyn as asyn

 # a dict of governor name and a list of it tunables that can't be read
@@ -30,44 +30,52 @@ class CpufreqModule(Module):
     name = 'cpufreq'

     @staticmethod
-    def probe(target):
-        # x86 with Intel P-State driver
-        if target.abi == 'x86_64':
-            path = '/sys/devices/system/cpu/intel_pstate'
-            if target.file_exists(path):
-                return True
-        # Generic CPUFreq support (single policy)
-        path = '/sys/devices/system/cpu/cpufreq/policy0'
-        if target.file_exists(path):
-            return True
-        # Generic CPUFreq support (per CPU policy)
-        path = '/sys/devices/system/cpu/cpu0/cpufreq'
-        return target.file_exists(path)
+    @asyn.asyncf
+    async def probe(target):
+        paths = [
+            # x86 with Intel P-State driver
+            (target.abi == 'x86_64', '/sys/devices/system/cpu/intel_pstate'),
+            # Generic CPUFreq support (single policy)
+            (True, '/sys/devices/system/cpu/cpufreq/policy0'),
+            # Generic CPUFreq support (per CPU policy)
+            (True, '/sys/devices/system/cpu/cpu0/cpufreq'),
+        ]
+        paths = [
+            path[1] for path in paths
+            if path[0]
+        ]
+
+        exists = await target.async_manager.map_concurrently(
+            target.file_exists.asyn,
+            paths,
+        )
+
+        return any(exists.values())

     def __init__(self, target):
         super(CpufreqModule, self).__init__(target)
         self._governor_tunables = {}

-    @memoized
-    def list_governors(self, cpu):
+    @asyn.asyncf
+    @asyn.memoized_method
+    async def list_governors(self, cpu):
         """Returns a list of governors supported by the cpu."""
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
         sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_available_governors'.format(cpu)
-        output = self.target.read_value(sysfile)
+        output = await self.target.read_value.asyn(sysfile)
         return output.strip().split()

-    def get_governor(self, cpu):
+    @asyn.asyncf
+    async def get_governor(self, cpu):
         """Returns the governor currently set for the specified CPU."""
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
         sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_governor'.format(cpu)
-        return self.target.read_value(sysfile)
+        return await self.target.read_value.asyn(sysfile)
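Throughout these hunks, methods become `async def` wrapped in `@asyn.asyncf`, so callers can either invoke them synchronously as before or await the `.asyn` attribute. A stripped-down decorator with the same dual interface (a sketch under that assumption, not devlib's actual implementation, which also copes with nested event loops) could be:

```python
import asyncio

class _AsyncPolymorphic:
    # Wraps a coroutine function so it can be called either synchronously
    # (plain call) or asynchronously (via the .asyn attribute).
    def __init__(self, func, instance=None):
        self._func = func
        self._instance = instance

    def __get__(self, instance, owner):
        # Descriptor protocol: bind like a normal method so `self` is
        # forwarded when accessed through an instance.
        return _AsyncPolymorphic(self._func, instance)

    def _args(self, args):
        return args if self._instance is None else (self._instance, *args)

    def asyn(self, *args, **kwargs):
        # Asynchronous flavor: return the coroutine for the caller to await.
        return self._func(*self._args(args), **kwargs)

    def __call__(self, *args, **kwargs):
        # Synchronous flavor: run the coroutine to completion here and now.
        return asyncio.run(self.asyn(*args, **kwargs))

def asyncf(func):
    return _AsyncPolymorphic(func)

class Demo:
    @asyncf
    async def double(self, x):
        return 2 * x
```

With this shape, converting a method to `async def` does not break existing synchronous callers, which is what lets the diff migrate one method at a time.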
-    def set_governor(self, cpu, governor, **kwargs):
+    @asyn.asyncf
+    async def set_governor(self, cpu, governor, **kwargs):
         """
         Set the governor for the specified CPU.
         See https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt
@@ -90,15 +98,15 @@ class CpufreqModule(Module):
         """
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
-        supported = self.list_governors(cpu)
+        supported = await self.list_governors.asyn(cpu)
         if governor not in supported:
             raise TargetStableError('Governor {} not supported for cpu {}'.format(governor, cpu))
         sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_governor'.format(cpu)
-        self.target.write_value(sysfile, governor)
-        self.set_governor_tunables(cpu, governor, **kwargs)
+        await self.target.write_value.asyn(sysfile, governor)
+        await self.set_governor_tunables.asyn(cpu, governor, **kwargs)

-    @contextmanager
-    def use_governor(self, governor, cpus=None, **kwargs):
+    @asyn.asynccontextmanager
+    async def use_governor(self, governor, cpus=None, **kwargs):
         """
         Use a given governor, then restore previous governor(s)
@@ -111,66 +119,138 @@ class CpufreqModule(Module):
         :Keyword Arguments: Governor tunables, See :meth:`set_governor_tunables`
         """
         if not cpus:
-            cpus = self.target.list_online_cpus()
+            cpus = await self.target.list_online_cpus.asyn()

-        # Setting a governor & tunables for a cpu will set them for all cpus
-        # in the same clock domain, so only manipulating one cpu per domain
-        # is enough
-        domains = set(self.get_affected_cpus(cpu)[0] for cpu in cpus)
-        prev_governors = {cpu : (self.get_governor(cpu), self.get_governor_tunables(cpu))
-                          for cpu in domains}
-
-        # Special case for userspace, frequency is not seen as a tunable
-        userspace_freqs = {}
-        for cpu, (prev_gov, _) in prev_governors.items():
-            if prev_gov == "userspace":
-                userspace_freqs[cpu] = self.get_frequency(cpu)
-
-        for cpu in domains:
-            self.set_governor(cpu, governor, **kwargs)
+        async def get_cpu_info(cpu):
+            return await self.target.async_manager.concurrently((
+                self.get_affected_cpus.asyn(cpu),
+                self.get_governor.asyn(cpu),
+                self.get_governor_tunables.asyn(cpu),
+                # We won't always use the frequency, but it's much quicker to
+                # do concurrently anyway so do it now
+                self.get_frequency.asyn(cpu),
+            ))
+
+        cpus_infos = await self.target.async_manager.map_concurrently(get_cpu_info, cpus)
+
+        # Setting a governor & tunables for a cpu will set them for all cpus in
+        # the same cpufreq policy, so only manipulating one cpu per domain is
+        # enough
+        domains = set(
+            info[0][0]
+            for info in cpus_infos.values()
+        )
+        await self.target.async_manager.concurrently(
+            self.set_governor.asyn(cpu, governor, **kwargs)
+            for cpu in domains
+        )

         try:
             yield
         finally:
-            for cpu, (prev_gov, tunables) in prev_governors.items():
-                self.set_governor(cpu, prev_gov, **tunables)
-                if prev_gov == "userspace":
-                    self.set_frequency(cpu, userspace_freqs[cpu])
+            async def set_per_cpu_tunables(cpu):
+                domain, prev_gov, tunables, freq = cpus_infos[cpu]
+                # Per-cpu tunables are safe to set concurrently
+                await self.set_governor_tunables.asyn(cpu, prev_gov, per_cpu=True, **tunables)
+                # Special case for userspace, frequency is not seen as a tunable
+                if prev_gov == "userspace":
+                    await self.set_frequency.asyn(cpu, freq)
+
+            per_cpu_tunables = self.target.async_manager.concurrently(
+                set_per_cpu_tunables(cpu)
+                for cpu in domains
+            )
+            per_cpu_tunables.__qualname__ = 'CpufreqModule.use_governor.<locals>.per_cpu_tunables'
+
+            # Non-per-cpu tunables have to be set one after the other, for each
+            # governor that we had to deal with.
+            global_tunables = {
+                prev_gov: (cpu, tunables)
+                for cpu, (domain, prev_gov, tunables, freq) in cpus_infos.items()
+            }
+
+            global_tunables = self.target.async_manager.concurrently(
+                self.set_governor_tunables.asyn(cpu, gov, per_cpu=False, **tunables)
+                for gov, (cpu, tunables) in global_tunables.items()
+            )
+            global_tunables.__qualname__ = 'CpufreqModule.use_governor.<locals>.global_tunables'
+
+            # Set the governor first
+            await self.target.async_manager.concurrently(
+                self.set_governor.asyn(cpu, cpus_infos[cpu][1])
+                for cpu in domains
+            )
+
+            # And then set all the tunables concurrently. Each task has a
+            # specific and non-overlapping set of file to write.
+            await self.target.async_manager.concurrently(
+                (per_cpu_tunables, global_tunables)
+            )

-    def list_governor_tunables(self, cpu):
+    @asyn.asyncf
+    async def _list_governor_tunables(self, cpu, governor=None):
+        if isinstance(cpu, int):
+            cpu = 'cpu{}'.format(cpu)
+
+        if governor is None:
+            governor = await self.get_governor.asyn(cpu)
+
+        try:
+            return self._governor_tunables[governor]
+        except KeyError:
+            for per_cpu, path in (
+                (True, '/sys/devices/system/cpu/{}/cpufreq/{}'.format(cpu, governor)),
+                # On old kernels
+                (False, '/sys/devices/system/cpu/cpufreq/{}'.format(governor)),
+            ):
+                try:
+                    tunables = await self.target.list_directory.asyn(path)
+                except TargetStableError:
+                    continue
+                else:
+                    break
+            else:
+                per_cpu = False
+                tunables = []
+
+            data = (governor, per_cpu, tunables)
+            self._governor_tunables[governor] = data
+            return data
+
+    @asyn.asyncf
+    async def list_governor_tunables(self, cpu):
         """Returns a list of tunables available for the governor on the specified CPU."""
-        if isinstance(cpu, int):
-            cpu = 'cpu{}'.format(cpu)
-        governor = self.get_governor(cpu)
-        if governor not in self._governor_tunables:
-            try:
-                tunables_path = '/sys/devices/system/cpu/{}/cpufreq/{}'.format(cpu, governor)
-                self._governor_tunables[governor] = self.target.list_directory(tunables_path)
-            except TargetStableError:  # probably an older kernel
-                try:
-                    tunables_path = '/sys/devices/system/cpu/cpufreq/{}'.format(governor)
-                    self._governor_tunables[governor] = self.target.list_directory(tunables_path)
-                except TargetStableError:  # governor does not support tunables
-                    self._governor_tunables[governor] = []
-        return self._governor_tunables[governor]
-
-    def get_governor_tunables(self, cpu):
-        if isinstance(cpu, int):
-            cpu = 'cpu{}'.format(cpu)
-        governor = self.get_governor(cpu)
-        tunables = {}
-        for tunable in self.list_governor_tunables(cpu):
-            if tunable not in WRITE_ONLY_TUNABLES.get(governor, []):
-                try:
-                    path = '/sys/devices/system/cpu/{}/cpufreq/{}/{}'.format(cpu, governor, tunable)
-                    tunables[tunable] = self.target.read_value(path)
-                except TargetStableError:  # May be an older kernel
-                    path = '/sys/devices/system/cpu/cpufreq/{}/{}'.format(governor, tunable)
-                    tunables[tunable] = self.target.read_value(path)
+        _, _, tunables = await self._list_governor_tunables.asyn(cpu)
         return tunables
-    def set_governor_tunables(self, cpu, governor=None, **kwargs):
+    @asyn.asyncf
+    async def get_governor_tunables(self, cpu):
+        if isinstance(cpu, int):
+            cpu = 'cpu{}'.format(cpu)
+        governor, _, tunable_list = await self._list_governor_tunables.asyn(cpu)
+
+        write_only = set(WRITE_ONLY_TUNABLES.get(governor, []))
+        tunable_list = [
+            tunable
+            for tunable in tunable_list
+            if tunable not in write_only
+        ]
+
+        async def get_tunable(tunable):
+            try:
+                path = '/sys/devices/system/cpu/{}/cpufreq/{}/{}'.format(cpu, governor, tunable)
+                x = await self.target.read_value.asyn(path)
+            except TargetStableError:  # May be an older kernel
+                path = '/sys/devices/system/cpu/cpufreq/{}/{}'.format(governor, tunable)
+                x = await self.target.read_value.asyn(path)
+            return x
+
+        tunables = await self.target.async_manager.map_concurrently(get_tunable, tunable_list)
+        return tunables
+
+    @asyn.asyncf
+    async def set_governor_tunables(self, cpu, governor=None, per_cpu=None, **kwargs):
         """
         Set tunables for the specified governor. Tunables should be specified as
         keyword arguments. Which tunables and values are valid depends on the
@@ -179,6 +259,9 @@ class CpufreqModule(Module):
         :param cpu: The cpu for which the governor will be set. ``int`` or
             full cpu name as it appears in sysfs, e.g. ``cpu0``.
         :param governor: The name of the governor. Must be all lower case.
+        :param per_cpu: If ``None``, both per-cpu and global governor tunables
+            will be set. If ``True``, only per-CPU tunables will be set and if
+            ``False``, only global tunables will be set.

         The rest should be keyword parameters mapping tunable name onto the value to
         be set for it.
@@ -188,37 +271,38 @@ class CpufreqModule(Module):
             tunable.
         """
+        if not kwargs:
+            return
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
-        if governor is None:
-            governor = self.get_governor(cpu)
-        valid_tunables = self.list_governor_tunables(cpu)
+
+        governor, gov_per_cpu, valid_tunables = await self._list_governor_tunables.asyn(cpu, governor=governor)
         for tunable, value in kwargs.items():
             if tunable in valid_tunables:
-                path = '/sys/devices/system/cpu/{}/cpufreq/{}/{}'.format(cpu, governor, tunable)
-                try:
-                    self.target.write_value(path, value)
-                except TargetStableError:
-                    if self.target.file_exists(path):
-                        # File exists but we did something wrong
-                        raise
-                    # Expected file doesn't exist, try older sysfs layout.
+                if per_cpu is not None and gov_per_cpu != per_cpu:
+                    continue
+
+                if gov_per_cpu:
+                    path = '/sys/devices/system/cpu/{}/cpufreq/{}/{}'.format(cpu, governor, tunable)
+                else:
                     path = '/sys/devices/system/cpu/cpufreq/{}/{}'.format(governor, tunable)
-                    self.target.write_value(path, value)
+
+                await self.target.write_value.asyn(path, value)
             else:
                 message = 'Unexpected tunable {} for governor {} on {}.\n'.format(tunable, governor, cpu)
                 message += 'Available tunables are: {}'.format(valid_tunables)
                 raise TargetStableError(message)

-    @memoized
-    def list_frequencies(self, cpu):
+    @asyn.asyncf
+    @asyn.memoized_method
+    async def list_frequencies(self, cpu):
         """Returns a sorted list of frequencies supported by the cpu or an empty list
         if not could be found."""
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
         try:
             cmd = 'cat /sys/devices/system/cpu/{}/cpufreq/scaling_available_frequencies'.format(cpu)
-            output = self.target.execute(cmd)
+            output = await self.target.execute.asyn(cmd)
             available_frequencies = list(map(int, output.strip().split()))  # pylint: disable=E1103
         except TargetStableError:
             # On some devices scaling_frequencies is not generated.
@@ -226,7 +310,7 @@ class CpufreqModule(Module):
             # Fall back to parsing stats/time_in_state
             path = '/sys/devices/system/cpu/{}/cpufreq/stats/time_in_state'.format(cpu)
             try:
-                out_iter = iter(self.target.read_value(path).split())
+                out_iter = (await self.target.read_value.asyn(path)).split()
             except TargetStableError:
                 if not self.target.file_exists(path):
                     # Probably intel_pstate. Can't get available freqs.
@@ -254,7 +338,8 @@ class CpufreqModule(Module):
         freqs = self.list_frequencies(cpu)
         return min(freqs) if freqs else None

-    def get_min_frequency(self, cpu):
+    @asyn.asyncf
+    async def get_min_frequency(self, cpu):
         """
         Returns the min frequency currently set for the specified CPU.
@@ -268,9 +353,10 @@ class CpufreqModule(Module):
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
         sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_min_freq'.format(cpu)
-        return self.target.read_int(sysfile)
+        return await self.target.read_int.asyn(sysfile)

-    def set_min_frequency(self, cpu, frequency, exact=True):
+    @asyn.asyncf
+    async def set_min_frequency(self, cpu, frequency, exact=True):
         """
         Set's the minimum value for CPU frequency. Actual frequency will
         depend on the Governor used and may vary during execution. The value should be
@@ -289,7 +375,7 @@ class CpufreqModule(Module):
         """
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
-        available_frequencies = self.list_frequencies(cpu)
+        available_frequencies = await self.list_frequencies.asyn(cpu)
         try:
             value = int(frequency)
             if exact and available_frequencies and value not in available_frequencies:
@@ -297,11 +383,12 @@ class CpufreqModule(Module):
                                                                                         value,
                                                                                         available_frequencies))
             sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_min_freq'.format(cpu)
-            self.target.write_value(sysfile, value)
+            await self.target.write_value.asyn(sysfile, value)
         except ValueError:
             raise ValueError('Frequency must be an integer; got: "{}"'.format(frequency))

-    def get_frequency(self, cpu, cpuinfo=False):
+    @asyn.asyncf
+    async def get_frequency(self, cpu, cpuinfo=False):
         """
         Returns the current frequency currently set for the specified CPU.
@@ -321,9 +408,10 @@ class CpufreqModule(Module):
         sysfile = '/sys/devices/system/cpu/{}/cpufreq/{}'.format(
             cpu,
             'cpuinfo_cur_freq' if cpuinfo else 'scaling_cur_freq')
-        return self.target.read_int(sysfile)
+        return await self.target.read_int.asyn(sysfile)

-    def set_frequency(self, cpu, frequency, exact=True):
+    @asyn.asyncf
+    async def set_frequency(self, cpu, frequency, exact=True):
         """
         Set's the minimum value for CPU frequency. Actual frequency will
         depend on the Governor used and may vary during execution. The value should be
@@ -347,23 +435,24 @@ class CpufreqModule(Module):
         try:
             value = int(frequency)
             if exact:
-                available_frequencies = self.list_frequencies(cpu)
+                available_frequencies = await self.list_frequencies.asyn(cpu)
                 if available_frequencies and value not in available_frequencies:
                     raise TargetStableError('Can\'t set {} frequency to {}\nmust be in {}'.format(cpu,
                                                                                         value,
                                                                                         available_frequencies))
-            if self.get_governor(cpu) != 'userspace':
+            if await self.get_governor.asyn(cpu) != 'userspace':
                 raise TargetStableError('Can\'t set {} frequency; governor must be "userspace"'.format(cpu))
             sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_setspeed'.format(cpu)
-            self.target.write_value(sysfile, value, verify=False)
-            cpuinfo = self.get_frequency(cpu, cpuinfo=True)
+            await self.target.write_value.asyn(sysfile, value, verify=False)
+            cpuinfo = await self.get_frequency.asyn(cpu, cpuinfo=True)
             if cpuinfo != value:
                 self.logger.warning(
                     'The cpufreq value has not been applied properly cpuinfo={} request={}'.format(cpuinfo, value))
         except ValueError:
             raise ValueError('Frequency must be an integer; got: "{}"'.format(frequency))
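The `list_frequencies()` fallback earlier in this file parses `stats/time_in_state`, whose lines are `<frequency> <time>` pairs; the stride-2 pairing it relies on (zipping an iterator with itself) can be shown in isolation, with an illustrative helper name:

```python
def frequencies_from_time_in_state(content):
    # time_in_state lines look like '<frequency> <time-in-10ms-units>'.
    # Zipping one iterator with itself consumes tokens two at a time,
    # so we keep only the frequency column and return it sorted.
    tokens = iter(content.split())
    return sorted(int(freq) for freq, _time in zip(tokens, tokens))
```

This works even when the file's newlines are lost, since `str.split()` flattens all whitespace into a single token stream.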
-    def get_max_frequency(self, cpu):
+    @asyn.asyncf
+    async def get_max_frequency(self, cpu):
         """
         Returns the max frequency currently set for the specified CPU.
@@ -376,9 +465,10 @@ class CpufreqModule(Module):
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
         sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_max_freq'.format(cpu)
-        return self.target.read_int(sysfile)
+        return await self.target.read_int.asyn(sysfile)

-    def set_max_frequency(self, cpu, frequency, exact=True):
+    @asyn.asyncf
+    async def set_max_frequency(self, cpu, frequency, exact=True):
         """
         Set's the minimum value for CPU frequency. Actual frequency will
         depend on the Governor used and may vary during execution. The value should be
@@ -397,7 +487,7 @@ class CpufreqModule(Module):
         """
         if isinstance(cpu, int):
             cpu = 'cpu{}'.format(cpu)
-        available_frequencies = self.list_frequencies(cpu)
+        available_frequencies = await self.list_frequencies.asyn(cpu)
         try:
             value = int(frequency)
             if exact and available_frequencies and value not in available_frequencies:
@@ -405,45 +495,53 @@ class CpufreqModule(Module):
                                                                                         value,
                                                                                         available_frequencies))
             sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_max_freq'.format(cpu)
-            self.target.write_value(sysfile, value)
+            await self.target.write_value.asyn(sysfile, value)
         except ValueError:
             raise ValueError('Frequency must be an integer; got: "{}"'.format(frequency))

-    def set_governor_for_cpus(self, cpus, governor, **kwargs):
+    @asyn.asyncf
+    async def set_governor_for_cpus(self, cpus, governor, **kwargs):
         """
         Set the governor for the specified list of CPUs.
         See https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt

         :param cpus: The list of CPU for which the governor is to be set.
         """
-        for cpu in cpus:
-            self.set_governor(cpu, governor, **kwargs)
+        await self.target.async_manager.map_concurrently(
+            self.set_governor(cpu, governor, **kwargs)
+            for cpu in sorted(set(cpus))
+        )

-    def set_frequency_for_cpus(self, cpus, freq, exact=False):
+    @asyn.asyncf
+    async def set_frequency_for_cpus(self, cpus, freq, exact=False):
         """
         Set the frequency for the specified list of CPUs.
         See https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt

         :param cpus: The list of CPU for which the frequency has to be set.
         """
-        for cpu in cpus:
-            self.set_frequency(cpu, freq, exact)
+        await self.target.async_manager.map_concurrently(
+            self.set_frequency(cpu, freq, exact)
+            for cpu in sorted(set(cpus))
+        )

-    def set_all_frequencies(self, freq):
+    @asyn.asyncf
+    async def set_all_frequencies(self, freq):
         """
         Set the specified (minimum) frequency for all the (online) CPUs
         """
         # pylint: disable=protected-access
-        return self.target._execute_util(
+        return await self.target._execute_util.asyn(
             'cpufreq_set_all_frequencies {}'.format(freq),
             as_root=True)

-    def get_all_frequencies(self):
+    @asyn.asyncf
+    async def get_all_frequencies(self):
         """
         Get the current frequency for all the (online) CPUs
         """
         # pylint: disable=protected-access
-        output = self.target._execute_util(
+        output = await self.target._execute_util.asyn(
             'cpufreq_get_all_frequencies', as_root=True)
         frequencies = {}
         for x in output.splitlines():
@@ -453,32 +551,34 @@ class CpufreqModule(Module):
             frequencies[kv[0]] = kv[1]
         return frequencies

-    def set_all_governors(self, governor):
+    @asyn.asyncf
+    async def set_all_governors(self, governor):
         """
         Set the specified governor for all the (online) CPUs
         """
         try:
             # pylint: disable=protected-access
-            return self.target._execute_util(
+            return await self.target._execute_util.asyn(
                 'cpufreq_set_all_governors {}'.format(governor),
                 as_root=True)
         except TargetStableError as e:
             if ("echo: I/O error" in str(e) or
                 "write error: Invalid argument" in str(e)):
-                cpus_unsupported = [c for c in self.target.list_online_cpus()
-                                    if governor not in self.list_governors(c)]
+                cpus_unsupported = [c for c in await self.target.list_online_cpus.asyn()
+                                    if governor not in await self.list_governors.asyn(c)]
                 raise TargetStableError("Governor {} unsupported for CPUs {}".format(
                     governor, cpus_unsupported))
             else:
                 raise

-    def get_all_governors(self):
+    @asyn.asyncf
+    async def get_all_governors(self):
         """
         Get the current governor for all the (online) CPUs
         """
         # pylint: disable=protected-access
-        output = self.target._execute_util(
+        output = await self.target._execute_util.asyn(
             'cpufreq_get_all_governors', as_root=True)
         governors = {}
         for x in output.splitlines():
@@ -488,14 +588,16 @@ class CpufreqModule(Module):
             governors[kv[0]] = kv[1]
         return governors

-    def trace_frequencies(self):
+    @asyn.asyncf
+    async def trace_frequencies(self):
         """
         Report current frequencies on trace file
         """
         # pylint: disable=protected-access
-        return self.target._execute_util('cpufreq_trace_all_frequencies', as_root=True)
+        return await self.target._execute_util.asyn('cpufreq_trace_all_frequencies', as_root=True)
def get_affected_cpus(self, cpu): @asyn.asyncf
async def get_affected_cpus(self, cpu):
""" """
Get the online CPUs that share a frequency domain with the given CPU Get the online CPUs that share a frequency domain with the given CPU
""" """
@ -504,10 +606,12 @@ class CpufreqModule(Module):
sysfile = '/sys/devices/system/cpu/{}/cpufreq/affected_cpus'.format(cpu) sysfile = '/sys/devices/system/cpu/{}/cpufreq/affected_cpus'.format(cpu)
return [int(c) for c in self.target.read_value(sysfile).split()] content = await self.target.read_value.asyn(sysfile)
return [int(c) for c in content.split()]
@memoized @asyn.asyncf
def get_related_cpus(self, cpu): @asyn.memoized_method
async def get_related_cpus(self, cpu):
""" """
Get the CPUs that share a frequency domain with the given CPU Get the CPUs that share a frequency domain with the given CPU
""" """
@ -516,10 +620,11 @@ class CpufreqModule(Module):
sysfile = '/sys/devices/system/cpu/{}/cpufreq/related_cpus'.format(cpu) sysfile = '/sys/devices/system/cpu/{}/cpufreq/related_cpus'.format(cpu)
return [int(c) for c in self.target.read_value(sysfile).split()] return [int(c) for c in (await self.target.read_value.asyn(sysfile)).split()]
@memoized @asyn.asyncf
def get_driver(self, cpu): @asyn.memoized_method
async def get_driver(self, cpu):
""" """
Get the name of the driver used by this cpufreq policy. Get the name of the driver used by this cpufreq policy.
""" """
@ -528,15 +633,16 @@ class CpufreqModule(Module):
sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_driver'.format(cpu) sysfile = '/sys/devices/system/cpu/{}/cpufreq/scaling_driver'.format(cpu)
return self.target.read_value(sysfile).strip() return (await self.target.read_value.asyn(sysfile)).strip()
def iter_domains(self): @asyn.asyncf
async def iter_domains(self):
""" """
Iterate over the frequency domains in the system Iterate over the frequency domains in the system
""" """
cpus = set(range(self.target.number_of_cpus)) cpus = set(range(self.target.number_of_cpus))
while cpus: while cpus:
cpu = next(iter(cpus)) # pylint: disable=stop-iteration-return cpu = next(iter(cpus)) # pylint: disable=stop-iteration-return
domain = self.target.cpufreq.get_related_cpus(cpu) domain = await self.target.cpufreq.get_related_cpus.asyn(cpu)
yield domain yield domain
cpus = cpus.difference(domain) cpus = cpus.difference(domain)
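The pattern driving most of this diff is `@asyn.asyncf`: a method becomes a coroutine but stays callable synchronously, with the awaitable form reachable as `.asyn`. A minimal sketch of that dual interface (the `asyncf` helper and `read_freq` function here are hypothetical, not devlib's actual implementation):

```python
import asyncio
import functools

def asyncf(f):
    """Expose coroutine function f both as a plain blocking call
    and as an awaitable reachable via the .asyn attribute."""
    @functools.wraps(f)
    def blocking(*args, **kwargs):
        # Synchronous callers get a private event loop run to completion.
        return asyncio.run(f(*args, **kwargs))
    blocking.asyn = f
    return blocking

@asyncf
async def read_freq(freqs, cpu):
    await asyncio.sleep(0)  # stand-in for target I/O
    return freqs[cpu]

freqs = {0: 1400000, 1: 2100000}
print(read_freq(freqs, 0))                    # blocking style
print(asyncio.run(read_freq.asyn(freqs, 1)))  # async style
```

devlib's real decorator also deals with bound methods and event-loop reuse; this sketch only shows the calling convention that the converted methods above rely on.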

View File

@@ -19,7 +19,10 @@ from operator import attrgetter
 from pprint import pformat

 from devlib.module import Module
+from devlib.exception import TargetStableError
 from devlib.utils.types import integer, boolean
+from devlib.utils.misc import memoized
+import devlib.utils.asyn as asyn


 class CpuidleState(object):
@@ -57,19 +60,23 @@ class CpuidleState(object):
         self.id = self.target.path.basename(self.path)
         self.cpu = self.target.path.basename(self.target.path.dirname(path))

-    def enable(self):
-        self.set('disable', 0)
+    @asyn.asyncf
+    async def enable(self):
+        await self.set.asyn('disable', 0)

-    def disable(self):
-        self.set('disable', 1)
+    @asyn.asyncf
+    async def disable(self):
+        await self.set.asyn('disable', 1)

-    def get(self, prop):
+    @asyn.asyncf
+    async def get(self, prop):
         property_path = self.target.path.join(self.path, prop)
-        return self.target.read_value(property_path)
+        return await self.target.read_value.asyn(property_path)

-    def set(self, prop, value):
+    @asyn.asyncf
+    async def set(self, prop, value):
         property_path = self.target.path.join(self.path, prop)
-        self.target.write_value(property_path, value)
+        await self.target.write_value.asyn(property_path, value)

     def __eq__(self, other):
         if isinstance(other, CpuidleState):
@@ -94,8 +101,9 @@ class Cpuidle(Module):
     root_path = '/sys/devices/system/cpu/cpuidle'

     @staticmethod
-    def probe(target):
-        return target.file_exists(Cpuidle.root_path)
+    @asyn.asyncf
+    async def probe(target):
+        return await target.file_exists.asyn(Cpuidle.root_path)

     def __init__(self, target):
         super(Cpuidle, self).__init__(target)
@@ -146,32 +154,67 @@ class Cpuidle(Module):
                 return s
         raise ValueError('Cpuidle state {} does not exist'.format(state))

-    def enable(self, state, cpu=0):
-        self.get_state(state, cpu).enable()
+    @asyn.asyncf
+    async def enable(self, state, cpu=0):
+        await self.get_state(state, cpu).enable.asyn()

-    def disable(self, state, cpu=0):
-        self.get_state(state, cpu).disable()
+    @asyn.asyncf
+    async def disable(self, state, cpu=0):
+        await self.get_state(state, cpu).disable.asyn()

-    def enable_all(self, cpu=0):
-        for state in self.get_states(cpu):
-            state.enable()
+    @asyn.asyncf
+    async def enable_all(self, cpu=0):
+        await self.target.async_manager.concurrently(
+            state.enable.asyn()
+            for state in self.get_states(cpu)
+        )

-    def disable_all(self, cpu=0):
-        for state in self.get_states(cpu):
-            state.disable()
+    @asyn.asyncf
+    async def disable_all(self, cpu=0):
+        await self.target.async_manager.concurrently(
+            state.disable.asyn()
+            for state in self.get_states(cpu)
+        )

-    def perturb_cpus(self):
+    @asyn.asyncf
+    async def perturb_cpus(self):
         """
         Momentarily wake each CPU. Ensures cpu_idle events in trace file.
         """
         # pylint: disable=protected-access
-        self.target._execute_util('cpuidle_wake_all_cpus')
+        await self.target._execute_util.asyn('cpuidle_wake_all_cpus')

-    def get_driver(self):
-        return self.target.read_value(self.target.path.join(self.root_path, 'current_driver'))
+    @asyn.asyncf
+    async def get_driver(self):
+        return await self.target.read_value.asyn(self.target.path.join(self.root_path, 'current_driver'))

-    def get_governor(self):
+    @memoized
+    def list_governors(self):
+        """Returns a list of supported idle governors."""
+        sysfile = self.target.path.join(self.root_path, 'available_governors')
+        output = self.target.read_value(sysfile)
+        return output.strip().split()
+
+    @asyn.asyncf
+    async def get_governor(self):
+        """Returns the currently selected idle governor."""
         path = self.target.path.join(self.root_path, 'current_governor_ro')
-        if not self.target.file_exists(path):
+        if not await self.target.file_exists.asyn(path):
             path = self.target.path.join(self.root_path, 'current_governor')
-        return self.target.read_value(path)
+        return await self.target.read_value.asyn(path)
+
+    def set_governor(self, governor):
+        """
+        Set the idle governor for the system.
+
+        :param governor: The name of the governor to be used. This must be
+        supported by the specific device.
+
+        :raises TargetStableError if governor is not supported by the CPU, or
+        if, for some reason, the governor could not be set.
+        """
+        supported = self.list_governors()
+        if governor not in supported:
+            raise TargetStableError('Governor {} not supported'.format(governor))
+        sysfile = self.target.path.join(self.root_path, 'current_governor')
+        self.target.write_value(sysfile, governor)
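`enable_all` and `disable_all` above now hand a generator of awaitables to `async_manager.concurrently`, so every idle state is toggled at once instead of one sysfs write at a time. In spirit that helper behaves like `asyncio.gather`; a sketch under that assumption (the `concurrently` helper and state dictionary here are illustrative, not devlib's):

```python
import asyncio

async def concurrently(awaitables):
    # Run all awaitables at once and return their results in order.
    return await asyncio.gather(*awaitables)

async def set_disable(states, state, value):
    await asyncio.sleep(0)  # stand-in for a sysfs write
    states[state] = value

async def disable_all(states):
    await concurrently(
        set_disable(states, state, 1)
        for state in list(states)
    )

states = {'C0': 0, 'C1': 0, 'C2': 0}
asyncio.run(disable_all(states))
print(states)  # every idle state now disabled
```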

View File

@@ -13,8 +13,11 @@
 # limitations under the License.

 import re
+import logging

+import devlib.utils.asyn as asyn
 from devlib.module import Module
+from devlib.exception import TargetStableCalledProcessError


 class TripPoint(object):
     def __init__(self, zone, _id):
@@ -27,19 +30,22 @@ class TripPoint(object):
     def target(self):
         return self.zone.target

-    def get_temperature(self):
+    @asyn.asyncf
+    async def get_temperature(self):
         """Returns the currently configured temperature of the trip point"""
         temp_file = self.target.path.join(self.zone.path, self.temp_node)
-        return self.target.read_int(temp_file)
+        return await self.target.read_int.asyn(temp_file)

-    def set_temperature(self, temperature):
+    @asyn.asyncf
+    async def set_temperature(self, temperature):
         temp_file = self.target.path.join(self.zone.path, self.temp_node)
-        self.target.write_value(temp_file, temperature)
+        await self.target.write_value.asyn(temp_file, temperature)

-    def get_type(self):
+    @asyn.asyncf
+    async def get_type(self):
         """Returns the type of trip point"""
         type_file = self.target.path.join(self.zone.path, self.type_node)
-        return self.target.read_value(type_file)
+        return await self.target.read_value.asyn(type_file)


 class ThermalZone(object):
     def __init__(self, target, root, _id):
@@ -47,28 +53,80 @@ class ThermalZone(object):
         self.name = 'thermal_zone' + _id
         self.path = target.path.join(root, self.name)
         self.trip_points = {}
+        self.type = self.target.read_value(self.target.path.join(self.path, 'type'))

         for entry in self.target.list_directory(self.path, as_root=target.is_rooted):
             re_match = re.match('^trip_point_([0-9]+)_temp', entry)
             if re_match is not None:
-                self.add_trip_point(re_match.group(1))
+                self._add_trip_point(re_match.group(1))

-    def add_trip_point(self, _id):
+    def _add_trip_point(self, _id):
         self.trip_points[int(_id)] = TripPoint(self, _id)

-    def is_enabled(self):
+    @asyn.asyncf
+    async def is_enabled(self):
         """Returns a boolean representing the 'mode' of the thermal zone"""
-        value = self.target.read_value(self.target.path.join(self.path, 'mode'))
+        value = await self.target.read_value.asyn(self.target.path.join(self.path, 'mode'))
         return value == 'enabled'

-    def set_enabled(self, enabled=True):
+    @asyn.asyncf
+    async def set_enabled(self, enabled=True):
         value = 'enabled' if enabled else 'disabled'
-        self.target.write_value(self.target.path.join(self.path, 'mode'), value)
+        await self.target.write_value.asyn(self.target.path.join(self.path, 'mode'), value)

-    def get_temperature(self):
+    @asyn.asyncf
+    async def get_temperature(self):
         """Returns the temperature of the thermal zone"""
-        temp_file = self.target.path.join(self.path, 'temp')
-        return self.target.read_int(temp_file)
+        sysfs_temperature_file = self.target.path.join(self.path, 'temp')
+        return await self.target.read_int.asyn(sysfs_temperature_file)
+
+    @asyn.asyncf
+    async def get_policy(self):
+        """Returns the policy of the thermal zone"""
+        temp_file = self.target.path.join(self.path, 'policy')
+        return await self.target.read_value.asyn(temp_file)
+
+    @asyn.asyncf
+    async def set_policy(self, policy):
+        """
+        Sets the policy of the thermal zone
+
+        :params policy: Thermal governor name
+        :type policy: str
+        """
+        await self.target.write_value.asyn(self.target.path.join(self.path, 'policy'), policy)
+
+    @asyn.asyncf
+    async def get_offset(self):
+        """Returns the temperature offset of the thermal zone"""
+        offset_file = self.target.path.join(self.path, 'offset')
+        return await self.target.read_value.asyn(offset_file)
+
+    @asyn.asyncf
+    async def set_offset(self, offset):
+        """
+        Sets the temperature offset in milli-degrees of the thermal zone
+
+        :params offset: Temperature offset in milli-degrees
+        :type offset: int
+        """
+        await self.target.write_value.asyn(self.target.path.join(self.path, 'offset'), offset)
+
+    @asyn.asyncf
+    async def set_emul_temp(self, offset):
+        """
+        Sets the emulated temperature in milli-degrees of the thermal zone
+
+        :params offset: Emulated temperature in milli-degrees
+        :type offset: int
+        """
+        await self.target.write_value.asyn(self.target.path.join(self.path, 'emul_temp'), offset)
+
+    @asyn.asyncf
+    async def get_available_policies(self):
+        """Returns the policies available for the thermal zone"""
+        temp_file = self.target.path.join(self.path, 'available_policies')
+        return await self.target.read_value.asyn(temp_file)


 class ThermalModule(Module):
     name = 'thermal'
@@ -83,6 +141,9 @@ class ThermalModule(Module):
     def __init__(self, target):
         super(ThermalModule, self).__init__(target)

+        self.logger = logging.getLogger(self.name)
+        self.logger.debug('Initialized [%s] module', self.name)
+
         self.zones = {}
         self.cdevs = []
@@ -93,15 +154,44 @@ class ThermalModule(Module):
                 continue

             if re_match.group(1) == 'thermal_zone':
-                self.add_thermal_zone(re_match.group(2))
+                self._add_thermal_zone(re_match.group(2))
             elif re_match.group(1) == 'cooling_device':
                 # TODO
                 pass

-    def add_thermal_zone(self, _id):
+    def _add_thermal_zone(self, _id):
         self.zones[int(_id)] = ThermalZone(self.target, self.thermal_root, _id)

     def disable_all_zones(self):
         """Disables all the thermal zones in the target"""
         for zone in self.zones.values():
             zone.set_enabled(False)
+
+    @asyn.asyncf
+    async def get_all_temperatures(self, error='raise'):
+        """
+        Returns dictionary with current reading of all thermal zones.
+
+        :params error: Sensor read error handling (raise or ignore)
+        :type error: str
+
+        :returns: a dictionary in the form: {tz_type:temperature}
+        """
+
+        async def get_temperature_noexcep(item):
+            tzid, tz = item
+            try:
+                temperature = await tz.get_temperature.asyn()
+            except TargetStableCalledProcessError as e:
+                if error == 'raise':
+                    raise e
+                elif error == 'ignore':
+                    self.logger.warning(f'Skipping thermal_zone_id={tzid} thermal_zone_type={tz.type} error="{e}"')
+                    return None
+                else:
+                    raise ValueError(f'Unknown error parameter value: {error}')
+            return temperature
+
+        tz_temps = await self.target.async_manager.map_concurrently(get_temperature_noexcep, self.zones.items())
+        return {tz.type: temperature for (tzid, tz), temperature in tz_temps.items() if temperature is not None}
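`get_all_temperatures` maps a fallible read over every zone concurrently and filters out failed sensors when `error='ignore'`. The shape of that pattern, sketched with plain `asyncio` (the zone names and the `map_concurrently` helper are assumptions, not devlib's API):

```python
import asyncio

async def map_concurrently(f, items):
    # Return {item: result} with all calls running concurrently.
    items = list(items)
    results = await asyncio.gather(*(f(i) for i in items))
    return dict(zip(items, results))

async def read_temp(zone):
    await asyncio.sleep(0)  # stand-in for a sysfs read
    if zone == 'tz_broken':
        raise OSError('sensor read failed')
    return 45000

async def read_all(zones, error='ignore'):
    async def noexcept(zone):
        try:
            return await read_temp(zone)
        except OSError:
            if error == 'raise':
                raise
            return None  # dropped from the final dict
    temps = await map_concurrently(noexcept, zones)
    return {z: t for z, t in temps.items() if t is not None}

print(asyncio.run(read_all(['tz0', 'tz_broken', 'tz1'])))
```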

View File

@@ -21,6 +21,7 @@ from subprocess import CalledProcessError

 from devlib.module import HardRestModule, BootModule, FlashModule
 from devlib.exception import TargetError, TargetStableError, HostError
+from devlib.utils.misc import safe_extract
 from devlib.utils.serial_port import open_serial_connection, pulse_dtr, write_characters
 from devlib.utils.uefi import UefiMenu, UefiConfig
 from devlib.utils.uboot import UbootMenu
@@ -354,7 +355,7 @@ class VersatileExpressFlashModule(FlashModule):
                 validate_image_bundle(bundle)
                 self.logger.debug('Extracting {} into {}...'.format(bundle, self.vemsd_mount))
                 with tarfile.open(bundle) as tar:
-                    tar.extractall(self.vemsd_mount)
+                    safe_extract(tar, self.vemsd_mount)

     def _overlay_images(self, images):
         for dest, src in images.items():
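Swapping `tar.extractall` for `safe_extract` closes the classic tarfile path-traversal hole, where a member named `../something` escapes the extraction root (CVE-2007-4559). A minimal sketch of such a guard; devlib's actual helper may differ:

```python
import os
import tarfile

def safe_extract(tar, dest):
    """Extract tar under dest, refusing members that would escape
    the destination directory (e.g. names containing '..')."""
    dest = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        # Reject any member whose resolved path leaves dest.
        if os.path.commonpath([dest, target]) != dest:
            raise ValueError('Blocked path traversal: {}'.format(member.name))
    tar.extractall(dest)
```

Python 3.12+ offers the built-in `filter='data'` argument to `extractall` for the same purpose; the explicit check above works on older interpreters too.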

View File

@@ -1,4 +1,4 @@
-# Copyright 2015-2018 ARM Limited
+# Copyright 2015-2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,9 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-from __future__ import division

 import os
-import sys
 import tempfile
 import time

 import pexpect

View File

@@ -19,7 +19,7 @@ import shutil
 import time
 import types
 import shlex
-from pipes import quote
+from shlex import quote

 from devlib.exception import TargetStableError
 from devlib.host import PACKAGE_BIN_DIRECTORY
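`pipes.quote` was an undocumented alias that disappeared along with the `pipes` module in Python 3.13; `shlex.quote` is the supported spelling with the same behaviour, which is why several files in this series make the swap:

```python
from shlex import quote

# Safe characters pass through untouched; anything else is single-quoted
# so the shell treats it as a literal string.
args = ['echo', "it's a test", '$HOME']
cmd = ' '.join(quote(a) for a in args)
print(cmd)
```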

File diff suppressed because it is too large

View File

@@ -19,6 +19,7 @@ Utility functions for working with Android devices through adb.
 """
 # pylint: disable=E1103
+import functools
 import glob
 import logging
 import os
@@ -30,19 +31,16 @@ import tempfile
 import time
 import uuid
 import zipfile
+import threading

 from collections import defaultdict
 from io import StringIO
 from lxml import etree
+from shlex import quote

-try:
-    from shlex import quote
-except ImportError:
-    from pipes import quote
-
-from devlib.exception import TargetTransientError, TargetStableError, HostError, TargetTransientCalledProcessError, TargetStableCalledProcessError
+from devlib.exception import TargetTransientError, TargetStableError, HostError, TargetTransientCalledProcessError, TargetStableCalledProcessError, AdbRootError
 from devlib.utils.misc import check_output, which, ABI_MAP, redirect_streams, get_subprocess
-from devlib.connection import ConnectionBase, AdbBackgroundCommand, PopenBackgroundCommand, PopenTransferManager
+from devlib.connection import ConnectionBase, AdbBackgroundCommand, PopenTransferHandle

 logger = logging.getLogger('android')
@@ -91,16 +89,6 @@ INTENT_FLAGS = {
     'ACTIVITY_CLEAR_TASK' : 0x00008000
 }

-# Initialized in functions near the botton of the file
-android_home = None
-platform_tools = None
-adb = None
-aapt = None
-aapt_version = None
-fastboot = None


 class AndroidProperties(object):

     def __init__(self, text):
@@ -160,13 +148,15 @@ class ApkInfo(object):
         self._apk_path = None
         self._activities = None
         self._methods = None
+        self._aapt = _ANDROID_ENV.get_env('aapt')
+        self._aapt_version = _ANDROID_ENV.get_env('aapt_version')

         if path:
             self.parse(path)

     # pylint: disable=too-many-branches
     def parse(self, apk_path):
-        _check_env()
-        output = self._run([aapt, 'dump', 'badging', apk_path])
+        output = self._run([self._aapt, 'dump', 'badging', apk_path])
         for line in output.split('\n'):
             if line.startswith('application-label:'):
                 self.label = line.split(':')[1].strip().replace('\'', '')
@@ -206,8 +196,8 @@ class ApkInfo(object):
     @property
     def activities(self):
         if self._activities is None:
-            cmd = [aapt, 'dump', 'xmltree', self._apk_path]
-            if aapt_version == 2:
+            cmd = [self._aapt, 'dump', 'xmltree', self._apk_path]
+            if self._aapt_version == 2:
                 cmd += ['--file']
             cmd += ['AndroidManifest.xml']
             matched_activities = self.activity_regex.finditer(self._run(cmd))
@@ -225,7 +215,7 @@ class ApkInfo(object):
                 extracted = z.extract('classes.dex', tmp_dir)
             except KeyError:
                 return []

-            dexdump = os.path.join(os.path.dirname(aapt), 'dexdump')
+            dexdump = os.path.join(os.path.dirname(self._aapt), 'dexdump')
             command = [dexdump, '-l', 'xml', extracted]
             dump = self._run(command)
@@ -234,20 +224,22 @@ class ApkInfo(object):
         parser = etree.XMLParser(encoding='utf-8', recover=True)
         xml_tree = etree.parse(StringIO(dump), parser)

-        package = next((i for i in xml_tree.iter('package')
-                        if i.attrib['name'] == self.package), None)
+        package = []
+        for i in xml_tree.iter('package'):
+            if i.attrib['name'] == self.package:
+                package.append(i)

-        self._methods = [(meth.attrib['name'], klass.attrib['name'])
-                         for klass in package.iter('class')
-                         for meth in klass.iter('method')] if package else []
+        for elem in package:
+            self._methods.extend([(meth.attrib['name'], klass.attrib['name'])
+                                  for klass in elem.iter('class')
+                                  for meth in klass.iter('method')])
         return self._methods

     def _run(self, command):
         logger.debug(' '.join(command))
         try:
             output = subprocess.check_output(command, stderr=subprocess.STDOUT)
-            if sys.version_info[0] == 3:
-                output = output.decode(sys.stdout.encoding or 'utf-8', 'replace')
+            output = output.decode(sys.stdout.encoding or 'utf-8', 'replace')
         except subprocess.CalledProcessError as e:
             raise HostError('Error while running "{}":\n{}'
                             .format(command, e.output))
@@ -258,7 +250,7 @@ class AdbConnection(ConnectionBase):

     # maintains the count of parallel active connections to a device, so that
     # adb disconnect is not invoked untill all connections are closed
-    active_connections = defaultdict(int)
+    active_connections = (threading.Lock(), defaultdict(int))
     # Track connected as root status per device
     _connected_as_root = defaultdict(lambda: None)
     default_timeout = 10
@@ -281,30 +273,53 @@ class AdbConnection(ConnectionBase):
         self._connected_as_root[self.device] = state

     # pylint: disable=unused-argument
-    def __init__(self, device=None, timeout=None, platform=None, adb_server=None,
-                 adb_as_root=False, connection_attempts=MAX_ATTEMPTS,
-                 poll_transfers=False,
-                 start_transfer_poll_delay=30,
-                 total_transfer_timeout=3600,
-                 transfer_poll_period=30,):
-        super().__init__()
+    def __init__(
+        self,
+        device=None,
+        timeout=None,
+        platform=None,
+        adb_server=None,
+        adb_port=None,
+        adb_as_root=False,
+        connection_attempts=MAX_ATTEMPTS,
+        poll_transfers=False,
+        start_transfer_poll_delay=30,
+        total_transfer_timeout=3600,
+        transfer_poll_period=30,
+    ):
+        super().__init__(
+            poll_transfers=poll_transfers,
+            start_transfer_poll_delay=start_transfer_poll_delay,
+            total_transfer_timeout=total_transfer_timeout,
+            transfer_poll_period=transfer_poll_period,
+        )
+        self.logger.debug('server=%s port=%s device=%s as_root=%s',
+                          adb_server, adb_port, device, adb_as_root)
         self.timeout = timeout if timeout is not None else self.default_timeout
         if device is None:
-            device = adb_get_device(timeout=timeout, adb_server=adb_server)
+            device = adb_get_device(timeout=timeout, adb_server=adb_server, adb_port=adb_port)
         self.device = device
         self.adb_server = adb_server
+        self.adb_port = adb_port
         self.adb_as_root = adb_as_root
-        self.poll_transfers = poll_transfers
-        if poll_transfers:
-            transfer_opts = {'start_transfer_poll_delay': start_transfer_poll_delay,
-                             'total_timeout': total_transfer_timeout,
-                             'poll_period': transfer_poll_period,
-                             }
-        self.transfer_mgr = PopenTransferManager(self, **transfer_opts) if poll_transfers else None
+        self._restore_to_adb_root = False
+        lock, nr_active = AdbConnection.active_connections
+        with lock:
+            nr_active[self.device] += 1

         if self.adb_as_root:
-            self.adb_root(enable=True)
-        adb_connect(self.device, adb_server=self.adb_server, attempts=connection_attempts)
-        AdbConnection.active_connections[self.device] += 1
+            try:
+                self._restore_to_adb_root = self._adb_root(enable=True)
+            # Exception will be raised if we are not the only connection
+            # active. adb_root() requires restarting the server, which is not
+            # acceptable if other connections are active and can apparently
+            # lead to commands hanging forever in some situations.
+            except AdbRootError:
+                pass
+        adb_connect(self.device, adb_server=self.adb_server, adb_port=self.adb_port, attempts=connection_attempts)
         self._setup_ls()
         self._setup_su()
@@ -323,12 +338,25 @@ class AdbConnection(ConnectionBase):
         paths = ' '.join(map(do_quote, paths))

         command = "{} {}".format(action, paths)
-        if timeout or not self.poll_transfers:
-            adb_command(self.device, command, timeout=timeout, adb_server=self.adb_server)
+        if timeout:
+            adb_command(self.device, command, timeout=timeout, adb_server=self.adb_server, adb_port=self.adb_port)
         else:
-            with self.transfer_mgr.manage(sources, dest, action):
-                bg_cmd = adb_command_background(self.device, command, adb_server=self.adb_server)
-                self.transfer_mgr.set_transfer_and_wait(bg_cmd)
+            popen = adb_command_popen(
+                device=self.device,
+                conn=self,
+                command=command,
+                adb_server=self.adb_server,
+                adb_port=self.adb_port,
+            )
+
+            handle = PopenTransferHandle(
+                manager=self.transfer_manager,
+                popen=popen,
+                dest=dest,
+                direction=action
+            )
+            with popen, self.transfer_manager.manage(sources, dest, action, handle):
+                popen.communicate()

     # pylint: disable=unused-argument
     def execute(self, command, timeout=None, check_exit_code=False,
@@ -337,7 +365,7 @@ class AdbConnection(ConnectionBase):
             as_root = False
         try:
             return adb_shell(self.device, command, timeout, check_exit_code,
-                             as_root, adb_server=self.adb_server, su_cmd=self.su_cmd)
+                             as_root, adb_server=self.adb_server, adb_port=self.adb_port, su_cmd=self.su_cmd)
         except subprocess.CalledProcessError as e:
             cls = TargetTransientCalledProcessError if will_succeed else TargetStableCalledProcessError
             raise cls(
@@ -356,26 +384,36 @@ class AdbConnection(ConnectionBase):
         if as_root and self.connected_as_root:
             as_root = False
         bg_cmd = self._background(command, stdout, stderr, as_root)
-        self._current_bg_cmds.add(bg_cmd)
         return bg_cmd

     def _background(self, command, stdout, stderr, as_root):
-        adb_shell, pid = adb_background_shell(self, command, stdout, stderr, as_root)
-        bg_cmd = AdbBackgroundCommand(
+        def make_init_kwargs(command):
+            adb_popen, pid = adb_background_shell(self, command, stdout, stderr, as_root)
+            return dict(
+                adb_popen=adb_popen,
+                pid=pid,
+            )
+
+        bg_cmd = AdbBackgroundCommand.from_factory(
             conn=self,
-            adb_popen=adb_shell,
-            pid=pid,
-            as_root=as_root
+            cmd=command,
+            as_root=as_root,
+            make_init_kwargs=make_init_kwargs,
         )
         return bg_cmd

     def _close(self):
-        AdbConnection.active_connections[self.device] -= 1
-        if AdbConnection.active_connections[self.device] <= 0:
+        lock, nr_active = AdbConnection.active_connections
+        with lock:
+            nr_active[self.device] -= 1
+            disconnect = nr_active[self.device] <= 0
+            if disconnect:
+                del nr_active[self.device]
+
+        if disconnect:
             if self.adb_as_root:
-                self.adb_root(enable=False)
-            adb_disconnect(self.device, self.adb_server)
-            del AdbConnection.active_connections[self.device]
+                self.adb_root(enable=self._restore_to_adb_root)
+            adb_disconnect(self.device, self.adb_server, self.adb_port)

     def cancel_running_command(self):
         # adbd multiplexes commands so that they don't interfer with each
@ -384,17 +422,40 @@ class AdbConnection(ConnectionBase):
pass pass
     def adb_root(self, enable=True):
+        self._adb_root(enable=enable)
+
+    def _adb_root(self, enable):
+        lock, nr_active = AdbConnection.active_connections
+        with lock:
+            can_root = nr_active[self.device] <= 1
+
+        if not can_root:
+            raise AdbRootError('Can only restart adb server if no other connection is active')
+
+        def is_rooted(out):
+            return 'adbd is already running as root' in out
+
         cmd = 'root' if enable else 'unroot'
-        output = adb_command(self.device, cmd, timeout=30, adb_server=self.adb_server)
-        if 'cannot run as root in production builds' in output:
-            raise TargetStableError(output)
+        try:
+            output = adb_command(self.device, cmd, timeout=30, adb_server=self.adb_server, adb_port=self.adb_port)
+        except subprocess.CalledProcessError as e:
+            was_rooted = is_rooted(e.output)
+            # Ignore if we're already root
+            if not was_rooted:
+                raise AdbRootError(str(e)) from e
+        else:
+            was_rooted = is_rooted(output)
+            # Check separately as this does not cause a error exit code.
+            if 'cannot run as root in production builds' in output:
+                raise AdbRootError(output)
         AdbConnection._connected_as_root[self.device] = enable
+        return was_rooted

     def wait_for_device(self, timeout=30):
-        adb_command(self.device, 'wait-for-device', timeout, self.adb_server)
+        adb_command(self.device, 'wait-for-device', timeout, self.adb_server, self.adb_port)

     def reboot_bootloader(self, timeout=30):
-        adb_command(self.device, 'reboot-bootloader', timeout, self.adb_server)
+        adb_command(self.device, 'reboot-bootloader', timeout, self.adb_server, self.adb_port)

     # Again, we need to handle boards where the default output format from ls is
     # single column *and* boards where the default output is multi-column.
@@ -403,7 +464,7 @@ class AdbConnection(ConnectionBase):
     def _setup_ls(self):
         command = "shell '(ls -1); echo \"\n$?\"'"
         try:
-            output = adb_command(self.device, command, timeout=self.timeout, adb_server=self.adb_server)
+            output = adb_command(self.device, command, timeout=self.timeout, adb_server=self.adb_server, adb_port=self.adb_port)
         except subprocess.CalledProcessError as e:
             raise HostError(
                 'Failed to set up ls command on Android device. Output:\n'
@@ -432,9 +493,9 @@ class AdbConnection(ConnectionBase):

 def fastboot_command(command, timeout=None, device=None):
-    _check_env()
     target = '-s {}'.format(quote(device)) if device else ''
-    full_command = 'fastboot {} {}'.format(target, command)
+    bin_ = _ANDROID_ENV.get_env('fastboot')
+    full_command = f'{bin_} {target} {command}'
     logger.debug(full_command)
     output, _ = check_output(full_command, timeout, shell=True)
     return output
@@ -445,7 +506,7 @@ def fastboot_flash_partition(partition, path_to_image):
     fastboot_command(command)
-def adb_get_device(timeout=None, adb_server=None):
+def adb_get_device(timeout=None, adb_server=None, adb_port=None):
     """
     Returns the serial number of a connected android device.
@@ -456,7 +517,7 @@ def adb_get_device(timeout=None, adb_server=None):
     # Ensure server is started so the 'daemon started successfully' message
     # doesn't confuse the parsing below
-    adb_command(None, 'start-server', adb_server=adb_server)
+    adb_command(None, 'start-server', adb_server=adb_server, adb_port=adb_port)

     # The output of calling adb devices consists of a heading line then
     # a list of the devices sperated by new line
@@ -464,7 +525,7 @@ def adb_get_device(timeout=None, adb_server=None):
     # then the output length is 2 + (1 for each device)
     start = time.time()
     while True:
-        output = adb_command(None, "devices", adb_server=adb_server).splitlines()  # pylint: disable=E1103
+        output = adb_command(None, "devices", adb_server=adb_server, adb_port=adb_port).splitlines()  # pylint: disable=E1103
         output_length = len(output)
         if output_length == 3:
             # output[1] is the 2nd line in the output which has the device name
@@ -481,8 +542,7 @@ def adb_get_device(timeout=None, adb_server=None):
         time.sleep(1)


-def adb_connect(device, timeout=None, attempts=MAX_ATTEMPTS, adb_server=None):
-    _check_env()
+def adb_connect(device, timeout=None, attempts=MAX_ATTEMPTS, adb_server=None, adb_port=None):
     tries = 0
     output = None
     while tries <= attempts:
@@ -494,50 +554,49 @@ def adb_connect(device, timeout=None, attempts=MAX_ATTEMPTS, adb_server=None):
         # adb connection may have gone "stale", resulting in adb blocking
         # indefinitely when making calls to the device. To avoid this,
         # always disconnect first.
-        adb_disconnect(device, adb_server)
-        adb_cmd = get_adb_command(None, 'connect', adb_server)
+        adb_disconnect(device, adb_server, adb_port)
+        adb_cmd = get_adb_command(None, 'connect', adb_server, adb_port)
         command = '{} {}'.format(adb_cmd, quote(device))
         logger.debug(command)
         output, _ = check_output(command, shell=True, timeout=timeout)
-        if _ping(device, adb_server):
+        if _ping(device, adb_server, adb_port):
             break
         time.sleep(10)
     else:  # did not connect to the device
-        message = 'Could not connect to {}'.format(device or 'a device')
+        message = f'Could not connect to {device or "a device"} at {adb_server}:{adb_port}'
         if output:
-            message += '; got: "{}"'.format(output)
+            message += f'; got: {output}'
         raise HostError(message)


-def adb_disconnect(device, adb_server=None):
-    _check_env()
+def adb_disconnect(device, adb_server=None, adb_port=None):
     if not device:
         return
-    if ":" in device and device in adb_list_devices(adb_server):
-        adb_cmd = get_adb_command(None, 'disconnect', adb_server)
+    if ":" in device and device in adb_list_devices(adb_server, adb_port):
+        adb_cmd = get_adb_command(None, 'disconnect', adb_server, adb_port)
         command = "{} {}".format(adb_cmd, device)
         logger.debug(command)
-        retval = subprocess.call(command, stdout=open(os.devnull, 'wb'), shell=True)
+        retval = subprocess.call(command, stdout=subprocess.DEVNULL, shell=True)
         if retval:
             raise TargetTransientError('"{}" returned {}'.format(command, retval))


-def _ping(device, adb_server=None):
-    _check_env()
-    adb_cmd = get_adb_command(device, 'shell', adb_server)
+def _ping(device, adb_server=None, adb_port=None):
+    adb_cmd = get_adb_command(device, 'shell', adb_server, adb_port)
     command = "{} {}".format(adb_cmd, quote('ls /data/local/tmp > /dev/null'))
     logger.debug(command)
-    result = subprocess.call(command, stderr=subprocess.PIPE, shell=True)
-    if not result:  # pylint: disable=simplifiable-if-statement
-        return True
-    else:
-        return False
+    try:
+        subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True)
+    except subprocess.CalledProcessError as e:
+        logger.debug(f'ADB ping failed: {e.stdout}')
+        return False
+    else:
+        return True
 # pylint: disable=too-many-locals
 def adb_shell(device, command, timeout=None, check_exit_code=False,
-              as_root=False, adb_server=None, su_cmd='su -c {}'):  # NOQA
-    _check_env()
+              as_root=False, adb_server=None, adb_port=None, su_cmd='su -c {}'):  # NOQA

     # On older combinations of ADB/Android versions, the adb host command always
     # exits with 0 if it was able to run the command on the target, even if the
@@ -545,18 +604,14 @@ def adb_shell(device, command, timeout=None, check_exit_code=False,
     # Homogenise this behaviour by running the command then echoing the exit
     # code of the executed command itself.
     command = r'({}); echo "\n$?"'.format(command)
+    command = su_cmd.format(quote(command)) if as_root else command

-    parts = ['adb']
-    if adb_server is not None:
-        parts += ['-H', adb_server]
-    if device is not None:
-        parts += ['-s', device]
-    parts += ['shell',
-              command if not as_root else su_cmd.format(quote(command))]
+    command = ('shell', command)
+    parts, env = _get_adb_parts(command, device, adb_server, adb_port, quote_adb=False)
+    env = {**os.environ, **env}

     logger.debug(' '.join(quote(part) for part in parts))
     try:
-        raw_output, error = check_output(parts, timeout, shell=False)
+        raw_output, error = check_output(parts, timeout, shell=False, env=env)
     except subprocess.CalledProcessError as e:
         raise TargetStableError(str(e))
@@ -606,40 +661,72 @@ def adb_background_shell(conn, command,
     """Runs the specified command in a subprocess, returning the the Popen object."""
     device = conn.device
     adb_server = conn.adb_server
+    adb_port = conn.adb_port
+    busybox = conn.busybox
+    orig_command = command

-    _check_env()
     stdout, stderr, command = redirect_streams(stdout, stderr, command)
     if as_root:
-        command = 'echo {} | su'.format(quote(command))
+        command = f'{busybox} printf "%s" {quote(command)} | su'

-    # Attach a unique UUID to the command line so it can be looked for without
-    # any ambiguity with ps
-    uuid_ = uuid.uuid4().hex
-    uuid_var = 'BACKGROUND_COMMAND_UUID={}'.format(uuid_)
-    command = "{} sh -c {}".format(uuid_var, quote(command))
+    def with_uuid(cmd):
+        # Attach a unique UUID to the command line so it can be looked for
+        # without any ambiguity with ps
+        uuid_ = uuid.uuid4().hex
+        # Unset the var, since not all connection types set it. This will avoid
+        # anyone depending on that value.
+        cmd = f'DEVLIB_CMD_UUID={uuid_}; unset DEVLIB_CMD_UUID; {cmd}'
+        # Ensure we have an sh -c layer so that the UUID will appear on the
+        # command line parameters of at least one command.
+        cmd = f'exec {busybox} sh -c {quote(cmd)}'
+        return (uuid_, cmd)

-    adb_cmd = get_adb_command(device, 'shell', adb_server)
-    full_command = '{} {}'.format(adb_cmd, quote(command))
+    # Freeze the command with SIGSTOP to avoid racing with PID detection.
+    command = f"{busybox} kill -STOP $$ && exec {busybox} sh -c {quote(command)}"
+    command_uuid, command = with_uuid(command)
+
+    adb_cmd = get_adb_command(device, 'shell', adb_server, adb_port)
+    full_command = f'{adb_cmd} {quote(command)}'
     logger.debug(full_command)
     p = subprocess.Popen(full_command, stdout=stdout, stderr=stderr, stdin=subprocess.PIPE, shell=True)

     # Out of band PID lookup, to avoid conflicting needs with stdout redirection
-    find_pid = '{} ps -A -o pid,args | grep {}'.format(conn.busybox, quote(uuid_var))
-    ps_out = conn.execute(find_pid)
-    pids = [
-        int(line.strip().split(' ', 1)[0])
-        for line in ps_out.splitlines()
-    ]
-    # The line we are looking for is the first one, since it was started before
-    # any look up command
-    pid = sorted(pids)[0]
+    grep_cmd = f'{busybox} grep {quote(command_uuid)}'
+    # Find the PID and release the blocked background command with SIGCONT.
+    # We get multiple PIDs:
+    # * One from the grep command itself, but we remove it with another grep command.
+    # * One for each sh -c layer in the command itself.
+    #
+    # For each of the parent layer, we issue SIGCONT as it is harmless and
+    # avoids having to rely on PID ordering (which could be misleading if PIDs
+    # got recycled).
+    find_pid = f'''pids=$({busybox} ps -A -o pid,args | {grep_cmd} | {busybox} grep -v {quote(grep_cmd)} | {busybox} awk '{{print $1}}') && {busybox} printf "%s" "$pids" && {busybox} kill -CONT $pids'''
+
+    excep = None
+    for _ in range(5):
+        try:
+            pids = conn.execute(find_pid, as_root=as_root)
+            # We choose the highest PID as the "control" PID. It actually does not
+            # really matter which one we pick, as they are all equivalent sh -c layers.
+            pid = max(map(int, pids.split()))
+        except TargetStableError:
+            raise
+        except Exception as e:
+            excep = e
+            time.sleep(10e-3)
+            continue
+        else:
+            break
+    else:
+        raise TargetTransientError(f'Could not detect PID of background command: {orig_command}') from excep

     return (p, pid)
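The PID-detection dance in this hunk (a UUID tag on the argv, SIGSTOP to freeze the shell, an out-of-band lookup, then SIGCONT to release it) can be sketched locally. This is a hypothetical, Linux-only illustration that scans `/proc` instead of calling `ps` on a remote target; `background_with_pid` and its variable names are invented for the example and are not devlib's API:

```python
import os
import signal
import subprocess
import time
import uuid

def _proc_state(pid):
    # The field after the comm field in /proc/<pid>/stat; 'T' means stopped.
    with open(f'/proc/{pid}/stat') as f:
        return f.read().rsplit(')', 1)[1].split()[0]

def background_with_pid(command):
    # Hypothetical helper mirroring the UUID/SIGSTOP/SIGCONT scheme above.
    tag = uuid.uuid4().hex
    # The variable is unset before the payload runs, so the UUID survives only
    # on sh's argv; SIGSTOP freezes sh so exec cannot race with the lookup.
    tagged = f'DEVLIB_CMD_UUID={tag}; unset DEVLIB_CMD_UUID; kill -STOP $$; exec {command}'
    popen = subprocess.Popen(['sh', '-c', tagged])

    # Out-of-band PID lookup: scan /proc for the UUID on a command line.
    pid = None
    for _ in range(1000):
        for entry in os.listdir('/proc'):
            if entry.isdigit():
                try:
                    with open(f'/proc/{entry}/cmdline', 'rb') as f:
                        argv = f.read()
                except OSError:
                    continue
                if tag.encode() in argv:
                    pid = int(entry)
                    break
        # Only proceed once the shell has actually stopped itself, so the
        # SIGCONT below cannot arrive before the SIGSTOP took effect.
        if pid is not None and _proc_state(pid) == 'T':
            break
        pid = None
        time.sleep(0.01)

    os.kill(pid, signal.SIGCONT)  # release the frozen shell
    return popen, pid
```

Because the UUID is unset before `exec`, it never leaks into the payload's environment; it exists only on the `sh -c` command line, which is exactly what the process-list scan matches.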
-def adb_kill_server(timeout=30, adb_server=None):
-    adb_command(None, 'kill-server', timeout, adb_server)
+def adb_kill_server(timeout=30, adb_server=None, adb_port=None):
+    adb_command(None, 'kill-server', timeout, adb_server, adb_port)


-def adb_list_devices(adb_server=None):
-    output = adb_command(None, 'devices', adb_server=adb_server)
+def adb_list_devices(adb_server=None, adb_port=None):
+    output = adb_command(None, 'devices', adb_server=adb_server, adb_port=adb_port)
     devices = []
     for line in output.splitlines():
         parts = [p.strip() for p in line.split()]
@@ -648,28 +735,38 @@ def adb_list_devices(adb_server=None):
     return devices
-def get_adb_command(device, command, adb_server=None):
-    _check_env()
-    device_string = ""
-    if adb_server != None:
-        device_string = ' -H {}'.format(adb_server)
-    device_string += ' -s {}'.format(device) if device else ''
-    return "adb{} {}".format(device_string, command)
+def _get_adb_parts(command, device=None, adb_server=None, adb_port=None, quote_adb=True):
+    _quote = quote if quote_adb else lambda x: x
+    parts = (
+        _ANDROID_ENV.get_env('adb'),
+        *(('-H', _quote(adb_server)) if adb_server is not None else ()),
+        *(('-P', _quote(str(adb_port))) if adb_port is not None else ()),
+        *(('-s', _quote(device)) if device is not None else ()),
+        *command,
+    )
+    env = {'LC_ALL': 'C'}
+    return (parts, env)


-def adb_command(device, command, timeout=None, adb_server=None):
-    full_command = get_adb_command(device, command, adb_server)
+def get_adb_command(device, command, adb_server=None, adb_port=None):
+    parts, env = _get_adb_parts((command,), device, adb_server, adb_port, quote_adb=True)
+    env = [quote(f'{name}={val}') for name, val in sorted(env.items())]
+    parts = [*env, *parts]
+    return ' '.join(parts)


+def adb_command(device, command, timeout=None, adb_server=None, adb_port=None):
+    full_command = get_adb_command(device, command, adb_server, adb_port)
     logger.debug(full_command)
     output, _ = check_output(full_command, timeout, shell=True)
     return output
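The conditional option assembly in the new `_get_adb_parts` can be illustrated standalone. A minimal sketch, with a plain `adb` string standing in for the binary path resolved from `ANDROID_HOME` (`build_adb_command` is a name invented here):

```python
from shlex import quote

def build_adb_command(command, device=None, adb_server=None, adb_port=None, adb='adb'):
    # Each option pair is only emitted when the corresponding value is set;
    # the unpacking of empty tuples makes absent options disappear entirely.
    parts = (
        adb,
        *(('-H', quote(adb_server)) if adb_server is not None else ()),
        *(('-P', quote(str(adb_port))) if adb_port is not None else ()),
        *(('-s', quote(device)) if device is not None else ()),
        command,
    )
    return ' '.join(parts)
```

For example, `build_adb_command('devices')` yields `adb devices`, while passing a server, port, and device serial yields `adb -H <server> -P <port> -s <serial> <command>`, matching the ordering adb expects for its global options.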
-def adb_command_background(device, command, adb_server=None):
-    full_command = get_adb_command(device, command, adb_server)
-    logger.debug(full_command)
-    proc = get_subprocess(full_command, shell=True)
-    cmd = PopenBackgroundCommand(proc)
-    return cmd
+def adb_command_popen(device, conn, command, adb_server=None, adb_port=None):
+    command = get_adb_command(device, command, adb_server, adb_port)
+    logger.debug(command)
+    popen = get_subprocess(command, shell=True)
+    return popen


 def grant_app_permissions(target, package):
@@ -693,121 +790,138 @@ def grant_app_permissions(target, package):
     # Messy environment initialisation stuff...
-class _AndroidEnvironment(object):
-
-    def __init__(self):
-        self.android_home = None
-        self.platform_tools = None
-        self.build_tools = None
-        self.adb = None
-        self.aapt = None
-        self.aapt_version = None
-        self.fastboot = None
-
-
-def _initialize_with_android_home(env):
-    logger.debug('Using ANDROID_HOME from the environment.')
-    env.android_home = android_home
-    env.platform_tools = os.path.join(android_home, 'platform-tools')
-    os.environ['PATH'] = env.platform_tools + os.pathsep + os.environ['PATH']
-    _init_common(env)
-    return env
-
-
-def _initialize_without_android_home(env):
-    adb_full_path = which('adb')
-    if adb_full_path:
-        env.adb = 'adb'
-    else:
-        raise HostError('ANDROID_HOME is not set and adb is not in PATH. '
-                        'Have you installed Android SDK?')
-    logger.debug('Discovering ANDROID_HOME from adb path.')
-    env.platform_tools = os.path.dirname(adb_full_path)
-    env.android_home = os.path.dirname(env.platform_tools)
-    _init_common(env)
-    return env
-
-
-def _init_common(env):
-    _discover_build_tools(env)
-    _discover_aapt(env)
-
-
-def _discover_build_tools(env):
-    logger.debug('ANDROID_HOME: {}'.format(env.android_home))
-    build_tools_directory = os.path.join(env.android_home, 'build-tools')
-    if os.path.isdir(build_tools_directory):
-        env.build_tools = build_tools_directory
-
-
-def _check_supported_aapt2(binary):
-    # At time of writing the version argument of aapt2 is not helpful as
-    # the output is only a placeholder that does not distinguish between versions
-    # with and without support for badging. Unfortunately aapt has been
-    # deprecated and fails to parse some valid apks so we will try to favour
-    # aapt2 if possible else will fall back to aapt.
-    # Try to execute the badging command and check if we get an expected error
-    # message as opposed to an unknown command error to determine if we have a
-    # suitable version.
-    cmd = '{} dump badging'.format(binary)
-    result = subprocess.run(cmd.encode('utf-8'), shell=True, stderr=subprocess.PIPE)
-    supported = bool(AAPT_BADGING_OUTPUT.search(result.stderr.decode('utf-8')))
-    msg = 'Found a {} aapt2 binary at: {}'
-    logger.debug(msg.format('supported' if supported else 'unsupported', binary))
-    return supported
-
-
-def _discover_aapt(env):
-    if env.build_tools:
-        aapt_path = ''
-        aapt2_path = ''
-        versions = os.listdir(env.build_tools)
-        for version in reversed(sorted(versions)):
-            if not os.path.isfile(aapt2_path):
-                aapt2_path = os.path.join(env.build_tools, version, 'aapt2')
-            if not os.path.isfile(aapt_path):
-                aapt_path = os.path.join(env.build_tools, version, 'aapt')
-                aapt_version = 1
-            # Use latest available version for aapt/appt2 but ensure at least one is valid.
-            if os.path.isfile(aapt2_path) or os.path.isfile(aapt_path):
-                break
-        # Use aapt2 only if present and we have a suitable version
-        if aapt2_path and _check_supported_aapt2(aapt2_path):
-            aapt_path = aapt2_path
-            aapt_version = 2
-        # Use the aapt version discoverted from build tools.
-        if aapt_path:
-            logger.debug('Using {} for version {}'.format(aapt_path, version))
-            env.aapt = aapt_path
-            env.aapt_version = aapt_version
-            return
-    # Try detecting aapt2 and aapt from PATH
-    if not env.aapt:
-        aapt2_path = which('aapt2')
-        if _check_supported_aapt2(aapt2_path):
-            env.aapt = aapt2_path
-            env.aapt_version = 2
-        else:
-            env.aapt = which('aapt')
-            env.aapt_version = 1
-    if not env.aapt:
-        raise HostError('aapt/aapt2 not found. Please make sure it is avaliable in PATH'
-                        ' or at least one Android platform is installed')
-
-
-def _check_env():
-    global android_home, platform_tools, adb, aapt, aapt_version  # pylint: disable=W0603
-    if not android_home:
-        android_home = os.getenv('ANDROID_HOME')
-        if android_home:
-            _env = _initialize_with_android_home(_AndroidEnvironment())
-        else:
-            _env = _initialize_without_android_home(_AndroidEnvironment())
-        android_home = _env.android_home
-        platform_tools = _env.platform_tools
-        adb = _env.adb
-        aapt = _env.aapt
-        aapt_version = _env.aapt_version
+class _AndroidEnvironment:
+    # Make the initialization lazy so that we don't trigger an exception if the
+    # user imports the module (directly or indirectly) without actually using
+    # anything from it
+    @property
+    @functools.lru_cache(maxsize=None)
+    def env(self):
+        android_home = os.getenv('ANDROID_HOME')
+        if android_home:
+            env = self._from_android_home(android_home)
+        else:
+            env = self._from_adb()
+        return env
+
+    def get_env(self, name):
+        return self.env[name]
+
+    @classmethod
+    def _from_android_home(cls, android_home):
+        logger.debug('Using ANDROID_HOME from the environment.')
+        platform_tools = os.path.join(android_home, 'platform-tools')
+        return {
+            'android_home': android_home,
+            'platform_tools': platform_tools,
+            'adb': os.path.join(platform_tools, 'adb'),
+            'fastboot': os.path.join(platform_tools, 'fastboot'),
+            **cls._init_common(android_home)
+        }
+
+    @classmethod
+    def _from_adb(cls):
+        adb_path = which('adb')
+        if adb_path:
+            logger.debug('Discovering ANDROID_HOME from adb path.')
+            platform_tools = os.path.dirname(adb_path)
+            android_home = os.path.dirname(platform_tools)
+            return {
+                'android_home': android_home,
+                'platform_tools': platform_tools,
+                'adb': adb_path,
+                'fastboot': which('fastboot'),
+                **cls._init_common(android_home)
+            }
+        else:
+            raise HostError('ANDROID_HOME is not set and adb is not in PATH. '
+                            'Have you installed Android SDK?')
+
+    @classmethod
+    def _init_common(cls, android_home):
+        logger.debug(f'ANDROID_HOME: {android_home}')
+        build_tools = cls._discover_build_tools(android_home)
+        return {
+            'build_tools': build_tools,
+            **cls._discover_aapt(build_tools)
+        }
+
+    @staticmethod
+    def _discover_build_tools(android_home):
+        build_tools = os.path.join(android_home, 'build-tools')
+        if os.path.isdir(build_tools):
+            return build_tools
+        else:
+            return None
+
+    @staticmethod
+    def _check_supported_aapt2(binary):
+        # At time of writing the version argument of aapt2 is not helpful as
+        # the output is only a placeholder that does not distinguish between versions
+        # with and without support for badging. Unfortunately aapt has been
+        # deprecated and fails to parse some valid apks so we will try to favour
+        # aapt2 if possible else will fall back to aapt.
+        # Try to execute the badging command and check if we get an expected error
+        # message as opposed to an unknown command error to determine if we have a
+        # suitable version.
+        result = subprocess.run([str(binary), 'dump', 'badging'], stdout=subprocess.DEVNULL, stderr=subprocess.PIPE, universal_newlines=True)
+        supported = bool(AAPT_BADGING_OUTPUT.search(result.stderr))
+        msg = 'Found a {} aapt2 binary at: {}'
+        logger.debug(msg.format('supported' if supported else 'unsupported', binary))
+        return supported
+
+    @classmethod
+    def _discover_aapt(cls, build_tools):
+        if build_tools:
+            def find_aapt2(version):
+                path = os.path.join(build_tools, version, 'aapt2')
+                if os.path.isfile(path) and cls._check_supported_aapt2(path):
+                    return (2, path)
+                else:
+                    return (None, None)
+
+            def find_aapt(version):
+                path = os.path.join(build_tools, version, 'aapt')
+                if os.path.isfile(path):
+                    return (1, path)
+                else:
+                    return (None, None)
+
+            versions = os.listdir(build_tools)
+            found = (
+                (version, finder(version))
+                for version in reversed(sorted(versions))
+                for finder in (find_aapt2, find_aapt)
+            )
+
+            for version, (aapt_version, aapt_path) in found:
+                if aapt_path:
+                    logger.debug(f'Using {aapt_path} for version {version}')
+                    return dict(
+                        aapt=aapt_path,
+                        aapt_version=aapt_version,
+                    )
+
+        # Try detecting aapt2 and aapt from PATH
+        aapt2_path = which('aapt2')
+        aapt_path = which('aapt')
+        if aapt2_path and cls._check_supported_aapt2(aapt2_path):
+            return dict(
+                aapt=aapt2_path,
+                aapt_version=2,
+            )
+        elif aapt_path:
+            return dict(
+                aapt=aapt_path,
+                aapt_version=1,
+            )
+        else:
+            raise HostError('aapt/aapt2 not found. Please make sure it is avaliable in PATH or at least one Android platform is installed')
 class LogcatMonitor(object):
     """
@@ -866,7 +980,7 @@ class LogcatMonitor(object):
         if self._logcat_format:
             logcat_cmd = "{} -v {}".format(logcat_cmd, quote(self._logcat_format))

-        logcat_cmd = get_adb_command(self.target.conn.device, logcat_cmd, self.target.adb_server)
+        logcat_cmd = get_adb_command(self.target.conn.device, logcat_cmd, self.target.adb_server, self.target.adb_port)
         logger.debug('logcat command ="{}"'.format(logcat_cmd))
         self._logcat = pexpect.spawn(logcat_cmd, logfile=self._logfile, encoding='utf-8')
@@ -959,3 +1073,6 @@ class LogcatMonitor(object):
         return [line for line in self.get_log()[next_line_num:]
                 if re.match(regexp, line)]
+
+
+_ANDROID_ENV = _AndroidEnvironment()

devlib/utils/asyn.py (new file, 990 lines)

@@ -0,0 +1,990 @@
# Copyright 2013-2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Async-related utilities
"""
import abc
import asyncio
import contextvars
import functools
import itertools
import contextlib
import pathlib
import queue
import os.path
import inspect
import sys
import threading
from concurrent.futures import ThreadPoolExecutor
from weakref import WeakSet, WeakKeyDictionary
from greenlet import greenlet
def create_task(awaitable, name=None):
if isinstance(awaitable, asyncio.Task):
task = awaitable
else:
task = asyncio.create_task(awaitable)
if name is None:
name = getattr(awaitable, '__qualname__', None)
task.name = name
return task
def _close_loop(loop):
if loop is not None:
try:
loop.run_until_complete(loop.shutdown_asyncgens())
try:
shutdown_default_executor = loop.shutdown_default_executor
except AttributeError:
pass
else:
loop.run_until_complete(shutdown_default_executor())
finally:
loop.close()
class AsyncManager:
def __init__(self):
self.task_tree = dict()
self.resources = dict()
def track_access(self, access):
"""
Register the given ``access`` to have been handled by the current
async task.
:param access: Access that were done.
:type access: ConcurrentAccessBase
This allows :func:`concurrently` to check that concurrent tasks did not
step on each other's toes.
"""
try:
task = asyncio.current_task()
except RuntimeError:
pass
else:
self.resources.setdefault(task, set()).add(access)
async def concurrently(self, awaitables):
"""
Await concurrently for the given awaitables, and cancel them as soon as
one raises an exception.
"""
awaitables = list(awaitables)
# Avoid creating asyncio.Tasks when it's not necessary, as it will
    # disable the blocking path optimization of Target._execute_async()
# that uses blocking calls as long as there is only one asyncio.Task
# running on the event loop.
if len(awaitables) == 1:
return [await awaitables[0]]
tasks = list(map(create_task, awaitables))
current_task = asyncio.current_task()
task_tree = self.task_tree
try:
node = task_tree[current_task]
except KeyError:
is_root_task = True
node = set()
else:
is_root_task = False
task_tree[current_task] = node
task_tree.update({
child: set()
for child in tasks
})
node.update(tasks)
try:
return await asyncio.gather(*tasks)
except BaseException:
for task in tasks:
task.cancel()
raise
finally:
def get_children(task):
immediate_children = task_tree[task]
return frozenset(
itertools.chain(
[task],
immediate_children,
itertools.chain.from_iterable(
map(get_children, immediate_children)
)
)
)
# Get the resources created during the execution of each subtask
# (directly or indirectly)
resources = {
task: frozenset(
itertools.chain.from_iterable(
self.resources.get(child, [])
for child in get_children(task)
)
)
for task in tasks
}
for (task1, resources1), (task2, resources2) in itertools.combinations(resources.items(), 2):
for res1, res2 in itertools.product(resources1, resources2):
if issubclass(res2.__class__, res1.__class__) and res1.overlap_with(res2):
raise RuntimeError(
'Overlapping resources manipulated in concurrent async tasks: {} (task {}) and {} (task {})'.format(res1, task1.name, res2, task2.name)
)
if is_root_task:
self.resources.clear()
task_tree.clear()
async def map_concurrently(self, f, keys):
"""
Similar to :meth:`concurrently`,
but maps the given function ``f`` on the given ``keys``.
:return: A dictionary with ``keys`` as keys, and function result as
values.
"""
keys = list(keys)
return dict(zip(
keys,
await self.concurrently(map(f, keys))
))
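Ignoring the resource-overlap bookkeeping done by `concurrently`, the semantics of `map_concurrently` reduce to `asyncio.gather` plus a zip. A minimal sketch (the `double` worker is invented for the example):

```python
import asyncio

async def map_concurrently(f, keys):
    # Run f over all keys concurrently and return {key: result}, preserving
    # the association between inputs and outputs.
    keys = list(keys)
    results = await asyncio.gather(*map(f, keys))
    return dict(zip(keys, results))

async def main():
    async def double(x):
        await asyncio.sleep(0)
        return x * 2
    return await map_concurrently(double, [1, 2, 3])

# asyncio.run(main()) → {1: 2, 2: 4, 3: 6}
```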
def compose(*coros):
"""
Compose coroutines, feeding the output of each as the input of the next
one.
``await compose(f, g)(x)`` is equivalent to ``await f(await g(x))``
.. note:: In Haskell, ``compose f g h`` would be equivalent to ``f <=< g <=< h``
"""
async def f(*args, **kwargs):
empty_dict = {}
for coro in reversed(coros):
x = coro(*args, **kwargs)
# Allow mixing corountines and regular functions
if asyncio.isfuture(x):
x = await x
args = [x]
kwargs = empty_dict
return x
return f
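A runnable sketch of the composition rule. Note this version also awaits bare coroutine objects via `iscoroutine`, whereas the snippet above only checks `isfuture`; the `demo`, `f`, and `g` names are invented for the example:

```python
import asyncio

def compose(*coros):
    # await compose(f, g)(x) is equivalent to await f(await g(x)).
    async def composed(*args, **kwargs):
        x = None
        for coro in reversed(coros):
            x = coro(*args, **kwargs)
            # Allow mixing coroutine functions and plain functions.
            if asyncio.iscoroutine(x) or asyncio.isfuture(x):
                x = await x
            args, kwargs = [x], {}
        return x
    return composed

async def demo():
    async def g(x):
        return x + 1

    def f(x):
        return x * 10

    # f(g(4)) == (4 + 1) * 10
    return await compose(f, g)(4)
```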
class _AsyncPolymorphicFunction:
"""
A callable that allows exposing both a synchronous and asynchronous API.
When called, the blocking synchronous operation is called. The ```asyn``
attribute gives access to the asynchronous version of the function, and all
the other attribute access will be redirected to the async function.
"""
def __init__(self, asyn, blocking):
self.asyn = asyn
self.blocking = blocking
functools.update_wrapper(self, asyn)
def __get__(self, *args, **kwargs):
return self.__class__(
asyn=self.asyn.__get__(*args, **kwargs),
blocking=self.blocking.__get__(*args, **kwargs),
)
# Ensure inspect.iscoroutinefunction() does not detect us as being async,
# since __call__ is not.
@property
def __code__(self):
return self.__call__.__code__
def __call__(self, *args, **kwargs):
return self.blocking(*args, **kwargs)
def __getattr__(self, attr):
return getattr(self.asyn, attr)
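The dual sync/async API pattern can be exercised with a stripped-down version of the class (the descriptor and `__code__` plumbing omitted; `read_async`/`read_blocking` are invented for the example):

```python
import asyncio
import functools

class AsyncPolymorphicFunction:
    # Calling the object runs the blocking version, while the .asyn attribute
    # exposes the coroutine function for callers already inside an event loop.
    def __init__(self, asyn, blocking):
        self.asyn = asyn
        self.blocking = blocking
        functools.update_wrapper(self, asyn)

    def __call__(self, *args, **kwargs):
        return self.blocking(*args, **kwargs)

async def read_async(path):
    return 'async:' + path

def read_blocking(path):
    return 'blocking:' + path

read = AsyncPolymorphicFunction(read_async, read_blocking)
```

Synchronous callers just call `read(...)`, while async code awaits `read.asyn(...)`, so one name serves both worlds without wrapping everything in `asyncio.run`.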
class memoized_method:
"""
    Decorator to memoize a method.
It works for:
* async methods (coroutine functions)
* non-async methods
* method already decorated with :func:`devlib.asyn.asyncf`.
.. note:: This decorator does not rely on hacks to hash unhashable data. If
such input is required, it will either have to be coerced to a hashable
first (e.g. converting a list to a tuple), or the code of
:func:`devlib.asyn.memoized_method` will have to be updated to do so.
"""
def __init__(self, f):
memo = self
sig = inspect.signature(f)
def bind(self, *args, **kwargs):
bound = sig.bind(self, *args, **kwargs)
bound.apply_defaults()
key = (bound.args[1:], tuple(sorted(bound.kwargs.items())))
return (key, bound.args, bound.kwargs)
def get_cache(self):
try:
cache = self.__dict__[memo.name]
except KeyError:
cache = {}
self.__dict__[memo.name] = cache
return cache
if inspect.iscoroutinefunction(f):
@functools.wraps(f)
async def wrapper(self, *args, **kwargs):
cache = get_cache(self)
key, args, kwargs = bind(self, *args, **kwargs)
try:
return cache[key]
except KeyError:
x = await f(*args, **kwargs)
cache[key] = x
return x
else:
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
cache = get_cache(self)
key, args, kwargs = bind(self, *args, **kwargs)
try:
return cache[key]
except KeyError:
x = f(*args, **kwargs)
cache[key] = x
return x
self.f = wrapper
self._name = f.__name__
@property
def name(self):
return '__memoization_cache_of_' + self._name
def __call__(self, *args, **kwargs):
return self.f(*args, **kwargs)
def __get__(self, obj, owner=None):
return self.f.__get__(obj, owner)
def __set__(self, obj, value):
raise RuntimeError("Cannot monkey-patch a memoized function")
def __set_name__(self, owner, name):
self._name = name
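A reduced, non-async sketch of the same memoization idea, showing that the cache lives on each instance and is keyed by the bound call arguments (the `Counter` class and the cache attribute name are invented for the example):

```python
import functools
import inspect

class memoized_method:
    # Non-async sketch: cache per instance, keyed by bound arguments
    # (which must therefore be hashable).
    def __init__(self, f):
        self._name = f.__name__
        sig = inspect.signature(f)

        @functools.wraps(f)
        def wrapper(obj, *args, **kwargs):
            bound = sig.bind(obj, *args, **kwargs)
            bound.apply_defaults()
            key = (bound.args[1:], tuple(sorted(bound.kwargs.items())))
            cache = obj.__dict__.setdefault('__memoization_cache_of_' + self._name, {})
            if key not in cache:
                cache[key] = f(*bound.args, **bound.kwargs)
            return cache[key]

        self.f = wrapper

    def __get__(self, obj, owner=None):
        return self.f.__get__(obj, owner)

class Counter:
    def __init__(self):
        self.calls = 0

    @memoized_method
    def square(self, x):
        self.calls += 1
        return x * x
```

Calling `square(3)` twice on the same instance computes once and serves the second call from the per-instance cache, while a new argument or a new instance triggers a fresh computation.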
class _Genlet(greenlet):
"""
Generator-like object based on ``greenlets``. It allows nested :class:`_Genlet`
to make their parent yield on their behalf, as if callees could decide to
be annotated ``yield from`` without modifying the caller.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Forward the context variables to the greenlet, which will not happen
# by default:
# https://greenlet.readthedocs.io/en/latest/contextvars.html
self.gr_context = contextvars.copy_context()
@classmethod
def from_coro(cls, coro):
"""
Create a :class:`_Genlet` from a given coroutine, treating it as a
generator.
"""
f = lambda value: self.consume_coro(coro, value)
self = cls(f)
return self
def consume_coro(self, coro, value):
"""
Send ``value`` to ``coro`` then consume the coroutine, passing all its
yielded actions to the enclosing :class:`_Genlet`. This allows crossing
blocking calls layers as if they were async calls with `await`.
"""
excep = None
while True:
try:
if excep is None:
future = coro.send(value)
else:
future = coro.throw(excep)
except StopIteration as e:
return e.value
else:
parent = self.parent
# Switch back to the consumer that returns the values via
# send()
try:
value = parent.switch(future)
except BaseException as e:
excep = e
value = None
else:
excep = None
@classmethod
def get_enclosing(cls):
"""
Get the immediately enclosing :class:`_Genlet` in the callstack or
``None``.
"""
g = greenlet.getcurrent()
while not (isinstance(g, cls) or g is None):
g = g.parent
return g
def _send_throw(self, value, excep):
self.parent = greenlet.getcurrent()
# Switch back to the function yielding values
if excep is None:
result = self.switch(value)
else:
result = self.throw(excep)
if self:
return result
else:
raise StopIteration(result)
def gen_send(self, x):
"""
Similar to generators' ``send`` method.
"""
return self._send_throw(x, None)
def gen_throw(self, x):
"""
Similar to generators' ``throw`` method.
"""
return self._send_throw(None, x)
class _AwaitableGenlet:
"""
Wrap a coroutine with a :class:`_Genlet` and wrap that to be awaitable.
"""
@classmethod
def wrap_coro(cls, coro):
async def coro_f():
# Make sure every new task will be instrumented since a task cannot
# yield futures on behalf of another task. If that were to happen,
# the task B trying to do a nested yield would switch back to task
# A, asking to yield on its behalf. Since the event loop would be
# currently handling task B, nothing would handle task A trying to
# yield on behalf of B, leading to a deadlock.
loop = asyncio.get_running_loop()
_install_task_factory(loop)
# Create a top-level _AwaitableGenlet that all nested runs will use
# to yield their futures
_coro = cls(coro)
return await _coro
return coro_f()
def __init__(self, coro):
self._coro = coro
def __await__(self):
coro = self._coro
is_started = inspect.iscoroutine(coro) and coro.cr_running
def genf():
gen = _Genlet.from_coro(coro)
value = None
excep = None
# The coroutine is already started, so we need to dispatch the
# value from the upcoming send() to the gen without running
# gen first.
if is_started:
try:
value = yield
except BaseException as e:
excep = e
while True:
try:
if excep is None:
future = gen.gen_send(value)
else:
future = gen.gen_throw(excep)
except StopIteration as e:
return e.value
finally:
_set_current_context(gen.gr_context)
try:
value = yield future
except BaseException as e:
excep = e
value = None
else:
excep = None
gen = genf()
if is_started:
# Start the generator so it waits at the first yield point
gen.gen_send(None)
return gen
def _allow_nested_run(coro):
if _Genlet.get_enclosing() is None:
return _AwaitableGenlet.wrap_coro(coro)
else:
return coro
def allow_nested_run(coro):
"""
Wrap the coroutine ``coro`` such that nested calls to :func:`run` will be
allowed.
.. warning:: The coroutine needs to be consumed in the same OS thread it
was created in.
"""
return _allow_nested_run(coro)
# This thread runs coroutines that cannot be run on the event loop in the
# current thread. Instead, they are scheduled in a separate thread where
# another event loop has been set up, so we can wrap coroutines before
# dispatching them there.
_CORO_THREAD_EXECUTOR = ThreadPoolExecutor(
# Allow for a ridiculously large number so that we will never end up
# queuing one job after another. This is critical as we could otherwise end
# up in deadlock, if a job triggers another job and waits for it.
max_workers=2**64,
)
def _check_executor_alive(executor):
try:
executor.submit(lambda: None)
except RuntimeError:
return False
else:
return True
_PATCHED_LOOP_LOCK = threading.Lock()
_PATCHED_LOOP = WeakSet()
def _install_task_factory(loop):
"""
Install a task factory on the given event ``loop`` so that top-level
coroutines are wrapped using :func:`allow_nested_run`. This ensures that
the nested :func:`run` infrastructure will be available.
"""
def install(loop):
if sys.version_info >= (3, 11):
def default_factory(loop, coro, context=None):
return asyncio.Task(coro, loop=loop, context=context)
else:
def default_factory(loop, coro, context=None):
return asyncio.Task(coro, loop=loop)
make_task = loop.get_task_factory() or default_factory
def factory(loop, coro, context=None):
# Make sure each Task will be able to yield on behalf of its nested
# await beneath blocking layers
coro = _AwaitableGenlet.wrap_coro(coro)
return make_task(loop, coro, context=context)
loop.set_task_factory(factory)
with _PATCHED_LOOP_LOCK:
if loop in _PATCHED_LOOP:
return
else:
install(loop)
_PATCHED_LOOP.add(loop)
def _set_current_context(ctx):
"""
Get all the variables from the passed ``ctx`` and set them in the current
context.
"""
for var, val in ctx.items():
var.set(val)
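The function above back-propagates context variables captured in one context into the current one. A small stdlib demonstration of that pattern (the variable name is illustrative):

```python
import contextvars

var = contextvars.ContextVar('var', default='outer')

def set_current_context(ctx):
    # Copy every variable captured in ctx into the *current* context,
    # mirroring the back-propagation done after running a coroutine.
    for v, val in ctx.items():
        v.set(val)

def run_in_copy():
    ctx = contextvars.copy_context()
    ctx.run(var.set, 'inner')      # mutation is confined to the copy...
    return ctx

ctx = run_in_copy()
assert var.get() == 'outer'        # ...so the caller does not see it yet
set_current_context(ctx)           # back-propagate the updates
assert var.get() == 'inner'
```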
class _CoroRunner(abc.ABC):
"""
ABC for an object that can execute multiple coroutines in a given
environment.
This allows running coroutines that rely on a shared environment, such as
the awaitables yielded by an async generator, which are all attached to a
single event loop.
"""
@abc.abstractmethod
def _run(self, coro):
pass
def run(self, coro):
# Ensure we have a fresh coroutine. inspect.getcoroutinestate() does not
# work on all objects that asyncio creates on some versions of Python, such
# as iterable_coroutine
assert not (inspect.iscoroutine(coro) and coro.cr_running)
return self._run(coro)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, tb):
pass
class _ThreadCoroRunner(_CoroRunner):
"""
Run the coroutines on a thread picked from a
:class:`concurrent.futures.ThreadPoolExecutor`.
Critically, this allows running multiple coroutines out of the same thread,
which will be reserved until the runner ``__exit__`` method is called.
"""
def __init__(self, future, jobq, resq):
self._future = future
self._jobq = jobq
self._resq = resq
@staticmethod
def _thread_f(jobq, resq):
def handle_jobs(runner):
while True:
job = jobq.get()
if job is None:
return
else:
ctx, coro = job
try:
value = ctx.run(runner.run, coro)
except BaseException as e:
value = None
excep = e
else:
excep = None
resq.put((ctx, excep, value))
with _LoopCoroRunner(None) as runner:
handle_jobs(runner)
@classmethod
def from_executor(cls, executor):
jobq = queue.SimpleQueue()
resq = queue.SimpleQueue()
try:
future = executor.submit(cls._thread_f, jobq, resq)
except RuntimeError as e:
if _check_executor_alive(executor):
raise e
else:
raise RuntimeError('Devlib relies on nested asyncio implementation requiring threads. These threads are not available while shutting down the interpreter.')
return cls(
jobq=jobq,
resq=resq,
future=future,
)
def _run(self, coro):
ctx = contextvars.copy_context()
self._jobq.put((ctx, coro))
ctx, excep, value = self._resq.get()
_set_current_context(ctx)
if excep is None:
return value
else:
raise excep
def __exit__(self, *args, **kwargs):
self._jobq.put(None)
self._future.result()
class _LoopCoroRunner(_CoroRunner):
"""
Run a coroutine on the given event loop.
The passed event loop is assumed to not be running. If ``None`` is passed,
a new event loop will be created in ``__enter__`` and closed in
``__exit__``.
"""
def __init__(self, loop):
self.loop = loop
self._owned = False
def _run(self, coro):
loop = self.loop
# Back-propagate the contextvars that could have been modified by the
# coroutine. This could be handled by asyncio.Runner().run(...,
# context=...) or loop.create_task(..., context=...) but these APIs are
# only available since Python 3.11
ctx = None
async def capture_ctx():
nonlocal ctx
try:
return await _allow_nested_run(coro)
finally:
ctx = contextvars.copy_context()
try:
return loop.run_until_complete(capture_ctx())
finally:
_set_current_context(ctx)
def __enter__(self):
loop = self.loop
if loop is None:
owned = True
loop = asyncio.new_event_loop()
else:
owned = False
asyncio.set_event_loop(loop)
self.loop = loop
self._owned = owned
return self
def __exit__(self, *args, **kwargs):
if self._owned:
asyncio.set_event_loop(None)
_close_loop(self.loop)
class _GenletCoroRunner(_CoroRunner):
"""
Run a coroutine assuming one of the parent coroutines was wrapped with
:func:`allow_nested_run`.
"""
def __init__(self, g):
self._g = g
def _run(self, coro):
return self._g.consume_coro(coro, None)
def _get_runner():
executor = _CORO_THREAD_EXECUTOR
g = _Genlet.get_enclosing()
try:
loop = asyncio.get_running_loop()
except RuntimeError:
loop = None
# We have a coroutine wrapped with allow_nested_run() higher in the
# callstack that we will be able to use as a conduit to yield the
# futures.
if g is not None:
return _GenletCoroRunner(g)
# No event loop setup, so we can just make our own
elif loop is None:
return _LoopCoroRunner(None)
# There is an event loop setup, but it is not currently running so we
# can just re-use it.
#
# TODO: for now, this path is dead since asyncio.get_running_loop() will
# always raise a RuntimeError if the loop is not running, even if
# asyncio.set_event_loop() was used.
elif not loop.is_running():
return _LoopCoroRunner(loop)
# There is an event loop currently running in our thread, so we cannot
# just create another event loop and install it since asyncio forbids
# that. The only choice is doing this in a separate thread that we
# fully control.
else:
return _ThreadCoroRunner.from_executor(executor)
def run(coro):
"""
Similar to :func:`asyncio.run` but can be called while an event loop is
running if a coroutine higher in the callstack has been wrapped using
:func:`allow_nested_run`.
Note that context variables from :mod:`contextvars` will be available in
the coroutine, and unlike with :func:`asyncio.run`, any update to them will
be reflected in the context of the caller. This allows context variable
updates to cross an arbitrary number of run layers, as if all those layers
were just part of the same coroutine.
"""
runner = _get_runner()
with runner as runner:
return runner.run(coro)
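For context, a stdlib-only sketch of the problem :func:`run` solves. Nothing below is devlib API; it only shows why plain :func:`asyncio.run` is not enough when blocking helpers are called from async code:

```python
import asyncio

def blocking_helper():
    # A synchronous helper that internally needs to run a coroutine.
    # With plain asyncio this only works when no event loop is running in
    # the current thread; devlib's run() lifts that restriction for
    # coroutines wrapped with allow_nested_run().
    async def coro():
        return 42
    return asyncio.run(coro())

async def caller():
    try:
        blocking_helper()
    except RuntimeError as e:
        # asyncio.run() refuses to nest inside a running loop
        return str(e)

assert blocking_helper() == 42     # fine outside any loop
msg = asyncio.run(caller())
assert 'event loop' in msg         # nesting fails with vanilla asyncio
```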
def asyncf(f):
"""
Decorator used to turn a coroutine into a blocking function, with an
optional asynchronous API.
**Example**::
@asyncf
async def foo(x):
await do_some_async_things(x)
return x
# Blocking call, just as if the function was synchronous, except it may
# use asynchronous code inside, e.g. to do concurrent operations.
foo(42)
# Asynchronous API, foo.asyn being a coroutine
await foo.asyn(42)
This allows the same implementation to be used both as a blocking call, for
ease of use and backward compatibility, and as a coroutine for callers that
can deal with awaitables.
"""
@functools.wraps(f)
def blocking(*args, **kwargs):
# Since run() needs a coroutine, make sure we provide one
async def wrapper():
x = f(*args, **kwargs)
# Async generators have to be consumed and accumulated in a list
# before crossing a blocking boundary.
if inspect.isasyncgen(x):
def genf():
asyncgen = x.__aiter__()
while True:
try:
yield run(asyncgen.__anext__())
except StopAsyncIteration:
return
return genf()
else:
return await x
return run(wrapper())
return _AsyncPolymorphicFunction(
asyn=f,
blocking=blocking,
)
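A minimal sketch of the decorator's shape, using :func:`asyncio.run` in place of devlib's nestable ``run()``. This is only the outline of the dual API, not the real implementation:

```python
import asyncio
import functools

def asyncf(f):
    # Calling the wrapper blocks, while the original coroutine function
    # stays reachable as the ``.asyn`` attribute.
    @functools.wraps(f)
    def blocking(*args, **kwargs):
        return asyncio.run(f(*args, **kwargs))
    blocking.asyn = f
    return blocking

@asyncf
async def foo(x):
    await asyncio.sleep(0)
    return x

assert foo(42) == 42                    # blocking call
assert asyncio.run(foo.asyn(42)) == 42  # async API
```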
class _AsyncPolymorphicCMState:
def __init__(self):
self.nesting = 0
self.runner = None
def _update_nesting(self, n):
x = self.nesting
assert x >= 0
x = x + n
self.nesting = x
return bool(x)
def _get_runner(self):
runner = self.runner
if runner is None:
assert not self.nesting
runner = _get_runner()
runner.__enter__()
self.runner = runner
return runner
def _cleanup_runner(self, force=False):
def cleanup():
self.runner = None
if runner is not None:
runner.__exit__(None, None, None)
runner = self.runner
if force:
cleanup()
else:
assert runner is not None
if not self._update_nesting(0):
cleanup()
class _AsyncPolymorphicCM:
"""
Wrap an async context manager such that it exposes a synchronous API as
well for backward compatibility.
"""
def __init__(self, async_cm):
self.cm = async_cm
self._state = threading.local()
def _get_state(self):
try:
return self._state.x
except AttributeError:
state = _AsyncPolymorphicCMState()
self._state.x = state
return state
def _delete_state(self):
try:
del self._state.x
except AttributeError:
pass
def __aenter__(self, *args, **kwargs):
return self.cm.__aenter__(*args, **kwargs)
def __aexit__(self, *args, **kwargs):
return self.cm.__aexit__(*args, **kwargs)
@staticmethod
def _exit(state):
state._update_nesting(-1)
state._cleanup_runner()
def __enter__(self, *args, **kwargs):
state = self._get_state()
runner = state._get_runner()
# Increase the nesting count _before_ we start running the
# coroutine, in case it is a recursive context manager
state._update_nesting(1)
try:
coro = self.cm.__aenter__(*args, **kwargs)
return runner.run(coro)
except BaseException:
self._exit(state)
raise
def __exit__(self, *args, **kwargs):
coro = self.cm.__aexit__(*args, **kwargs)
state = self._get_state()
runner = state._get_runner()
try:
return runner.run(coro)
finally:
self._exit(state)
def __del__(self):
self._get_state()._cleanup_runner(force=True)
def asynccontextmanager(f):
"""
Same as :func:`contextlib.asynccontextmanager` except that it can also be
used with a regular ``with`` statement for backward compatibility.
"""
f = contextlib.asynccontextmanager(f)
@functools.wraps(f)
def wrapper(*args, **kwargs):
cm = f(*args, **kwargs)
return _AsyncPolymorphicCM(cm)
return wrapper
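The dual sync/async context manager idea can be sketched with the standard library alone. This simplified version drives ``__aenter__``/``__aexit__`` on a private event loop when used with a plain ``with``; unlike the real ``_AsyncPolymorphicCM`` it does not reuse runners or support nesting:

```python
import asyncio
import contextlib
import functools

def asynccontextmanager(f):
    f = contextlib.asynccontextmanager(f)

    class _DualCM:
        # Keep one loop per synchronous use so the underlying async
        # generator stays suspended on the same loop between enter and exit.
        def __init__(self, acm):
            self._acm = acm
        async def __aenter__(self):
            return await self._acm.__aenter__()
        async def __aexit__(self, *exc):
            return await self._acm.__aexit__(*exc)
        def __enter__(self):
            self._loop = asyncio.new_event_loop()
            return self._loop.run_until_complete(self._acm.__aenter__())
        def __exit__(self, *exc):
            try:
                return self._loop.run_until_complete(self._acm.__aexit__(*exc))
            finally:
                self._loop.close()

    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return _DualCM(f(*args, **kwargs))
    return wrapper

@asynccontextmanager
async def resource():
    yield 'ready'

with resource() as r:                # synchronous use
    assert r == 'ready'

async def use():
    async with resource() as r:      # asynchronous use
        return r

assert asyncio.run(use()) == 'ready'
```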
class ConcurrentAccessBase(abc.ABC):
"""
Abstract Base Class for resources tracked by :func:`concurrently`.
"""
@abc.abstractmethod
def overlap_with(self, other):
"""
Return ``True`` if the resource overlaps with the given one.
:param other: Resources that should not overlap with ``self``.
:type other: devlib.utils.asyn.ConcurrentAccessBase
.. note:: It is guaranteed that ``other`` will be an instance of the same
class as ``self``.
"""
class PathAccess(ConcurrentAccessBase):
"""
Concurrent resource representing a file access.
:param namespace: Identifier of the namespace of the path. One of "target" or "host".
:type namespace: str
:param path: Normalized path to the file.
:type path: str
:param mode: Opening mode of the file. Can be ``"r"`` for read and ``"w"``
for writing.
:type mode: str
"""
def __init__(self, namespace, path, mode):
assert namespace in ('host', 'target')
self.namespace = namespace
assert mode in ('r', 'w')
self.mode = mode
self.path = os.path.abspath(path) if namespace == 'host' else os.path.normpath(path)
def overlap_with(self, other):
path1 = pathlib.Path(self.path).resolve()
path2 = pathlib.Path(other.path).resolve()
return (
self.namespace == other.namespace and
'w' in (self.mode, other.mode) and
(
path1 == path2 or
path1 in path2.parents or
path2 in path1.parents
)
)
def __str__(self):
mode = {
'r': 'read',
'w': 'write',
}[self.mode]
return '{} ({})'.format(self.path, mode)
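The overlap rule used by ``PathAccess.overlap_with`` can be demonstrated on its own. The helper name below is illustrative, not devlib API:

```python
import pathlib

def paths_overlap(path1, path2, modes):
    # Two accesses conflict when at least one of them is a write and one
    # path equals or contains the other, matching overlap_with() above.
    p1 = pathlib.Path(path1).resolve()
    p2 = pathlib.Path(path2).resolve()
    return 'w' in modes and (p1 == p2 or p1 in p2.parents or p2 in p1.parents)

assert paths_overlap('/tmp/a', '/tmp/a/b', modes=('w', 'r'))    # parent vs child
assert not paths_overlap('/tmp/a', '/tmp/b', modes=('w', 'w'))  # disjoint paths
assert not paths_overlap('/tmp/a', '/tmp/a', modes=('r', 'r'))  # read/read never conflicts
```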

@@ -1,4 +1,4 @@
-# Copyright 2018 ARM Limited
+# Copyright 2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -54,16 +54,12 @@ responsibility of the calling code to ensure that the file is closed properly.
 '''
 import csv
-import sys
 from contextlib import contextmanager

 @contextmanager
 def csvwriter(filepath, *args, **kwargs):
-    if sys.version_info[0] == 3:
-        wfh = open(filepath, 'w', newline='')
-    else:
-        wfh = open(filepath, 'wb')
+    wfh = open(filepath, 'w', newline='')
     try:
         yield csv.writer(wfh, *args, **kwargs)
@@ -73,10 +69,7 @@ def csvwriter(filepath, *args, **kwargs):
 @contextmanager
 def csvreader(filepath, *args, **kwargs):
-    if sys.version_info[0] == 3:
-        fh = open(filepath, 'r', newline='')
-    else:
-        fh = open(filepath, 'rb')
+    fh = open(filepath, 'r', newline='')
     try:
         yield csv.reader(fh, *args, **kwargs)
@@ -85,16 +78,10 @@ def csvreader(filepath, *args, **kwargs):
 def create_writer(filepath, *args, **kwargs):
-    if sys.version_info[0] == 3:
-        wfh = open(filepath, 'w', newline='')
-    else:
-        wfh = open(filepath, 'wb')
+    wfh = open(filepath, 'w', newline='')
     return csv.writer(wfh, *args, **kwargs), wfh

 def create_reader(filepath, *args, **kwargs):
-    if sys.version_info[0] == 3:
-        fh = open(filepath, 'r', newline='')
-    else:
-        fh = open(filepath, 'rb')
+    fh = open(filepath, 'r', newline='')
     return csv.reader(fh, *args, **kwargs), fh
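The new single code path in the diff above can be exercised directly. Passing ``newline=''`` lets the csv module control line endings itself on every platform:

```python
import csv
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def csvwriter(filepath, *args, **kwargs):
    # Same shape as the helper in the diff above: open in text mode with
    # newline='' and make sure the handle is closed on exit.
    wfh = open(filepath, 'w', newline='')
    try:
        yield csv.writer(wfh, *args, **kwargs)
    finally:
        wfh.close()

path = os.path.join(tempfile.mkdtemp(), 'out.csv')
with csvwriter(path) as writer:
    writer.writerow(['cpu', 'freq'])
    writer.writerow([0, 1800000])

with open(path, newline='') as fh:
    rows = list(csv.reader(fh))
assert rows == [['cpu', 'freq'], ['0', '1800000']]
```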

@@ -1,4 +1,4 @@
-# Copyright 2013-2018 ARM Limited
+# Copyright 2013-2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,28 +18,26 @@
 Miscellaneous functions that don't fit anywhere else.
 """
-from __future__ import division
 from contextlib import contextmanager
 from functools import partial, reduce, wraps
 from itertools import groupby
 from operator import itemgetter
-from weakref import WeakKeyDictionary, WeakSet
+from weakref import WeakSet
+
+from ruamel.yaml import YAML
 import ctypes
+import functools
 import logging
 import os
 import pkgutil
 import random
 import re
-import signal
 import string
 import subprocess
 import sys
 import threading
 import types
-import wrapt
 import warnings
+import wrapt

 try:
@@ -47,11 +45,7 @@ try:
 except AttributeError:
     from contextlib2 import ExitStack

-try:
-    from shlex import quote
-except ImportError:
-    from pipes import quote
+from shlex import quote

 from past.builtins import basestring

 # pylint: disable=redefined-builtin
@@ -151,22 +145,16 @@ def preexec_function():
 check_output_logger = logging.getLogger('check_output')

-# Popen is not thread safe. If two threads attempt to call it at the same time,
-# one may lock up. See https://bugs.python.org/issue12739.
-check_output_lock = threading.RLock()

 def get_subprocess(command, **kwargs):
     if 'stdout' in kwargs:
         raise ValueError('stdout argument not allowed, it will be overridden.')
-    with check_output_lock:
-        process = subprocess.Popen(command,
-                                   stdout=subprocess.PIPE,
-                                   stderr=subprocess.PIPE,
-                                   stdin=subprocess.PIPE,
-                                   preexec_fn=preexec_function,
-                                   **kwargs)
-    return process
+    return subprocess.Popen(command,
+                            stdout=subprocess.PIPE,
+                            stderr=subprocess.PIPE,
+                            stdin=subprocess.PIPE,
+                            preexec_fn=preexec_function,
+                            **kwargs)

 def check_subprocess_output(process, timeout=None, ignore=None, inputtext=None):
@@ -181,21 +169,22 @@ def check_subprocess_output(process, timeout=None, ignore=None, inputtext=None):
         message = 'Invalid value for ignore parameter: "{}"; must be an int or a list'
         raise ValueError(message.format(ignore))

-    try:
-        output, error = process.communicate(inputtext, timeout=timeout)
-    except subprocess.TimeoutExpired as e:
-        timeout_expired = e
-    else:
-        timeout_expired = None
+    with process:
+        try:
+            output, error = process.communicate(inputtext, timeout=timeout)
+        except subprocess.TimeoutExpired as e:
+            timeout_expired = e
+        else:
+            timeout_expired = None

     # Currently errors=replace is needed as 0x8c throws an error
     output = output.decode(sys.stdout.encoding or 'utf-8', "replace") if output else ''
     error = error.decode(sys.stderr.encoding or 'utf-8', "replace") if error else ''

     if timeout_expired:
         raise TimeoutError(process.args, output='\n'.join([output, error]))

-    retcode = process.poll()
+    retcode = process.returncode
     if retcode and ignore != 'all' and retcode not in ignore:
         raise subprocess.CalledProcessError(retcode, process.args, output, error)
@@ -467,7 +456,7 @@ def escape_quotes(text):
     """
     Escape quotes, and escaped quotes, in the specified text.

-    .. note:: :func:`pipes.quote` should be favored where possible.
+    .. note:: :func:`shlex.quote` should be favored where possible.
     """
     return re.sub(r'\\("|\')', r'\\\\\1', text).replace('\'', '\\\'').replace('\"', '\\\"')
@@ -476,7 +465,7 @@ def escape_single_quotes(text):
     """
     Escape single quotes, and escaped single quotes, in the specified text.

-    .. note:: :func:`pipes.quote` should be favored where possible.
+    .. note:: :func:`shlex.quote` should be favored where possible.
     """
     return re.sub(r'\\("|\')', r'\\\\\1', text).replace('\'', '\'\\\'\'')
@@ -485,7 +474,7 @@ def escape_double_quotes(text):
     """
     Escape double quotes, and escaped double quotes, in the specified text.

-    .. note:: :func:`pipes.quote` should be favored where possible.
+    .. note:: :func:`shlex.quote` should be favored where possible.
     """
     return re.sub(r'\\("|\')', r'\\\\\1', text).replace('\"', '\\\"')
@@ -494,7 +483,7 @@ def escape_spaces(text):
     """
     Escape spaces in the specified text

-    .. note:: :func:`pipes.quote` should be favored where possible.
+    .. note:: :func:`shlex.quote` should be favored where possible.
     """
     return text.replace(' ', '\\ ')
@@ -602,13 +591,33 @@ class LoadSyntaxError(Exception):
         return message.format(self.filepath, self.lineno, self.message)

+def load_struct_from_yaml(filepath):
+    """
+    Parses a config structure from a YAML file.
+    The structure should be composed of basic Python types.
+
+    :param filepath: Input file which contains YAML data.
+    :type filepath: str
+
+    :raises LoadSyntaxError: if there is a syntax error in YAML data.
+
+    :return: A dictionary which contains parsed YAML data
+    :rtype: Dict
+    """
+    try:
+        yaml = YAML(typ='safe', pure=True)
+        with open(filepath, 'r', encoding='utf-8') as file_handler:
+            return yaml.load(file_handler)
+    except yaml.YAMLError as ex:
+        message = ex.message if hasattr(ex, 'message') else ''
+        lineno = ex.problem_mark.line if hasattr(ex, 'problem_mark') else None
+        raise LoadSyntaxError(message, filepath=filepath, lineno=lineno) from ex
+
 RAND_MOD_NAME_LEN = 30
 BAD_CHARS = string.punctuation + string.whitespace
-# pylint: disable=no-member
-if sys.version_info[0] == 3:
-    TRANS_TABLE = str.maketrans(BAD_CHARS, '_' * len(BAD_CHARS))
-else:
-    TRANS_TABLE = string.maketrans(BAD_CHARS, '_' * len(BAD_CHARS))
+TRANS_TABLE = str.maketrans(BAD_CHARS, '_' * len(BAD_CHARS))

 def to_identifier(text):
@@ -646,6 +655,7 @@ def ranges_to_list(ranges_string):
 def list_to_ranges(values):
     """Converts a list, e.g ``[0,2,3,4]``, into a sysfs-style ranges string, e.g. ``"0,2-4"``"""
+    values = sorted(values)
     range_groups = []
     for _, g in groupby(enumerate(values), lambda i_x: i_x[0] - i_x[1]):
         range_groups.append(list(map(itemgetter(1), g)))
@@ -748,8 +758,7 @@ def batch_contextmanager(f, kwargs_list):
         yield

-@contextmanager
-def nullcontext(enter_result=None):
+class nullcontext:
     """
     Backport of Python 3.7 ``contextlib.nullcontext``
@@ -761,7 +770,20 @@ def nullcontext(enter_result=None):
     statement, or `None` if nothing is specified.
     :type enter_result: object
     """
-    yield enter_result
+    def __init__(self, enter_result=None):
+        self.enter_result = enter_result
+
+    def __enter__(self):
+        return self.enter_result
+
+    async def __aenter__(self):
+        return self.enter_result
+
+    def __exit__(*_):
+        return
+
+    async def __aexit__(*_):
+        return

 class tls_property:
@@ -820,8 +842,13 @@ class tls_property:
     def __delete__(self, instance):
         tls, values = self._get_tls(instance)
         with self.lock:
-            values.discard(tls.value)
-            del tls.value
+            try:
+                value = tls.value
+            except AttributeError:
+                pass
+            else:
+                values.discard(value)
+                del tls.value

     def _get_tls(self, instance):
         dct = instance.__dict__
@@ -984,3 +1011,26 @@ def groupby_value(dct):
         tuple(map(itemgetter(0), _items)): v
         for v, _items in groupby(items, key=key)
     }
+
+def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
+    """
+    A wrapper around TarFile.extractall to mitigate CVE-2007-4995
+    (see https://www.trellix.com/en-us/about/newsroom/stories/research/tarfile-exploiting-the-world.html)
+    """
+    for member in tar.getmembers():
+        member_path = os.path.join(path, member.name)
+        if not _is_within_directory(path, member_path):
+            raise Exception("Attempted Path Traversal in Tar File")
+
+    tar.extractall(path, members, numeric_owner=numeric_owner)
+
+def _is_within_directory(directory, target):
+    abs_directory = os.path.abspath(directory)
+    abs_target = os.path.abspath(target)
+    prefix = os.path.commonprefix([abs_directory, abs_target])
+    return prefix == abs_directory
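The path-traversal check added in the diff above can be demonstrated against an in-memory archive. Note that ``os.path.commonprefix`` is character-based; ``os.path.commonpath`` would be the stricter choice, which is why the member path is joined and resolved first. The helper names below are illustrative:

```python
import io
import os
import tarfile

def is_within_directory(directory, target):
    # The commonprefix check from the diff above: the resolved member
    # path must stay under `directory`.
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    return os.path.commonprefix([abs_directory, abs_target]) == abs_directory

def make_tar(name):
    # Build a one-member tar archive entirely in memory.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w') as tar:
        info = tarfile.TarInfo(name)
        info.size = 0
        tar.addfile(info, io.BytesIO(b''))
    buf.seek(0)
    return tarfile.open(fileobj=buf)

# A member that climbs out of the extraction directory is rejected...
evil = make_tar('../evil.txt')
assert any(
    not is_within_directory('/extract/here', os.path.join('/extract/here', m.name))
    for m in evil.getmembers()
)
# ...while a well-behaved member passes the check.
good = make_tar('data/ok.txt')
assert all(
    is_within_directory('/extract/here', os.path.join('/extract/here', m.name))
    for m in good.getmembers()
)
```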

@@ -21,7 +21,7 @@ import tempfile
 import threading
 import time
 from collections import namedtuple
-from pipes import quote
+from shlex import quote
 # pylint: disable=redefined-builtin
 from devlib.exception import WorkerThreadError, TargetNotRespondingError, TimeoutError
@@ -131,9 +131,18 @@ class SurfaceFlingerFrameCollector(FrameCollector):
         self.header = header or SurfaceFlingerFrame._fields

     def collect_frames(self, wfh):
-        for activity in self.list():
-            if activity == self.view:
-                wfh.write(self.get_latencies(activity).encode('utf-8'))
+        activities = [a for a in self.list() if a.startswith(self.view)]
+
+        if len(activities) > 1:
+            raise ValueError(
+                "More than one activity matching view '{}' was found: {}".format(self.view, activities)
+            )
+
+        if not activities:
+            logger.warning("No activities matching view '{}' were found".format(self.view))
+
+        for activity in activities:
+            wfh.write(self.get_latencies(activity).encode('utf-8'))

     def clear(self):
         self.target.execute('dumpsys SurfaceFlinger --latency-clear ')
@@ -208,10 +217,7 @@ class GfxinfoFrameCollector(FrameCollector):
     def collect_frames(self, wfh):
         cmd = 'dumpsys gfxinfo {} framestats'
         result = self.target.execute(cmd.format(self.package))
-        if sys.version_info[0] == 3:
-            wfh.write(result.encode('utf-8'))
-        else:
-            wfh.write(result)
+        wfh.write(result.encode('utf-8'))

     def clear(self):
         pass
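The selection logic introduced in ``collect_frames`` above (match by prefix, fail loudly on ambiguity, tolerate no match) is a reusable pattern. A standalone sketch with illustrative names:

```python
def match_view(view, activities):
    # Mirrors the diff above: match activity names by prefix, raise on
    # ambiguity, and return an empty list when nothing matches.
    matches = [a for a in activities if a.startswith(view)]
    if len(matches) > 1:
        raise ValueError(
            "More than one activity matching view '{}' was found: {}".format(view, matches)
        )
    return matches

assert match_view('com.app/Main', ['com.app/MainActivity', 'other']) == ['com.app/MainActivity']
assert match_view('com.app/X', ['other']) == []
try:
    match_view('com.app', ['com.app/A', 'com.app/B'])
except ValueError:
    ambiguous = True
assert ambiguous
```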

@@ -1,4 +1,4 @@
-# Copyright 2013-2018 ARM Limited
+# Copyright 2013-2024 ARM Limited
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -13,20 +13,20 @@
 # limitations under the License.
 #

 import time

 from contextlib import contextmanager
 from logging import Logger

 import serial

-# pylint: disable=import-error,wrong-import-position,ungrouped-imports,wrong-import-order
-import pexpect
-from distutils.version import StrictVersion as V
-if V(pexpect.__version__) < V('4.0.0'):
-    import fdpexpect
-else:
-    from pexpect import fdpexpect
+# pylint: disable=ungrouped-imports
+try:
+    from pexpect import fdpexpect
+# pexpect < 4.0.0 does not have fdpexpect module
+except ImportError:
+    import fdpexpect

 # Adding pexpect exceptions into this module's namespace
 from pexpect import EOF, TIMEOUT  # NOQA pylint: disable=W0611

@ -1,4 +1,4 @@
# Copyright 2014-2018 ARM Limited # Copyright 2014-2024 ARM Limited
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
@ -14,26 +14,23 @@
# #
import glob
import os import os
import stat import stat
import logging import logging
from pathlib import Path
import subprocess import subprocess
import re import re
import threading import threading
import tempfile import tempfile
import shutil
import socket import socket
import sys import sys
import time import time
import atexit
import contextlib import contextlib
import weakref
import select import select
import copy import copy
import functools import functools
from pipes import quote import shutil
from future.utils import raise_from from shlex import quote
from paramiko.client import SSHClient, AutoAddPolicy, RejectPolicy from paramiko.client import SSHClient, AutoAddPolicy, RejectPolicy
import paramiko.ssh_exception import paramiko.ssh_exception
@ -43,11 +40,13 @@ logging.getLogger("paramiko").setLevel(logging.WARNING)
# pylint: disable=import-error,wrong-import-position,ungrouped-imports,wrong-import-order # pylint: disable=import-error,wrong-import-position,ungrouped-imports,wrong-import-order
import pexpect import pexpect
from distutils.version import StrictVersion as V
if V(pexpect.__version__) < V('4.0.0'): try:
import pxssh
else:
from pexpect import pxssh from pexpect import pxssh
# pexpect < 4.0.0 does not have a pxssh module
except ImportError:
import pxssh
from pexpect import EOF, TIMEOUT, spawn from pexpect import EOF, TIMEOUT, spawn
# pylint: disable=redefined-builtin,wrong-import-position # pylint: disable=redefined-builtin,wrong-import-position
@ -59,17 +58,25 @@ from devlib.exception import (HostError, TargetStableError, TargetNotRespondingE
from devlib.utils.misc import (which, strip_bash_colors, check_output, from devlib.utils.misc import (which, strip_bash_colors, check_output,
sanitize_cmd_template, memoized, redirect_streams) sanitize_cmd_template, memoized, redirect_streams)
from devlib.utils.types import boolean from devlib.utils.types import boolean
from devlib.connection import (ConnectionBase, ParamikoBackgroundCommand, PopenBackgroundCommand, from devlib.connection import ConnectionBase, ParamikoBackgroundCommand, SSHTransferHandle
SSHTransferManager)
DEFAULT_SSH_SUDO_COMMAND = "sudo -k -p ' ' -S -- sh -c {}" # Empty prompt with -p '' to avoid adding a leading space to the output.
DEFAULT_SSH_SUDO_COMMAND = "sudo -k -p '' -S -- sh -c {}"
ssh = None class _SSHEnv:
scp = None @functools.lru_cache(maxsize=None)
sshpass = None def get_path(self, tool):
if tool in {'ssh', 'scp', 'sshpass'}:
path = which(tool)
if path:
return path
else:
raise HostError(f'OpenSSH must be installed on the host: could not find {tool} command')
else:
raise AttributeError(f"Tool '{tool}' is not supported")
_SSH_ENV = _SSHEnv()
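The `_SSHEnv` class above replaces the old module-level `ssh`/`scp`/`sshpass` globals with lazy, cached lookups. A minimal standalone sketch of that pattern (the real class is devlib's private `_SSHEnv`; the class name and error type here are stand-ins):

```python
import functools
import shutil


class SSHEnv:
    """Lazily resolve host-side OpenSSH tool paths, caching each lookup.

    Paths are resolved on first use, so merely importing the module does
    not require the tools to be installed on the host.
    """

    @functools.lru_cache(maxsize=None)
    def get_path(self, tool):
        # Only the tools devlib actually shells out to are supported
        if tool in {'ssh', 'scp', 'sshpass'}:
            path = shutil.which(tool)
            if path:
                return path
            raise RuntimeError(
                f'OpenSSH must be installed on the host: could not find {tool}'
            )
        raise AttributeError(f"Tool '{tool}' is not supported")
```

Because `lru_cache` keys on the arguments (including `self`), repeated calls for the same tool on the singleton hit the cache rather than re-running `which`.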
logger = logging.getLogger('ssh') logger = logging.getLogger('ssh')
gem5_logger = logging.getLogger('gem5-connection') gem5_logger = logging.getLogger('gem5-connection')
@ -169,13 +176,24 @@ def _read_paramiko_streams_internal(stdout, stderr, select_timeout, callback, in
return (callback_state, exit_code) return (callback_state, exit_code)
def _resolve_known_hosts(strict_host_check):
if strict_host_check:
if isinstance(strict_host_check, (str, os.PathLike)):
path = Path(strict_host_check)
else:
path = Path('~/.ssh/known_hosts').expanduser()
else:
path = Path('/dev/null')
return str(path.resolve())
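The `strict_host_check` resolution above accepts three kinds of values. A self-contained sketch of the same logic (mirroring `_resolve_known_hosts` as added in this change):

```python
import os
from pathlib import Path


def resolve_known_hosts(strict_host_check):
    """Map a strict_host_check value onto a known_hosts file path.

    ``True`` selects the default ``~/.ssh/known_hosts``; a ``str`` or
    ``os.PathLike`` is used as an explicit known_hosts file; a falsy
    value effectively disables checking by pointing at ``/dev/null``.
    """
    if strict_host_check:
        if isinstance(strict_host_check, (str, os.PathLike)):
            path = Path(strict_host_check)
        else:
            path = Path('~/.ssh/known_hosts').expanduser()
    else:
        path = Path('/dev/null')
    return str(path.resolve())
```

This is what lets `SshConnection` pass a path instead of a boolean and still hand paramiko's `load_system_host_keys()` a concrete file.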
def telnet_get_shell(host, def telnet_get_shell(host,
username, username,
password=None, password=None,
port=None, port=None,
timeout=10, timeout=10,
original_prompt=None): original_prompt=None):
_check_env()
start_time = time.time() start_time = time.time()
while True: while True:
conn = TelnetPxssh(original_prompt=original_prompt) conn = TelnetPxssh(original_prompt=original_prompt)
@ -296,8 +314,18 @@ class SshConnectionBase(ConnectionBase):
platform=None, platform=None,
sudo_cmd=DEFAULT_SSH_SUDO_COMMAND, sudo_cmd=DEFAULT_SSH_SUDO_COMMAND,
strict_host_check=True, strict_host_check=True,
poll_transfers=False,
start_transfer_poll_delay=30,
total_transfer_timeout=3600,
transfer_poll_period=30,
): ):
super().__init__() super().__init__(
poll_transfers=poll_transfers,
start_transfer_poll_delay=start_transfer_poll_delay,
total_transfer_timeout=total_transfer_timeout,
transfer_poll_period=transfer_poll_period,
)
self._connected_as_root = None self._connected_as_root = None
self.host = host self.host = host
self.username = username self.username = username
@ -338,43 +366,49 @@ class SshConnection(SshConnectionBase):
platform=platform, platform=platform,
sudo_cmd=sudo_cmd, sudo_cmd=sudo_cmd,
strict_host_check=strict_host_check, strict_host_check=strict_host_check,
poll_transfers=poll_transfers,
start_transfer_poll_delay=start_transfer_poll_delay,
total_transfer_timeout=total_transfer_timeout,
transfer_poll_period=transfer_poll_period,
) )
self.timeout = timeout if timeout is not None else self.default_timeout self.timeout = timeout if timeout is not None else self.default_timeout
# Allow using scp for file transfer if sftp is not supported # Allow using scp for file transfer if sftp is not supported
self.use_scp = use_scp self.use_scp = use_scp
self.poll_transfers=poll_transfers
if poll_transfers:
transfer_opts = {'start_transfer_poll_delay': start_transfer_poll_delay,
'total_timeout': total_transfer_timeout,
'poll_period': transfer_poll_period,
}
if self.use_scp: if self.use_scp:
logger.debug('Using SCP for file transfer') logger.debug('Using SCP for file transfer')
else: else:
logger.debug('Using SFTP for file transfer') logger.debug('Using SFTP for file transfer')
self.transfer_mgr = SSHTransferManager(self, **transfer_opts) if poll_transfers else None self.client = None
self.client = self._make_client() try:
atexit.register(self.close) self.client = self._make_client()
# Use a marker in the output so that we will be able to differentiate # Use a marker in the output so that we will be able to differentiate
# target connection issues with "password needed". # target connection issues with "password needed".
# Also, sudo might not be installed at all on the target (but # Also, sudo might not be installed at all on the target (but
# everything will work as long as we login as root). If sudo is still # everything will work as long as we login as root). If sudo is still
# needed, it will explode when someone tries to use it. After all, the # needed, it will explode when someone tries to use it. After all, the
# user might not be interested in being root at all. # user might not be interested in being root at all.
self._sudo_needs_password = ( self._sudo_needs_password = (
'NEED_PASSWORD' in 'NEED_PASSWORD' in
self.execute( self.execute(
# sudo -n is broken on some versions on MacOSX, revisit that if # sudo -n is broken on some versions on MacOSX, revisit that if
# someone ever cares # someone ever cares
'sudo -n true || echo NEED_PASSWORD', 'sudo -n true || echo NEED_PASSWORD',
as_root=False, as_root=False,
check_exit_code=False, check_exit_code=False,
)
) )
)
# pylint: disable=broad-except
except BaseException as e:
try:
if self.client is not None:
self.client.close()
finally:
raise e
def _make_client(self): def _make_client(self):
if self.strict_host_check: if self.strict_host_check:
@ -386,7 +420,10 @@ class SshConnection(SshConnectionBase):
with _handle_paramiko_exceptions(): with _handle_paramiko_exceptions():
client = SSHClient() client = SSHClient()
client.load_system_host_keys() if self.strict_host_check:
client.load_system_host_keys(_resolve_known_hosts(
self.strict_host_check
))
client.set_missing_host_key_policy(policy) client.set_missing_host_key_policy(policy)
client.connect( client.connect(
hostname=self.host, hostname=self.host,
@ -407,9 +444,6 @@ class SshConnection(SshConnectionBase):
channel = transport.open_session() channel = transport.open_session()
return channel return channel
def _get_progress_cb(self):
return self.transfer_mgr.progress_cb if self.transfer_mgr is not None else None
# Limit the number of opened channels to a low number, since some servers # Limit the number of opened channels to a low number, since some servers
# will reject more connections request. For OpenSSH, this is controlled by # will reject more connections request. For OpenSSH, this is controlled by
# the MaxSessions config. # the MaxSessions config.
@ -430,11 +464,12 @@ class SshConnection(SshConnectionBase):
return sftp return sftp
@functools.lru_cache() @functools.lru_cache()
def _get_scp(self, timeout): def _get_scp(self, timeout, callback=lambda *_: None):
return SCPClient(self.client.get_transport(), socket_timeout=timeout, progress=self._get_progress_cb()) cb = lambda _, to_transfer, transferred: callback(to_transfer, transferred)
return SCPClient(self.client.get_transport(), socket_timeout=timeout, progress=cb)
def _push_file(self, sftp, src, dst): def _push_file(self, sftp, src, dst, callback):
sftp.put(src, dst, callback=self._get_progress_cb()) sftp.put(src, dst, callback=callback)
@classmethod @classmethod
def _path_exists(cls, sftp, path): def _path_exists(cls, sftp, path):
@ -445,7 +480,7 @@ class SshConnection(SshConnectionBase):
else: else:
return True return True
def _push_folder(self, sftp, src, dst): def _push_folder(self, sftp, src, dst, callback):
sftp.mkdir(dst) sftp.mkdir(dst)
for entry in os.scandir(src): for entry in os.scandir(src):
name = entry.name name = entry.name
@ -456,17 +491,28 @@ class SshConnection(SshConnectionBase):
else: else:
push = self._push_file push = self._push_file
push(sftp, src_path, dst_path) push(sftp, src_path, dst_path, callback)
def _push_path(self, sftp, src, dst): def _push_path(self, sftp, src, dst, callback=None):
logger.debug('Pushing via sftp: {} -> {}'.format(src, dst)) logger.debug('Pushing via sftp: {} -> {}'.format(src, dst))
push = self._push_folder if os.path.isdir(src) else self._push_file push = self._push_folder if os.path.isdir(src) else self._push_file
push(sftp, src, dst) push(sftp, src, dst, callback)
def _pull_file(self, sftp, src, dst): def _pull_file(self, sftp, src, dst, callback):
sftp.get(src, dst, callback=self._get_progress_cb()) try:
sftp.get(src, dst, callback=callback)
except Exception as e:
# A file may have been created by Paramiko, but we want to clean
# that up, particularly if we tried to pull a folder and failed,
# otherwise this will make subsequent attempts at pulling the
# folder fail since the destination will exist.
try:
os.remove(dst)
except Exception:
pass
raise e
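The cleanup added to `_pull_file` matters because `_pull_path` retries a failed file pull as a folder pull: a partially-written destination file would make that retry fail with "destination exists". A hedged sketch of the pattern, with `get` standing in for paramiko's `SFTPClient.get` (a hypothetical callable here):

```python
import os


def pull_file(get, src, dst):
    """Fetch ``src`` to ``dst``, deleting any partially-written ``dst``
    if the transfer fails, then re-raising the original error."""
    try:
        get(src, dst)
    except Exception:
        try:
            # Remove the partial file Paramiko may have created, so a
            # subsequent retry (e.g. pulling dst as a folder) can succeed
            os.remove(dst)
        except OSError:
            pass
        raise
```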
def _pull_folder(self, sftp, src, dst): def _pull_folder(self, sftp, src, dst, callback):
os.makedirs(dst) os.makedirs(dst)
for fileattr in sftp.listdir_attr(src): for fileattr in sftp.listdir_attr(src):
filename = fileattr.filename filename = fileattr.filename
@ -477,15 +523,15 @@ class SshConnection(SshConnectionBase):
else: else:
pull = self._pull_file pull = self._pull_file
pull(sftp, src_path, dst_path) pull(sftp, src_path, dst_path, callback)
def _pull_path(self, sftp, src, dst): def _pull_path(self, sftp, src, dst, callback=None):
logger.debug('Pulling via sftp: {} -> {}'.format(src, dst)) logger.debug('Pulling via sftp: {} -> {}'.format(src, dst))
try: try:
self._pull_file(sftp, src, dst) self._pull_file(sftp, src, dst, callback)
except IOError: except IOError:
# Maybe that was a directory, so retry as such # Maybe that was a directory, so retry as such
self._pull_folder(sftp, src, dst) self._pull_folder(sftp, src, dst, callback)
def push(self, sources, dest, timeout=None): def push(self, sources, dest, timeout=None):
self._push_pull('push', sources, dest, timeout) self._push_pull('push', sources, dest, timeout)
@ -497,8 +543,13 @@ class SshConnection(SshConnectionBase):
if action not in ['push', 'pull']: if action not in ['push', 'pull']:
raise ValueError("Action must be either `push` or `pull`") raise ValueError("Action must be either `push` or `pull`")
# If timeout is set, or told not to poll def make_handle(obj):
if timeout is not None or not self.poll_transfers: handle = SSHTransferHandle(obj, manager=self.transfer_manager)
cm = self.transfer_manager.manage(sources, dest, action, handle)
return (handle, cm)
# If timeout is set
if timeout is not None:
if self.use_scp: if self.use_scp:
scp = self._get_scp(timeout) scp = self._get_scp(timeout)
scp_cmd = getattr(scp, 'put' if action == 'push' else 'get') scp_cmd = getattr(scp, 'put' if action == 'push' else 'get')
@ -512,20 +563,25 @@ class SshConnection(SshConnectionBase):
for source in sources: for source in sources:
sftp_cmd(sftp, source, dest) sftp_cmd(sftp, source, dest)
# No timeout, and polling is set # No timeout
elif self.use_scp: elif self.use_scp:
scp = self._get_scp(timeout) def progress_cb(*args, **kwargs):
return handle.progress_cb(*args, **kwargs)
scp = self._get_scp(timeout, callback=progress_cb)
handle, cm = make_handle(scp)
scp_cmd = getattr(scp, 'put' if action == 'push' else 'get') scp_cmd = getattr(scp, 'put' if action == 'push' else 'get')
with _handle_paramiko_exceptions(), self.transfer_mgr.manage(sources, dest, action, scp): with _handle_paramiko_exceptions(), cm:
scp_msg = '{}ing via scp: {} -> {}'.format(action, sources, dest) scp_msg = '{}ing via scp: {} -> {}'.format(action, sources, dest)
logger.debug(scp_msg.capitalize()) logger.debug(scp_msg.capitalize())
scp_cmd(sources, dest, recursive=True) scp_cmd(sources, dest, recursive=True)
else: else:
sftp = self._get_sftp(timeout) sftp = self._get_sftp(timeout)
handle, cm = make_handle(sftp)
sftp_cmd = getattr(self, '_' + action + '_path') sftp_cmd = getattr(self, '_' + action + '_path')
with _handle_paramiko_exceptions(), self.transfer_mgr.manage(sources, dest, action, sftp): with _handle_paramiko_exceptions(), cm:
for source in sources: for source in sources:
sftp_cmd(sftp, source, dest) sftp_cmd(sftp, source, dest, callback=handle.progress_cb)
def execute(self, command, timeout=None, check_exit_code=True, def execute(self, command, timeout=None, check_exit_code=True,
as_root=False, strip_colors=True, will_succeed=False): #pylint: disable=unused-argument as_root=False, strip_colors=True, will_succeed=False): #pylint: disable=unused-argument
@ -554,152 +610,150 @@ class SshConnection(SshConnectionBase):
def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False): def background(self, command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, as_root=False):
with _handle_paramiko_exceptions(command): with _handle_paramiko_exceptions(command):
bg_cmd = self._background(command, stdout, stderr, as_root) return self._background(command, stdout, stderr, as_root)
self._current_bg_cmds.add(bg_cmd)
return bg_cmd
def _background(self, command, stdout, stderr, as_root): def _background(self, command, stdout, stderr, as_root):
orig_command = command def make_init_kwargs(command):
stdout, stderr, command = redirect_streams(stdout, stderr, command) _stdout, _stderr, _command = redirect_streams(stdout, stderr, command)
command = "printf '%s\n' $$; exec sh -c {}".format(quote(command)) _command = "printf '%s\n' $$; exec sh -c {}".format(quote(_command))
channel = self._make_channel() channel = self._make_channel()
def executor(cmd, timeout): def executor(cmd, timeout):
channel.exec_command(cmd) channel.exec_command(cmd)
# Reads are not buffered so we will always get the data as soon as # Reads are not buffered so we will always get the data as soon as
# they arrive # they arrive
return ( return (
channel.makefile_stdin('w', 0), channel.makefile_stdin('w', 0),
channel.makefile(), channel.makefile(),
channel.makefile_stderr(), channel.makefile_stderr(),
)
stdin, stdout_in, stderr_in = self._execute_command(
_command,
as_root=as_root,
log=False,
timeout=None,
executor=executor,
) )
pid = stdout_in.readline()
if not pid:
_stderr = stderr_in.read()
if channel.exit_status_ready():
ret = channel.recv_exit_status()
else:
ret = 126
raise subprocess.CalledProcessError(
ret,
_command,
b'',
_stderr,
)
pid = int(pid)
stdin, stdout_in, stderr_in = self._execute_command( def create_out_stream(stream_in, stream_out):
command, """
as_root=as_root, Create a pair of file-like objects. The first one is used to read
log=False, data and the second one to write.
timeout=None, """
executor=executor,
)
pid = stdout_in.readline()
if not pid:
stderr = stderr_in.read()
if channel.exit_status_ready():
ret = channel.recv_exit_status()
else:
ret = 126
raise subprocess.CalledProcessError(
ret,
command,
b'',
stderr,
)
pid = int(pid)
def create_out_stream(stream_in, stream_out): if stream_out == subprocess.DEVNULL:
""" r, w = None, None
Create a pair of file-like objects. The first one is used to read # When asked for a pipe, we just give the file-like object as the
data and the second one to write. # reading end and no writing end, since paramiko already writes to
""" # it
elif stream_out == subprocess.PIPE:
r, w = os.pipe()
r = os.fdopen(r, 'rb')
w = os.fdopen(w, 'wb')
# Turn a file descriptor into a file-like object
elif isinstance(stream_out, int) and stream_out >= 0:
r = os.fdopen(stream_in, 'rb')
w = os.fdopen(stream_out, 'wb')
# file-like object
else:
r = stream_in
w = stream_out
if stream_out == subprocess.DEVNULL: return (r, w)
r, w = None, None
# When asked for a pipe, we just give the file-like object as the
# reading end and no writing end, since paramiko already writes to
# it
elif stream_out == subprocess.PIPE:
r, w = os.pipe()
r = os.fdopen(r, 'rb')
w = os.fdopen(w, 'wb')
# Turn a file descriptor into a file-like object
elif isinstance(stream_out, int) and stream_out >= 0:
r = os.fdopen(stream_in, 'rb')
w = os.fdopen(stream_out, 'wb')
# file-like object
else:
r = stream_in
w = stream_out
return (r, w) out_streams = {
name: create_out_stream(stream_in, stream_out)
for stream_in, stream_out, name in (
(stdout_in, _stdout, 'stdout'),
(stderr_in, _stderr, 'stderr'),
)
}
out_streams = { def redirect_thread_f(stdout_in, stderr_in, out_streams, select_timeout):
name: create_out_stream(stream_in, stream_out) def callback(out_streams, name, chunk):
for stream_in, stream_out, name in ( try:
(stdout_in, stdout, 'stdout'), r, w = out_streams[name]
(stderr_in, stderr, 'stderr'), except KeyError:
) return out_streams
}
try:
w.write(chunk)
# Write failed
except ValueError:
# Since that stream is now closed, stop trying to write to it
del out_streams[name]
# If that was the last open stream, we raise an
# exception so the thread can terminate.
if not out_streams:
raise
def redirect_thread_f(stdout_in, stderr_in, out_streams, select_timeout):
def callback(out_streams, name, chunk):
try:
r, w = out_streams[name]
except KeyError:
return out_streams return out_streams
try: try:
w.write(chunk) _read_paramiko_streams(stdout_in, stderr_in, select_timeout, callback, copy.copy(out_streams))
# Write failed # The streams closed while we were writing to it, the job is done here
except ValueError: except ValueError:
# Since that stream is now closed, stop trying to write to it pass
del out_streams[name]
# If that was the last open stream, we raise an
# exception so the thread can terminate.
if not out_streams:
raise
return out_streams # Make sure the writing end are closed proper since we are not
# going to write anything anymore
for r, w in out_streams.values():
w.flush()
if r is not w and w is not None:
w.close()
try: # If there is anything we need to redirect to, spawn a thread taking
_read_paramiko_streams(stdout_in, stderr_in, select_timeout, callback, copy.copy(out_streams)) # care of that
# The streams closed while we were writing to it, the job is done here select_timeout = 1
except ValueError: thread_out_streams = {
pass name: (r, w)
for name, (r, w) in out_streams.items()
if w is not None
}
redirect_thread = threading.Thread(
target=redirect_thread_f,
args=(stdout_in, stderr_in, thread_out_streams, select_timeout),
# The thread will die when the main thread dies
daemon=True,
)
redirect_thread.start()
# Make sure the writing end are closed proper since we are not return dict(
# going to write anything anymore chan=channel,
for r, w in out_streams.values(): pid=pid,
w.flush() stdin=stdin,
if r is not w and w is not None: # We give the reading end to the consumer of the data
w.close() stdout=out_streams['stdout'][0],
stderr=out_streams['stderr'][0],
redirect_thread=redirect_thread,
)
# If there is anything we need to redirect to, spawn a thread taking return ParamikoBackgroundCommand.from_factory(
# care of that
select_timeout = 1
thread_out_streams = {
name: (r, w)
for name, (r, w) in out_streams.items()
if w is not None
}
redirect_thread = threading.Thread(
target=redirect_thread_f,
args=(stdout_in, stderr_in, thread_out_streams, select_timeout),
# The thread will die when the main thread dies
daemon=True,
)
redirect_thread.start()
return ParamikoBackgroundCommand(
conn=self, conn=self,
cmd=command,
as_root=as_root, as_root=as_root,
chan=channel, make_init_kwargs=make_init_kwargs,
pid=pid,
stdin=stdin,
# We give the reading end to the consumer of the data
stdout=out_streams['stdout'][0],
stderr=out_streams['stderr'][0],
redirect_thread=redirect_thread,
cmd=orig_command,
) )
def _close(self): def _close(self):
logger.debug('Logging out {}@{}'.format(self.username, self.host)) logger.debug('Logging out {}@{}'.format(self.username, self.host))
with _handle_paramiko_exceptions(): with _handle_paramiko_exceptions():
bg_cmds = set(self._current_bg_cmds)
for bg_cmd in bg_cmds:
bg_cmd.close()
self.client.close() self.client.close()
def _execute_command(self, command, as_root, log, timeout, executor): def _execute_command(self, command, as_root, log, timeout, executor):
@ -748,11 +802,7 @@ class SshConnection(SshConnectionBase):
output_chunks, exit_code = _read_paramiko_streams(stdout, stderr, select_timeout, callback, []) output_chunks, exit_code = _read_paramiko_streams(stdout, stderr, select_timeout, callback, [])
# Join in one go to avoid O(N^2) concatenation # Join in one go to avoid O(N^2) concatenation
output = b''.join(output_chunks) output = b''.join(output_chunks)
output = output.decode(sys.stdout.encoding or 'utf-8', 'replace')
if sys.version_info[0] == 3:
output = output.decode(sys.stdout.encoding or 'utf-8', 'replace')
if strip_colors:
output = strip_bash_colors(output)
return (exit_code, output) return (exit_code, output)
@ -786,7 +836,6 @@ class TelnetConnection(SshConnectionBase):
strict_host_check=strict_host_check, strict_host_check=strict_host_check,
) )
_check_env()
self.options = self._get_default_options() self.options = self._get_default_options()
self.lock = threading.Lock() self.lock = threading.Lock()
@ -795,22 +844,17 @@ class TelnetConnection(SshConnectionBase):
timeout = timeout if timeout is not None else self.default_timeout timeout = timeout if timeout is not None else self.default_timeout
self.conn = telnet_get_shell(host, username, password, port, timeout, original_prompt) self.conn = telnet_get_shell(host, username, password, port, timeout, original_prompt)
atexit.register(self.close)
def fmt_remote_path(self, path): def fmt_remote_path(self, path):
return '{}@{}:{}'.format(self.username, self.host, path) return '{}@{}:{}'.format(self.username, self.host, path)
def _get_default_options(self): def _get_default_options(self):
if self.strict_host_check: check = self.strict_host_check
options = { known_hosts = _resolve_known_hosts(check)
'StrictHostKeyChecking': 'yes', return {
} 'StrictHostKeyChecking': 'yes' if check else 'no',
else: 'UserKnownHostsFile': str(known_hosts),
options = { }
'StrictHostKeyChecking': 'no',
'UserKnownHostsFile': '/dev/null',
}
return options
def push(self, sources, dest, timeout=30): def push(self, sources, dest, timeout=30):
# Quote the destination as SCP would apply globbing too # Quote the destination as SCP would apply globbing too
@ -836,7 +880,7 @@ class TelnetConnection(SshConnectionBase):
options = " ".join(["-o {}={}".format(key, val) options = " ".join(["-o {}={}".format(key, val)
for key, val in self.options.items()]) for key, val in self.options.items()])
paths = ' '.join(map(quote, paths)) paths = ' '.join(map(quote, paths))
command = '{} {} -r {} {} {}'.format(scp, command = '{} {} -r {} {} {}'.format(_SSH_ENV.get_path('scp'),
options, options,
keyfile_string, keyfile_string,
port_string, port_string,
@ -848,8 +892,8 @@ class TelnetConnection(SshConnectionBase):
try: try:
check_output(command, timeout=timeout, shell=True) check_output(command, timeout=timeout, shell=True)
except subprocess.CalledProcessError as e: except subprocess.CalledProcessError as e:
raise_from(HostError("Failed to copy file with '{}'. Output:\n{}".format( msg = f"Failed to copy file with '{command_redacted}'. Output:\n{e.output}"
command_redacted, e.output)), None) raise HostError(msg) from None
except TimeoutError as e: except TimeoutError as e:
raise TimeoutError(command_redacted, e.output) raise TimeoutError(command_redacted, e.output)
@ -904,7 +948,7 @@ class TelnetConnection(SshConnectionBase):
command = self.sudo_cmd.format(command) command = self.sudo_cmd.format(command)
options = " ".join([ "-o {}={}".format(key,val) options = " ".join([ "-o {}={}".format(key,val)
for key,val in self.options.items()]) for key,val in self.options.items()])
command = '{} {} {} {} {}@{} {}'.format(ssh, command = '{} {} {} {} {}@{} {}'.format(_SSH_ENV.get_path('ssh'),
options, options,
keyfile_string, keyfile_string,
port_string, port_string,
@ -960,10 +1004,7 @@ class TelnetConnection(SshConnectionBase):
logger.debug(command) logger.debug(command)
self._sendline(command) self._sendline(command)
timed_out = self._wait_for_prompt(timeout) timed_out = self._wait_for_prompt(timeout)
if sys.version_info[0] == 3: output = process_backspaces(self.conn.before.decode(sys.stdout.encoding or 'utf-8', 'replace'))
output = process_backspaces(self.conn.before.decode(sys.stdout.encoding or 'utf-8', 'replace'))
else:
output = process_backspaces(self.conn.before)
if timed_out: if timed_out:
self.cancel_running_command() self.cancel_running_command()
@ -1595,23 +1636,16 @@ class AndroidGem5Connection(Gem5Connection):
gem5_logger.info("Android booted") gem5_logger.info("Android booted")
def _give_password(password, command): def _give_password(password, command):
if not sshpass: sshpass = _SSH_ENV.get_path('sshpass')
if sshpass:
pass_template = "{} -p {} "
pass_string = pass_template.format(quote(sshpass), quote(password))
redacted_string = pass_template.format(quote(sshpass), quote('<redacted>'))
return (pass_string + command, redacted_string + command)
else:
raise HostError('Must have sshpass installed on the host in order to use password-based auth.') raise HostError('Must have sshpass installed on the host in order to use password-based auth.')
pass_template = "sshpass -p {} "
pass_string = pass_template.format(quote(password))
redacted_string = pass_template.format(quote('<redacted>'))
return (pass_string + command, redacted_string + command)
def _check_env():
global ssh, scp, sshpass # pylint: disable=global-statement
if not ssh:
ssh = which('ssh')
scp = which('scp')
sshpass = which('sshpass')
if not (ssh and scp):
raise HostError('OpenSSH must be installed on the host.')
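The rewritten `_give_password` above also quotes the `sshpass` path and produces a redacted twin of the command for logging. A runnable sketch (the default `sshpass` path here is a hypothetical stand-in; the real code resolves it via `_SSH_ENV` and raises `HostError` when missing):

```python
from shlex import quote


def give_password(password, command, sshpass='/usr/bin/sshpass'):
    """Prefix ``command`` with an sshpass invocation, returning both the
    runnable command and a redacted copy that is safe to log."""
    pass_template = "{} -p {} "
    pass_string = pass_template.format(quote(sshpass), quote(password))
    redacted_string = pass_template.format(quote(sshpass), quote('<redacted>'))
    return (pass_string + command, redacted_string + command)
```

Returning the pair keeps the secret out of debug output while still letting error messages show the full command shape.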
def process_backspaces(text): def process_backspaces(text):


@ -136,35 +136,25 @@ def bitmask(value):
regex_type = type(re.compile('')) regex_type = type(re.compile(''))
if sys.version_info[0] == 3: def regex(value):
def regex(value): if isinstance(value, regex_type):
if isinstance(value, regex_type): if isinstance(value.pattern, str):
if isinstance(value.pattern, str):
return value
return re.compile(value.pattern.decode(),
value.flags | re.UNICODE)
else:
if isinstance(value, bytes):
value = value.decode()
return re.compile(value)
def bytes_regex(value):
if isinstance(value, regex_type):
if isinstance(value.pattern, bytes):
return value
return re.compile(value.pattern.encode(sys.stdout.encoding or 'utf-8'),
value.flags & ~re.UNICODE)
else:
if isinstance(value, str):
value = value.encode(sys.stdout.encoding or 'utf-8')
return re.compile(value)
else:
def regex(value):
if isinstance(value, regex_type):
return value return value
else: return re.compile(value.pattern.decode(),
return re.compile(value) value.flags | re.UNICODE)
else:
if isinstance(value, bytes):
value = value.decode()
return re.compile(value)
bytes_regex = regex def bytes_regex(value):
if isinstance(value, regex_type):
if isinstance(value.pattern, bytes):
return value
return re.compile(value.pattern.encode(sys.stdout.encoding or 'utf-8'),
value.flags & ~re.UNICODE)
else:
if isinstance(value, str):
value = value.encode(sys.stdout.encoding or 'utf-8')
return re.compile(value)
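With the Python 2 branch dropped, the two coercion helpers above are symmetric. A self-contained sketch of the surviving logic:

```python
import re
import sys

regex_type = type(re.compile(''))


def regex(value):
    """Coerce a str, bytes, or compiled pattern into a str-pattern regex.

    Bytes inputs are decoded and recompiled with ``re.UNICODE``.
    """
    if isinstance(value, regex_type):
        if isinstance(value.pattern, str):
            return value
        return re.compile(value.pattern.decode(), value.flags | re.UNICODE)
    if isinstance(value, bytes):
        value = value.decode()
    return re.compile(value)


def bytes_regex(value):
    """Mirror of ``regex`` producing a bytes-pattern regex.

    Str inputs are encoded and recompiled with ``re.UNICODE`` cleared.
    """
    if isinstance(value, regex_type):
        if isinstance(value.pattern, bytes):
            return value
        return re.compile(value.pattern.encode(sys.stdout.encoding or 'utf-8'),
                          value.flags & ~re.UNICODE)
    if isinstance(value, str):
        value = value.encode(sys.stdout.encoding or 'utf-8')
    return re.compile(value)
```

Already-correct compiled patterns pass through unchanged, so the helpers are idempotent.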


@ -21,7 +21,7 @@ from subprocess import Popen, PIPE
VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev']) VersionTuple = namedtuple('Version', ['major', 'minor', 'revision', 'dev'])
version = VersionTuple(1, 3, 3, '') version = VersionTuple(1, 4, 0, 'dev3')
def get_devlib_version(): def get_devlib_version():
@ -42,7 +42,7 @@ def get_commit():
p.wait() p.wait()
if p.returncode: if p.returncode:
return None return None
if sys.version_info[0] == 3 and isinstance(std, bytes): if isinstance(std, bytes):
return std[:8].decode(sys.stdout.encoding or 'utf-8', 'replace') return std[:8].decode(sys.stdout.encoding or 'utf-8', 'replace')
else: else:
return std[:8] return std[:8]


@ -177,7 +177,11 @@ Connection Types
:param platform: Specify the platform to be used. The generic :class:`~devlib.platform.Platform` :param platform: Specify the platform to be used. The generic :class:`~devlib.platform.Platform`
class is used by default. class is used by default.
:param sudo_cmd: Specify the format of the command used to grant sudo access. :param sudo_cmd: Specify the format of the command used to grant sudo access.
:param strict_host_check: Specify the ssh connection parameter ``StrictHostKeyChecking``, :param strict_host_check: Specify the ssh connection parameter
``StrictHostKeyChecking``. If a path is passed
rather than a boolean, it will be taken as a
``known_hosts`` file. Otherwise, the default
``$HOME/.ssh/known_hosts`` will be used.
:param use_scp: Use SCP for file transfers, defaults to SFTP. :param use_scp: Use SCP for file transfers, defaults to SFTP.
:param poll_transfers: Specify whether file transfers should be polled. Polling :param poll_transfers: Specify whether file transfers should be polled. Polling
monitors the progress of file transfers and periodically monitors the progress of file transfers and periodically
@ -199,9 +203,9 @@ Connection Types
timeout=None, password_prompt=None,\ timeout=None, password_prompt=None,\
original_prompt=None) original_prompt=None)
A connection to a device on the network over Telenet. A connection to a device on the network over Telnet.
.. note:: Since the Telenet protocol does not support file transfer, scp is .. note:: Since the Telnet protocol does not support file transfer, scp is
used for that purpose. used for that purpose.
:param host: SSH host to which to connect :param host: SSH host to which to connect
@ -220,7 +224,7 @@ Connection Types
:param password_prompt: A string with the password prompt used by :param password_prompt: A string with the password prompt used by
``sshpass``. Set this if your version of ``sshpass`` ``sshpass``. Set this if your version of ``sshpass``
uses something other than ``"[sudo] password"``. uses something other than ``"[sudo] password"``.
:param original_prompt: A regex for the shell prompt presented in the Telenet :param original_prompt: A regex for the shell prompt presented in the Telnet
connection (the prompt will be reset to a connection (the prompt will be reset to a
randomly-generated pattern for the duration of the randomly-generated pattern for the duration of the
connection to reduce the possibility of clashes). connection to reduce the possibility of clashes).


@ -25,6 +25,7 @@ Contents:
derived_measurements derived_measurements
platform platform
connection connection
tools
Indices and tables Indices and tables
================== ==================


@ -3,7 +3,7 @@
Target Target
====== ======
.. class:: Target(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=None) .. class:: Target(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=None, max_async=50)
:class:`~devlib.target.Target` is the primary interface to the remote :class:`~devlib.target.Target` is the primary interface to the remote
device. All interactions with the device are performed via a device. All interactions with the device are performed via a
@ -76,6 +76,9 @@ Target
:param conn_cls: This is the type of connection that will be used to :param conn_cls: This is the type of connection that will be used to
communicate with the device. communicate with the device.
:param max_async: Maximum number of opened connections to the target used to
issue non-blocking commands when using the async API.
.. attribute:: Target.core_names
This is a list containing names of CPU cores on the target, in the order in
@ -383,7 +386,7 @@ Target
Equivalent to ``Target.read_value(path, kind=devlib.utils.types.boolean)``
.. method:: Target.write_value(path, value [, verify, as_root])
Write the value to the specified path on the target. This is primarily
intended for sysfs/procfs/debugfs etc.
@ -394,8 +397,10 @@ Target
it is written to make sure it has been written successfully. This is due to
some sysfs entries silently failing to set the written value without
returning an error code.
:param as_root: specifies if writing requires being root. Its default value
is ``True``.
.. method:: Target.revertable_write_value(path, value [, verify, as_root])
Same as :meth:`Target.write_value`, but as a context manager that will write
back the previous value on exit.
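The revert-on-exit semantics can be sketched with local files. The ``revertable_write`` helper below is an illustrative stand-in, not devlib code (which performs the reads and writes over the target connection):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def revertable_write(path, value):
    # Write a new value, then restore the previous contents on exit,
    # mirroring the revert-on-exit behaviour described above.
    with open(path) as f:
        old = f.read()
    with open(path, 'w') as f:
        f.write(value)
    try:
        yield
    finally:
        with open(path, 'w') as f:
            f.write(old)

with tempfile.NamedTemporaryFile('w', delete=False) as f:
    f.write('performance')
    path = f.name

with revertable_write(path, 'powersave'):
    assert open(path).read() == 'powersave'
assert open(path).read() == 'performance'
os.unlink(path)
```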
@ -606,7 +611,7 @@ Target
Linux Target
------------
.. class:: LinuxTarget(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=SshConnection, is_container=False, max_async=50)
:class:`LinuxTarget` is a subclass of :class:`~devlib.target.Target`
with customisations specific to a device running Linux.
@ -615,7 +620,7 @@ Linux Target
Local Linux Target
------------------
.. class:: LocalLinuxTarget(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=SshConnection, is_container=False, max_async=50)
:class:`LocalLinuxTarget` is a subclass of
:class:`~devlib.target.LinuxTarget` with customisations specific to using
@ -625,7 +630,7 @@ Local Linux Target
Android Target
---------------
.. class:: AndroidTarget(connection_settings=None, platform=None, working_directory=None, executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, conn_cls=AdbConnection, package_data_directory="/data/data", max_async=50)
:class:`AndroidTarget` is a subclass of :class:`~devlib.target.Target` with
additional features specific to a device running Android.
@ -744,10 +749,15 @@ Android Target
.. method:: AndroidTarget.is_screen_on()
Returns ``True`` if the target's screen is currently on and ``False``
otherwise. If the display is in a "Doze" mode or similar always-on state,
this will return ``True``.
.. method:: AndroidTarget.is_screen_locked()
Returns ``True`` if the target's screen is currently locked and ``False``
otherwise.
.. method:: AndroidTarget.wait_for_device(timeout=30)
Returns when the device becomes available within the given timeout
@ -773,7 +783,7 @@ Android Target
ChromeOS Target
---------------
.. class:: ChromeOsTarget(connection_settings=None, platform=None, working_directory=None, executables_directory=None, android_working_directory=None, android_executables_directory=None, connect=True, modules=None, load_default_modules=True, shell_prompt=DEFAULT_SHELL_PROMPT, package_data_directory="/data/data", max_async=50)
:class:`ChromeOsTarget` is a subclass of :class:`LinuxTarget` with
additional features specific to a device running ChromeOS, for example,

109
doc/tools.rst Normal file
View File

@ -0,0 +1,109 @@
Tools
=====
Android
-------
The ``tools/android/setup_host.sh`` script installs the Android command line
tools for Linux and creates Android Virtual Devices (AVD).
The script creates ``android-sdk-linux`` directory under ``tools/android`` and
sets it as ``ANDROID_HOME`` directory (see
https://developer.android.com/tools/variables).
The script points your ``ANDROID_USER_HOME`` and ``ANDROID_EMULATOR_HOME``
environment variables to ``tools/android/android-sdk-linux/.android``. Hence,
removing the ``android-sdk-linux`` folder will clean all artefacts of
``setup_host.sh``.
It fetches the Android command line tools, then installs Android SDK
Platform-Tools, SDK Platform 31 (for Android 12) and 34 (for Android 14), and
the Google APIs for platforms 31 and 34 for the associated ABI type.
Finally, the script creates Pixel 6 AVDs for Android 12 and 14.
Shell commands below illustrate how to list available AVDs and run them via
Android emulator:
.. code:: shell
ANDROID_HOME="/devlib/tools/android/android-sdk-linux"
export ANDROID_HOME
EMULATOR="${ANDROID_HOME}/emulator/emulator"
export ANDROID_EMULATOR_HOME="${ANDROID_HOME}/.android"
# List available AVDs:
${EMULATOR} -list-avds
# Run devlib-p6-14 AVD in emulator:
${EMULATOR} -avd devlib-p6-14 -no-window -no-snapshot -memory 2048 &
# After ~30 seconds, the emulated device will be ready:
adb -s emulator-5554 shell "lsmod"
Building buildroot
------------------
The ``buildroot/generate-kernel-initrd.sh`` helper script downloads and builds
``buildroot`` from the config files located under ``tools/buildroot/configs``
for the specified architecture.
Roughly, the script checks out the ``2023.11.1`` tag of ``buildroot``, copies
config files for buildroot (e.g., ``configs/aarch64/arm-power_aarch64_defconfig``)
and the kernel (e.g., ``configs/aarch64/linux.config``) to the necessary places
under the buildroot directory, and runs the ``make arm-power_aarch64_defconfig``
and ``make`` commands.
As its name suggests, ``generate-kernel-initrd.sh`` builds a kernel image with
an initial RAM disk from the default config files.
There is also a ``post-build.sh`` script that makes the following adjustments
to the root filesystem generated by ``buildroot``:
- allow root login over SSH.
- increase the number of concurrent SSH connections/channels so that devlib
consumers can hammer the target system.
In order to keep the rootfs minimal, only the OpenSSH and util-linux packages
are enabled in the default configuration files.
The DHCP client and SSH server services are enabled on target system startup.
The SCHED_MC, SCHED_SMT and UCLAMP_TASK scheduler features are enabled for the
aarch64 kernel.
If you need to make changes to ``buildroot``, the rootfs or the kernel of the
target system, you may want to run commands similar to these:
.. code:: shell
$ cd tools/buildroot/buildroot-v2023.11.1-aarch64
$ make menuconfig # or 'make linux-menuconfig' if you want to configure kernel
$ make
See https://buildroot.org/downloads/manual/manual.html for details.
Docker support
--------------
A Docker image for devlib can be created via ``tools/docker/Dockerfile``.
Once the Docker image is run, ``tools/docker/run_tests.sh`` script can execute
tests for Android, Linux, LocalLinux, and QEMU targets.
The Dockerfile is based on ``Ubuntu-22.04``; it installs the required system
packages, checks out the ``master`` branch of devlib, installs devlib, creates
Android virtual devices via ``tools/android/setup_host.sh``, and builds QEMU
images for the aarch64 and x86_64 architectures.
The versions of the Android command line tools (``CMDLINE_VERSION``), buildroot
(``BUILDROOT_VERSION``) and the devlib branch (``DEVLIB_REF``) used in the
Docker image can be customized via the aforementioned environment variables.
.. code:: shell
cd tools/docker
docker build -t devlib .
docker run -it --privileged devlib
/devlib/tools/docker/run_tests.sh

View File

@ -13,11 +13,11 @@
# limitations under the License.
#
import os
import sys
import warnings
from itertools import chain
import types
try:
from setuptools import setup
@ -35,15 +35,25 @@ sys.path.insert(0, os.path.join(devlib_dir, 'core'))
warnings.filterwarnings('ignore', "Unknown distribution option: 'install_requires'")
warnings.filterwarnings('ignore', "Unknown distribution option: 'extras_require'")
try:
os.remove('MANIFEST')
except OSError:
pass
def _load_path(filepath):
# Not a proper import in many, many ways but does the job for really basic
# needs
with open(filepath) as f:
globals_ = dict(__file__=filepath)
exec(f.read(), globals_)
return types.SimpleNamespace(**globals_)
vh_path = os.path.join(devlib_dir, 'utils', 'version.py')
# can load this, as it does not have any devlib imports
version_helper = _load_path(vh_path)
__version__ = version_helper.get_devlib_version()
commit = version_helper.get_commit()
if commit:
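The exec-based ``_load_path()`` technique that replaces the removed ``imp.load_source()`` can be exercised standalone. A sketch with a hypothetical module file (the ``version.py`` contents here are invented for illustration):

```python
import os
import tempfile
import types

def load_path(filepath):
    # Execute the file and expose its globals via attribute access,
    # avoiding a real import (and the removed imp.load_source()).
    with open(filepath) as f:
        globals_ = dict(__file__=filepath)
        exec(f.read(), globals_)
    return types.SimpleNamespace(**globals_)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'version.py')
    with open(path, 'w') as f:
        f.write("VERSION = (1, 3)\n"
                "def get_version():\n"
                "    return '.'.join(map(str, VERSION))\n")
    mod = load_path(path)
    assert mod.get_version() == '1.3'
```

As the in-tree comment notes, this is not a proper import (no ``sys.modules`` entry, no package semantics), but it is enough to read a couple of helper functions from a file with no devlib imports.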
@ -82,6 +92,7 @@ params = dict(
url='https://github.com/ARM-software/devlib',
license='Apache v2',
maintainer='ARM Ltd.',
python_requires='>= 3.7',
install_requires=[
'python-dateutil', # converting between UTC and local time.
'pexpect>=3.3', # Send/receive to/from device
@ -89,20 +100,23 @@ params = dict(
'paramiko', # SSH connection
'scp', # SSH connection file transfers
'wrapt', # Basic for construction of decorator functions
'numpy',
'pandas',
'pytest',
'lxml', # More robust xml parsing
'nest_asyncio', # Allows running nested asyncio loops
'greenlet', # Allows running nested asyncio loops
'future', # for the "past" Python package
'ruamel.yaml >= 0.15.72', # YAML formatted config parsing
],
extras_require={
'daq': ['daqpower>=2'],
'doc': ['sphinx'],
'monsoon': ['python-gflags'],
'acme': ['pandas', 'numpy'],
'dev': [
'uvloop', # Test async features under uvloop
]
},
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[

608
tests/test_asyn.py Normal file
View File

@ -0,0 +1,608 @@
#
# Copyright 2024 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
import asyncio
from functools import partial
import contextvars
from concurrent.futures import ThreadPoolExecutor
from contextlib import contextmanager
from pytest import skip, raises
from devlib.utils.asyn import run, asynccontextmanager
class AsynTestExcep(Exception):
pass
class Awaitable:
def __await__(self):
return (yield self)
@contextmanager
def raises_and_bubble(cls):
try:
yield
except BaseException as e:
if isinstance(e, cls):
raise
else:
raise AssertionError(f'Did not raise instance of {cls}')
else:
raise AssertionError('Did not raise any exception')
@contextmanager
def coro_stop_iteration(x):
try:
yield
except StopIteration as e:
assert e.value == x
except BaseException:
raise
else:
raise AssertionError('Coroutine did not finish')
def _do_test_run(top_run):
async def test_run_basic():
async def f():
return 42
assert run(f()) == 42
top_run(test_run_basic())
async def test_run_basic_contextvars_get():
var = contextvars.ContextVar('var')
var.set(42)
async def f():
return var.get()
assert var.get() == 42
assert run(f()) == 42
top_run(test_run_basic_contextvars_get())
async def test_run_basic_contextvars_set():
var = contextvars.ContextVar('var')
async def f():
var.set(43)
var.set(42)
assert var.get() == 42
run(f())
assert var.get() == 43
top_run(test_run_basic_contextvars_set())
async def test_run_basic_raise():
async def f():
raise AsynTestExcep
with raises(AsynTestExcep):
run(f())
top_run(test_run_basic_raise())
async def test_run_basic_await():
async def nested():
return 42
async def f():
return await nested()
assert run(f()) == 42
top_run(test_run_basic_await())
async def test_run_basic_await_raise():
async def nested():
raise AsynTestExcep
async def f():
with raises_and_bubble(AsynTestExcep):
return await nested()
with raises(AsynTestExcep):
run(f())
top_run(test_run_basic_await_raise())
async def test_run_nested1():
async def nested():
return 42
async def f():
return run(nested())
assert run(f()) == 42
top_run(test_run_nested1())
async def test_run_nested1_raise():
async def nested():
raise AsynTestExcep
async def f():
with raises_and_bubble(AsynTestExcep):
return run(nested())
with raises(AsynTestExcep):
run(f())
top_run(test_run_nested1_raise())
async def test_run_nested2():
async def nested2():
return 42
async def nested1():
return run(nested2())
async def f():
return run(nested1())
assert run(f()) == 42
top_run(test_run_nested2())
async def test_run_nested2_raise():
async def nested2():
raise AsynTestExcep
async def nested1():
with raises_and_bubble(AsynTestExcep):
return run(nested2())
async def f():
with raises_and_bubble(AsynTestExcep):
return run(nested1())
with raises(AsynTestExcep):
run(f())
top_run(test_run_nested2_raise())
async def test_run_nested2_block():
async def nested2():
return 42
def nested1():
return run(nested2())
async def f():
return nested1()
assert run(f()) == 42
top_run(test_run_nested2_block())
async def test_run_nested2_block_raise():
async def nested2():
raise AsynTestExcep
def nested1():
with raises_and_bubble(AsynTestExcep):
return run(nested2())
async def f():
with raises_and_bubble(AsynTestExcep):
return nested1()
with raises(AsynTestExcep):
run(f())
top_run(test_run_nested2_block_raise())
async def test_coro_send():
async def f():
return await Awaitable()
coro = f()
coro.send(None)
with coro_stop_iteration(42):
coro.send(42)
top_run(test_coro_send())
async def test_coro_nested_send():
async def nested():
return await Awaitable()
async def f():
return await nested()
coro = f()
coro.send(None)
with coro_stop_iteration(42):
coro.send(42)
top_run(test_coro_nested_send())
async def test_coro_nested_send2():
future = asyncio.Future()
future.set_result(42)
async def nested():
return await future
async def f():
return run(nested())
assert run(f()) == 42
top_run(test_coro_nested_send2())
async def test_coro_nested_send3():
future = asyncio.Future()
future.set_result(42)
async def nested2():
return await future
async def nested():
return run(nested2())
async def f():
return run(nested())
assert run(f()) == 42
top_run(test_coro_nested_send3())
async def test_coro_throw():
async def f():
try:
await Awaitable()
except AsynTestExcep:
return 42
coro = f()
coro.send(None)
with coro_stop_iteration(42):
coro.throw(AsynTestExcep)
top_run(test_coro_throw())
async def test_coro_throw2():
async def f():
await Awaitable()
coro = f()
coro.send(None)
with raises(AsynTestExcep):
coro.throw(AsynTestExcep)
top_run(test_coro_throw2())
async def test_coro_nested_throw():
async def nested():
try:
await Awaitable()
except AsynTestExcep:
return 42
async def f():
return await nested()
coro = f()
coro.send(None)
with coro_stop_iteration(42):
coro.throw(AsynTestExcep)
top_run(test_coro_nested_throw())
async def test_coro_nested_throw2():
async def nested():
await Awaitable()
async def f():
with raises_and_bubble(AsynTestExcep):
await nested()
coro = f()
coro.send(None)
with raises(AsynTestExcep):
coro.throw(AsynTestExcep)
top_run(test_coro_nested_throw2())
async def test_coro_nested_throw3():
future = asyncio.Future()
future.set_exception(AsynTestExcep())
async def nested():
await future
async def f():
with raises_and_bubble(AsynTestExcep):
run(nested())
with raises(AsynTestExcep):
run(f())
top_run(test_coro_nested_throw3())
async def test_coro_nested_throw4():
future = asyncio.Future()
future.set_exception(AsynTestExcep())
async def nested2():
await future
async def nested():
return run(nested2())
async def f():
with raises_and_bubble(AsynTestExcep):
run(nested())
with raises(AsynTestExcep):
run(f())
top_run(test_coro_nested_throw4())
async def test_async_cm():
state = None
async def f():
return 43
@asynccontextmanager
async def cm():
nonlocal state
state = 'started'
await f()
try:
yield 42
finally:
await f()
state = 'finished'
async with cm() as x:
assert state == 'started'
assert x == 42
assert state == 'finished'
top_run(test_async_cm())
async def test_async_cm2():
state = None
async def f():
return 43
@asynccontextmanager
async def cm():
nonlocal state
state = 'started'
await f()
try:
await f()
yield 42
await f()
except AsynTestExcep:
await f()
# Swallow the exception
pass
finally:
await f()
state = 'finished'
async with cm() as x:
assert state == 'started'
raise AsynTestExcep()
assert state == 'finished'
top_run(test_async_cm2())
async def test_async_cm3():
state = None
async def f():
return 43
@asynccontextmanager
async def cm():
nonlocal state
state = 'started'
await f()
try:
yield 42
finally:
await f()
state = 'finished'
with cm() as x:
assert state == 'started'
assert x == 42
assert state == 'finished'
top_run(test_async_cm3())
def test_async_cm4():
state = None
async def f():
return 43
@asynccontextmanager
async def cm():
nonlocal state
state = 'started'
await f()
try:
yield 42
finally:
await f()
state = 'finished'
with cm() as x:
assert state == 'started'
assert x == 42
assert state == 'finished'
test_async_cm4()
def test_async_cm5():
@asynccontextmanager
async def cm_f():
yield 42
cm = cm_f()
assert top_run(cm.__aenter__()) == 42
assert not top_run(cm.__aexit__(None, None, None))
test_async_cm5()
def test_async_gen1():
async def agen_f():
for i in range(2):
yield i
agen = agen_f()
assert top_run(anext(agen)) == 0
assert top_run(anext(agen)) == 1
test_async_gen1()
def _test_in_thread(setup, test):
def f():
with setup() as run:
return test()
with ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(f).result()
def _test_run_with_setup(setup):
def run_with_existing_loop(coro):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Simulate the case where devlib is run in a context where the main app
# has set an event loop at some point
try:
return asyncio.run(coro)
finally:
loop.close()
def run_with_existing_loop2(coro):
# This is similar to how things are executed on IPython/jupyterlab
loop = asyncio.new_event_loop()
try:
return loop.run_until_complete(coro)
finally:
loop.close()
def run_with_to_thread(top_run, coro):
# Add a layer of asyncio.to_thread(), to simulate a case where users
# would be using the blocking API along with asyncio.to_thread() (code
# written before devlib gained async capabilities or wishing to
# preserve compat with older devlib versions)
async def wrapper():
return await asyncio.to_thread(
top_run, coro
)
return top_run(wrapper())
runners = [
run,
asyncio.run,
run_with_existing_loop,
run_with_existing_loop2,
partial(run_with_to_thread, run),
partial(run_with_to_thread, asyncio.run),
partial(run_with_to_thread, run_with_existing_loop),
partial(run_with_to_thread, run_with_existing_loop2),
]
for top_run in runners:
_test_in_thread(
setup,
partial(_do_test_run, top_run),
)
def test_run_stdlib():
@contextmanager
def setup():
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
yield asyncio.run
finally:
loop.close()
_test_run_with_setup(setup)
def test_run_uvloop():
try:
import uvloop
except ImportError:
skip('uvloop not installed')
else:
@contextmanager
def setup():
if sys.version_info >= (3, 11):
with asyncio.Runner(loop_factory=uvloop.new_event_loop) as runner:
yield runner.run
else:
uvloop.install()
yield asyncio.run
_test_run_with_setup(setup)

5
tests/test_config.yml Normal file
View File

@ -0,0 +1,5 @@
target-configs:
entry-0:
LocalLinuxTarget:
connection_settings:
unrooted: True

View File

@ -0,0 +1,40 @@
target-configs:
entry-0:
AndroidTarget:
timeout: 60
connection_settings:
device: 'emulator-5554'
entry-1:
ChromeOsTarget:
connection_settings:
device: 'emulator-5556'
host: 'example.com'
username: 'username'
password: 'password'
entry-2:
LinuxTarget:
connection_settings:
host: 'example.com'
username: 'username'
password: 'password'
entry-3:
LocalLinuxTarget:
connection_settings:
unrooted: True
entry-4:
QEMUTargetRunner:
qemu_settings:
kernel_image: '/path/to/devlib/tools/buildroot/buildroot-v2023.11.1-aarch64/output/images/Image'
entry-5:
QEMUTargetRunner:
connection_settings:
port: 8023
qemu_settings:
kernel_image: '/path/to/devlib/tools/buildroot/buildroot-v2023.11.1-x86_64/output/images/bzImage'
arch: 'x86_64'
cmdline: 'console=ttyS0'

View File

@ -1,32 +1,181 @@
#
# Copyright 2024 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Module for testing targets.
Sample run with the log level set to DEBUG (see
https://docs.pytest.org/en/7.1.x/how-to/logging.html#live-logs for logging details):
$ python -m pytest --log-cli-level DEBUG test_target.py
"""
import logging
import os
import pytest
from devlib import AndroidTarget, ChromeOsTarget, LinuxTarget, LocalLinuxTarget
from devlib._target_runner import NOPTargetRunner, QEMUTargetRunner
from devlib.utils.android import AdbConnection
from devlib.utils.misc import load_struct_from_yaml
logger = logging.getLogger('test_target')
def get_class_object(name):
"""
Get associated class object from string formatted class name
:param name: Class name
:type name: str
:return: Class object
:rtype: object or None
"""
if globals().get(name) is None:
return None
return globals()[name] if issubclass(globals()[name], object) else None
@pytest.fixture(scope="module")
# pylint: disable=too-many-branches
def build_target_runners():
"""Read targets from a YAML formatted config file and create runners for them"""
logger.info("Initializing resources...")
config_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'test_config.yml')
test_config = load_struct_from_yaml(config_file)
if test_config is None:
raise ValueError(f'{config_file} looks empty!')
target_configs = test_config.get('target-configs')
if target_configs is None:
raise ValueError('No targets found!')
target_runners = []
for entry in target_configs.values():
key, target_info = entry.popitem()
target_class = get_class_object(key)
if target_class is AndroidTarget:
logger.info('> Android target: %s', repr(target_info))
a_target = AndroidTarget(
connect=False,
connection_settings=target_info['connection_settings'],
conn_cls=lambda **kwargs: AdbConnection(adb_as_root=True, **kwargs),
)
a_target.connect(timeout=target_info.get('timeout', 60))
target_runners.append(NOPTargetRunner(a_target))
elif target_class is ChromeOsTarget:
logger.info('> ChromeOS target: %s', repr(target_info))
c_target = ChromeOsTarget(
connection_settings=target_info['connection_settings'],
working_directory='/tmp/devlib-target',
)
target_runners.append(NOPTargetRunner(c_target))
elif target_class is LinuxTarget:
logger.info('> Linux target: %s', repr(target_info))
l_target = LinuxTarget(connection_settings=target_info['connection_settings'])
target_runners.append(NOPTargetRunner(l_target))
elif target_class is LocalLinuxTarget:
logger.info('> LocalLinux target: %s', repr(target_info))
ll_target = LocalLinuxTarget(connection_settings=target_info['connection_settings'])
target_runners.append(NOPTargetRunner(ll_target))
elif target_class is QEMUTargetRunner:
logger.info('> QEMU target runner: %s', repr(target_info))
qemu_runner = QEMUTargetRunner(
qemu_settings=target_info.get('qemu_settings'),
connection_settings=target_info.get('connection_settings'),
)
if target_info.get('ChromeOsTarget') is not None:
# Leave termination of QEMU runner to ChromeOS target.
target_runners.append(NOPTargetRunner(qemu_runner.target))
logger.info('>> ChromeOS target: %s', repr(target_info["ChromeOsTarget"]))
qemu_runner.target = ChromeOsTarget(
connection_settings={
**target_info['ChromeOsTarget']['connection_settings'],
**qemu_runner.target.connection_settings,
},
working_directory='/tmp/devlib-target',
)
target_runners.append(qemu_runner)
else:
raise ValueError(f'Unknown target type {key}!')
yield target_runners
logger.info("Destroying resources...")
for target_runner in target_runners:
target = target_runner.target
# TODO: Revisit per https://github.com/ARM-software/devlib/issues/680.
logger.debug('Removing %s...', target.working_directory)
target.remove(target.working_directory)
target_runner.terminate()
# pylint: disable=redefined-outer-name
def test_read_multiline_values(build_target_runners):
"""
Test Target.read_tree_values_flat()
Runs tests around ``Target.read_tree_values_flat()`` for ``TargetRunner`` objects.
"""
logger.info('Running test_read_multiline_values test...')
data = {
'test1': '1',
'test2': '2\n\n',
'test3': '3\n\n4\n\n',
}
target_runners = build_target_runners
for target_runner in target_runners:
target = target_runner.target
logger.info('target=%s os=%s hostname=%s',
target.__class__.__name__, target.os, target.hostname)
with target.make_temp() as tempdir:
logger.debug('Created %s.', tempdir)
for key, value in data.items():
path = os.path.join(tempdir, key)
logger.debug('Writing %s to %s...', repr(value), path)
target.write_value(path, value, verify=False,
as_root=target.conn.connected_as_root)
logger.debug('Reading values from target...')
raw_result = target.read_tree_values_flat(tempdir)
result = {os.path.basename(k): v for k, v in raw_result.items()}
assert {k: v.strip() for k, v in data.items()} == result
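The round trip exercised by this test can be mimicked on the host filesystem. The ``read_tree_values_flat`` function below is an illustrative stand-in for the target-side call, assuming, as the test does, that values come back stripped:

```python
import os
import tempfile

def read_tree_values_flat(path):
    # Host-side stand-in: map each file under 'path' to its stripped
    # contents, as the test expects from the target-side call.
    result = {}
    for root, _, files in os.walk(path):
        for name in files:
            fpath = os.path.join(root, name)
            with open(fpath) as f:
                result[fpath] = f.read().strip()
    return result

data = {'test1': '1', 'test2': '2\n\n', 'test3': '3\n\n4\n\n'}
with tempfile.TemporaryDirectory() as d:
    for key, value in data.items():
        with open(os.path.join(d, key), 'w') as f:
            f.write(value)
    flat = read_tree_values_flat(d)
    result = {os.path.basename(k): v for k, v in flat.items()}
assert result == {k: v.strip() for k, v in data.items()}
```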

1
tools/android/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
android-sdk-linux/

350
tools/android/setup_host.sh Executable file
View File

@ -0,0 +1,350 @@
#!/usr/bin/env bash
#
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2024, ARM Limited and contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Script to install Android SDK tools for LISA & devlib on an Ubuntu-like
# system and create Android Virtual Devices (AVD).
# shellcheck disable=SC2317
if [[ -z ${ANDROID_HOME:-} ]]; then
ANDROID_HOME="$(dirname "${BASH_SOURCE[0]}")/android-sdk-linux"
export ANDROID_HOME
fi
export ANDROID_USER_HOME="${ANDROID_HOME}/.android"
ANDROID_CMDLINE_VERSION=${ANDROID_CMDLINE_VERSION:-"11076708"}
# Android SDK is picky about the Java version, so we need to set JAVA_HOME
# manually. In most distributions, Java is installed under /usr/lib/jvm, so
# use that.
ANDROID_SDK_JAVA_VERSION=17
# Read the standard /etc/os-release file and extract the needed field. The
# lsb_release binary is not installed on all distros, but that file is found
# pretty much anywhere.
read_os_release() {
local field_name=${1}
# shellcheck source=/etc/os-release
(source /etc/os-release &> /dev/null && printf "%s" "${!field_name}")
}
# Test the value of a field in /etc/os-release
test_os_release() {
local field_name=${1}
local value=${2}
if [[ "$(read_os_release "${field_name}")" == "${value}" ]]; then
return 0
fi
return 1
}
get_android_sdk_host_arch() {
# Default to Google ABI type for Arm platforms
local arch="arm64-v8a"
local machine
machine=$(uname -m)
if [[ "${machine}" == "x86"* ]]; then
arch=${machine}
fi
echo "${arch}"
}
# No need for the whole SDK for this one
install_android_platform_tools() {
echo "Installing Android Platform Tools ..."
local url="https://dl.google.com/android/repository/platform-tools-latest-linux.zip"
echo "Downloading Android Platform Tools from: ${url}"
wget -qO- "${url}" | bsdtar -xf- -C "${ANDROID_HOME}/"
}
cleanup_android_home() {
echo "Cleaning up Android SDK: ${ANDROID_HOME}"
rm -rf "${ANDROID_HOME}"
mkdir -p "${ANDROID_HOME}/cmdline-tools"
}
install_android_sdk_manager() {
echo "Installing Android SDK manager ..."
# URL taken from "Command line tools only": https://developer.android.com/studio
local url="https://dl.google.com/android/repository/commandlinetools-linux-${ANDROID_CMDLINE_VERSION}_latest.zip"
echo "Downloading Android SDK manager from: $url"
wget -qO- "${url}" | bsdtar -xf- -C "${ANDROID_HOME}/cmdline-tools"
echo "Moving commandlinetools to cmdline-tools/latest..."
# First, clear cmdline-tools/latest if it exists.
rm -rf "${ANDROID_HOME}/cmdline-tools/latest"
mv "${ANDROID_HOME}/cmdline-tools/cmdline-tools" "${ANDROID_HOME}/cmdline-tools/latest"
chmod +x -R "${ANDROID_HOME}/cmdline-tools/latest/bin"
yes | (call_android_sdkmanager --licenses || true)
}
find_java_home() {
_JAVA_BIN=$(find -L /usr/lib/jvm -path "*${ANDROID_SDK_JAVA_VERSION}*/bin/java" -not -path '*/jre/bin/*' -print -quit)
_JAVA_HOME=$(dirname "$(dirname "${_JAVA_BIN}")")
echo "Found JAVA_HOME=${_JAVA_HOME}"
}
call_android_sdk() {
local tool="${ANDROID_HOME}/cmdline-tools/latest/bin/${1}"
shift
JAVA_HOME=${_JAVA_HOME} "${tool}" "$@"
}
call_android_sdkmanager() {
call_android_sdk sdkmanager "$@"
}
call_android_avdmanager() {
call_android_sdk avdmanager "$@"
}
install_build_tools() {
yes | call_android_sdkmanager --verbose --channel=0 --install "build-tools;34.0.0"
}
install_platform_tools() {
yes | call_android_sdkmanager --verbose --channel=0 --install "platform-tools"
}
install_platforms() {
yes | call_android_sdkmanager --verbose --channel=0 --install "platforms;android-31"
yes | call_android_sdkmanager --verbose --channel=0 --install "platforms;android-33"
yes | call_android_sdkmanager --verbose --channel=0 --install "platforms;android-34"
}
install_system_images() {
local android_sdk_host_arch
android_sdk_host_arch=$(get_android_sdk_host_arch)
yes | call_android_sdkmanager --verbose --channel=0 --install "system-images;android-31;google_apis;${android_sdk_host_arch}"
yes | call_android_sdkmanager --verbose --channel=0 --install "system-images;android-33;android-desktop;${android_sdk_host_arch}"
yes | call_android_sdkmanager --verbose --channel=0 --install "system-images;android-34;google_apis;${android_sdk_host_arch}"
}
create_android_vds() {
local android_sdk_host_arch
android_sdk_host_arch=$(get_android_sdk_host_arch)
local vd_name
vd_name="devlib-p6-12"
echo "Creating virtual device \"${vd_name}\" (Pixel 6 - Android 12)..."
echo no | call_android_avdmanager -s create avd -n "${vd_name}" -k "system-images;android-31;google_apis;${android_sdk_host_arch}" -b "${android_sdk_host_arch}" -f
vd_name="devlib-p6-14"
echo "Creating virtual device \"${vd_name}\" (Pixel 6 - Android 14)..."
echo no | call_android_avdmanager -s create avd -n "${vd_name}" -k "system-images;android-34;google_apis;${android_sdk_host_arch}" -b "${android_sdk_host_arch}" -f
vd_name="devlib-chromeos"
echo "Creating virtual device \"${vd_name}\" (ChromeOS - Android 13, Pixel tablet)..."
echo no | call_android_avdmanager -s create avd -n "${vd_name}" -k "system-images;android-33;android-desktop;${android_sdk_host_arch}" -b "${android_sdk_host_arch}" -f
}
install_apt() {
echo "Installing apt packages..."
local apt_cmd=(DEBIAN_FRONTEND=noninteractive apt-get)
sudo "${apt_cmd[@]}" update
if [[ $unsupported_distro == 1 ]]; then
for package in "${apt_packages[@]}"; do
if ! sudo "${apt_cmd[@]}" install -y "$package"; then
echo "Failed to install $package on that distribution" >&2
fi
done
else
sudo "${apt_cmd[@]}" install -y "${apt_packages[@]}" || exit $?
fi
}
install_pacman() {
echo "Installing pacman packages..."
sudo pacman -Sy --needed --noconfirm "${pacman_packages[@]}" || exit $?
}
# APT-based distributions like Ubuntu or Debian
apt_packages=(
libarchive-tools
qemu-user-static
wget
)
# pacman-based distributions like Archlinux or its derivatives
pacman_packages=(
coreutils
libarchive
qemu-user-static
wget
)
# Detection based on the package-manager, so that it works on derivatives of
# distributions we expect. Matching on distro name would prevent that.
if which apt-get &>/dev/null; then
install_functions+=(install_apt)
package_manager='apt-get'
expected_distro="Ubuntu"
elif which pacman &>/dev/null; then
install_functions+=(install_pacman)
package_manager="pacman"
expected_distro="Arch Linux"
else
echo "The package manager of distribution $(read_os_release NAME) is not supported; only distro-agnostic steps will be installed"
fi
if [[ -n "${package_manager}" ]] && ! test_os_release NAME "${expected_distro}"; then
unsupported_distro=1
echo -e "\nINFO: the distribution seems based on ${package_manager} but is not ${expected_distro}, some package names might not be right\n"
else
unsupported_distro=0
fi
usage() {
echo "Usage: ${0} [--help] [--cleanup-android-sdk] [--install-android-tools]
[--install-android-platform-tools] [--create-avds] [--install-all]"
cat << EOF
Install distribution packages and other bits required by Android emulator.
Archlinux and Ubuntu are supported, although derivative distributions will
probably work as well.
--install-android-platform-tools is not needed when using
--install-android-tools, but has the advantage of not needing a Java
installation and is quicker to install.
EOF
}
# Defaults to --install-all if no option is given
if [[ -z "$*" ]]; then
args=("--install-all")
else
args=("$@")
fi
# Use conditional fall-through (;;&) so that --install-all can match several
# branches
for arg in "${args[@]}"; do
# We need this flag since *) does not play well with fall-through ;;&
handled=0
case "$arg" in
"--cleanup-android-sdk")
install_functions+=(cleanup_android_home)
handled=1
;;&
# Not part of --install-all since that is already covered by
# --install-android-tools. The advantage of this method is that it does not
# require the Java JDK/JRE to be installed, and it is a bit quicker. However,
# it will not provide the build-tools, which devlib needs.
"--install-android-platform-tools")
install_functions+=(install_android_platform_tools)
handled=1
;;&
"--install-android-tools" | "--install-all")
install_functions+=(
find_java_home
install_android_sdk_manager
install_build_tools
install_platform_tools
)
apt_packages+=(openjdk-"${ANDROID_SDK_JAVA_VERSION}"-jre openjdk-"${ANDROID_SDK_JAVA_VERSION}"-jdk)
pacman_packages+=(jre"${ANDROID_SDK_JAVA_VERSION}"-openjdk jdk"${ANDROID_SDK_JAVA_VERSION}"-openjdk)
handled=1
;;&
"--create-avds" | "--install-all")
install_functions+=(
find_java_home
install_android_sdk_manager
install_platform_tools
install_platforms
install_system_images
create_android_vds
)
handled=1
;;&
"--help")
usage
exit 0
;;&
*)
if [[ ${handled} != 1 ]]; then
echo "Unrecognised argument: ${arg}"
usage
exit 2
fi
;;
esac
done
# The order in which the stages will be executed if selected on the command line
ordered_functions=(
# Distro package managers before anything else, so all the basic
# pre-requisites are there
install_apt
install_pacman
find_java_home
# cleanup must be done BEFORE installing
cleanup_android_home
install_android_sdk_manager
install_android_platform_tools
install_build_tools
install_platform_tools
install_platforms
install_system_images
create_android_vds
)
# Remove duplicates in the list
# shellcheck disable=SC2207
install_functions=($(echo "${install_functions[@]}" | tr ' ' '\n' | sort -u | tr '\n' ' '))
mkdir -p "${ANDROID_HOME}/cmdline-tools"
# Call the selected hooks in the order defined by ordered_functions
ret=0
for f in "${ordered_functions[@]}"; do
for func in "${install_functions[@]}"; do
if [[ ${func} == "${f}" ]]; then
# If one hook returns non-zero, we keep going but return an overall failure code.
${func}; _ret=$?
if [[ $_ret != 0 ]]; then
ret=${_ret}
echo "Stage ${func} failed with exit code ${ret}" >&2
else
echo "Stage ${func} succeeded" >&2
fi
fi
done
done
exit $ret
# vim: set tabstop=4 shiftwidth=4 textwidth=80 expandtab:
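
The option handling above leans on two bash idioms: `;;&` fall-through, so one flag can enable several stages, and a deduplicated hook list replayed in the fixed `ordered_functions` order. A standalone sketch of that dispatch pattern, with made-up hook names rather than the script's real stages:

```shell
#!/usr/bin/env bash
# Minimal sketch of the setup_host.sh dispatch pattern: ';;&' lets one
# argument select several case branches, hooks are deduplicated, then
# run in a fixed order regardless of how they were requested.

run_hooks() {
    local hooks=()
    local arg f h
    for arg in "$@"; do
        case "$arg" in
            "--coffee" | "--breakfast")
                hooks+=(boil_water brew_coffee)
                ;;&
            "--toast" | "--breakfast")
                hooks+=(boil_water make_toast)  # boil_water added twice on purpose
                ;;&
        esac
    done
    # Deduplicate, as setup_host.sh does with sort -u
    # shellcheck disable=SC2207
    hooks=($(printf '%s\n' "${hooks[@]}" | sort -u))
    # Fixed execution order, mirroring ordered_functions
    local ordered=(boil_water brew_coffee make_toast)
    for f in "${ordered[@]}"; do
        for h in "${hooks[@]}"; do
            [[ "$h" == "$f" ]] && echo "running $f"
        done
    done
    return 0
}

run_hooks --breakfast
```

With `--breakfast`, both branches match, yet `boil_water` still runs exactly once and before the other stages.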

tools/buildroot/.gitignore vendored Normal file

@@ -0,0 +1 @@
buildroot-v2023.11.1-*/


@@ -0,0 +1,17 @@
BR2_aarch64=y
BR2_cortex_a73_a53=y
BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_MDEV=y
BR2_TARGET_GENERIC_ROOT_PASSWD="root"
BR2_SYSTEM_DHCP="eth0"
BR2_ROOTFS_POST_BUILD_SCRIPT="board/arm-power/post-build.sh"
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG=y
BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/arm-power/aarch64/linux.config"
BR2_LINUX_KERNEL_XZ=y
BR2_PACKAGE_OPENSSH=y
# BR2_PACKAGE_OPENSSH_SANDBOX is not set
BR2_PACKAGE_UTIL_LINUX=y
BR2_PACKAGE_UTIL_LINUX_BINARIES=y
BR2_TARGET_ROOTFS_CPIO_XZ=y
BR2_TARGET_ROOTFS_INITRAMFS=y
# BR2_TARGET_ROOTFS_TAR is not set


@@ -0,0 +1,36 @@
CONFIG_SCHED_MC=y
CONFIG_UCLAMP_TASK=y
CONFIG_SCHED_SMT=y
CONFIG_KERNEL_XZ=y
CONFIG_SYSVIPC=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_CGROUPS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="${BR_BINARIES_DIR}/rootfs.cpio"
# CONFIG_RD_GZIP is not set
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
# CONFIG_RD_LZO is not set
# CONFIG_RD_LZ4 is not set
# CONFIG_RD_ZSTD is not set
CONFIG_SMP=y
# CONFIG_GCC_PLUGINS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_PCI=y
CONFIG_PCI_HOST_GENERIC=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_NETDEVICES=y
CONFIG_VIRTIO_NET=y
CONFIG_INPUT_EVDEV=y
CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_VIRTIO_PCI=y
CONFIG_TMPFS=y


@@ -0,0 +1,15 @@
#!/bin/sh
set -eux
# Enable root login on SSH
sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' "${TARGET_DIR}/etc/ssh/sshd_config"
# Increase the number of available channels so that devlib async code can
# exploit concurrency better.
sed -i 's/#MaxSessions.*/MaxSessions 100/' "${TARGET_DIR}/etc/ssh/sshd_config"
sed -i 's/#MaxStartups.*/MaxStartups 100/' "${TARGET_DIR}/etc/ssh/sshd_config"
# To test Android bindings of ChromeOsTarget
mkdir -p "${TARGET_DIR}/opt/google/containers/android"
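
The post-build hook relies on anchored sed substitutions against the image's sshd_config. The same substitutions can be exercised against a scratch copy; the file contents below are invented for illustration, only the sed expressions come from the hook:

```shell
#!/bin/sh
# Run the post-build.sh sed substitutions against a throwaway tree
# instead of a real buildroot ${TARGET_DIR}.
set -eu

TARGET_DIR=$(mktemp -d)
mkdir -p "${TARGET_DIR}/etc/ssh"
cat > "${TARGET_DIR}/etc/ssh/sshd_config" << 'EOF'
#PermitRootLogin prohibit-password
#MaxSessions 10
#MaxStartups 10:30:100
EOF

sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' "${TARGET_DIR}/etc/ssh/sshd_config"
sed -i 's/#MaxSessions.*/MaxSessions 100/' "${TARGET_DIR}/etc/ssh/sshd_config"
sed -i 's/#MaxStartups.*/MaxStartups 100/' "${TARGET_DIR}/etc/ssh/sshd_config"

cat "${TARGET_DIR}/etc/ssh/sshd_config"
```

Each pattern matches the whole commented line (`.*` after the keyword), so the default value after the keyword is replaced along with the leading `#`.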


@@ -0,0 +1,16 @@
BR2_x86_64=y
BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_MDEV=y
BR2_TARGET_GENERIC_ROOT_PASSWD="root"
BR2_SYSTEM_DHCP="eth0"
BR2_ROOTFS_POST_BUILD_SCRIPT="board/arm-power/post-build.sh"
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG=y
BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/arm-power/x86_64/linux.config"
BR2_LINUX_KERNEL_XZ=y
BR2_PACKAGE_OPENSSH=y
# BR2_PACKAGE_OPENSSH_SANDBOX is not set
BR2_PACKAGE_UTIL_LINUX=y
BR2_PACKAGE_UTIL_LINUX_BINARIES=y
BR2_TARGET_ROOTFS_CPIO_XZ=y
BR2_TARGET_ROOTFS_INITRAMFS=y
# BR2_TARGET_ROOTFS_TAR is not set


@@ -0,0 +1,31 @@
CONFIG_KERNEL_XZ=y
CONFIG_SYSVIPC=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_CGROUPS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="${BR_BINARIES_DIR}/rootfs.cpio"
# CONFIG_RD_GZIP is not set
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
# CONFIG_RD_LZO is not set
# CONFIG_RD_LZ4 is not set
# CONFIG_RD_ZSTD is not set
CONFIG_SMP=y
# CONFIG_GCC_PLUGINS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_PCI=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_NETDEVICES=y
CONFIG_VIRTIO_NET=y
CONFIG_INPUT_EVDEV=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_VIRTIO_PCI=y
CONFIG_TMPFS=y


@@ -0,0 +1,120 @@
#!/usr/bin/env bash
#
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2024, ARM Limited and contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Forked from LISA/tools/lisa-buildroot-create-rootfs.
#
set -eu
ARCH="aarch64"
BUILDROOT_URI="git://git.busybox.net/buildroot"
KERNEL_IMAGE_NAME="Image"
function print_usage
{
echo "Usage: ${0} [options]"
echo " options:"
echo " -a: set arch (default is aarch64, x86_64 is also supported)"
echo " -p: purge buildroot to force a fresh build"
echo " -h: print this help message"
}
function set_arch
{
if [[ "${1}" == "aarch64" ]]; then
return 0
elif [[ "${1}" == "x86_64" ]]; then
ARCH="x86_64"
KERNEL_IMAGE_NAME="bzImage"
return 0
fi
return 1
}
# "-a" takes a value, so it is declared as "a:" and read via OPTARG; the
# previous "shift"-based handling confused getopts' own bookkeeping.
while getopts "a:hp" opt; do
case ${opt} in
a)
if ! set_arch "${OPTARG}"; then
echo "Invalid arch \"${OPTARG}\"."
exit 1
fi
;;
p)
# BUILDROOT_DIR is only computed further down, so build the path here
# from the arch parsed so far (use "-a <arch> -p" for non-default arch).
BUILDROOT_VERSION=${BUILDROOT_VERSION:-"2023.11.1"}
rm -rf "$(dirname "$0")/buildroot-v${BUILDROOT_VERSION}-${ARCH}"
exit 0
;;
h)
print_usage
exit 0
;;
*)
print_usage
exit 1
;;
esac
done
# Execute a function only once, using a stamp file to remember that it ran
function do_once
{
FILE="${BUILDROOT_DIR}/.devlib_${1}"
if [ ! -e "${FILE}" ]; then
eval "${1}"
touch "${FILE}"
fi
}
function br_clone
{
git clone -b "${BUILDROOT_VERSION}" -v "${BUILDROOT_URI}" "${BUILDROOT_DIR}"
}
function br_apply_config
{
pushd "${BUILDROOT_DIR}" >/dev/null
mkdir -p "board/arm-power/${ARCH}/"
cp -f "../configs/post-build.sh" "board/arm-power/"
cp -f "../configs/${ARCH}/arm-power_${ARCH}_defconfig" "configs/"
cp -f "../configs/${ARCH}/linux.config" "board/arm-power/${ARCH}/"
make "arm-power_${ARCH}_defconfig"
popd >/dev/null
}
function br_build
{
pushd "${BUILDROOT_DIR}" >/dev/null
make
popd >/dev/null
}
BUILDROOT_VERSION=${BUILDROOT_VERSION:-"2023.11.1"}
BUILDROOT_DIR="$(dirname "$0")/buildroot-v${BUILDROOT_VERSION}-${ARCH}"
do_once br_clone
do_once br_apply_config
br_build
echo "Kernel image \"${BUILDROOT_DIR}/output/images/${KERNEL_IMAGE_NAME}\" is ready."
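
The `do_once` helper above is a stamp-file memoisation: a step runs only if its `.devlib_<step>` marker does not exist yet, so re-running the script skips the clone and configuration stages. A standalone sketch of the same idea (the directory and step names here are illustrative):

```shell
#!/usr/bin/env bash
# Stamp-file memoisation as used by generate-kernel-initrd.sh: a step
# only runs if its ".devlib_<step>" marker file does not exist yet.
set -eu

STAMP_DIR=$(mktemp -d)   # stands in for ${BUILDROOT_DIR}
count=0

expensive_step() {
    count=$((count + 1))
}

do_once() {
    local stamp="${STAMP_DIR}/.devlib_${1}"
    if [ ! -e "${stamp}" ]; then
        eval "${1}"
        touch "${stamp}"
    fi
}

do_once expensive_step
do_once expensive_step   # second call is a no-op: the stamp already exists
echo "expensive_step ran ${count} time(s)"
```

Deleting the stamp directory (which is what the script's `-p` purge does) resets the memoisation and forces a fresh build.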

tools/docker/Dockerfile Normal file

@@ -0,0 +1,77 @@
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2024, ARM Limited and contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This Dockerfile creates an image to run devlib CI tests.
#
# Running the ``docker build -t devlib .`` command in the ``tools/docker``
# directory creates the Docker image.
#
# The image can be run with the ``docker run -it --privileged devlib`` command.
#
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
ENV DEVLIB_REF=master
RUN apt-get update && \
apt-get install -y --no-install-recommends \
aapt \
bc \
bison \
build-essential \
cmake \
cpio \
file \
flex \
git \
libelf-dev \
libncurses5-dev \
libssl-dev \
locales \
python3-pip \
qemu-system-arm \
qemu-system-x86 \
rsync \
sudo \
unzip \
wget \
vim \
xz-utils
RUN apt-get -y autoremove && \
apt-get -y autoclean && \
apt-get clean && \
rm -rf /var/cache/apt
RUN git clone -b ${DEVLIB_REF} -v https://github.com/ARM-software/devlib.git /devlib
RUN cd /devlib && \
pip install --upgrade pip setuptools wheel && \
pip install .[full]
# Set ANDROID_CMDLINE_VERSION environment variable if you want to use a
# specific version of Android command line tools rather than default
# which is ``11076708`` as of writing this comment.
RUN cd /devlib/tools/android && ./setup_host.sh
# Set BUILDROOT_VERSION environment variable if you want to use a specific
# branch of buildroot rather than default which is ``2023.11.1`` as of
# writing this comment.
RUN cd /devlib/tools/buildroot && \
./generate-kernel-initrd.sh && \
./generate-kernel-initrd.sh -a x86_64

tools/docker/run_tests.sh Executable file

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
#
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2024, ARM Limited and contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Prepare the groundwork and run tests/test_target.py on the Docker image.
#
set -eu
ANDROID_HOME="/devlib/tools/android/android-sdk-linux"
export ANDROID_HOME
export ANDROID_USER_HOME="${ANDROID_HOME}/.android"
export ANDROID_EMULATOR_HOME="${ANDROID_HOME}/.android"
export PATH=${ANDROID_HOME}/platform-tools/:${PATH}
EMULATOR="${ANDROID_HOME}/emulator/emulator"
EMULATOR_ARGS="-no-window -no-snapshot -memory 2048"
${EMULATOR} -avd devlib-p6-12 ${EMULATOR_ARGS} &
${EMULATOR} -avd devlib-p6-14 ${EMULATOR_ARGS} &
${EMULATOR} -avd devlib-chromeos ${EMULATOR_ARGS} &
echo "Waiting 30 seconds for the Android virtual devices to finish booting..."
sleep 30
cd /devlib
cp -f tools/docker/test_config.yml tests/
python3 -m pytest --log-cli-level DEBUG ./tests/test_target.py
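
The fixed 30-second sleep is a heuristic; an alternative would be to poll each emulator until it reports boot completion, for example with `adb -s emulator-5554 shell '[ "$(getprop sys.boot_completed)" = 1 ]'` (that exact condition is an assumption, not something run_tests.sh does today). A minimal generic polling helper:

```shell
#!/usr/bin/env bash
# Retry a command once per second until it succeeds or a timeout expires.
# Could replace the fixed sleep with something like:
#   wait_for 120 adb -s emulator-5554 shell '[ "$(getprop sys.boot_completed)" = 1 ]'
# (illustrative only; the adb-based condition is an assumption)

wait_for() {
    local timeout=${1}
    shift
    local elapsed=0
    while ! "$@" > /dev/null 2>&1; do
        if [ "${elapsed}" -ge "${timeout}" ]; then
            return 1
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
}
```

Polling bounds the wait to the slowest device instead of a fixed worst-case delay, and fails loudly when a device never comes up.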


@@ -0,0 +1,46 @@
target-configs:
entry-0:
# Android-12, Pixel-6
AndroidTarget:
timeout: 60
connection_settings:
device: 'emulator-5554'
entry-1:
# Android-14, Pixel-6
AndroidTarget:
connection_settings:
device: 'emulator-5556'
entry-2:
# Android-13, Pixel tablet
AndroidTarget:
connection_settings:
device: 'emulator-5558'
entry-3:
LocalLinuxTarget:
connection_settings:
unrooted: True
entry-4:
# aarch64 target
QEMUTargetRunner:
qemu_settings:
kernel_image: '/devlib/tools/buildroot/buildroot-v2023.11.1-aarch64/output/images/Image'
ChromeOsTarget:
connection_settings:
device: 'emulator-5558'
entry-5:
# x86_64 target
QEMUTargetRunner:
connection_settings:
port: 8023
qemu_settings:
kernel_image: '/devlib/tools/buildroot/buildroot-v2023.11.1-x86_64/output/images/bzImage'
arch: 'x86_64'
cmdline: 'console=ttyS0'
ChromeOsTarget:
connection_settings:
device: 'emulator-5558'