mirror of https://github.com/esphome/esphome.git synced 2025-10-16 08:43:45 +01:00

Merge remote-tracking branch 'upstream/dev' into integration

This commit is contained in:
J. Nick Koston
2025-10-15 09:43:58 -10:00
1910 changed files with 10230 additions and 6269 deletions

View File

@@ -186,6 +186,11 @@ This document provides essential context for AI models interacting with this pro
└── components/[component]/ # Component-specific tests
```
Run them using `script/test_build_components`. Use `-c <component>` to test specific components and `-t <target>` for specific platforms.
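For example, to config-check a single component against one target platform (the component and target names below are placeholders):
```bash
# Validate the test configs for a hypothetical component on a hypothetical target
./script/test_build_components -e config -c uart -t esp32-idf
```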
* **Testing All Components Together:** To verify that all components can be tested together without ID conflicts or configuration issues, use:
```bash
./script/test_component_grouping.py -e config --all
```
This tests all components in a single build to catch conflicts that might not appear when testing components individually. Use `-e config` for fast configuration validation, or `-e compile` for full compilation testing.
* **Debugging and Troubleshooting:**
* **Debug Tools:**
- `esphome config <file>.yaml` to validate configuration.
@@ -216,6 +221,146 @@ This document provides essential context for AI models interacting with this pro
* **Component Development:** Keep dependencies minimal, provide clear error messages, and write comprehensive docstrings and tests.
* **Code Generation:** Generate minimal and efficient C++ code. Validate all user inputs thoroughly. Support multiple platform variations.
* **Configuration Design:** Aim for simplicity with sensible defaults, while allowing for advanced customization.
* **Embedded Systems Optimization:** ESPHome targets resource-constrained microcontrollers. Be mindful of flash size and RAM usage.
**STL Container Guidelines:**
ESPHome runs on embedded systems with limited resources. Choose containers carefully:
1. **Compile-time-known sizes:** Use `std::array` instead of `std::vector` when size is known at compile time.
```cpp
// Bad - generates STL realloc code
std::vector<int> values;
// Good - no dynamic allocation
std::array<int, MAX_VALUES> values;
```
Use `cg.add_define("MAX_VALUES", count)` to set the size from Python configuration.
**For byte buffers:** Avoid `std::vector<uint8_t>` unless the buffer needs to grow. Use `std::unique_ptr<uint8_t[]>` instead.
> **Note:** `std::unique_ptr<uint8_t[]>` does **not** provide bounds checking or iterator support like `std::vector<uint8_t>`. Use it only when you do not need these features and want minimal overhead.
```cpp
// Bad - STL overhead for simple byte buffer
std::vector<uint8_t> buffer;
buffer.resize(256);
// Good - minimal overhead, single allocation
std::unique_ptr<uint8_t[]> buffer = std::make_unique<uint8_t[]>(256);
// Or if size is constant:
std::array<uint8_t, 256> buffer;
```
2. **Compile-time-known fixed sizes with vector-like API:** Use `StaticVector` from `esphome/core/helpers.h` for fixed-size stack allocation with `push_back()` interface.
```cpp
// Bad - generates STL realloc code (_M_realloc_insert)
std::vector<ServiceRecord> services;
services.reserve(5); // Still includes reallocation machinery
// Good - compile-time fixed size, stack allocated, no reallocation machinery
StaticVector<ServiceRecord, MAX_SERVICES> services; // Allocates all MAX_SERVICES on stack
services.push_back(record1); // Tracks count but all slots allocated
```
Use `cg.add_define("MAX_SERVICES", count)` to set the size from Python configuration.
Like `std::array` but with vector-like API (`push_back()`, `size()`) and no STL reallocation code.
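The `cg.add_define(...)` sizing calls mentioned above are driven from the component's Python side. A minimal sketch, assuming a hypothetical `max_services` option in the component's config schema (the option name and schema are illustrative, not an existing component):
```python
import esphome.codegen as cg
import esphome.config_validation as cv

CONF_MAX_SERVICES = "max_services"  # hypothetical option name

CONFIG_SCHEMA = cv.Schema(
    {
        cv.Optional(CONF_MAX_SERVICES, default=5): cv.positive_int,
    }
)


async def to_code(config):
    # Size the StaticVector/std::array at compile time from the validated config
    cg.add_define("MAX_SERVICES", config[CONF_MAX_SERVICES])
```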
3. **Runtime-known sizes:** Use `FixedVector` from `esphome/core/helpers.h` when the size is only known at runtime initialization.
```cpp
// Bad - generates STL realloc code (_M_realloc_insert)
std::vector<TxtRecord> txt_records;
txt_records.reserve(5); // Still includes reallocation machinery
// Good - runtime size, single allocation, no reallocation machinery
FixedVector<TxtRecord> txt_records;
txt_records.init(record_count); // Initialize with exact size at runtime
```
**Benefits:**
- Eliminates `_M_realloc_insert`, `_M_default_append` template instantiations (saves 200-500 bytes per instance)
- Single allocation, no upper bound needed
- No reallocation overhead
- Compatible with protobuf code generation when using `[(fixed_vector) = true]` option
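For reference, the protobuf option looks like this in practice; the snippet mirrors the `api.proto` changes included in this commit (excerpt, trimmed to the relevant field):
```proto
message HomeassistantActionRequest {
  string service = 1;
  // FixedVector-backed repeated field: generated code calls init(size) before appending
  repeated HomeassistantServiceMap data = 2 [(fixed_vector) = true];
}
```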
4. **Small datasets (1-16 elements):** Use `std::vector` or `std::array` with simple structs instead of `std::map`/`std::set`/`std::unordered_map`.
```cpp
// Bad - 2KB+ overhead for red-black tree/hash table
std::map<std::string, int> small_lookup;
std::unordered_map<int, std::string> tiny_map;
// Good - simple struct with linear search (std::vector is fine)
struct LookupEntry {
  const char *key;
  int value;
};
std::vector<LookupEntry> small_lookup = {
    {"key1", 10},
    {"key2", 20},
    {"key3", 30},
};
// Or std::array if size is compile-time constant:
// std::array<LookupEntry, 3> small_lookup = {{ ... }};
```
Linear search over 1-16 elements is often faster than the hashing/tree overhead, though this depends on lookup frequency and access patterns; for frequent lookups in hot code paths, the O(1) vs O(n) difference can still matter even at small sizes. `std::vector` with simple structs is usually fine. It is the heavy containers (`map`, `set`, `unordered_map`) that should be avoided for small datasets unless profiling shows otherwise.
5. **Detection:** Look for these patterns in compiler output:
- Large code sections with STL symbols (vector, map, set)
- `alloc`, `realloc`, `dealloc` in symbol names
- `_M_realloc_insert`, `_M_default_append` (vector reallocation)
- Red-black tree code (`rb_tree`, `_Rb_tree`)
- Hash table infrastructure (`unordered_map`, `hash`)
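One way to spot these symbols is to inspect the linked ELF with the toolchain's `nm`; the toolchain binary name and firmware path below are assumptions and vary by platform and build setup:
```bash
# Demangle (-C), print sizes (-S), sort by size, then filter for STL machinery
xtensa-esp32-elf-nm -C -S --size-sort path/to/firmware.elf \
  | grep -E '_M_realloc_insert|_M_default_append|_Rb_tree|unordered|hash'
```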
**When to optimize:**
- Core components (API, network, logger)
- Widely-used components (mdns, wifi, ble)
- Components causing flash size complaints
**When not to optimize:**
- Single-use niche components
- Code where readability matters more than bytes
- Already using appropriate containers
* **State Management:** Use `CORE.data` for component state that needs to persist during configuration generation. Avoid module-level mutable globals.
**Bad Pattern (Module-Level Globals):**
```python
# Don't do this - state persists between compilation runs
_component_state = []
_use_feature = None


def enable_feature():
    global _use_feature
    _use_feature = True
```
**Good Pattern (CORE.data with Helpers):**
```python
from esphome.core import CORE

# Keys for CORE.data storage
COMPONENT_STATE_KEY = "my_component_state"
USE_FEATURE_KEY = "my_component_use_feature"


def _get_component_state() -> list:
    """Get component state from CORE.data."""
    return CORE.data.setdefault(COMPONENT_STATE_KEY, [])


def _get_use_feature() -> bool | None:
    """Get feature flag from CORE.data."""
    return CORE.data.get(USE_FEATURE_KEY)


def _set_use_feature(value: bool) -> None:
    """Set feature flag in CORE.data."""
    CORE.data[USE_FEATURE_KEY] = value


def enable_feature():
    _set_use_feature(True)
```
**Why this matters:**
- Module-level globals persist between compilation runs if the dashboard doesn't fork/exec
- `CORE.data` automatically clears between runs
- Typed helper functions provide better IDE support and maintainability
- Encapsulation makes state management explicit and testable
* **Security:** Be mindful of security when making changes to the API, web server, or any other network-related code. Do not hardcode secrets or keys.

View File

@@ -1 +1 @@
049d60eed541730efaa4c0dc5d337b4287bf29b6daa350b5dfc1f23915f1c52f
d7693a1e996cacd4a3d1c9a16336799c2a8cc3db02e4e74084151ce964581248

View File

@@ -114,8 +114,7 @@ jobs:
matrix:
python-version:
- "3.11"
- "3.12"
- "3.13"
- "3.14"
os:
- ubuntu-latest
- macOS-latest
@@ -124,13 +123,9 @@ jobs:
# Minimize CI resource usage
# by only running the Python version
# version used for docker images on Windows and macOS
- python-version: "3.13"
- python-version: "3.14"
os: windows-latest
- python-version: "3.12"
os: windows-latest
- python-version: "3.13"
os: macOS-latest
- python-version: "3.12"
- python-version: "3.14"
os: macOS-latest
runs-on: ${{ matrix.os }}
needs:
@@ -177,6 +172,8 @@ jobs:
clang-tidy: ${{ steps.determine.outputs.clang-tidy }}
python-linters: ${{ steps.determine.outputs.python-linters }}
changed-components: ${{ steps.determine.outputs.changed-components }}
changed-components-with-tests: ${{ steps.determine.outputs.changed-components-with-tests }}
directly-changed-components-with-tests: ${{ steps.determine.outputs.directly-changed-components-with-tests }}
component-test-count: ${{ steps.determine.outputs.component-test-count }}
steps:
- name: Check out code from GitHub
@@ -204,6 +201,8 @@ jobs:
echo "clang-tidy=$(echo "$output" | jq -r '.clang_tidy')" >> $GITHUB_OUTPUT
echo "python-linters=$(echo "$output" | jq -r '.python_linters')" >> $GITHUB_OUTPUT
echo "changed-components=$(echo "$output" | jq -c '.changed_components')" >> $GITHUB_OUTPUT
echo "changed-components-with-tests=$(echo "$output" | jq -c '.changed_components_with_tests')" >> $GITHUB_OUTPUT
echo "directly-changed-components-with-tests=$(echo "$output" | jq -c '.directly_changed_components_with_tests')" >> $GITHUB_OUTPUT
echo "component-test-count=$(echo "$output" | jq -r '.component_test_count')" >> $GITHUB_OUTPUT
integration-tests:
@@ -356,79 +355,64 @@ jobs:
# yamllint disable-line rule:line-length
if: always()
test-build-components:
name: Component test ${{ matrix.file }}
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0 && fromJSON(needs.determine-jobs.outputs.component-test-count) < 100
strategy:
fail-fast: false
max-parallel: 2
matrix:
file: ${{ fromJson(needs.determine-jobs.outputs.changed-components) }}
steps:
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install libsdl2-dev
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: test_build_components -e config -c ${{ matrix.file }}
run: |
. venv/bin/activate
./script/test_build_components -e config -c ${{ matrix.file }}
- name: test_build_components -e compile -c ${{ matrix.file }}
run: |
. venv/bin/activate
./script/test_build_components -e compile -c ${{ matrix.file }}
test-build-components-splitter:
name: Split components for testing into 10 components per group
name: Split components for intelligent grouping (40 weighted per batch)
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) >= 100
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0
outputs:
matrix: ${{ steps.split.outputs.components }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Split components into groups of 10
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Split components intelligently based on bus configurations
id: split
run: |
components=$(echo '${{ needs.determine-jobs.outputs.changed-components }}' | jq -c '.[]' | shuf | jq -s -c '[_nwise(10) | join(" ")]')
echo "components=$components" >> $GITHUB_OUTPUT
. venv/bin/activate
# Use intelligent splitter that groups components with same bus configs
components='${{ needs.determine-jobs.outputs.changed-components-with-tests }}'
directly_changed='${{ needs.determine-jobs.outputs.directly-changed-components-with-tests }}'
echo "Splitting components intelligently..."
output=$(python3 script/split_components_for_ci.py --components "$components" --directly-changed "$directly_changed" --batch-size 40 --output github)
echo "$output" >> $GITHUB_OUTPUT
test-build-components-split:
name: Test split components
name: Test components batch (${{ matrix.components }})
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
- test-build-components-splitter
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) >= 100
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0
strategy:
fail-fast: false
max-parallel: 4
max-parallel: ${{ (github.base_ref == 'beta' || github.base_ref == 'release') && 8 || 4 }}
matrix:
components: ${{ fromJson(needs.test-build-components-splitter.outputs.matrix) }}
steps:
- name: Show disk space
run: |
echo "Available disk space:"
df -h
- name: List components
run: echo ${{ matrix.components }}
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install libsdl2-dev
- name: Cache apt packages
uses: awalsh128/cache-apt-pkgs-action@acb598e5ddbc6f68a970c5da0688d2f3a9f04d05 # v1.5.3
with:
packages: libsdl2-dev
version: 1.0
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
@@ -437,20 +421,53 @@ jobs:
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Validate config
- name: Validate and compile components with intelligent grouping
run: |
. venv/bin/activate
for component in ${{ matrix.components }}; do
./script/test_build_components -e config -c $component
done
- name: Compile config
run: |
. venv/bin/activate
mkdir build_cache
export PLATFORMIO_BUILD_CACHE_DIR=$PWD/build_cache
for component in ${{ matrix.components }}; do
./script/test_build_components -e compile -c $component
done
# Use /mnt for build files (70GB available vs ~29GB on /)
# Bind mount PlatformIO directory to /mnt (tools, packages, build cache all go there)
sudo mkdir -p /mnt/platformio
sudo chown $USER:$USER /mnt/platformio
mkdir -p ~/.platformio
sudo mount --bind /mnt/platformio ~/.platformio
# Bind mount test build directory to /mnt
sudo mkdir -p /mnt/test_build_components_build
sudo chown $USER:$USER /mnt/test_build_components_build
mkdir -p tests/test_build_components/build
sudo mount --bind /mnt/test_build_components_build tests/test_build_components/build
# Convert space-separated components to comma-separated for Python script
components_csv=$(echo "${{ matrix.components }}" | tr ' ' ',')
# Only isolate directly changed components when targeting dev branch
# For beta/release branches, group everything for faster CI
#
# WHY ISOLATE DIRECTLY CHANGED COMPONENTS?
# - Isolated tests run WITHOUT --testing-mode, enabling full validation
# - This catches pin conflicts and other issues in directly changed code
# - Grouped tests use --testing-mode to allow config merging (disables some checks)
# - Dependencies are safe to group since they weren't modified in this PR
if [ "${{ github.base_ref }}" = "beta" ] || [ "${{ github.base_ref }}" = "release" ]; then
directly_changed_csv=""
echo "Testing components: $components_csv"
echo "Target branch: ${{ github.base_ref }} - grouping all components"
else
directly_changed_csv=$(echo '${{ needs.determine-jobs.outputs.directly-changed-components-with-tests }}' | jq -r 'join(",")')
echo "Testing components: $components_csv"
echo "Target branch: ${{ github.base_ref }} - isolating directly changed components: $directly_changed_csv"
fi
echo ""
# Run config validation with grouping and isolation
python3 script/test_build_components.py -e config -c "$components_csv" -f --isolate "$directly_changed_csv"
echo ""
echo "Config validation passed! Starting compilation..."
echo ""
# Run compilation with grouping and isolation
python3 script/test_build_components.py -e compile -c "$components_csv" -f --isolate "$directly_changed_csv"
pre-commit-ci-lite:
name: pre-commit.ci lite
@@ -483,7 +500,6 @@ jobs:
- integration-tests
- clang-tidy
- determine-jobs
- test-build-components
- test-build-components-splitter
- test-build-components-split
- pre-commit-ci-lite

View File

@@ -58,7 +58,7 @@ jobs:
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@e296a935590eb16afc0c0108289f68c87e2a89a5 # v4.30.7
uses: github/codeql-action/init@f443b600d91635bebf5b0d9ebc620189c0d6fba5 # v4.30.8
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -86,6 +86,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@e296a935590eb16afc0c0108289f68c87e2a89a5 # v4.30.7
uses: github/codeql-action/analyze@f443b600d91635bebf5b0d9ebc620189c0d6fba5 # v4.30.8
with:
category: "/language:${{matrix.language}}"

View File

@@ -268,8 +268,10 @@ def has_ip_address() -> bool:
def has_resolvable_address() -> bool:
"""Check if CORE.address is resolvable (via mDNS or is an IP address)."""
return has_mdns() or has_ip_address()
"""Check if CORE.address is resolvable (via mDNS, DNS, or is an IP address)."""
# Any address (IP, mDNS hostname, or regular DNS hostname) is resolvable
# The resolve_ip_address() function in helpers.py handles all types via AsyncResolver
return CORE.address is not None
def mqtt_get_ip(config: ConfigType, username: str, password: str, client_id: str):
@@ -578,11 +580,12 @@ def show_logs(config: ConfigType, args: ArgsProtocol, devices: list[str]) -> int
if has_api():
addresses_to_use: list[str] | None = None
if port_type == "NETWORK" and (has_mdns() or is_ip_address(port)):
if port_type == "NETWORK":
# Network addresses (IPs, mDNS names, or regular DNS hostnames) can be used
# The resolve_ip_address() function in helpers.py handles all types
addresses_to_use = devices
elif port_type in ("NETWORK", "MQTT", "MQTTIP") and has_mqtt_ip_lookup():
# Only use MQTT IP lookup if the first condition didn't match
# (for MQTT/MQTTIP types, or for NETWORK when mdns/ip check fails)
elif port_type in ("MQTT", "MQTTIP") and has_mqtt_ip_lookup():
# Use MQTT IP lookup for MQTT/MQTTIP types
addresses_to_use = mqtt_get_ip(
config, args.username, args.password, args.client_id
)
@@ -1009,6 +1012,12 @@ def parse_args(argv):
action="append",
default=[],
)
options_parser.add_argument(
"--testing-mode",
help="Enable testing mode (disables validation checks for grouped component testing)",
action="store_true",
default=False,
)
parser = argparse.ArgumentParser(
description=f"ESPHome {const.__version__}", parents=[options_parser]
@@ -1278,6 +1287,7 @@ def run_esphome(argv):
args = parse_args(argv)
CORE.dashboard = args.dashboard
CORE.testing_mode = args.testing_mode
# Create address cache from command-line arguments
CORE.address_cache = AddressCache.from_cli_args(

View File

@@ -776,9 +776,9 @@ message HomeassistantActionRequest {
option (ifdef) = "USE_API_HOMEASSISTANT_SERVICES";
string service = 1;
repeated HomeassistantServiceMap data = 2;
repeated HomeassistantServiceMap data_template = 3;
repeated HomeassistantServiceMap variables = 4;
repeated HomeassistantServiceMap data = 2 [(fixed_vector) = true];
repeated HomeassistantServiceMap data_template = 3 [(fixed_vector) = true];
repeated HomeassistantServiceMap variables = 4 [(fixed_vector) = true];
bool is_event = 5;
uint32 call_id = 6 [(field_ifdef) = "USE_API_HOMEASSISTANT_ACTION_RESPONSES"];
bool wants_response = 7 [(field_ifdef) = "USE_API_HOMEASSISTANT_ACTION_RESPONSES_JSON"];
@@ -1519,7 +1519,7 @@ message BluetoothGATTCharacteristic {
repeated uint64 uuid = 1 [(fixed_array_size) = 2, (fixed_array_skip_zero) = true];
uint32 handle = 2;
uint32 properties = 3;
repeated BluetoothGATTDescriptor descriptors = 4;
repeated BluetoothGATTDescriptor descriptors = 4 [(fixed_vector) = true];
// New field for efficient UUID (v1.12+)
// Only one of uuid or short_uuid will be set.
@@ -1531,7 +1531,7 @@ message BluetoothGATTCharacteristic {
message BluetoothGATTService {
repeated uint64 uuid = 1 [(fixed_array_size) = 2, (fixed_array_skip_zero) = true];
uint32 handle = 2;
repeated BluetoothGATTCharacteristic characteristics = 3;
repeated BluetoothGATTCharacteristic characteristics = 3 [(fixed_vector) = true];
// New field for efficient UUID (v1.12+)
// Only one of uuid or short_uuid will be set.

View File

@@ -64,4 +64,10 @@ extend google.protobuf.FieldOptions {
// This is typically done through methods returning const T& or special accessor
// methods like get_options() or supported_modes_for_api_().
optional string container_pointer = 50001;
// fixed_vector: Use FixedVector instead of std::vector for repeated fields
// When set, the repeated field will use FixedVector<T> which requires calling
// init(size) before adding elements. This eliminates std::vector template overhead
// and is ideal when the exact size is known before populating the array.
optional bool fixed_vector = 50013 [default=false];
}

View File

@@ -1110,9 +1110,9 @@ class HomeassistantActionRequest final : public ProtoMessage {
#endif
StringRef service_ref_{};
void set_service(const StringRef &ref) { this->service_ref_ = ref; }
std::vector<HomeassistantServiceMap> data{};
std::vector<HomeassistantServiceMap> data_template{};
std::vector<HomeassistantServiceMap> variables{};
FixedVector<HomeassistantServiceMap> data{};
FixedVector<HomeassistantServiceMap> data_template{};
FixedVector<HomeassistantServiceMap> variables{};
bool is_event{false};
#ifdef USE_API_HOMEASSISTANT_ACTION_RESPONSES
uint32_t call_id{0};
@@ -1923,7 +1923,7 @@ class BluetoothGATTCharacteristic final : public ProtoMessage {
std::array<uint64_t, 2> uuid{};
uint32_t handle{0};
uint32_t properties{0};
std::vector<BluetoothGATTDescriptor> descriptors{};
FixedVector<BluetoothGATTDescriptor> descriptors{};
uint32_t short_uuid{0};
void encode(ProtoWriteBuffer buffer) const override;
void calculate_size(ProtoSize &size) const override;
@@ -1937,7 +1937,7 @@ class BluetoothGATTService final : public ProtoMessage {
public:
std::array<uint64_t, 2> uuid{};
uint32_t handle{0};
std::vector<BluetoothGATTCharacteristic> characteristics{};
FixedVector<BluetoothGATTCharacteristic> characteristics{};
uint32_t short_uuid{0};
void encode(ProtoWriteBuffer buffer) const override;
void calculate_size(ProtoSize &size) const override;

View File

@@ -201,9 +201,9 @@ class CustomAPIDevice {
void call_homeassistant_service(const std::string &service_name, const std::map<std::string, std::string> &data) {
HomeassistantActionRequest resp;
resp.set_service(StringRef(service_name));
resp.data.init(data.size());
for (auto &it : data) {
resp.data.emplace_back();
auto &kv = resp.data.back();
auto &kv = resp.data.emplace_back();
kv.set_key(StringRef(it.first));
kv.value = it.second;
}
@@ -244,9 +244,9 @@ class CustomAPIDevice {
HomeassistantActionRequest resp;
resp.set_service(StringRef(service_name));
resp.is_event = true;
resp.data.init(data.size());
for (auto &it : data) {
resp.data.emplace_back();
auto &kv = resp.data.back();
auto &kv = resp.data.emplace_back();
kv.set_key(StringRef(it.first));
kv.value = it.second;
}

View File

@@ -127,24 +127,9 @@ template<typename... Ts> class HomeAssistantServiceCallAction : public Action<Ts
std::string service_value = this->service_.value(x...);
resp.set_service(StringRef(service_value));
resp.is_event = this->flags_.is_event;
for (auto &it : this->data_) {
resp.data.emplace_back();
auto &kv = resp.data.back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
for (auto &it : this->data_template_) {
resp.data_template.emplace_back();
auto &kv = resp.data_template.back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
for (auto &it : this->variables_) {
resp.variables.emplace_back();
auto &kv = resp.variables.back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
this->populate_service_map(resp.data, this->data_, x...);
this->populate_service_map(resp.data_template, this->data_template_, x...);
this->populate_service_map(resp.variables, this->variables_, x...);
#ifdef USE_API_HOMEASSISTANT_ACTION_RESPONSES
if (this->flags_.wants_status) {
@@ -189,6 +174,16 @@ template<typename... Ts> class HomeAssistantServiceCallAction : public Action<Ts
}
protected:
template<typename VectorType, typename SourceType>
static void populate_service_map(VectorType &dest, SourceType &source, Ts... x) {
dest.init(source.size());
for (auto &it : source) {
auto &kv = dest.emplace_back();
kv.set_key(StringRef(it.key));
kv.value = it.value.value(x...);
}
}
APIServer *parent_;
TemplatableStringValue<Ts...> service_{};
std::vector<TemplatableKeyValuePair<Ts...>> data_;

View File

@@ -749,13 +749,29 @@ class ProtoSize {
template<typename MessageType>
inline void add_repeated_message(uint32_t field_id_size, const std::vector<MessageType> &messages) {
// Skip if the vector is empty
if (messages.empty()) {
return;
if (!messages.empty()) {
// Use the force version for all messages in the repeated field
for (const auto &message : messages) {
add_message_object_force(field_id_size, message);
}
}
}
// Use the force version for all messages in the repeated field
for (const auto &message : messages) {
add_message_object_force(field_id_size, message);
/**
* @brief Calculates and adds the sizes of all messages in a repeated field to the total message size (FixedVector
* version)
*
* @tparam MessageType The type of the nested messages in the FixedVector
* @param messages FixedVector of message objects
*/
template<typename MessageType>
inline void add_repeated_message(uint32_t field_id_size, const FixedVector<MessageType> &messages) {
// Skip if the fixed vector is empty
if (!messages.empty()) {
// Use the force version for all messages in the repeated field
for (const auto &message : messages) {
add_message_object_force(field_id_size, message);
}
}
}
};

View File

@@ -230,8 +230,8 @@ void BluetoothConnection::send_service_for_discovery_() {
service_resp.handle = service_result.start_handle;
if (total_char_count > 0) {
// Reserve space and process characteristics
service_resp.characteristics.reserve(total_char_count);
// Initialize FixedVector with exact count and process characteristics
service_resp.characteristics.init(total_char_count);
uint16_t char_offset = 0;
esp_gattc_char_elem_t char_result;
while (true) { // characteristics
@@ -253,9 +253,7 @@ void BluetoothConnection::send_service_for_discovery_() {
service_resp.characteristics.emplace_back();
auto &characteristic_resp = service_resp.characteristics.back();
fill_gatt_uuid(characteristic_resp.uuid, characteristic_resp.short_uuid, char_result.uuid, use_efficient_uuids);
characteristic_resp.handle = char_result.char_handle;
characteristic_resp.properties = char_result.properties;
char_offset++;
@@ -271,12 +269,11 @@ void BluetoothConnection::send_service_for_discovery_() {
return;
}
if (total_desc_count == 0) {
// No descriptors, continue to next characteristic
continue;
}
// Reserve space and process descriptors
characteristic_resp.descriptors.reserve(total_desc_count);
// Initialize FixedVector with exact count and process descriptors
characteristic_resp.descriptors.init(total_desc_count);
uint16_t desc_offset = 0;
esp_gattc_descr_elem_t desc_result;
while (true) { // descriptors
@@ -297,9 +294,7 @@ void BluetoothConnection::send_service_for_discovery_() {
characteristic_resp.descriptors.emplace_back();
auto &descriptor_resp = characteristic_resp.descriptors.back();
fill_gatt_uuid(descriptor_resp.uuid, descriptor_resp.short_uuid, desc_result.uuid, use_efficient_uuids);
descriptor_resp.handle = desc_result.handle;
desc_offset++;
}

View File

@@ -16,7 +16,9 @@
#include "bluetooth_connection.h"
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_bt_device.h>
namespace esphome::bluetooth_proxy {

View File

@@ -324,9 +324,9 @@ def _is_framework_url(source: str) -> str:
# The default/recommended arduino framework version
# - https://github.com/espressif/arduino-esp32/releases
ARDUINO_FRAMEWORK_VERSION_LOOKUP = {
"recommended": cv.Version(3, 2, 1),
"latest": cv.Version(3, 3, 1),
"dev": cv.Version(3, 3, 1),
"recommended": cv.Version(3, 3, 2),
"latest": cv.Version(3, 3, 2),
"dev": cv.Version(3, 3, 2),
}
ARDUINO_PLATFORM_VERSION_LOOKUP = {
cv.Version(3, 3, 2): cv.Version(55, 3, 31, "1"),
@@ -343,7 +343,7 @@ ARDUINO_PLATFORM_VERSION_LOOKUP = {
# The default/recommended esp-idf framework version
# - https://github.com/espressif/esp-idf/releases
ESP_IDF_FRAMEWORK_VERSION_LOOKUP = {
"recommended": cv.Version(5, 4, 2),
"recommended": cv.Version(5, 5, 1),
"latest": cv.Version(5, 5, 1),
"dev": cv.Version(5, 5, 1),
}
@@ -363,7 +363,7 @@ ESP_IDF_PLATFORM_VERSION_LOOKUP = {
# The platform-espressif32 version
# - https://github.com/pioarduino/platform-espressif32/releases
PLATFORM_VERSION_LOOKUP = {
"recommended": cv.Version(54, 3, 21, "2"),
"recommended": cv.Version(55, 3, 31, "1"),
"latest": cv.Version(55, 3, 31, "1"),
"dev": cv.Version(55, 3, 31, "1"),
}
@@ -544,6 +544,7 @@ CONF_ENABLE_LWIP_MDNS_QUERIES = "enable_lwip_mdns_queries"
CONF_ENABLE_LWIP_BRIDGE_INTERFACE = "enable_lwip_bridge_interface"
CONF_ENABLE_LWIP_TCPIP_CORE_LOCKING = "enable_lwip_tcpip_core_locking"
CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY = "enable_lwip_check_thread_safety"
CONF_DISABLE_LIBC_LOCKS_IN_IRAM = "disable_libc_locks_in_iram"
def _validate_idf_component(config: ConfigType) -> ConfigType:
@@ -606,6 +607,9 @@ FRAMEWORK_SCHEMA = cv.All(
cv.Optional(
CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY, default=True
): cv.boolean,
cv.Optional(
CONF_DISABLE_LIBC_LOCKS_IN_IRAM, default=True
): cv.boolean,
cv.Optional(CONF_EXECUTE_FROM_PSRAM): cv.boolean,
}
),
@@ -864,6 +868,12 @@ async def to_code(config):
if advanced.get(CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY, True):
add_idf_sdkconfig_option("CONFIG_LWIP_CHECK_THREAD_SAFETY", True)
# Disable placing libc locks in IRAM to save RAM
# This is safe for ESPHome since no IRAM ISRs (interrupts that run while cache is disabled)
# use libc lock APIs. Saves approximately 1.3KB (1,356 bytes) of IRAM.
if advanced.get(CONF_DISABLE_LIBC_LOCKS_IN_IRAM, True):
add_idf_sdkconfig_option("CONFIG_LIBC_LOCKS_PLACE_IN_IRAM", False)
cg.add_platformio_option("board_build.partitions", "partitions.csv")
if CONF_PARTITIONS in config:
add_extra_build_file(

View File

@@ -1564,6 +1564,10 @@ BOARDS = {
"name": "DFRobot Beetle ESP32-C3",
"variant": VARIANT_ESP32C3,
},
"dfrobot_firebeetle2_esp32c6": {
"name": "DFRobot FireBeetle 2 ESP32-C6",
"variant": VARIANT_ESP32C6,
},
"dfrobot_firebeetle2_esp32e": {
"name": "DFRobot Firebeetle 2 ESP32-E",
"variant": VARIANT_ESP32,
@@ -1604,6 +1608,22 @@ BOARDS = {
"name": "Ai-Thinker ESP-C3-M1-I-Kit",
"variant": VARIANT_ESP32C3,
},
"esp32-c5-devkitc-1": {
"name": "Espressif ESP32-C5-DevKitC-1 4MB no PSRAM",
"variant": VARIANT_ESP32C5,
},
"esp32-c5-devkitc1-n16r4": {
"name": "Espressif ESP32-C5-DevKitC-1 N16R4 (16 MB Flash Quad, 4 MB PSRAM Quad)",
"variant": VARIANT_ESP32C5,
},
"esp32-c5-devkitc1-n4": {
"name": "Espressif ESP32-C5-DevKitC-1 N4 (4MB no PSRAM)",
"variant": VARIANT_ESP32C5,
},
"esp32-c5-devkitc1-n8r4": {
"name": "Espressif ESP32-C5-DevKitC-1 N8R4 (8 MB Flash Quad, 4 MB PSRAM Quad)",
"variant": VARIANT_ESP32C5,
},
"esp32-c6-devkitc-1": {
"name": "Espressif ESP32-C6-DevKitC-1",
"variant": VARIANT_ESP32C6,
@@ -2048,6 +2068,10 @@ BOARDS = {
"name": "M5Stack Station",
"variant": VARIANT_ESP32,
},
"m5stack-tab5-p4": {
"name": "M5STACK Tab5 esp32-p4 Board",
"variant": VARIANT_ESP32P4,
},
"m5stack-timer-cam": {
"name": "M5Stack Timer CAM",
"variant": VARIANT_ESP32,
@@ -2476,6 +2500,10 @@ BOARDS = {
"name": "YelloByte YB-ESP32-S3-AMP (Rev.3)",
"variant": VARIANT_ESP32S3,
},
"yb_esp32s3_drv": {
"name": "YelloByte YB-ESP32-S3-DRV",
"variant": VARIANT_ESP32S3,
},
"yb_esp32s3_eth": {
"name": "YelloByte YB-ESP32-S3-ETH",
"variant": VARIANT_ESP32S3,

View File

@@ -1,4 +1,5 @@
from collections.abc import Callable, MutableMapping
from dataclasses import dataclass
from enum import Enum
import logging
import re
@@ -16,7 +17,7 @@ from esphome.const import (
CONF_NAME,
CONF_NAME_ADD_MAC_SUFFIX,
)
from esphome.core import CORE, TimePeriod
from esphome.core import CORE, CoroPriority, TimePeriod, coroutine_with_priority
import esphome.final_validate as fv
DEPENDENCIES = ["esp32"]
@@ -111,6 +112,58 @@ class BTLoggers(Enum):
_required_loggers: set[BTLoggers] = set()
# Dataclass for handler registration counts
@dataclass
class HandlerCounts:
gap_event: int = 0
gap_scan_event: int = 0
gattc_event: int = 0
gatts_event: int = 0
ble_status_event: int = 0
# Track handler registration counts for StaticVector sizing
_handler_counts = HandlerCounts()
def register_gap_event_handler(parent_var: cg.MockObj, handler_var: cg.MockObj) -> None:
"""Register a GAP event handler and track the count."""
_handler_counts.gap_event += 1
cg.add(parent_var.register_gap_event_handler(handler_var))
def register_gap_scan_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a GAP scan event handler and track the count."""
_handler_counts.gap_scan_event += 1
cg.add(parent_var.register_gap_scan_event_handler(handler_var))
def register_gattc_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a GATTc event handler and track the count."""
_handler_counts.gattc_event += 1
cg.add(parent_var.register_gattc_event_handler(handler_var))
def register_gatts_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a GATTs event handler and track the count."""
_handler_counts.gatts_event += 1
cg.add(parent_var.register_gatts_event_handler(handler_var))
def register_ble_status_event_handler(
parent_var: cg.MockObj, handler_var: cg.MockObj
) -> None:
"""Register a BLE status event handler and track the count."""
_handler_counts.ble_status_event += 1
cg.add(parent_var.register_ble_status_event_handler(handler_var))
def register_bt_logger(*loggers: BTLoggers) -> None:
"""Register Bluetooth logger categories that a component needs.
@@ -285,6 +338,10 @@ def consume_connection_slots(
def validate_connection_slots(max_connections: int) -> None:
"""Validate that BLE connection slots don't exceed the configured maximum."""
# Skip validation in testing mode to allow component grouping
if CORE.testing_mode:
return
ble_data = CORE.data.get(KEY_ESP32_BLE, {})
used_slots = ble_data.get(KEY_USED_CONNECTION_SLOTS, [])
num_used = len(used_slots)
@@ -330,14 +387,27 @@ def final_validation(config):
max_connections = config.get(CONF_MAX_CONNECTIONS, DEFAULT_MAX_CONNECTIONS)
validate_connection_slots(max_connections)
# Check if hosted bluetooth is being used
if "esp32_hosted" in full_config:
add_idf_sdkconfig_option("CONFIG_BT_CLASSIC_ENABLED", False)
add_idf_sdkconfig_option("CONFIG_BT_BLE_ENABLED", True)
add_idf_sdkconfig_option("CONFIG_BT_BLUEDROID_ENABLED", True)
add_idf_sdkconfig_option("CONFIG_BT_CONTROLLER_DISABLED", True)
add_idf_sdkconfig_option("CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID", True)
add_idf_sdkconfig_option("CONFIG_ESP_HOSTED_BLUEDROID_HCI_VHCI", True)
# Check if BLE Server is needed
has_ble_server = "esp32_ble_server" in full_config
add_idf_sdkconfig_option("CONFIG_BT_GATTS_ENABLE", has_ble_server)
# Check if BLE Client is needed (via esp32_ble_tracker or esp32_ble_client)
has_ble_client = (
"esp32_ble_tracker" in full_config or "esp32_ble_client" in full_config
)
# ESP-IDF BLE stack requires GATT Server to be enabled when GATT Client is enabled
# This is an internal dependency in the Bluedroid stack (tested ESP-IDF 5.4.2-5.5.1)
# See: https://github.com/espressif/esp-idf/issues/17724
add_idf_sdkconfig_option("CONFIG_BT_GATTS_ENABLE", has_ble_server or has_ble_client)
add_idf_sdkconfig_option("CONFIG_BT_GATTC_ENABLE", has_ble_client)
# Handle max_connections: check for deprecated location in esp32_ble_tracker
@@ -366,6 +436,36 @@ def final_validation(config):
FINAL_VALIDATE_SCHEMA = final_validation
# This needs to be run as a job with CoroPriority.FINAL priority so that all components have
# a chance to register their handlers before the counts are added to defines.
@coroutine_with_priority(CoroPriority.FINAL)
async def _add_ble_handler_defines():
# Add defines for StaticVector sizing based on handler registration counts
# Only define if count > 0 to avoid allocating unnecessary memory
if _handler_counts.gap_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT", _handler_counts.gap_event
)
if _handler_counts.gap_scan_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT",
_handler_counts.gap_scan_event,
)
if _handler_counts.gattc_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT", _handler_counts.gattc_event
)
if _handler_counts.gatts_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT", _handler_counts.gatts_event
)
if _handler_counts.ble_status_event > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT",
_handler_counts.ble_status_event,
)
async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID])
cg.add(var.set_enable_on_boot(config[CONF_ENABLE_ON_BOOT]))
@@ -420,6 +520,9 @@ async def to_code(config):
cg.add_define("USE_ESP32_BLE_ADVERTISING")
cg.add_define("USE_ESP32_BLE_UUID")
# Schedule the handler defines to be added after all components register
CORE.add_job(_add_ble_handler_defines)
@automation.register_condition("ble.enabled", BLEEnabledCondition, cv.Schema({}))
async def ble_enabled_to_code(config, condition_id, template_arg, args):

View File

@@ -6,7 +6,15 @@
#include "esphome/core/helpers.h"
#include "esphome/core/log.h"
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#else
extern "C" {
#include <esp_hosted.h>
#include <esp_hosted_misc.h>
#include <esp_hosted_bluedroid.h>
}
#endif
#include <esp_bt_device.h>
#include <esp_bt_main.h>
#include <esp_gap_ble_api.h>
@@ -140,6 +148,7 @@ void ESP32BLE::advertising_init_() {
bool ESP32BLE::ble_setup_() {
esp_err_t err;
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#ifdef USE_ARDUINO
if (!btStart()) {
ESP_LOGE(TAG, "btStart failed: %d", esp_bt_controller_get_status());
@@ -173,6 +182,28 @@ bool ESP32BLE::ble_setup_() {
#endif
esp_bt_controller_mem_release(ESP_BT_MODE_CLASSIC_BT);
#else
esp_hosted_connect_to_slave(); // NOLINT
if (esp_hosted_bt_controller_init() != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_init failed");
return false;
}
if (esp_hosted_bt_controller_enable() != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_enable failed");
return false;
}
hosted_hci_bluedroid_open();
esp_bluedroid_hci_driver_operations_t operations = {
.send = hosted_hci_bluedroid_send,
.check_send_available = hosted_hci_bluedroid_check_send_available,
.register_host_callback = hosted_hci_bluedroid_register_host_callback,
};
esp_bluedroid_attach_hci_driver(&operations);
#endif
err = esp_bluedroid_init();
if (err != ESP_OK) {
@@ -185,31 +216,27 @@ bool ESP32BLE::ble_setup_() {
return false;
}
if (!this->gap_event_handlers_.empty()) {
err = esp_ble_gap_register_callback(ESP32BLE::gap_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gap_register_callback failed: %d", err);
return false;
}
}
#ifdef USE_ESP32_BLE_SERVER
if (!this->gatts_event_handlers_.empty()) {
err = esp_ble_gatts_register_callback(ESP32BLE::gatts_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gatts_register_callback failed: %d", err);
return false;
}
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
err = esp_ble_gap_register_callback(ESP32BLE::gap_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gap_register_callback failed: %d", err);
return false;
}
#endif
#ifdef USE_ESP32_BLE_CLIENT
if (!this->gattc_event_handlers_.empty()) {
err = esp_ble_gattc_register_callback(ESP32BLE::gattc_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gattc_register_callback failed: %d", err);
return false;
}
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
err = esp_ble_gatts_register_callback(ESP32BLE::gatts_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gatts_register_callback failed: %d", err);
return false;
}
#endif
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
err = esp_ble_gattc_register_callback(ESP32BLE::gattc_event_handler);
if (err != ESP_OK) {
ESP_LOGE(TAG, "esp_ble_gattc_register_callback failed: %d", err);
return false;
}
#endif
@@ -217,8 +244,11 @@ bool ESP32BLE::ble_setup_() {
if (this->name_.has_value()) {
name = this->name_.value();
if (App.is_name_add_mac_suffix_enabled()) {
name += "-";
name += get_mac_address().substr(6);
// MAC address suffix length (last 6 characters of 12-char MAC address string)
constexpr size_t mac_address_suffix_len = 6;
const std::string mac_addr = get_mac_address();
const char *mac_suffix_ptr = mac_addr.c_str() + mac_address_suffix_len;
name = make_name_with_suffix(name, '-', mac_suffix_ptr, mac_address_suffix_len);
}
} else {
name = App.get_name();
@@ -262,6 +292,7 @@ bool ESP32BLE::ble_dismantle_() {
return false;
}
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#ifdef USE_ARDUINO
if (!btStop()) {
ESP_LOGE(TAG, "btStop failed: %d", esp_bt_controller_get_status());
@@ -291,6 +322,19 @@ bool ESP32BLE::ble_dismantle_() {
return false;
}
}
#endif
#else
if (esp_hosted_bt_controller_disable() != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_disable failed");
return false;
}
if (esp_hosted_bt_controller_deinit(false) != ESP_OK) {
ESP_LOGW(TAG, "esp_hosted_bt_controller_deinit failed");
return false;
}
hosted_hci_bluedroid_close();
#endif
return true;
}
@@ -303,9 +347,11 @@ void ESP32BLE::loop() {
case BLE_COMPONENT_STATE_DISABLE: {
ESP_LOGD(TAG, "Disabling");
#ifdef ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT
for (auto *ble_event_handler : this->ble_status_event_handlers_) {
ble_event_handler->ble_before_disabled_event_handler();
}
#endif
if (!ble_dismantle_()) {
ESP_LOGE(TAG, "Could not be dismantled");
@@ -335,7 +381,7 @@ void ESP32BLE::loop() {
BLEEvent *ble_event = this->ble_events_.pop();
while (ble_event != nullptr) {
switch (ble_event->type_) {
#ifdef USE_ESP32_BLE_SERVER
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
case BLEEvent::GATTS: {
esp_gatts_cb_event_t event = ble_event->event_.gatts.gatts_event;
esp_gatt_if_t gatts_if = ble_event->event_.gatts.gatts_if;
@@ -347,7 +393,7 @@ void ESP32BLE::loop() {
break;
}
#endif
#ifdef USE_ESP32_BLE_CLIENT
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
case BLEEvent::GATTC: {
esp_gattc_cb_event_t event = ble_event->event_.gattc.gattc_event;
esp_gatt_if_t gattc_if = ble_event->event_.gattc.gattc_if;
@@ -363,10 +409,12 @@ void ESP32BLE::loop() {
esp_gap_ble_cb_event_t gap_event = ble_event->event_.gap.gap_event;
switch (gap_event) {
case ESP_GAP_BLE_SCAN_RESULT_EVT:
#ifdef ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT
// Use the new scan event handler - no memcpy!
for (auto *scan_handler : this->gap_scan_event_handlers_) {
scan_handler->gap_scan_event_handler(ble_event->scan_result());
}
#endif
break;
// Scan complete events
@@ -378,10 +426,12 @@ void ESP32BLE::loop() {
// This is verified at compile-time by static_assert checks in ble_event.h
// The struct already contains our copy of the status (copied in BLEEvent constructor)
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.scan_complete));
}
#endif
break;
// Advertising complete events
@@ -392,19 +442,23 @@ void ESP32BLE::loop() {
case ESP_GAP_BLE_ADV_STOP_COMPLETE_EVT:
// All advertising complete events have the same structure with just status
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.adv_complete));
}
#endif
break;
// RSSI complete event
case ESP_GAP_BLE_READ_RSSI_COMPLETE_EVT:
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.read_rssi_complete));
}
#endif
break;
// Security events
@@ -414,10 +468,12 @@ void ESP32BLE::loop() {
case ESP_GAP_BLE_PASSKEY_REQ_EVT:
case ESP_GAP_BLE_NC_REQ_EVT:
ESP_LOGV(TAG, "gap_event_handler - %d", gap_event);
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
for (auto *gap_handler : this->gap_event_handlers_) {
gap_handler->gap_event_handler(
gap_event, reinterpret_cast<esp_ble_gap_cb_param_t *>(&ble_event->event_.gap.security));
}
#endif
break;
default:

View File

@@ -126,19 +126,25 @@ class ESP32BLE : public Component {
void advertising_register_raw_advertisement_callback(std::function<void(bool)> &&callback);
#endif
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
void register_gap_event_handler(GAPEventHandler *handler) { this->gap_event_handlers_.push_back(handler); }
#endif
#ifdef ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT
void register_gap_scan_event_handler(GAPScanEventHandler *handler) {
this->gap_scan_event_handlers_.push_back(handler);
}
#ifdef USE_ESP32_BLE_CLIENT
#endif
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
void register_gattc_event_handler(GATTcEventHandler *handler) { this->gattc_event_handlers_.push_back(handler); }
#endif
#ifdef USE_ESP32_BLE_SERVER
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
void register_gatts_event_handler(GATTsEventHandler *handler) { this->gatts_event_handlers_.push_back(handler); }
#endif
#ifdef ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT
void register_ble_status_event_handler(BLEStatusEventHandler *handler) {
this->ble_status_event_handlers_.push_back(handler);
}
#endif
void set_enable_on_boot(bool enable_on_boot) { this->enable_on_boot_ = enable_on_boot; }
protected:
@@ -160,16 +166,22 @@ class ESP32BLE : public Component {
private:
template<typename... Args> friend void enqueue_ble_event(Args... args);
// Vectors (12 bytes each on 32-bit, naturally aligned to 4 bytes)
std::vector<GAPEventHandler *> gap_event_handlers_;
std::vector<GAPScanEventHandler *> gap_scan_event_handlers_;
#ifdef USE_ESP32_BLE_CLIENT
std::vector<GATTcEventHandler *> gattc_event_handlers_;
// Handler vectors - use StaticVector when counts are known at compile time
#ifdef ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT
StaticVector<GAPEventHandler *, ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT> gap_event_handlers_;
#endif
#ifdef USE_ESP32_BLE_SERVER
std::vector<GATTsEventHandler *> gatts_event_handlers_;
#ifdef ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT
StaticVector<GAPScanEventHandler *, ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT> gap_scan_event_handlers_;
#endif
#if defined(USE_ESP32_BLE_CLIENT) && defined(ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT)
StaticVector<GATTcEventHandler *, ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT> gattc_event_handlers_;
#endif
#if defined(USE_ESP32_BLE_SERVER) && defined(ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT)
StaticVector<GATTsEventHandler *, ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT> gatts_event_handlers_;
#endif
#ifdef ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT
StaticVector<BLEStatusEventHandler *, ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT> ble_status_event_handlers_;
#endif
std::vector<BLEStatusEventHandler *> ble_status_event_handlers_;
// Large objects (size depends on template parameters, but typically aligned to 4 bytes)
esphome::LockFreeQueue<BLEEvent, MAX_BLE_QUEUE_SIZE> ble_events_;

View File

@@ -10,7 +10,9 @@
#ifdef USE_ESP32
#ifdef USE_ESP32_BLE_ADVERTISING
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_gap_ble_api.h>
#include <esp_gatts_api.h>

View File

@@ -74,7 +74,7 @@ async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID], uuid_arr)
parent = await cg.get_variable(config[esp32_ble.CONF_BLE_ID])
cg.add(parent.register_gap_event_handler(var))
esp32_ble.register_gap_event_handler(parent, var)
await cg.register_component(var, config)
cg.add(var.set_major(config[CONF_MAJOR]))

View File

@@ -4,7 +4,9 @@
#ifdef USE_ESP32
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_bt_main.h>
#include <esp_gap_ble_api.h>
#include <freertos/FreeRTOS.h>

View File

@@ -5,7 +5,9 @@
#ifdef USE_ESP32
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_gap_ble_api.h>
namespace esphome {

View File

@@ -546,8 +546,8 @@ async def to_code(config):
await cg.register_component(var, config)
parent = await cg.get_variable(config[esp32_ble.CONF_BLE_ID])
cg.add(parent.register_gatts_event_handler(var))
cg.add(parent.register_ble_status_event_handler(var))
esp32_ble.register_gatts_event_handler(parent, var)
esp32_ble.register_ble_status_event_handler(parent, var)
cg.add(var.set_parent(parent))
cg.add(parent.advertising_set_appearance(config[CONF_APPEARANCE]))
if CONF_MANUFACTURER_DATA in config:

View File

@@ -10,7 +10,9 @@
#include <nvs_flash.h>
#include <freertos/FreeRTOSConfig.h>
#include <esp_bt_main.h>
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <freertos/task.h>
#include <esp_gap_ble_api.h>

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
from dataclasses import dataclass
import logging
from esphome import automation
@@ -52,9 +53,19 @@ class BLEFeatures(StrEnum):
ESP_BT_DEVICE = "ESP_BT_DEVICE"
# Dataclass for registration counts
@dataclass
class RegistrationCounts:
listeners: int = 0
clients: int = 0
# Set to track which features are needed by components
_required_features: set[BLEFeatures] = set()
# Track registration counts for StaticVector sizing
_registration_counts = RegistrationCounts()
def register_ble_features(features: set[BLEFeatures]) -> None:
"""Register BLE features that a component needs.
@@ -235,10 +246,10 @@ async def to_code(config):
await cg.register_component(var, config)
parent = await cg.get_variable(config[esp32_ble.CONF_BLE_ID])
cg.add(parent.register_gap_event_handler(var))
cg.add(parent.register_gap_scan_event_handler(var))
cg.add(parent.register_gattc_event_handler(var))
cg.add(parent.register_ble_status_event_handler(var))
esp32_ble.register_gap_event_handler(parent, var)
esp32_ble.register_gap_scan_event_handler(parent, var)
esp32_ble.register_gattc_event_handler(parent, var)
esp32_ble.register_ble_status_event_handler(parent, var)
cg.add(var.set_parent(parent))
params = config[CONF_SCAN_PARAMETERS]
@@ -257,12 +268,14 @@ async def to_code(config):
register_ble_features({BLEFeatures.ESP_BT_DEVICE})
for conf in config.get(CONF_ON_BLE_ADVERTISE, []):
_registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
if CONF_MAC_ADDRESS in conf:
addr_list = [it.as_hex for it in conf[CONF_MAC_ADDRESS]]
cg.add(trigger.set_addresses(addr_list))
await automation.build_automation(trigger, [(ESPBTDeviceConstRef, "x")], conf)
for conf in config.get(CONF_ON_BLE_SERVICE_DATA_ADVERTISE, []):
_registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
if len(conf[CONF_SERVICE_UUID]) == len(bt_uuid16_format):
cg.add(trigger.set_service_uuid16(as_hex(conf[CONF_SERVICE_UUID])))
@@ -275,6 +288,7 @@ async def to_code(config):
cg.add(trigger.set_address(conf[CONF_MAC_ADDRESS].as_hex))
await automation.build_automation(trigger, [(adv_data_t_const_ref, "x")], conf)
for conf in config.get(CONF_ON_BLE_MANUFACTURER_DATA_ADVERTISE, []):
_registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
if len(conf[CONF_MANUFACTURER_ID]) == len(bt_uuid16_format):
cg.add(trigger.set_manufacturer_uuid16(as_hex(conf[CONF_MANUFACTURER_ID])))
@@ -287,6 +301,7 @@ async def to_code(config):
cg.add(trigger.set_address(conf[CONF_MAC_ADDRESS].as_hex))
await automation.build_automation(trigger, [(adv_data_t_const_ref, "x")], conf)
for conf in config.get(CONF_ON_SCAN_END, []):
_registration_counts.listeners += 1
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
@@ -320,6 +335,17 @@ async def _add_ble_features():
cg.add_define("USE_ESP32_BLE_DEVICE")
cg.add_define("USE_ESP32_BLE_UUID")
# Add defines for StaticVector sizing based on registration counts
# Only define if count > 0 to avoid allocating unnecessary memory
if _registration_counts.listeners > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT", _registration_counts.listeners
)
if _registration_counts.clients > 0:
cg.add_define(
"ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT", _registration_counts.clients
)
ESP32_BLE_START_SCAN_ACTION_SCHEMA = cv.Schema(
{
@@ -369,6 +395,7 @@ async def register_ble_device(
var: cg.SafeExpType, config: ConfigType
) -> cg.SafeExpType:
register_ble_features({BLEFeatures.ESP_BT_DEVICE})
_registration_counts.listeners += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_listener(var))
return var
@@ -376,6 +403,7 @@ async def register_ble_device(
async def register_client(var: cg.SafeExpType, config: ConfigType) -> cg.SafeExpType:
register_ble_features({BLEFeatures.ESP_BT_DEVICE})
_registration_counts.clients += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_client(var))
return var
@@ -389,6 +417,7 @@ async def register_raw_ble_device(
This does NOT register the ESP_BT_DEVICE feature, meaning ESPBTDevice
will not be compiled in if this is the only registration method used.
"""
_registration_counts.listeners += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_listener(var))
return var
@@ -402,6 +431,7 @@ async def register_raw_client(
This does NOT register the ESP_BT_DEVICE feature, meaning ESPBTDevice
will not be compiled in if this is the only registration method used.
"""
_registration_counts.clients += 1
paren = await cg.get_variable(config[CONF_ESP32_BLE_ID])
cg.add(paren.register_client(var))
return var

View File

@@ -7,7 +7,9 @@
#include "esphome/core/helpers.h"
#include "esphome/core/log.h"
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
#include <esp_bt.h>
#endif
#include <esp_bt_defs.h>
#include <esp_bt_main.h>
#include <esp_gap_ble_api.h>
@@ -74,9 +76,11 @@ void ESP32BLETracker::setup() {
[this](ota::OTAState state, float progress, uint8_t error, ota::OTAComponent *comp) {
if (state == ota::OTA_STARTED) {
this->stop_scan();
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
client->disconnect();
}
#endif
}
});
#endif
@@ -206,8 +210,10 @@ void ESP32BLETracker::start_scan_(bool first) {
this->set_scanner_state_(ScannerState::STARTING);
ESP_LOGD(TAG, "Starting scan, set scanner state to STARTING.");
if (!first) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_)
listener->on_scan_end();
#endif
}
#ifdef USE_ESP32_BLE_DEVICE
this->already_discovered_.clear();
@@ -236,20 +242,25 @@ void ESP32BLETracker::start_scan_(bool first) {
}
void ESP32BLETracker::register_client(ESPBTClient *client) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
client->app_id = ++this->app_id_;
this->clients_.push_back(client);
this->recalculate_advertisement_parser_types();
#endif
}
void ESP32BLETracker::register_listener(ESPBTDeviceListener *listener) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
listener->set_parent(this);
this->listeners_.push_back(listener);
this->recalculate_advertisement_parser_types();
#endif
}
void ESP32BLETracker::recalculate_advertisement_parser_types() {
this->raw_advertisements_ = false;
this->parse_advertisements_ = false;
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_) {
if (listener->get_advertisement_parser_type() == AdvertisementParserType::PARSED_ADVERTISEMENTS) {
this->parse_advertisements_ = true;
@@ -257,6 +268,8 @@ void ESP32BLETracker::recalculate_advertisement_parser_types() {
this->raw_advertisements_ = true;
}
}
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
if (client->get_advertisement_parser_type() == AdvertisementParserType::PARSED_ADVERTISEMENTS) {
this->parse_advertisements_ = true;
@@ -264,6 +277,7 @@ void ESP32BLETracker::recalculate_advertisement_parser_types() {
this->raw_advertisements_ = true;
}
}
#endif
}
void ESP32BLETracker::gap_event_handler(esp_gap_ble_cb_event_t event, esp_ble_gap_cb_param_t *param) {
@@ -282,10 +296,12 @@ void ESP32BLETracker::gap_event_handler(esp_gap_ble_cb_event_t event, esp_ble_ga
default:
break;
}
// Forward all events to clients (scan results are handled separately via gap_scan_event_handler)
// Forward all events to clients (scan results are handled separately via gap_scan_event_handler)
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
client->gap_event_handler(event, param);
}
#endif
}
void ESP32BLETracker::gap_scan_event_handler(const BLEScanResult &scan_result) {
@@ -348,9 +364,11 @@ void ESP32BLETracker::gap_scan_stop_complete_(const esp_ble_gap_cb_param_t::ble_
void ESP32BLETracker::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc_if,
esp_ble_gattc_cb_param_t *param) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
client->gattc_event_handler(event, gattc_if, param);
}
#endif
}
void ESP32BLETracker::set_scanner_state_(ScannerState state) {
@@ -704,12 +722,16 @@ bool ESPBTDevice::resolve_irk(const uint8_t *irk) const {
void ESP32BLETracker::process_scan_result_(const BLEScanResult &scan_result) {
// Process raw advertisements
if (this->raw_advertisements_) {
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_) {
listener->parse_devices(&scan_result, 1);
}
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
client->parse_devices(&scan_result, 1);
}
#endif
}
// Process parsed advertisements
@@ -719,16 +741,20 @@ void ESP32BLETracker::process_scan_result_(const BLEScanResult &scan_result) {
device.parse_scan_rst(scan_result);
bool found = false;
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_) {
if (listener->parse_device(device))
found = true;
}
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
if (client->parse_device(device)) {
found = true;
}
}
#endif
if (!found && !this->scan_continuous_) {
this->print_bt_device_info(device);
@@ -745,8 +771,10 @@ void ESP32BLETracker::cleanup_scan_state_(bool is_stop_complete) {
// Reset timeout state machine instead of cancelling scheduler timeout
this->scan_timeout_state_ = ScanTimeoutState::INACTIVE;
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
for (auto *listener : this->listeners_)
listener->on_scan_end();
#endif
this->set_scanner_state_(ScannerState::IDLE);
}
@@ -770,6 +798,7 @@ void ESP32BLETracker::handle_scanner_failure_() {
void ESP32BLETracker::try_promote_discovered_clients_() {
// Only promote the first discovered client to avoid multiple simultaneous connections
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
if (client->state() != ClientState::DISCOVERED) {
continue;
@@ -791,6 +820,7 @@ void ESP32BLETracker::try_promote_discovered_clients_() {
client->connect();
break;
}
#endif
}
const char *ESP32BLETracker::scanner_state_to_string_(ScannerState state) const {
@@ -817,6 +847,7 @@ void ESP32BLETracker::log_unexpected_state_(const char *operation, ScannerState
#ifdef USE_ESP32_BLE_SOFTWARE_COEXISTENCE
void ESP32BLETracker::update_coex_preference_(bool force_ble) {
#ifndef CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
if (force_ble && !this->coex_prefer_ble_) {
ESP_LOGD(TAG, "Setting coexistence to Bluetooth to make connection.");
this->coex_prefer_ble_ = true;
@@ -826,6 +857,7 @@ void ESP32BLETracker::update_coex_preference_(bool force_ble) {
this->coex_prefer_ble_ = false;
esp_coex_preference_set(ESP_COEX_PREFER_BALANCE); // Reset to default
}
#endif // CONFIG_ESP_HOSTED_ENABLE_BT_BLUEDROID
}
#endif

View File

@@ -302,6 +302,7 @@ class ESP32BLETracker : public Component,
/// Count clients in each state
ClientStateCounts count_client_states_() const {
ClientStateCounts counts;
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
for (auto *client : this->clients_) {
switch (client->state()) {
case ClientState::DISCONNECTING:
@@ -317,12 +318,17 @@ class ESP32BLETracker : public Component,
break;
}
}
#endif
return counts;
}
// Group 1: Large objects (12+ bytes) - vectors and callback manager
std::vector<ESPBTDeviceListener *> listeners_;
std::vector<ESPBTClient *> clients_;
#ifdef ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT
StaticVector<ESPBTDeviceListener *, ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT> listeners_;
#endif
#ifdef ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT
StaticVector<ESPBTClient *, ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT> clients_;
#endif
CallbackManager<void(ScannerState)> scanner_state_callbacks_;
#ifdef USE_ESP32_BLE_DEVICE
/// Vector of addresses that have already been printed in print_bt_device_info

View File

@@ -92,9 +92,14 @@ async def to_code(config):
framework_ver: cv.Version = CORE.data[KEY_CORE][KEY_FRAMEWORK_VERSION]
os.environ["ESP_IDF_VERSION"] = f"{framework_ver.major}.{framework_ver.minor}"
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="0.10.2")
esp32.add_idf_component(name="espressif/eppp_link", ref="0.2.0")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.0.11")
if framework_ver >= cv.Version(5, 5, 0):
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="1.1.5")
esp32.add_idf_component(name="espressif/eppp_link", ref="1.1.3")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.5.11")
else:
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="0.13.0")
esp32.add_idf_component(name="espressif/eppp_link", ref="0.2.0")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.0.11")
esp32.add_extra_script(
"post",
"esp32_hosted.py",

View File

@@ -143,6 +143,7 @@ void ESP32ImprovComponent::loop() {
#else
this->set_state_(improv::STATE_AUTHORIZED);
#endif
this->check_wifi_connection_();
break;
}
case improv::STATE_AUTHORIZED: {
@@ -156,31 +157,12 @@ void ESP32ImprovComponent::loop() {
if (!this->check_identify_()) {
this->set_status_indicator_state_((now % 1000) < 500);
}
this->check_wifi_connection_();
break;
}
case improv::STATE_PROVISIONING: {
this->set_status_indicator_state_((now % 200) < 100);
if (wifi::global_wifi_component->is_connected()) {
wifi::global_wifi_component->save_wifi_sta(this->connecting_sta_.get_ssid(),
this->connecting_sta_.get_password());
this->connecting_sta_ = {};
this->cancel_timeout("wifi-connect-timeout");
this->set_state_(improv::STATE_PROVISIONED);
std::vector<std::string> urls = {ESPHOME_MY_LINK};
#ifdef USE_WEBSERVER
for (auto &ip : wifi::global_wifi_component->wifi_sta_ip_addresses()) {
if (ip.is_ip4()) {
std::string webserver_url = "http://" + ip.str() + ":" + to_string(USE_WEBSERVER_PORT);
urls.push_back(webserver_url);
break;
}
}
#endif
std::vector<uint8_t> data = improv::build_rpc_response(improv::WIFI_SETTINGS, urls);
this->send_response_(data);
this->stop();
}
this->check_wifi_connection_();
break;
}
case improv::STATE_PROVISIONED: {
@@ -392,6 +374,36 @@ void ESP32ImprovComponent::on_wifi_connect_timeout_() {
wifi::global_wifi_component->clear_sta();
}
void ESP32ImprovComponent::check_wifi_connection_() {
if (!wifi::global_wifi_component->is_connected()) {
return;
}
if (this->state_ == improv::STATE_PROVISIONING) {
wifi::global_wifi_component->save_wifi_sta(this->connecting_sta_.get_ssid(), this->connecting_sta_.get_password());
this->connecting_sta_ = {};
this->cancel_timeout("wifi-connect-timeout");
std::vector<std::string> urls = {ESPHOME_MY_LINK};
#ifdef USE_WEBSERVER
for (auto &ip : wifi::global_wifi_component->wifi_sta_ip_addresses()) {
if (ip.is_ip4()) {
std::string webserver_url = "http://" + ip.str() + ":" + to_string(USE_WEBSERVER_PORT);
urls.push_back(webserver_url);
break;
}
}
#endif
std::vector<uint8_t> data = improv::build_rpc_response(improv::WIFI_SETTINGS, urls);
this->send_response_(data);
} else if (this->is_active() && this->state_ != improv::STATE_PROVISIONED) {
ESP_LOGD(TAG, "WiFi provisioned externally");
}
this->set_state_(improv::STATE_PROVISIONED);
this->stop();
}
void ESP32ImprovComponent::advertise_service_data_() {
uint8_t service_data[IMPROV_SERVICE_DATA_SIZE] = {};
service_data[0] = IMPROV_PROTOCOL_ID_1; // PR

View File

@@ -111,6 +111,7 @@ class ESP32ImprovComponent : public Component {
void send_response_(std::vector<uint8_t> &response);
void process_incoming_data_();
void on_wifi_connect_timeout_();
void check_wifi_connection_();
bool check_identify_();
void advertise_service_data_();
#if ESPHOME_LOG_LEVEL >= ESPHOME_LOG_LEVEL_DEBUG

View File

@@ -42,6 +42,11 @@ static size_t IRAM_ATTR HOT encoder_callback(const void *data, size_t size, size
symbols[i] = params->bit0;
}
}
#if ESP_IDF_VERSION >= ESP_IDF_VERSION_VAL(5, 5, 1)
if ((index + 1) >= size && params->reset.duration0 == 0 && params->reset.duration1 == 0) {
*done = true;
}
#endif
return RMT_SYMBOLS_PER_BYTE;
}

View File

@@ -29,7 +29,7 @@ namespace esphome {
static const char *const TAG = "esphome.ota";
static constexpr uint16_t OTA_BLOCK_SIZE = 8192;
static constexpr size_t OTA_BUFFER_SIZE = 1024; // buffer size for OTA data transfer
static constexpr uint32_t OTA_SOCKET_TIMEOUT_HANDSHAKE = 10000; // milliseconds for initial handshake
static constexpr uint32_t OTA_SOCKET_TIMEOUT_HANDSHAKE = 20000; // milliseconds for initial handshake
static constexpr uint32_t OTA_SOCKET_TIMEOUT_DATA = 90000; // milliseconds for data transfer
#ifdef USE_OTA_PASSWORD

View File

@@ -689,12 +689,9 @@ void EthernetComponent::add_phy_register(PHYRegister register_value) { this->phy
void EthernetComponent::set_type(EthernetType type) { this->type_ = type; }
void EthernetComponent::set_manual_ip(const ManualIP &manual_ip) { this->manual_ip_ = manual_ip; }
std::string EthernetComponent::get_use_address() const {
if (this->use_address_.empty()) {
return App.get_name() + ".local";
}
return this->use_address_;
}
// set_use_address() is guaranteed to be called during component setup by Python code generation,
// so use_address_ will always be valid when get_use_address() is called - no fallback needed.
const std::string &EthernetComponent::get_use_address() const { return this->use_address_; }
void EthernetComponent::set_use_address(const std::string &use_address) { this->use_address_ = use_address; }

View File

@@ -88,7 +88,7 @@ class EthernetComponent : public Component {
network::IPAddresses get_ip_addresses();
network::IPAddress get_dns_address(uint8_t num);
std::string get_use_address() const;
const std::string &get_use_address() const;
void set_use_address(const std::string &use_address);
void get_eth_mac_address_raw(uint8_t *mac);
std::string get_eth_mac_address_pretty();

View File

@@ -90,13 +90,12 @@ void HomeassistantNumber::control(float value) {
api::HomeassistantActionRequest resp;
resp.set_service(SERVICE_NAME);
resp.data.emplace_back();
auto &entity_id = resp.data.back();
resp.data.init(2);
auto &entity_id = resp.data.emplace_back();
entity_id.set_key(ENTITY_ID_KEY);
entity_id.value = this->entity_id_;
resp.data.emplace_back();
auto &entity_value = resp.data.back();
auto &entity_value = resp.data.emplace_back();
entity_value.set_key(VALUE_KEY);
entity_value.value = to_string(value);

View File

@@ -51,8 +51,8 @@ void HomeassistantSwitch::write_state(bool state) {
resp.set_service(SERVICE_OFF);
}
resp.data.emplace_back();
auto &entity_id_kv = resp.data.back();
resp.data.init(1);
auto &entity_id_kv = resp.data.emplace_back();
entity_id_kv.set_key(ENTITY_ID_KEY);
entity_id_kv.value = this->entity_id_;

View File

@@ -167,8 +167,8 @@ class HttpRequestComponent : public Component {
}
protected:
virtual std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
virtual std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method,
const std::string &body, const std::list<Header> &request_headers,
std::set<std::string> collect_headers) = 0;
const char *useragent_{nullptr};
bool follow_redirects_{};

View File

@@ -14,8 +14,9 @@ namespace http_request {
static const char *const TAG = "http_request.arduino";
std::shared_ptr<HttpContainer> HttpRequestArduino::perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::shared_ptr<HttpContainer> HttpRequestArduino::perform(const std::string &url, const std::string &method,
const std::string &body,
const std::list<Header> &request_headers,
std::set<std::string> collect_headers) {
if (!network::is_connected()) {
this->status_momentary_error("failed", 1000);

View File

@@ -31,8 +31,8 @@ class HttpContainerArduino : public HttpContainer {
class HttpRequestArduino : public HttpRequestComponent {
protected:
std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method, const std::string &body,
const std::list<Header> &request_headers,
std::set<std::string> collect_headers) override;
};

View File

@@ -17,8 +17,9 @@ namespace http_request {
static const char *const TAG = "http_request.host";
std::shared_ptr<HttpContainer> HttpRequestHost::perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::shared_ptr<HttpContainer> HttpRequestHost::perform(const std::string &url, const std::string &method,
const std::string &body,
const std::list<Header> &request_headers,
std::set<std::string> response_headers) {
if (!network::is_connected()) {
this->status_momentary_error("failed", 1000);

View File

@@ -18,8 +18,8 @@ class HttpContainerHost : public HttpContainer {
class HttpRequestHost : public HttpRequestComponent {
public:
std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method, const std::string &body,
const std::list<Header> &request_headers,
std::set<std::string> response_headers) override;
void set_ca_path(const char *ca_path) { this->ca_path_ = ca_path; }

View File

@@ -52,8 +52,9 @@ esp_err_t HttpRequestIDF::http_event_handler(esp_http_client_event_t *evt) {
return ESP_OK;
}
std::shared_ptr<HttpContainer> HttpRequestIDF::perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::shared_ptr<HttpContainer> HttpRequestIDF::perform(const std::string &url, const std::string &method,
const std::string &body,
const std::list<Header> &request_headers,
std::set<std::string> collect_headers) {
if (!network::is_connected()) {
this->status_momentary_error("failed", 1000);

View File

@@ -37,8 +37,8 @@ class HttpRequestIDF : public HttpRequestComponent {
void set_buffer_size_tx(uint16_t buffer_size_tx) { this->buffer_size_tx_ = buffer_size_tx; }
protected:
std::shared_ptr<HttpContainer> perform(std::string url, std::string method, std::string body,
std::list<Header> request_headers,
std::shared_ptr<HttpContainer> perform(const std::string &url, const std::string &method, const std::string &body,
const std::list<Header> &request_headers,
std::set<std::string> collect_headers) override;
// if zero ESP-IDF will use DEFAULT_HTTP_BUF_SIZE
uint16_t buffer_size_rx_{};

View File

@@ -218,7 +218,7 @@ bool ImprovSerialComponent::parse_improv_payload_(improv::ImprovCommand &command
}
case improv::GET_WIFI_NETWORKS: {
std::vector<std::string> networks;
auto results = wifi::global_wifi_component->get_scan_result();
const auto &results = wifi::global_wifi_component->get_scan_result();
for (auto &scan : results) {
if (scan.get_is_hidden())
continue;

View File

@@ -8,6 +8,13 @@ namespace json {
static const char *const TAG = "json";
#ifdef USE_PSRAM
// Global allocator that outlives all JsonDocuments returned by parse_json()
// This prevents dangling pointer issues when JsonDocuments are returned from functions
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables) - Must be mutable for ArduinoJson::Allocator
static SpiRamAllocator global_json_allocator;
#endif
std::string build_json(const json_build_t &f) {
// NOLINTBEGIN(clang-analyzer-cplusplus.NewDeleteLeaks) false positive with ArduinoJson
JsonBuilder builder;
@@ -33,8 +40,7 @@ JsonDocument parse_json(const uint8_t *data, size_t len) {
return JsonObject(); // return unbound object
}
#ifdef USE_PSRAM
auto doc_allocator = SpiRamAllocator();
JsonDocument json_document(&doc_allocator);
JsonDocument json_document(&global_json_allocator);
#else
JsonDocument json_document;
#endif
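For context, a minimal sketch of why the allocator now has static storage duration: the returned `JsonDocument` keeps a pointer to its allocator, so a function-local allocator would dangle as soon as `parse_json()` returns. This is illustrative only and assumes the ArduinoJson `JsonDocument(Allocator *)` constructor and the `SpiRamAllocator` type used above.

```cpp
#include <ArduinoJson.h>  // assumed available, as in the component above

// BAD: the allocator is destroyed when the function returns, but the
// returned JsonDocument still points at it -> dangling pointer.
// JsonDocument parse_bad(const char *data) {
//   SpiRamAllocator local_allocator;
//   JsonDocument doc(&local_allocator);
//   deserializeJson(doc, data);
//   return doc;
// }

// GOOD: one allocator with static storage duration outlives every
// JsonDocument handed back to callers.
static SpiRamAllocator global_json_allocator;

JsonDocument parse_good(const char *data) {
  JsonDocument doc(&global_json_allocator);  // allocator lives for the whole program
  deserializeJson(doc, data);
  return doc;
}
```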

View File

@@ -177,9 +177,10 @@ void LightState::set_gamma_correct(float gamma_correct) { this->gamma_correct_ =
void LightState::set_restore_mode(LightRestoreMode restore_mode) { this->restore_mode_ = restore_mode; }
void LightState::set_initial_state(const LightStateRTCState &initial_state) { this->initial_state_ = initial_state; }
bool LightState::supports_effects() { return !this->effects_.empty(); }
const std::vector<LightEffect *> &LightState::get_effects() const { return this->effects_; }
const FixedVector<LightEffect *> &LightState::get_effects() const { return this->effects_; }
void LightState::add_effects(const std::vector<LightEffect *> &effects) {
this->effects_.reserve(this->effects_.size() + effects.size());
// Called once from Python codegen during setup with all effects from YAML config
this->effects_.init(effects.size());
for (auto *effect : effects) {
this->effects_.push_back(effect);
}
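A minimal sketch of the `FixedVector` fill pattern applied here (and in the api/mdns changes above): call `init()` once with the final element count, then append with `push_back()`, so no reallocation machinery is compiled in. The element type and helper function are illustrative; it assumes `FixedVector` is the container from `esphome/core/helpers.h` used in this diff.

```cpp
#include <vector>
#include "esphome/core/helpers.h"  // FixedVector (assumed location)

// Sized exactly once during setup; never grows afterwards.
esphome::FixedVector<int> values;

void fill_once(const std::vector<int> &src) {
  values.init(src.size());  // single allocation sized to the final count
  for (int v : src) {
    values.push_back(v);    // safe: capacity was fixed by init()
  }
}
```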

View File

@@ -11,8 +11,9 @@
#include "light_traits.h"
#include "light_transformer.h"
#include <vector>
#include "esphome/core/helpers.h"
#include <strings.h>
#include <vector>
namespace esphome {
namespace light {
@@ -159,7 +160,7 @@ class LightState : public EntityBase, public Component {
bool supports_effects();
/// Get all effects for this light state.
const std::vector<LightEffect *> &get_effects() const;
const FixedVector<LightEffect *> &get_effects() const;
/// Add effects for this light state.
void add_effects(const std::vector<LightEffect *> &effects);
@@ -260,7 +261,7 @@ class LightState : public EntityBase, public Component {
/// The currently active transformer for this light (transition/flash).
std::unique_ptr<LightTransformer> transformer_{nullptr};
/// List of effects for this light.
std::vector<LightEffect *> effects_;
FixedVector<LightEffect *> effects_;
/// Object used to store the persisted values of the light.
ESPPreferenceObject rtc_;
/// Value for storing the index of the currently active effect. 0 if no effect is active

View File

@@ -486,7 +486,6 @@ CONF_RESUME_ON_INPUT = "resume_on_input"
CONF_RIGHT_BUTTON = "right_button"
CONF_ROLLOVER = "rollover"
CONF_ROOT_BACK_BTN = "root_back_btn"
CONF_ROWS = "rows"
CONF_SCALE_LINES = "scale_lines"
CONF_SCROLLBAR_MODE = "scrollbar_mode"
CONF_SELECTED_INDEX = "selected_index"

View File

@@ -2,7 +2,7 @@ from esphome import automation
import esphome.codegen as cg
from esphome.components.key_provider import KeyProvider
import esphome.config_validation as cv
from esphome.const import CONF_ID, CONF_ITEMS, CONF_TEXT, CONF_WIDTH
from esphome.const import CONF_ID, CONF_ITEMS, CONF_ROWS, CONF_TEXT, CONF_WIDTH
from esphome.cpp_generator import MockObj
from ..automation import action_to_code
@@ -15,7 +15,6 @@ from ..defines import (
CONF_ONE_CHECKED,
CONF_PAD_COLUMN,
CONF_PAD_ROW,
CONF_ROWS,
CONF_SELECTED,
)
from ..helpers import lvgl_components_required

View File

@@ -2,7 +2,7 @@ from esphome import automation, pins
import esphome.codegen as cg
from esphome.components import key_provider
import esphome.config_validation as cv
from esphome.const import CONF_ID, CONF_ON_KEY, CONF_PIN, CONF_TRIGGER_ID
from esphome.const import CONF_ID, CONF_ON_KEY, CONF_PIN, CONF_ROWS, CONF_TRIGGER_ID
CODEOWNERS = ["@ssieb"]
@@ -19,7 +19,6 @@ MatrixKeyTrigger = matrix_keypad_ns.class_(
)
CONF_KEYPAD_ID = "keypad_id"
CONF_ROWS = "rows"
CONF_COLUMNS = "columns"
CONF_KEYS = "keys"
CONF_DEBOUNCE_TIME = "debounce_time"

View File

@@ -1,6 +1,6 @@
import esphome.codegen as cg
from esphome.components.esp32 import add_idf_component
from esphome.config_helpers import filter_source_files_from_platform
from esphome.config_helpers import filter_source_files_from_platform, get_logger_level
import esphome.config_validation as cv
from esphome.const import (
CONF_DISABLED,
@@ -101,7 +101,7 @@ async def _mdns_txt_record_templated(
def mdns_service(
service: str, proto: str, port: int, txt_records: list[dict[str, str]]
service: str, proto: str, port: int, txt_records: list[cg.RawExpression]
) -> cg.StructInitializer:
"""Create a mDNS service.
@@ -125,6 +125,17 @@ def mdns_service(
)
def enable_mdns_storage():
"""Enable persistent storage of mDNS services in the MDNSComponent.
Called by external components (like OpenThread) that need access to
services after setup() completes via get_services().
Public API for external components. Do not remove.
"""
cg.add_define("USE_MDNS_STORE_SERVICES")
@coroutine_with_priority(CoroPriority.NETWORK_SERVICES)
async def to_code(config):
if config[CONF_DISABLED] is True:
@@ -150,6 +161,8 @@ async def to_code(config):
if config[CONF_SERVICES]:
cg.add_define("USE_MDNS_EXTRA_SERVICES")
# Extra services need to be stored persistently
enable_mdns_storage()
# Ensure at least 1 service (fallback service)
cg.add_define("MDNS_SERVICE_COUNT", max(1, service_count))
@@ -171,6 +184,10 @@ async def to_code(config):
# Ensure at least 1 to avoid zero-size array
cg.add_define("MDNS_DYNAMIC_TXT_COUNT", max(1, dynamic_txt_count))
# Enable storage if verbose logging is enabled (for dump_config)
if get_logger_level() in ("VERBOSE", "VERY_VERBOSE"):
enable_mdns_storage()
var = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(var, config)

View File

@@ -36,7 +36,7 @@ MDNS_STATIC_CONST_CHAR(SERVICE_TCP, "_tcp");
// Wrap build-time defines into flash storage
MDNS_STATIC_CONST_CHAR(VALUE_VERSION, ESPHOME_VERSION);
void MDNSComponent::compile_records_() {
void MDNSComponent::compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUNT> &services) {
this->hostname_ = App.get_name();
// IMPORTANT: The #ifdef blocks below must match COMPONENTS_WITH_MDNS_SERVICES
@@ -53,7 +53,7 @@ void MDNSComponent::compile_records_() {
MDNS_STATIC_CONST_CHAR(VALUE_BOARD, ESPHOME_BOARD);
if (api::global_api_server != nullptr) {
auto &service = this->services_.emplace_next();
auto &service = services.emplace_next();
service.service_type = MDNS_STR(SERVICE_ESPHOMELIB);
service.proto = MDNS_STR(SERVICE_TCP);
service.port = api::global_api_server->get_port();
@@ -83,7 +83,7 @@ void MDNSComponent::compile_records_() {
#endif
auto &txt_records = service.txt_records;
txt_records.reserve(txt_count);
txt_records.init(txt_count);
if (!friendly_name_empty) {
txt_records.push_back({MDNS_STR(TXT_FRIENDLY_NAME), MDNS_STR(friendly_name.c_str())});
@@ -146,7 +146,7 @@ void MDNSComponent::compile_records_() {
#ifdef USE_PROMETHEUS
MDNS_STATIC_CONST_CHAR(SERVICE_PROMETHEUS, "_prometheus-http");
auto &prom_service = this->services_.emplace_next();
auto &prom_service = services.emplace_next();
prom_service.service_type = MDNS_STR(SERVICE_PROMETHEUS);
prom_service.proto = MDNS_STR(SERVICE_TCP);
prom_service.port = USE_WEBSERVER_PORT;
@@ -155,7 +155,7 @@ void MDNSComponent::compile_records_() {
#ifdef USE_WEBSERVER
MDNS_STATIC_CONST_CHAR(SERVICE_HTTP, "_http");
auto &web_service = this->services_.emplace_next();
auto &web_service = services.emplace_next();
web_service.service_type = MDNS_STR(SERVICE_HTTP);
web_service.proto = MDNS_STR(SERVICE_TCP);
web_service.port = USE_WEBSERVER_PORT;
@@ -167,11 +167,11 @@ void MDNSComponent::compile_records_() {
// Publish "http" service if not using native API or any other services
// This is just to have *some* mDNS service so that .local resolution works
auto &fallback_service = this->services_.emplace_next();
auto &fallback_service = services.emplace_next();
fallback_service.service_type = MDNS_STR(SERVICE_HTTP);
fallback_service.proto = MDNS_STR(SERVICE_TCP);
fallback_service.port = USE_WEBSERVER_PORT;
fallback_service.txt_records.push_back({MDNS_STR(TXT_VERSION), MDNS_STR(VALUE_VERSION)});
fallback_service.txt_records = {{MDNS_STR(TXT_VERSION), MDNS_STR(VALUE_VERSION)}};
#endif
}
@@ -180,7 +180,7 @@ void MDNSComponent::dump_config() {
"mDNS:\n"
" Hostname: %s",
this->hostname_.c_str());
#if ESPHOME_LOG_LEVEL >= ESPHOME_LOG_LEVEL_VERBOSE
#ifdef USE_MDNS_STORE_SERVICES
ESP_LOGV(TAG, " Services:");
for (const auto &service : this->services_) {
ESP_LOGV(TAG, " - %s, %s, %d", MDNS_STR_ARG(service.service_type), MDNS_STR_ARG(service.proto),

View File

@@ -38,7 +38,7 @@ struct MDNSService {
// as defined in RFC6763 Section 7, like "_tcp" or "_udp"
const MDNSString *proto;
TemplatableValue<uint16_t> port;
std::vector<MDNSTXTRecord> txt_records;
FixedVector<MDNSTXTRecord> txt_records;
};
class MDNSComponent : public Component {
@@ -55,7 +55,9 @@ class MDNSComponent : public Component {
void add_extra_service(MDNSService service) { this->services_.emplace_next() = std::move(service); }
#endif
#ifdef USE_MDNS_STORE_SERVICES
const StaticVector<MDNSService, MDNS_SERVICE_COUNT> &get_services() const { return this->services_; }
#endif
void on_shutdown() override;
@@ -71,9 +73,11 @@ class MDNSComponent : public Component {
StaticVector<std::string, MDNS_DYNAMIC_TXT_COUNT> dynamic_txt_values_;
protected:
#ifdef USE_MDNS_STORE_SERVICES
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services_{};
#endif
std::string hostname_;
void compile_records_();
void compile_records_(StaticVector<MDNSService, MDNS_SERVICE_COUNT> &services);
};
} // namespace mdns

View File

@@ -12,7 +12,13 @@ namespace mdns {
static const char *const TAG = "mdns";
void MDNSComponent::setup() {
this->compile_records_();
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
esp_err_t err = mdns_init();
if (err != ESP_OK) {
@@ -24,7 +30,7 @@ void MDNSComponent::setup() {
mdns_hostname_set(this->hostname_.c_str());
mdns_instance_name_set(this->hostname_.c_str());
for (const auto &service : this->services_) {
for (const auto &service : services) {
std::vector<mdns_txt_item_t> txt_records;
for (const auto &record : service.txt_records) {
mdns_txt_item_t it{};

View File

@@ -12,11 +12,17 @@ namespace esphome {
namespace mdns {
void MDNSComponent::setup() {
this->compile_records_();
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
MDNS.begin(this->hostname_.c_str());
for (const auto &service : this->services_) {
for (const auto &service : services) {
// Strip the leading underscore from the proto and service_type. While it is
// part of the wire protocol to have an underscore, and for example ESP-IDF
// expects the underscore to be there, the ESP8266 implementation always adds

View File

@@ -9,7 +9,9 @@
namespace esphome {
namespace mdns {
void MDNSComponent::setup() { this->compile_records_(); }
void MDNSComponent::setup() {
// Host platform doesn't have an actual mDNS implementation
}
void MDNSComponent::on_shutdown() {}

View File

@@ -12,11 +12,17 @@ namespace esphome {
namespace mdns {
void MDNSComponent::setup() {
this->compile_records_();
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
MDNS.begin(this->hostname_.c_str());
for (const auto &service : this->services_) {
for (const auto &service : services) {
// Strip the leading underscore from the proto and service_type. While it is
// part of the wire protocol to have an underscore, and for example ESP-IDF
// expects the underscore to be there, the ESP8266 implementation always adds

View File

@@ -12,11 +12,17 @@ namespace esphome {
namespace mdns {
void MDNSComponent::setup() {
this->compile_records_();
#ifdef USE_MDNS_STORE_SERVICES
this->compile_records_(this->services_);
const auto &services = this->services_;
#else
StaticVector<MDNSService, MDNS_SERVICE_COUNT> services;
this->compile_records_(services);
#endif
MDNS.begin(this->hostname_.c_str());
for (const auto &service : this->services_) {
for (const auto &service : services) {
// Strip the leading underscore from the proto and service_type. While it is
// part of the wire protocol to have an underscore, and for example ESP-IDF
// expects the underscore to be there, the ESP8266 implementation always adds

View File

@@ -56,50 +56,41 @@ DriverChip(
"WAVESHARE-P4-86-PANEL",
height=720,
width=720,
hsync_back_porch=80,
hsync_back_porch=50,
hsync_pulse_width=20,
hsync_front_porch=80,
vsync_back_porch=12,
hsync_front_porch=50,
vsync_back_porch=20,
vsync_pulse_width=4,
vsync_front_porch=30,
pclk_frequency="46MHz",
lane_bit_rate="1Gbps",
vsync_front_porch=20,
pclk_frequency="38MHz",
lane_bit_rate="480Mbps",
swap_xy=cv.UNDEFINED,
color_order="RGB",
reset_pin=27,
initsequence=[
(0xB9, 0xF1, 0x12, 0x83),
(
0xBA, 0x31, 0x81, 0x05, 0xF9, 0x0E, 0x0E, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x44, 0x25, 0x00,
0x90, 0x0A, 0x00, 0x00, 0x01, 0x4F, 0x01, 0x00, 0x00, 0x37,
),
(0xB8, 0x25, 0x22, 0xF0, 0x63),
(0xBF, 0x02, 0x11, 0x00),
(0xB1, 0x00, 0x00, 0x00, 0xDA, 0x80),
(0xB2, 0x3C, 0x12, 0x30),
(0xB3, 0x10, 0x10, 0x28, 0x28, 0x03, 0xFF, 0x00, 0x00, 0x00, 0x00),
(0xC0, 0x73, 0x73, 0x50, 0x50, 0x00, 0x00, 0x12, 0x70, 0x00),
(0xBC, 0x46), (0xCC, 0x0B), (0xB4, 0x80), (0xB2, 0x3C, 0x12, 0x30),
(0xE3, 0x07, 0x07, 0x0B, 0x0B, 0x03, 0x0B, 0x00, 0x00, 0x00, 0x00, 0xFF, 0x00, 0xC0, 0x10,),
(0xC1, 0x36, 0x00, 0x32, 0x32, 0x77, 0xF1, 0xCC, 0xCC, 0x77, 0x77, 0x33, 0x33),
(0xB4, 0x80),
(0xB5, 0x0A, 0x0A),
(0xB6, 0xB2, 0xB2),
(
0xE9, 0xC8, 0x10, 0x0A, 0x10, 0x0F, 0xA1, 0x80, 0x12, 0x31, 0x23, 0x47, 0x86, 0xA1, 0x80,
0x47, 0x08, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x00, 0x48,
0x02, 0x8B, 0xAF, 0x46, 0x02, 0x88, 0x88, 0x88, 0x88, 0x88, 0x48, 0x13, 0x8B, 0xAF, 0x57,
0x13, 0x88, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
),
(
0xEA, 0x96, 0x12, 0x01, 0x01, 0x01, 0x78, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4F, 0x31,
0x8B, 0xA8, 0x31, 0x75, 0x88, 0x88, 0x88, 0x88, 0x88, 0x4F, 0x20, 0x8B, 0xA8, 0x20, 0x64,
0x88, 0x88, 0x88, 0x88, 0x88, 0x23, 0x00, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0xA1, 0x80, 0x00, 0x00,
0x00, 0x00,
),
(
0xE0, 0x00, 0x0A, 0x0F, 0x29, 0x3B, 0x3F, 0x42, 0x39, 0x06, 0x0D, 0x10, 0x13, 0x15, 0x14,
0x15, 0x10, 0x17, 0x00, 0x0A, 0x0F, 0x29, 0x3B, 0x3F, 0x42, 0x39, 0x06, 0x0D, 0x10, 0x13,
0x15, 0x14, 0x15, 0x10, 0x17,
),
(0xB6, 0x97, 0x97),
(0xB8, 0x26, 0x22, 0xF0, 0x13),
(0xBA, 0x31, 0x81, 0x0F, 0xF9, 0x0E, 0x06, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x44, 0x25, 0x00, 0x90, 0x0A, 0x00, 0x00, 0x01, 0x4F, 0x01, 0x00, 0x00, 0x37),
(0xBC, 0x47),
(0xBF, 0x02, 0x11, 0x00),
(0xC0, 0x73, 0x73, 0x50, 0x50, 0x00, 0x00, 0x12, 0x70, 0x00),
(0xC1, 0x25, 0x00, 0x32, 0x32, 0x77, 0xE4, 0xFF, 0xFF, 0xCC, 0xCC, 0x77, 0x77),
(0xC6, 0x82, 0x00, 0xBF, 0xFF, 0x00, 0xFF),
(0xC7, 0xB8, 0x00, 0x0A, 0x10, 0x01, 0x09),
(0xC8, 0x10, 0x40, 0x1E, 0x02),
(0xCC, 0x0B),
(0xE0, 0x00, 0x0B, 0x10, 0x2C, 0x3D, 0x3F, 0x42, 0x3A, 0x07, 0x0D, 0x0F, 0x13, 0x15, 0x13, 0x14, 0x0F, 0x16, 0x00, 0x0B, 0x10, 0x2C, 0x3D, 0x3F, 0x42, 0x3A, 0x07, 0x0D, 0x0F, 0x13, 0x15, 0x13, 0x14, 0x0F, 0x16),
(0xE3, 0x07, 0x07, 0x0B, 0x0B, 0x0B, 0x0B, 0x00, 0x00, 0x00, 0x00, 0xFF, 0x00, 0xC0, 0x10),
(0xE9, 0xC8, 0x10, 0x0A, 0x00, 0x00, 0x80, 0x81, 0x12, 0x31, 0x23, 0x4F, 0x86, 0xA0, 0x00, 0x47, 0x08, 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00, 0x98, 0x02, 0x8B, 0xAF, 0x46, 0x02, 0x88, 0x88, 0x88, 0x88, 0x88, 0x98, 0x13, 0x8B, 0xAF, 0x57, 0x13, 0x88, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00),
(0xEA, 0x97, 0x0C, 0x09, 0x09, 0x09, 0x78, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x9F, 0x31, 0x8B, 0xA8, 0x31, 0x75, 0x88, 0x88, 0x88, 0x88, 0x88, 0x9F, 0x20, 0x8B, 0xA8, 0x20, 0x64, 0x88, 0x88, 0x88, 0x88, 0x88, 0x23, 0x00, 0x00, 0x02, 0x71, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x80, 0x81, 0x00, 0x00, 0x00, 0x00),
(0xEF, 0xFF, 0xFF, 0x01),
(0x11, 0x00),
(0x29, 0x00),
],
)

View File

@@ -29,7 +29,8 @@ static const char *const TAG = "mqtt";
MQTTClientComponent::MQTTClientComponent() {
global_mqtt_client = this;
this->credentials_.client_id = App.get_name() + "-" + get_mac_address();
const std::string mac_addr = get_mac_address();
this->credentials_.client_id = make_name_with_suffix(App.get_name(), '-', mac_addr.c_str(), mac_addr.size());
}
// Connection

View File

@@ -85,22 +85,25 @@ network::IPAddresses get_ip_addresses() {
return {};
}
std::string get_use_address() {
const std::string &get_use_address() {
// Global component pointers are guaranteed to be set by component constructors when USE_* is defined
#ifdef USE_ETHERNET
if (ethernet::global_eth_component != nullptr)
return ethernet::global_eth_component->get_use_address();
return ethernet::global_eth_component->get_use_address();
#endif
#ifdef USE_MODEM
if (modem::global_modem_component != nullptr)
return modem::global_modem_component->get_use_address();
return modem::global_modem_component->get_use_address();
#endif
#ifdef USE_WIFI
if (wifi::global_wifi_component != nullptr)
return wifi::global_wifi_component->get_use_address();
return wifi::global_wifi_component->get_use_address();
#endif
#if !defined(USE_ETHERNET) && !defined(USE_MODEM) && !defined(USE_WIFI)
// Fallback when no network component is defined (e.g., host platform)
static const std::string empty;
return empty;
#endif
return "";
}
} // namespace network

View File

@@ -12,7 +12,7 @@ bool is_connected();
/// Return whether the network is disabled (only wifi for now)
bool is_disabled();
/// Get the active network hostname
std::string get_use_address();
const std::string &get_use_address();
IPAddresses get_ip_addresses();
} // namespace network

View File

@@ -5,7 +5,7 @@ from esphome.components.esp32 import (
add_idf_sdkconfig_option,
only_on_variant,
)
from esphome.components.mdns import MDNSComponent
from esphome.components.mdns import MDNSComponent, enable_mdns_storage
import esphome.config_validation as cv
from esphome.const import CONF_CHANNEL, CONF_ENABLE_IPV6, CONF_ID
import esphome.final_validate as fv
@@ -141,6 +141,9 @@ FINAL_VALIDATE_SCHEMA = _final_validate
async def to_code(config):
cg.add_define("USE_OPENTHREAD")
# OpenThread SRP needs access to mDNS services after setup
enable_mdns_storage()
ot = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(ot, config)

View File

@@ -63,6 +63,8 @@ SPIRAM_SPEEDS = {
def supported() -> bool:
if not CORE.is_esp32:
return False
variant = get_esp32_variant()
return variant in SPIRAM_MODES

View File

@@ -145,7 +145,7 @@ class BSDSocketImpl : public Socket {
}
ssize_t sendto(const void *buf, size_t len, int flags, const struct sockaddr *to, socklen_t tolen) override {
return ::sendto(fd_, buf, len, flags, to, tolen);
return ::sendto(fd_, buf, len, flags, to, tolen); // NOLINT(readability-suspicious-call-argument)
}
int setblocking(bool blocking) override {

View File

@@ -40,33 +40,14 @@ class LWIPRawImpl : public Socket {
void init() {
LWIP_LOG("init(%p)", pcb_);
tcp_arg(pcb_, this);
tcp_accept(pcb_, LWIPRawImpl::s_accept_fn);
tcp_recv(pcb_, LWIPRawImpl::s_recv_fn);
tcp_err(pcb_, LWIPRawImpl::s_err_fn);
}
std::unique_ptr<Socket> accept(struct sockaddr *addr, socklen_t *addrlen) override {
if (pcb_ == nullptr) {
errno = EBADF;
return nullptr;
}
if (this->accepted_socket_count_ == 0) {
errno = EWOULDBLOCK;
return nullptr;
}
// Take from front for FIFO ordering
std::unique_ptr<LWIPRawImpl> sock = std::move(this->accepted_sockets_[0]);
// Shift remaining sockets forward
for (uint8_t i = 1; i < this->accepted_socket_count_; i++) {
this->accepted_sockets_[i - 1] = std::move(this->accepted_sockets_[i]);
}
this->accepted_socket_count_--;
LWIP_LOG("Connection accepted by application, queue size: %d", this->accepted_socket_count_);
if (addr != nullptr) {
sock->getpeername(addr, addrlen);
}
LWIP_LOG("accept(%p)", sock.get());
return std::unique_ptr<Socket>(std::move(sock));
// Non-listening sockets return error
errno = EINVAL;
return nullptr;
}
int bind(const struct sockaddr *name, socklen_t addrlen) override {
if (pcb_ == nullptr) {
@@ -292,25 +273,10 @@ class LWIPRawImpl : public Socket {
return -1;
}
int listen(int backlog) override {
if (pcb_ == nullptr) {
errno = EBADF;
return -1;
}
LWIP_LOG("tcp_listen_with_backlog(%p backlog=%d)", pcb_, backlog);
struct tcp_pcb *listen_pcb = tcp_listen_with_backlog(pcb_, backlog);
if (listen_pcb == nullptr) {
tcp_abort(pcb_);
pcb_ = nullptr;
errno = EOPNOTSUPP;
return -1;
}
// tcp_listen reallocates the pcb, replace ours
pcb_ = listen_pcb;
// set callbacks on new pcb
LWIP_LOG("tcp_arg(%p)", pcb_);
tcp_arg(pcb_, this);
tcp_accept(pcb_, LWIPRawImpl::s_accept_fn);
return 0;
// Regular sockets can't be converted to listening - this shouldn't happen
// as listen() should only be called on sockets created for listening
errno = EOPNOTSUPP;
return -1;
}
ssize_t read(void *buf, size_t len) override {
if (pcb_ == nullptr) {
@@ -491,29 +457,6 @@ class LWIPRawImpl : public Socket {
return 0;
}
err_t accept_fn(struct tcp_pcb *newpcb, err_t err) {
LWIP_LOG("accept(newpcb=%p err=%d)", newpcb, err);
if (err != ERR_OK || newpcb == nullptr) {
// "An error code if there has been an error accepting. Only return ERR_ABRT if you have
// called tcp_abort from within the callback function!"
// https://www.nongnu.org/lwip/2_1_x/tcp_8h.html#a00517abce6856d6c82f0efebdafb734d
// nothing to do here, we just don't push it to the queue
return ERR_OK;
}
// Check if we've reached the maximum accept queue size
if (this->accepted_socket_count_ >= MAX_ACCEPTED_SOCKETS) {
LWIP_LOG("Rejecting connection, queue full (%d)", this->accepted_socket_count_);
// Abort the connection when queue is full
tcp_abort(newpcb);
// Must return ERR_ABRT since we called tcp_abort()
return ERR_ABRT;
}
auto sock = make_unique<LWIPRawImpl>(family_, newpcb);
sock->init();
this->accepted_sockets_[this->accepted_socket_count_++] = std::move(sock);
LWIP_LOG("Accepted connection, queue size: %d", this->accepted_socket_count_);
return ERR_OK;
}
void err_fn(err_t err) {
LWIP_LOG("err(err=%d)", err);
// "If a connection is aborted because of an error, the application is alerted of this event by
@@ -545,11 +488,6 @@ class LWIPRawImpl : public Socket {
return ERR_OK;
}
static err_t s_accept_fn(void *arg, struct tcp_pcb *newpcb, err_t err) {
LWIPRawImpl *arg_this = reinterpret_cast<LWIPRawImpl *>(arg);
return arg_this->accept_fn(newpcb, err);
}
static void s_err_fn(void *arg, err_t err) {
LWIPRawImpl *arg_this = reinterpret_cast<LWIPRawImpl *>(arg);
arg_this->err_fn(err);
@@ -601,7 +539,107 @@ class LWIPRawImpl : public Socket {
return -1;
}
// Member ordering optimized to minimize padding on 32-bit systems
// Largest members first (4 bytes), then smaller members (1 byte each)
struct tcp_pcb *pcb_;
pbuf *rx_buf_ = nullptr;
size_t rx_buf_offset_ = 0;
bool rx_closed_ = false;
// don't use lwip nodelay flag, it sometimes causes reconnect
// instead use it for determining whether to call lwip_output
bool nodelay_ = false;
sa_family_t family_ = 0;
};
// Listening socket class - only allocates accept queue when needed (for bind+listen sockets)
// This saves 16 bytes (12 bytes array + 1 byte count + 3 bytes padding) for regular connected sockets on ESP8266/RP2040
class LWIPRawListenImpl : public LWIPRawImpl {
public:
LWIPRawListenImpl(sa_family_t family, struct tcp_pcb *pcb) : LWIPRawImpl(family, pcb) {}
void init() {
LWIP_LOG("init(%p)", pcb_);
tcp_arg(pcb_, this);
tcp_accept(pcb_, LWIPRawListenImpl::s_accept_fn);
tcp_err(pcb_, LWIPRawImpl::s_err_fn); // Use base class error handler
}
std::unique_ptr<Socket> accept(struct sockaddr *addr, socklen_t *addrlen) override {
if (pcb_ == nullptr) {
errno = EBADF;
return nullptr;
}
if (accepted_socket_count_ == 0) {
errno = EWOULDBLOCK;
return nullptr;
}
// Take from front for FIFO ordering
std::unique_ptr<LWIPRawImpl> sock = std::move(accepted_sockets_[0]);
// Shift remaining sockets forward
for (uint8_t i = 1; i < accepted_socket_count_; i++) {
accepted_sockets_[i - 1] = std::move(accepted_sockets_[i]);
}
accepted_socket_count_--;
LWIP_LOG("Connection accepted by application, queue size: %d", accepted_socket_count_);
if (addr != nullptr) {
sock->getpeername(addr, addrlen);
}
LWIP_LOG("accept(%p)", sock.get());
return std::unique_ptr<Socket>(std::move(sock));
}
int listen(int backlog) override {
if (pcb_ == nullptr) {
errno = EBADF;
return -1;
}
LWIP_LOG("tcp_listen_with_backlog(%p backlog=%d)", pcb_, backlog);
struct tcp_pcb *listen_pcb = tcp_listen_with_backlog(pcb_, backlog);
if (listen_pcb == nullptr) {
tcp_abort(pcb_);
pcb_ = nullptr;
errno = EOPNOTSUPP;
return -1;
}
// tcp_listen reallocates the pcb, replace ours
pcb_ = listen_pcb;
// set callbacks on new pcb
LWIP_LOG("tcp_arg(%p)", pcb_);
tcp_arg(pcb_, this);
tcp_accept(pcb_, LWIPRawListenImpl::s_accept_fn);
return 0;
}
private:
err_t accept_fn(struct tcp_pcb *newpcb, err_t err) {
LWIP_LOG("accept(newpcb=%p err=%d)", newpcb, err);
if (err != ERR_OK || newpcb == nullptr) {
// "An error code if there has been an error accepting. Only return ERR_ABRT if you have
// called tcp_abort from within the callback function!"
// https://www.nongnu.org/lwip/2_1_x/tcp_8h.html#a00517abce6856d6c82f0efebdafb734d
// nothing to do here, we just don't push it to the queue
return ERR_OK;
}
// Check if we've reached the maximum accept queue size
if (accepted_socket_count_ >= MAX_ACCEPTED_SOCKETS) {
LWIP_LOG("Rejecting connection, queue full (%d)", accepted_socket_count_);
// Abort the connection when queue is full
tcp_abort(newpcb);
// Must return ERR_ABRT since we called tcp_abort()
return ERR_ABRT;
}
auto sock = make_unique<LWIPRawImpl>(family_, newpcb);
sock->init();
accepted_sockets_[accepted_socket_count_++] = std::move(sock);
LWIP_LOG("Accepted connection, queue size: %d", accepted_socket_count_);
return ERR_OK;
}
static err_t s_accept_fn(void *arg, struct tcp_pcb *newpcb, err_t err) {
LWIPRawListenImpl *arg_this = reinterpret_cast<LWIPRawListenImpl *>(arg);
return arg_this->accept_fn(newpcb, err);
}
// Accept queue - holds incoming connections briefly until the event loop calls accept()
// This is NOT a connection pool - just a temporary queue between LWIP callbacks and the main loop
// 3 slots is plenty since connections are pulled out quickly by the event loop
@@ -613,23 +651,21 @@ class LWIPRawImpl : public Socket {
// - std::array<3>: 12 bytes fixed (3 pointers × 4 bytes)
// Saves ~44+ bytes RAM per listening socket + avoids ALL heap allocations
// Used on ESP8266 and RP2040 (platforms using LWIP_TCP implementation)
//
// By using a separate listening socket class, regular connected sockets save
// 16 bytes (12 bytes array + 1 byte count + 3 bytes padding) of memory overhead on 32-bit systems
static constexpr size_t MAX_ACCEPTED_SOCKETS = 3;
std::array<std::unique_ptr<LWIPRawImpl>, MAX_ACCEPTED_SOCKETS> accepted_sockets_;
uint8_t accepted_socket_count_ = 0; // Number of sockets currently in queue
bool rx_closed_ = false;
pbuf *rx_buf_ = nullptr;
size_t rx_buf_offset_ = 0;
// don't use lwip nodelay flag, it sometimes causes reconnect
// instead use it for determining whether to call lwip_output
bool nodelay_ = false;
sa_family_t family_ = 0;
};
std::unique_ptr<Socket> socket(int domain, int type, int protocol) {
auto *pcb = tcp_new();
if (pcb == nullptr)
return nullptr;
auto *sock = new LWIPRawImpl((sa_family_t) domain, pcb); // NOLINT(cppcoreguidelines-owning-memory)
// Create listening socket implementation since user sockets typically bind+listen
// Accepted connections are created directly as LWIPRawImpl in the accept callback
auto *sock = new LWIPRawListenImpl((sa_family_t) domain, pcb); // NOLINT(cppcoreguidelines-owning-memory)
sock->init();
return std::unique_ptr<Socket>{sock};
}

View File

@@ -6,7 +6,7 @@ from pathlib import Path
from esphome import automation, external_files
import esphome.codegen as cg
from esphome.components import audio, esp32, media_player, speaker
from esphome.components import audio, esp32, media_player, psram, speaker
import esphome.config_validation as cv
from esphome.const import (
CONF_BUFFER_SIZE,
@@ -26,10 +26,21 @@ from esphome.const import (
from esphome.core import CORE, HexInt
from esphome.core.entity_helpers import inherit_property_from
from esphome.external_files import download_content
from esphome.types import ConfigType
_LOGGER = logging.getLogger(__name__)
AUTO_LOAD = ["audio", "psram"]
def AUTO_LOAD(config: ConfigType) -> list[str]:
load = ["audio"]
if (
not config
or config.get(CONF_TASK_STACK_IN_PSRAM)
or config.get(CONF_CODEC_SUPPORT_ENABLED)
):
return load + ["psram"]
return load
CODEOWNERS = ["@kahrendt", "@synesthesiam"]
DOMAIN = "media_player"
@@ -279,7 +290,9 @@ CONFIG_SCHEMA = cv.All(
cv.Optional(CONF_BUFFER_SIZE, default=1000000): cv.int_range(
min=4000, max=4000000
),
cv.Optional(CONF_CODEC_SUPPORT_ENABLED, default=True): cv.boolean,
cv.Optional(
CONF_CODEC_SUPPORT_ENABLED, default=psram.supported()
): cv.boolean,
cv.Optional(CONF_FILES): cv.ensure_list(MEDIA_FILE_TYPE_SCHEMA),
cv.Optional(CONF_TASK_STACK_IN_PSRAM, default=False): cv.boolean,
cv.Optional(CONF_VOLUME_INCREMENT, default=0.05): cv.percentage,

View File

@@ -347,7 +347,7 @@ def final_validate_device_schema(
def validate_pin(opt, device):
def validator(value):
if opt in device:
if opt in device and not CORE.testing_mode:
raise cv.Invalid(
f"The uart {opt} is used both by {name} and {device[opt]}, "
f"but can only be used by one. Please create a new uart bus for {name}."

View File

@@ -9,6 +9,7 @@ from esphome.components.esp32 import (
import esphome.config_validation as cv
from esphome.const import CONF_DEVICES, CONF_ID
from esphome.cpp_types import Component
from esphome.types import ConfigType
AUTO_LOAD = ["bytebuffer"]
CODEOWNERS = ["@clydebarrow"]
@@ -20,6 +21,7 @@ USBClient = usb_host_ns.class_("USBClient", Component)
CONF_VID = "vid"
CONF_PID = "pid"
CONF_ENABLE_HUBS = "enable_hubs"
CONF_MAX_TRANSFER_REQUESTS = "max_transfer_requests"
def usb_device_schema(cls=USBClient, vid: int = None, pid: [int] = None) -> cv.Schema:
@@ -44,6 +46,9 @@ CONFIG_SCHEMA = cv.All(
{
cv.GenerateID(): cv.declare_id(USBHost),
cv.Optional(CONF_ENABLE_HUBS, default=False): cv.boolean,
cv.Optional(CONF_MAX_TRANSFER_REQUESTS, default=16): cv.int_range(
min=1, max=32
),
cv.Optional(CONF_DEVICES): cv.ensure_list(usb_device_schema()),
}
),
@@ -58,10 +63,14 @@ async def register_usb_client(config):
return var
async def to_code(config):
async def to_code(config: ConfigType) -> None:
add_idf_sdkconfig_option("CONFIG_USB_HOST_CONTROL_TRANSFER_MAX_SIZE", 1024)
if config.get(CONF_ENABLE_HUBS):
add_idf_sdkconfig_option("CONFIG_USB_HOST_HUBS_SUPPORTED", True)
max_requests = config[CONF_MAX_TRANSFER_REQUESTS]
cg.add_define("USB_HOST_MAX_REQUESTS", max_requests)
var = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(var, config)
for device in config.get(CONF_DEVICES) or ():

View File

@@ -2,6 +2,7 @@
// Should not be needed, but it's required to pass CI clang-tidy checks
#if defined(USE_ESP32_VARIANT_ESP32S2) || defined(USE_ESP32_VARIANT_ESP32S3) || defined(USE_ESP32_VARIANT_ESP32P4)
#include "esphome/core/defines.h"
#include "esphome/core/component.h"
#include <vector>
#include "usb/usb_host.h"
@@ -16,23 +17,25 @@ namespace usb_host {
// THREADING MODEL:
// This component uses a dedicated USB task for event processing to prevent data loss.
// - USB Task (high priority): Handles USB events, executes transfer callbacks
// - Main Loop Task: Initiates transfers, processes completion events
// - USB Task (high priority): Handles USB events, executes transfer callbacks, releases transfer slots
// - Main Loop Task: Initiates transfers, processes device connect/disconnect events
//
// Thread-safe communication:
// - Lock-free queues for USB task -> main loop events (SPSC pattern)
// - Lock-free TransferRequest pool using atomic bitmask (MCSP pattern)
// - Lock-free TransferRequest pool using atomic bitmask (MCMP pattern - multi-consumer, multi-producer)
//
// TransferRequest pool access pattern:
// - get_trq_() [allocate]: Called from BOTH USB task and main loop threads
// * USB task: via USB UART input callbacks that restart transfers immediately
// * Main loop: for output transfers and flow-controlled input restarts
// - release_trq() [deallocate]: Called from main loop thread only
// - release_trq() [deallocate]: Called from BOTH USB task and main loop threads
// * USB task: immediately after transfer callback completes (critical for preventing slot exhaustion)
// * Main loop: when transfer submission fails
//
// The multi-threaded allocation is intentional for performance:
// - USB task can immediately restart input transfers without context switching
// The multi-threaded allocation/deallocation is intentional for performance:
// - USB task can immediately restart input transfers and release slots without context switching
// - Main loop controls backpressure by deciding when to restart after consuming data
// The atomic bitmask ensures thread-safe allocation without mutex blocking.
// The atomic bitmask ensures thread-safe allocation/deallocation without mutex blocking.
static const char *const TAG = "usb_host";
@@ -52,8 +55,17 @@ static const uint8_t USB_DIR_IN = 1 << 7;
static const uint8_t USB_DIR_OUT = 0;
static const size_t SETUP_PACKET_SIZE = 8;
static const size_t MAX_REQUESTS = 16; // maximum number of outstanding requests possible.
static_assert(MAX_REQUESTS <= 16, "MAX_REQUESTS must be <= 16 to fit in uint16_t bitmask");
static const size_t MAX_REQUESTS = USB_HOST_MAX_REQUESTS; // maximum number of outstanding requests possible.
static_assert(MAX_REQUESTS >= 1 && MAX_REQUESTS <= 32, "MAX_REQUESTS must be between 1 and 32");
// Select appropriate bitmask type for tracking allocation of TransferRequest slots.
// The bitmask must have at least as many bits as MAX_REQUESTS, so:
// - Use uint16_t for up to 16 requests (MAX_REQUESTS <= 16)
// - Use uint32_t for 17-32 requests (MAX_REQUESTS > 16)
// This is tied to the static_assert above, which enforces MAX_REQUESTS is between 1 and 32.
// If MAX_REQUESTS is increased above 32, this logic and the static_assert must be updated.
using trq_bitmask_t = std::conditional<(MAX_REQUESTS <= 16), uint16_t, uint32_t>::type;
static constexpr size_t USB_EVENT_QUEUE_SIZE = 32; // Size of event queue between USB task and main loop
static constexpr size_t USB_TASK_STACK_SIZE = 4096; // Stack size for USB task (same as ESP-IDF USB examples)
static constexpr UBaseType_t USB_TASK_PRIORITY = 5; // Higher priority than main loop (tskIDLE_PRIORITY + 5)
@@ -83,8 +95,6 @@ struct TransferRequest {
enum EventType : uint8_t {
EVENT_DEVICE_NEW,
EVENT_DEVICE_GONE,
EVENT_TRANSFER_COMPLETE,
EVENT_CONTROL_COMPLETE,
};
struct UsbEvent {
@@ -96,9 +106,6 @@ struct UsbEvent {
struct {
usb_device_handle_t handle;
} device_gone;
struct {
TransferRequest *trq;
} transfer;
} data;
// Required for EventPool - no cleanup needed for POD types
@@ -163,10 +170,9 @@ class USBClient : public Component {
uint16_t pid_{};
// Lock-free pool management using atomic bitmask (no dynamic allocation)
// Bit i = 1: requests_[i] is in use, Bit i = 0: requests_[i] is available
// Supports multiple concurrent consumers (both threads can allocate)
// Single producer for deallocation (main loop only)
// Limited to 16 slots by uint16_t size (enforced by static_assert)
std::atomic<uint16_t> trq_in_use_;
// Supports multiple concurrent consumers and producers (both threads can allocate/deallocate)
// Bitmask type automatically selected: uint16_t for <= 16 slots, uint32_t for 17-32 slots
std::atomic<trq_bitmask_t> trq_in_use_;
TransferRequest requests_[MAX_REQUESTS]{};
};
class USBHost : public Component {

View File

@@ -228,12 +228,6 @@ void USBClient::loop() {
case EVENT_DEVICE_GONE:
this->on_removed(event->data.device_gone.handle);
break;
case EVENT_TRANSFER_COMPLETE:
case EVENT_CONTROL_COMPLETE: {
auto *trq = event->data.transfer.trq;
this->release_trq(trq);
break;
}
}
// Return event to pool for reuse
this->event_pool.release(event);
@@ -313,25 +307,6 @@ void USBClient::on_removed(usb_device_handle_t handle) {
}
}
// Helper to queue transfer cleanup to main loop
static void queue_transfer_cleanup(TransferRequest *trq, EventType type) {
auto *client = trq->client;
// Allocate event from pool
UsbEvent *event = client->event_pool.allocate();
if (event == nullptr) {
// No events available - increment counter for periodic logging
client->event_queue.increment_dropped_count();
return;
}
event->type = type;
event->data.transfer.trq = trq;
// Push to lock-free queue (always succeeds since pool size == queue size)
client->event_queue.push(event);
}
// CALLBACK CONTEXT: USB task (called from usb_host_client_handle_events in USB task)
static void control_callback(const usb_transfer_t *xfer) {
auto *trq = static_cast<TransferRequest *>(xfer->context);
@@ -346,8 +321,9 @@ static void control_callback(const usb_transfer_t *xfer) {
trq->callback(trq->status);
}
// Queue cleanup to main loop
queue_transfer_cleanup(trq, EVENT_CONTROL_COMPLETE);
// Release transfer slot immediately in USB task
// The release_trq() uses thread-safe atomic operations
trq->client->release_trq(trq);
}
// THREAD CONTEXT: Called from both USB task and main loop threads (multi-consumer)
@@ -358,20 +334,20 @@ static void control_callback(const usb_transfer_t *xfer) {
// This multi-threaded access is intentional for performance - USB task can
// immediately restart transfers without waiting for main loop scheduling.
TransferRequest *USBClient::get_trq_() {
uint16_t mask = this->trq_in_use_.load(std::memory_order_relaxed);
trq_bitmask_t mask = this->trq_in_use_.load(std::memory_order_relaxed);
// Find first available slot (bit = 0) and try to claim it atomically
// We use a while loop to allow retrying the same slot after CAS failure
size_t i = 0;
while (i != MAX_REQUESTS) {
if (mask & (1U << i)) {
if (mask & (static_cast<trq_bitmask_t>(1) << i)) {
// Slot is in use, move to next slot
i++;
continue;
}
// Slot i appears available, try to claim it atomically
uint16_t desired = mask | (1U << i); // Set bit i to mark as in-use
trq_bitmask_t desired = mask | (static_cast<trq_bitmask_t>(1) << i); // Set bit i to mark as in-use
if (this->trq_in_use_.compare_exchange_weak(mask, desired, std::memory_order_acquire, std::memory_order_relaxed)) {
// Successfully claimed slot i - prepare the TransferRequest
@@ -386,7 +362,7 @@ TransferRequest *USBClient::get_trq_() {
i = 0;
}
ESP_LOGE(TAG, "All %d transfer slots in use", MAX_REQUESTS);
ESP_LOGE(TAG, "All %zu transfer slots in use", MAX_REQUESTS);
return nullptr;
}
void USBClient::disconnect() {
@@ -452,8 +428,11 @@ static void transfer_callback(usb_transfer_t *xfer) {
trq->callback(trq->status);
}
// Queue cleanup to main loop
queue_transfer_cleanup(trq, EVENT_TRANSFER_COMPLETE);
// Release transfer slot AFTER callback completes to prevent slot exhaustion
// This is critical for high-throughput transfers (e.g., USB UART at 115200 baud)
// The callback has finished accessing xfer->data_buffer, so it's safe to release
// The release_trq() uses thread-safe atomic operations
trq->client->release_trq(trq);
}
/**
* Performs a transfer input operation.
@@ -521,12 +500,12 @@ void USBClient::dump_config() {
" Product id %04X",
this->vid_, this->pid_);
}
// THREAD CONTEXT: Only called from main loop thread (single producer for deallocation)
// - Via event processing when handling EVENT_TRANSFER_COMPLETE/EVENT_CONTROL_COMPLETE
// - Directly when transfer submission fails
// THREAD CONTEXT: Called from both USB task and main loop threads
// - USB task: Immediately after transfer callback completes
// - Main loop: When transfer submission fails
//
// THREAD SAFETY: Lock-free using atomic AND to clear bit
// Single-producer pattern makes this simpler than allocation
// Thread-safe atomic operation allows multi-threaded deallocation
void USBClient::release_trq(TransferRequest *trq) {
if (trq == nullptr)
return;
@@ -540,8 +519,8 @@ void USBClient::release_trq(TransferRequest *trq) {
// Atomically clear bit i to mark slot as available
// fetch_and with inverted bitmask clears the bit atomically
uint16_t bit = 1U << index;
this->trq_in_use_.fetch_and(static_cast<uint16_t>(~bit), std::memory_order_release);
trq_bitmask_t bit = static_cast<trq_bitmask_t>(1) << index;
this->trq_in_use_.fetch_and(static_cast<trq_bitmask_t>(~bit), std::memory_order_release);
}
} // namespace usb_host
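A minimal, self-contained sketch of the lock-free slot pool described in the comments above: allocation claims a bit with a CAS loop, release clears it with `fetch_and`, so both the USB task and the main loop can allocate and free slots without a mutex. The names and the 16-slot size here are illustrative, not the component's actual configuration.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

static constexpr size_t SLOT_COUNT = 16;
static std::atomic<uint16_t> slots_in_use{0};

// Returns a claimed slot index, or -1 if every slot is busy.
int claim_slot() {
  uint16_t mask = slots_in_use.load(std::memory_order_relaxed);
  size_t i = 0;
  while (i != SLOT_COUNT) {
    if (mask & (1U << i)) {  // slot busy, try the next one
      i++;
      continue;
    }
    uint16_t desired = mask | (1U << i);
    if (slots_in_use.compare_exchange_weak(mask, desired, std::memory_order_acquire,
                                           std::memory_order_relaxed)) {
      return static_cast<int>(i);  // claimed slot i
    }
    // CAS failed: mask was reloaded with the current value, rescan from slot 0
    i = 0;
  }
  return -1;  // pool exhausted
}

// Safe to call from any thread: clearing one bit cannot race with another
// thread clearing a different bit.
void release_slot(int index) {
  slots_in_use.fetch_and(static_cast<uint16_t>(~(1U << index)), std::memory_order_release);
}
```

The acquire/release pairing mirrors the component's usage: the acquiring CAS publishes ownership of the slot, and the releasing `fetch_and` makes the slot's contents visible to the next claimant.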

View File

@@ -380,24 +380,25 @@ void AsyncEventSource::handleRequest(AsyncWebServerRequest *request) {
if (this->on_connect_) {
this->on_connect_(rsp);
}
this->sessions_.insert(rsp);
this->sessions_.push_back(rsp);
}
void AsyncEventSource::loop() {
// Clean up dead sessions safely
// This follows the ESP-IDF pattern where free_ctx marks resources as dead
// and the main loop handles the actual cleanup to avoid race conditions
auto it = this->sessions_.begin();
while (it != this->sessions_.end()) {
auto *ses = *it;
for (size_t i = 0; i < this->sessions_.size();) {
auto *ses = this->sessions_[i];
// If the session has a dead socket (marked by destroy callback)
if (ses->fd_.load() == 0) {
ESP_LOGD(TAG, "Removing dead event source session");
it = this->sessions_.erase(it);
delete ses; // NOLINT(cppcoreguidelines-owning-memory)
// Remove by swapping with last element (O(1) removal, order doesn't matter for sessions)
this->sessions_[i] = this->sessions_.back();
this->sessions_.pop_back();
} else {
ses->loop();
++it;
++i;
}
}
}

View File

@@ -8,7 +8,6 @@
#include <functional>
#include <list>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>
@@ -315,7 +314,10 @@ class AsyncEventSource : public AsyncWebHandler {
protected:
std::string url_;
std::set<AsyncEventSourceResponse *> sessions_;
// Use vector instead of set: SSE sessions are typically 1-5 connections (browsers, dashboards).
// Linear search is faster than red-black tree overhead for this small dataset.
// Only operations needed: add session, remove session, iterate sessions - no need for sorted order.
std::vector<AsyncEventSourceResponse *> sessions_;
connect_handler_t on_connect_{};
esphome::web_server::WebServer *web_server_;
};

View File

@@ -447,6 +447,8 @@ async def to_code(config):
var.get_disconnect_trigger(), [], on_disconnect_config
)
CORE.add_job(final_step)
@automation.register_condition("wifi.connected", WiFiConnectedCondition, cv.Schema({}))
async def wifi_connected_to_code(config, condition_id, template_arg, args):
@@ -468,6 +470,28 @@ async def wifi_disable_to_code(config, action_id, template_arg, args):
return cg.new_Pvariable(action_id, template_arg)
KEEP_SCAN_RESULTS_KEY = "wifi_keep_scan_results"
def request_wifi_scan_results():
"""Request that WiFi scan results be kept in memory after connection.
Components that need access to scan results after WiFi is connected should
call this function during their code generation. This prevents the WiFi component from
freeing scan result memory after successful connection.
"""
CORE.data[KEEP_SCAN_RESULTS_KEY] = True
@coroutine_with_priority(CoroPriority.FINAL)
async def final_step():
"""Final code generation step to configure scan result retention."""
if CORE.data.get(KEEP_SCAN_RESULTS_KEY, False):
cg.add(
cg.RawExpression("wifi::global_wifi_component->set_keep_scan_results(true)")
)
@automation.register_action(
"wifi.configure",
WiFiConfigureAction,

View File

@@ -265,12 +265,9 @@ network::IPAddress WiFiComponent::get_dns_address(int num) {
return this->wifi_dns_ip_(num);
return {};
}
std::string WiFiComponent::get_use_address() const {
if (this->use_address_.empty()) {
return App.get_name() + ".local";
}
return this->use_address_;
}
// set_use_address() is guaranteed to be called during component setup by Python code generation,
// so use_address_ will always be valid when get_use_address() is called - no fallback needed.
const std::string &WiFiComponent::get_use_address() const { return this->use_address_; }
void WiFiComponent::set_use_address(const std::string &use_address) { this->use_address_ = use_address; }
#ifdef USE_WIFI_AP
@@ -552,7 +549,7 @@ void WiFiComponent::start_scanning() {
// Using insertion sort instead of std::stable_sort saves flash memory
// by avoiding template instantiations (std::rotate, std::stable_sort, lambdas)
// IMPORTANT: This sort is stable (preserves relative order of equal elements)
static void insertion_sort_scan_results(std::vector<WiFiScanResult> &results) {
template<typename VectorType> static void insertion_sort_scan_results(VectorType &results) {
const size_t size = results.size();
for (size_t i = 1; i < size; i++) {
// Make a copy to avoid issues with move semantics during comparison
@@ -576,8 +573,9 @@ __attribute__((noinline)) static void log_scan_result(const WiFiScanResult &res)
format_mac_addr_upper(bssid.data(), bssid_s);
if (res.get_matches()) {
ESP_LOGI(TAG, "- '%s' %s" LOG_SECRET("(%s) ") "%s", res.get_ssid().c_str(), res.get_is_hidden() ? "(HIDDEN) " : "",
bssid_s, LOG_STR_ARG(get_signal_bars(res.get_rssi())));
ESP_LOGI(TAG, "- '%s' %s" LOG_SECRET("(%s) ") "%s", res.get_ssid().c_str(),
res.get_is_hidden() ? LOG_STR_LITERAL("(HIDDEN) ") : LOG_STR_LITERAL(""), bssid_s,
LOG_STR_ARG(get_signal_bars(res.get_rssi())));
ESP_LOGD(TAG,
" Channel: %u\n"
" RSSI: %d dB",
@@ -715,6 +713,12 @@ void WiFiComponent::check_connecting_finished() {
this->state_ = WIFI_COMPONENT_STATE_STA_CONNECTED;
this->num_retried_ = 0;
// Free scan results memory unless a component needs them
if (!this->keep_scan_results_) {
this->scan_result_.clear();
this->scan_result_.shrink_to_fit();
}
if (this->fast_connect_) {
this->save_fast_connect_settings_();
}

View File

@@ -121,6 +121,14 @@ struct EAPAuth {
using bssid_t = std::array<uint8_t, 6>;
// Use std::vector for RP2040 since scan count is unknown (callback-based)
// Use FixedVector for other platforms where count is queried first
#ifdef USE_RP2040
template<typename T> using wifi_scan_vector_t = std::vector<T>;
#else
template<typename T> using wifi_scan_vector_t = FixedVector<T>;
#endif
class WiFiAP {
public:
void set_ssid(const std::string &ssid);
@@ -275,10 +283,10 @@ class WiFiComponent : public Component {
network::IPAddress get_dns_address(int num);
network::IPAddresses get_ip_addresses();
std::string get_use_address() const;
const std::string &get_use_address() const;
void set_use_address(const std::string &use_address);
const std::vector<WiFiScanResult> &get_scan_result() const { return scan_result_; }
const wifi_scan_vector_t<WiFiScanResult> &get_scan_result() const { return scan_result_; }
network::IPAddress wifi_soft_ap_ip();
@@ -316,6 +324,7 @@ class WiFiComponent : public Component {
int8_t wifi_rssi();
void set_enable_on_boot(bool enable_on_boot) { this->enable_on_boot_ = enable_on_boot; }
void set_keep_scan_results(bool keep_scan_results) { this->keep_scan_results_ = keep_scan_results; }
Trigger<> *get_connect_trigger() const { return this->connect_trigger_; };
Trigger<> *get_disconnect_trigger() const { return this->disconnect_trigger_; };
@@ -385,7 +394,7 @@ class WiFiComponent : public Component {
std::string use_address_;
std::vector<WiFiAP> sta_;
std::vector<WiFiSTAPriority> sta_priorities_;
std::vector<WiFiScanResult> scan_result_;
wifi_scan_vector_t<WiFiScanResult> scan_result_;
WiFiAP selected_ap_;
WiFiAP ap_;
optional<float> output_power_;
@@ -424,6 +433,7 @@ class WiFiComponent : public Component {
#endif
bool enable_on_boot_;
bool got_ipv4_address_{false};
bool keep_scan_results_{false};
// Pointers at the end (naturally aligned)
Trigger<> *connect_trigger_{new Trigger<>()};

View File

@@ -696,7 +696,15 @@ void WiFiComponent::wifi_scan_done_callback_(void *arg, STATUS status) {
this->retry_connect();
return;
}
// Count the number of results first
auto *head = reinterpret_cast<bss_info *>(arg);
size_t count = 0;
for (bss_info *it = head; it != nullptr; it = STAILQ_NEXT(it, next)) {
count++;
}
this->scan_result_.init(count);
for (bss_info *it = head; it != nullptr; it = STAILQ_NEXT(it, next)) {
WiFiScanResult res({it->bssid[0], it->bssid[1], it->bssid[2], it->bssid[3], it->bssid[4], it->bssid[5]},
std::string(reinterpret_cast<char *>(it->ssid), it->ssid_len), it->channel, it->rssi,

View File

@@ -784,7 +784,7 @@ void WiFiComponent::wifi_process_event_(IDFWiFiEvent *data) {
}
records.resize(number);
scan_result_.reserve(number);
scan_result_.init(number);
for (int i = 0; i < number; i++) {
auto &record = records[i];
bssid_t bssid;

View File

@@ -411,7 +411,7 @@ void WiFiComponent::wifi_scan_done_callback_() {
if (num < 0)
return;
this->scan_result_.reserve(static_cast<unsigned int>(num));
this->scan_result_.init(static_cast<unsigned int>(num));
for (int i = 0; i < num; i++) {
String ssid = WiFi.SSID(i);
wifi_auth_mode_t authmode = WiFi.encryptionType(i);

View File

@@ -1,5 +1,5 @@
import esphome.codegen as cg
from esphome.components import text_sensor
from esphome.components import text_sensor, wifi
import esphome.config_validation as cv
from esphome.const import (
CONF_BSSID,
@@ -77,7 +77,9 @@ async def to_code(config):
await setup_conf(config, CONF_SSID)
await setup_conf(config, CONF_BSSID)
await setup_conf(config, CONF_MAC_ADDRESS)
await setup_conf(config, CONF_SCAN_RESULTS)
if CONF_SCAN_RESULTS in config:
await setup_conf(config, CONF_SCAN_RESULTS)
wifi.request_wifi_scan_results()
await setup_conf(config, CONF_DNS_ADDRESS)
if conf := config.get(CONF_IP_ADDRESS):
wifi_info = await text_sensor.new_text_sensor(config[CONF_IP_ADDRESS])

View File

@@ -1195,6 +1195,13 @@ def validate_bytes(value):
def hostname(value):
"""Validate that the value is a valid hostname.
Maximum length is 63 characters per RFC 1035.
Note: If this limit is changed, update MAX_NAME_WITH_SUFFIX_SIZE in
esphome/core/helpers.cpp to accommodate the new maximum length.
"""
value = string(value)
if re.match(r"^[a-z0-9-]{1,63}$", value, re.IGNORECASE) is not None:
return value

View File

@@ -836,6 +836,7 @@ CONF_RMT_CHANNEL = "rmt_channel"
CONF_RMT_SYMBOLS = "rmt_symbols"
CONF_ROTATION = "rotation"
CONF_ROW = "row"
CONF_ROWS = "rows"
CONF_RS_PIN = "rs_pin"
CONF_RTD_NOMINAL_RESISTANCE = "rtd_nominal_resistance"
CONF_RTD_WIRES = "rtd_wires"

View File

@@ -529,6 +529,8 @@ class EsphomeCore:
self.dashboard = False
# True if command is run from vscode api
self.vscode = False
# True if running in testing mode (disables validation checks for grouped testing)
self.testing_mode = False
# The name of the node
self.name: str | None = None
# The friendly name of the node

View File

@@ -340,8 +340,8 @@ void Application::calculate_looping_components_() {
}
}
// Pre-reserve vector to avoid reallocations
this->looping_components_.reserve(total_looping);
// Initialize FixedVector with exact size - no reallocation possible
this->looping_components_.init(total_looping);
// Add all components with loop override that aren't already LOOP_DONE
// Some components (like logger) may call disable_loop() during initialization

View File

@@ -102,9 +102,15 @@ class Application {
arch_init();
this->name_add_mac_suffix_ = name_add_mac_suffix;
if (name_add_mac_suffix) {
const std::string mac_suffix = get_mac_address().substr(6);
this->name_ = name + "-" + mac_suffix;
this->friendly_name_ = friendly_name.empty() ? "" : friendly_name + " " + mac_suffix;
// MAC address suffix length (last 6 characters of 12-char MAC address string)
constexpr size_t mac_address_suffix_len = 6;
const std::string mac_addr = get_mac_address();
// Use pointer + offset to avoid substr() allocation
const char *mac_suffix_ptr = mac_addr.c_str() + mac_address_suffix_len;
this->name_ = make_name_with_suffix(name, '-', mac_suffix_ptr, mac_address_suffix_len);
if (!friendly_name.empty()) {
this->friendly_name_ = make_name_with_suffix(friendly_name, ' ', mac_suffix_ptr, mac_address_suffix_len);
}
} else {
this->name_ = name;
this->friendly_name_ = friendly_name;
@@ -472,7 +478,7 @@ class Application {
// - When a component is enabled, it's swapped with the first inactive component
// and active_end_ is incremented
// - This eliminates branch mispredictions from flag checking in the hot loop
std::vector<Component *> looping_components_{};
FixedVector<Component *> looping_components_{};
#ifdef USE_SOCKET_SELECT_SUPPORT
std::vector<int> socket_fds_; // Vector of all monitored socket file descriptors
#endif

View File

@@ -7,6 +7,7 @@
#include "esphome/core/preferences.h"
#include "esphome/core/scheduler.h"
#include "esphome/core/application.h"
#include "esphome/core/helpers.h"
#include <vector>
@@ -14,7 +15,7 @@ namespace esphome {
template<typename... Ts> class AndCondition : public Condition<Ts...> {
public:
explicit AndCondition(const std::vector<Condition<Ts...> *> &conditions) : conditions_(conditions) {}
explicit AndCondition(std::initializer_list<Condition<Ts...> *> conditions) : conditions_(conditions) {}
bool check(Ts... x) override {
for (auto *condition : this->conditions_) {
if (!condition->check(x...))
@@ -25,12 +26,12 @@ template<typename... Ts> class AndCondition : public Condition<Ts...> {
}
protected:
std::vector<Condition<Ts...> *> conditions_;
FixedVector<Condition<Ts...> *> conditions_;
};
template<typename... Ts> class OrCondition : public Condition<Ts...> {
public:
explicit OrCondition(const std::vector<Condition<Ts...> *> &conditions) : conditions_(conditions) {}
explicit OrCondition(std::initializer_list<Condition<Ts...> *> conditions) : conditions_(conditions) {}
bool check(Ts... x) override {
for (auto *condition : this->conditions_) {
if (condition->check(x...))
@@ -41,7 +42,7 @@ template<typename... Ts> class OrCondition : public Condition<Ts...> {
}
protected:
std::vector<Condition<Ts...> *> conditions_;
FixedVector<Condition<Ts...> *> conditions_;
};
template<typename... Ts> class NotCondition : public Condition<Ts...> {
@@ -55,7 +56,7 @@ template<typename... Ts> class NotCondition : public Condition<Ts...> {
template<typename... Ts> class XorCondition : public Condition<Ts...> {
public:
explicit XorCondition(const std::vector<Condition<Ts...> *> &conditions) : conditions_(conditions) {}
explicit XorCondition(std::initializer_list<Condition<Ts...> *> conditions) : conditions_(conditions) {}
bool check(Ts... x) override {
size_t result = 0;
for (auto *condition : this->conditions_) {
@@ -66,7 +67,7 @@ template<typename... Ts> class XorCondition : public Condition<Ts...> {
}
protected:
std::vector<Condition<Ts...> *> conditions_;
FixedVector<Condition<Ts...> *> conditions_;
};
template<typename... Ts> class LambdaCondition : public Condition<Ts...> {

View File

@@ -200,7 +200,7 @@ CONFIG_SCHEMA = cv.All(
cv.Schema(
{
cv.Required(CONF_NAME): cv.valid_name,
cv.Optional(CONF_FRIENDLY_NAME, ""): cv.string,
cv.Optional(CONF_FRIENDLY_NAME, ""): cv.All(cv.string, cv.Length(max=120)),
cv.Optional(CONF_AREA): validate_area_config,
cv.Optional(CONF_COMMENT): cv.string,
cv.Required(CONF_BUILD_PATH): cv.string,

View File

@@ -83,6 +83,7 @@
#define USE_LVGL_TILEVIEW
#define USE_LVGL_TOUCHSCREEN
#define USE_MDNS
#define USE_MDNS_STORE_SERVICES
#define MDNS_SERVICE_COUNT 3
#define MDNS_DYNAMIC_TXT_COUNT 3
#define USE_MEDIA_PLAYER
@@ -175,6 +176,13 @@
#define USE_ESP32_BLE_SERVER_DESCRIPTOR_ON_WRITE
#define USE_ESP32_BLE_SERVER_ON_CONNECT
#define USE_ESP32_BLE_SERVER_ON_DISCONNECT
#define ESPHOME_ESP32_BLE_TRACKER_LISTENER_COUNT 1
#define ESPHOME_ESP32_BLE_TRACKER_CLIENT_COUNT 1
#define ESPHOME_ESP32_BLE_GAP_EVENT_HANDLER_COUNT 2
#define ESPHOME_ESP32_BLE_GAP_SCAN_EVENT_HANDLER_COUNT 1
#define ESPHOME_ESP32_BLE_GATTC_EVENT_HANDLER_COUNT 1
#define ESPHOME_ESP32_BLE_GATTS_EVENT_HANDLER_COUNT 1
#define ESPHOME_ESP32_BLE_BLE_STATUS_EVENT_HANDLER_COUNT 2
#define USE_ESP32_CAMERA_JPEG_ENCODER
#define USE_I2C
#define USE_IMPROV
@@ -191,9 +199,10 @@
#define USE_WEBSERVER_PORT 80 // NOLINT
#define USE_WEBSERVER_SORTING
#define USE_WIFI_11KV_SUPPORT
#define USB_HOST_MAX_REQUESTS 16
#ifdef USE_ARDUINO
#define USE_ARDUINO_VERSION_CODE VERSION_CODE(3, 2, 1)
#define USE_ARDUINO_VERSION_CODE VERSION_CODE(3, 3, 2)
#define USE_ETHERNET
#define USE_ETHERNET_KSZ8081
#endif

View File

@@ -246,12 +246,15 @@ def entity_duplicate_validator(platform: str) -> Callable[[ConfigType], ConfigTy
"\n to distinguish them"
)
raise cv.Invalid(
f"Duplicate {platform} entity with name '{entity_name}' found{device_prefix}. "
f"{conflict_msg}. "
"Each entity on a device must have a unique name within its platform."
f"{sanitized_msg}"
)
# Skip duplicate entity name validation when testing_mode is enabled
# This flag is used for grouped component testing
if not CORE.testing_mode:
raise cv.Invalid(
f"Duplicate {platform} entity with name '{entity_name}' found{device_prefix}. "
f"{conflict_msg}. "
"Each entity on a device must have a unique name within its platform."
f"{sanitized_msg}"
)
# Store metadata about this entity
entity_metadata: EntityMetadata = {

View File

@@ -235,6 +235,30 @@ std::string str_sprintf(const char *fmt, ...) {
return str;
}
// Maximum size for name with suffix: 120 (max friendly name) + 1 (separator) + 6 (MAC suffix) + 1 (null term)
static constexpr size_t MAX_NAME_WITH_SUFFIX_SIZE = 128;
std::string make_name_with_suffix(const std::string &name, char sep, const char *suffix_ptr, size_t suffix_len) {
char buffer[MAX_NAME_WITH_SUFFIX_SIZE];
size_t name_len = name.size();
size_t total_len = name_len + 1 + suffix_len;
// Silently truncate if needed: prioritize keeping the full suffix
if (total_len >= MAX_NAME_WITH_SUFFIX_SIZE) {
// NOTE: This calculation could underflow if suffix_len >= MAX_NAME_WITH_SUFFIX_SIZE - 2,
// but this is safe because this helper is only called with small suffixes:
// MAC suffixes (6-12 bytes), ".local" (5 bytes), etc.
name_len = MAX_NAME_WITH_SUFFIX_SIZE - suffix_len - 2; // -2 for separator and null terminator
total_len = name_len + 1 + suffix_len;
}
memcpy(buffer, name.c_str(), name_len);
buffer[name_len] = sep;
memcpy(buffer + name_len + 1, suffix_ptr, suffix_len);
buffer[total_len] = '\0';
return std::string(buffer, total_len);
}
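// Worked example (illustrative): for name = "kitchen" (7 chars), sep = '-' and the 6-character MAC
// suffix "d4e5f6", total_len = 7 + 1 + 6 = 14, far below MAX_NAME_WITH_SUFFIX_SIZE, so the result is
// "kitchen-d4e5f6". With a 125-character name, total_len = 132 >= 128 triggers truncation:
// name_len = 128 - 6 - 2 = 120, total_len becomes 127, and buffer[127] holds the null terminator,
// so the result still fits the 128-byte buffer and the full suffix is preserved.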
// Parsing & formatting
size_t parse_hex(const char *str, size_t length, uint8_t *data, size_t count) {

View File

@@ -162,6 +162,159 @@ template<typename T, size_t N> class StaticVector {
const_reverse_iterator rend() const { return const_reverse_iterator(begin()); }
};
/// Fixed-capacity vector - allocates once at runtime, never reallocates
/// This avoids std::vector template overhead (_M_realloc_insert, _M_default_append)
/// when size is known at initialization but not at compile time
template<typename T> class FixedVector {
private:
T *data_{nullptr};
size_t size_{0};
size_t capacity_{0};
// Helper to destroy all elements without freeing memory
void destroy_elements_() {
// Only call destructors for non-trivially destructible types
if constexpr (!std::is_trivially_destructible<T>::value) {
for (size_t i = 0; i < size_; i++) {
data_[i].~T();
}
}
}
// Helper to destroy elements and free memory
void cleanup_() {
if (data_ != nullptr) {
destroy_elements_();
// Free raw memory
::operator delete(data_);
}
}
// Helper to reset pointers after cleanup
void reset_() {
data_ = nullptr;
capacity_ = 0;
size_ = 0;
}
public:
FixedVector() = default;
/// Constructor from initializer list - allocates exact size needed
/// This enables brace initialization: FixedVector<int> v = {1, 2, 3};
FixedVector(std::initializer_list<T> init_list) {
init(init_list.size());
size_t idx = 0;
for (const auto &item : init_list) {
new (data_ + idx) T(item);
++idx;
}
size_ = init_list.size();
}
~FixedVector() { cleanup_(); }
// Disable copy operations (avoid accidental expensive copies)
FixedVector(const FixedVector &) = delete;
FixedVector &operator=(const FixedVector &) = delete;
// Enable move semantics (allows use in move-only containers like std::vector)
FixedVector(FixedVector &&other) noexcept : data_(other.data_), size_(other.size_), capacity_(other.capacity_) {
other.reset_();
}
FixedVector &operator=(FixedVector &&other) noexcept {
if (this != &other) {
// Delete our current data
cleanup_();
// Take ownership of other's data
data_ = other.data_;
size_ = other.size_;
capacity_ = other.capacity_;
// Leave other in valid empty state
other.reset_();
}
return *this;
}
// Allocate capacity - can be called multiple times to reinit
void init(size_t n) {
cleanup_();
reset_();
if (n > 0) {
// Allocate raw memory without calling constructors
// sizeof(T) is correct here for any type T (value types, pointers, etc.)
// NOLINTNEXTLINE(bugprone-sizeof-expression)
data_ = static_cast<T *>(::operator new(n * sizeof(T)));
capacity_ = n;
}
}
// Clear the vector (destroy all elements, reset size to 0, keep capacity)
void clear() {
destroy_elements_();
size_ = 0;
}
// Shrink capacity to fit current size (frees all memory)
void shrink_to_fit() {
cleanup_();
reset_();
}
/// Add element without bounds checking
/// Caller must ensure sufficient capacity was allocated via init()
/// Silently ignores pushes beyond capacity (no exception or assertion)
void push_back(const T &value) {
if (size_ < capacity_) {
// Use placement new to construct the object in pre-allocated memory
new (&data_[size_]) T(value);
size_++;
}
}
/// Add element by move without bounds checking
/// Caller must ensure sufficient capacity was allocated via init()
/// Silently ignores pushes beyond capacity (no exception or assertion)
void push_back(T &&value) {
if (size_ < capacity_) {
// Use placement new to move-construct the object in pre-allocated memory
new (&data_[size_]) T(std::move(value));
size_++;
}
}
/// Emplace element without bounds checking - constructs in-place
/// Caller must ensure sufficient capacity was allocated via init()
/// Returns reference to the newly constructed element
/// NOTE: Caller MUST ensure size_ < capacity_ before calling
T &emplace_back() {
// Use placement new to default-construct the object in pre-allocated memory
new (&data_[size_]) T();
size_++;
return data_[size_ - 1];
}
/// Access last element (no bounds checking - matches std::vector behavior)
/// Caller must ensure vector is not empty (size() > 0)
T &back() { return data_[size_ - 1]; }
const T &back() const { return data_[size_ - 1]; }
size_t size() const { return size_; }
bool empty() const { return size_ == 0; }
/// Access element without bounds checking (matches std::vector behavior)
/// Caller must ensure index is valid (i < size())
T &operator[](size_t i) { return data_[i]; }
const T &operator[](size_t i) const { return data_[i]; }
// Iterator support for range-based for loops
T *begin() { return data_; }
T *end() { return data_ + size_; }
const T *begin() const { return data_; }
const T *end() const { return data_ + size_; }
};
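/// Editor's note: a minimal usage sketch of FixedVector (illustrative only, not part of this header).
/// The element count is discovered at runtime, allocated exactly once with init(), then filled with
/// push_back(); no reallocation machinery (_M_realloc_insert) is ever instantiated.
inline int fixed_vector_usage_sketch() {
  FixedVector<int> values;
  values.init(3);  // single allocation sized exactly to the runtime-known count
  values.push_back(10);
  values.push_back(20);
  values.push_back(30);  // pushes beyond the capacity passed to init() would be silently ignored
  int sum = 0;
  for (int v : values)  // iterator support for range-based for loops
    sum += v;
  return sum;  // 60
}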
///@}
/// @name Mathematics
@@ -309,6 +462,16 @@ std::string __attribute__((format(printf, 1, 3))) str_snprintf(const char *fmt,
/// sprintf-like function returning std::string.
std::string __attribute__((format(printf, 1, 2))) str_sprintf(const char *fmt, ...);
/// Concatenate a name with a separator and suffix using an efficient stack-based approach.
/// This avoids multiple heap allocations during string construction.
/// Maximum name length supported is 120 characters for friendly names.
/// @param name The base name string
/// @param sep The separator character (e.g., '-', ' ', or '.')
/// @param suffix_ptr Pointer to the suffix characters
/// @param suffix_len Length of the suffix
/// @return The concatenated string: name + sep + suffix
std::string make_name_with_suffix(const std::string &name, char sep, const char *suffix_ptr, size_t suffix_len);
///@}
/// @name Parsing & formatting

View File

@@ -410,7 +410,7 @@ def run_ota_impl_(
af, socktype, _, _, sa = r
_LOGGER.info("Connecting to %s port %s...", sa[0], sa[1])
sock = socket.socket(af, socktype)
sock.settimeout(10.0)
sock.settimeout(20.0)
try:
sock.connect(sa)
except OSError as err:

View File

@@ -118,11 +118,11 @@ class PinRegistry(dict):
parent_config = fconf.get_config_for_path(parent_path)
final_val_fun(pin_config, parent_config)
allow_others = pin_config.get(CONF_ALLOW_OTHER_USES, False)
if count != 1 and not allow_others:
if count != 1 and not allow_others and not CORE.testing_mode:
raise cv.Invalid(
f"Pin {pin_config[CONF_NUMBER]} is used in multiple places"
)
if count == 1 and allow_others:
if count == 1 and allow_others and not CORE.testing_mode:
raise cv.Invalid(
f"Pin {pin_config[CONF_NUMBER]} incorrectly sets {CONF_ALLOW_OTHER_USES}: true"
)

View File

@@ -43,6 +43,35 @@ def patch_structhash():
cli.clean_build_dir = patched_clean_build_dir
def patch_file_downloader():
"""Patch PlatformIO's FileDownloader to retry on PackageException errors."""
from platformio.package.download import FileDownloader
from platformio.package.exception import PackageException
original_init = FileDownloader.__init__
def patched_init(self, *args: Any, **kwargs: Any) -> None:
max_retries = 3
for attempt in range(max_retries):
try:
return original_init(self, *args, **kwargs)
except PackageException as e:
if attempt < max_retries - 1:
_LOGGER.warning(
"Package download failed: %s. Retrying... (attempt %d/%d)",
str(e),
attempt + 1,
max_retries,
)
else:
# Final attempt - re-raise
raise
return None
FileDownloader.__init__ = patched_init
IGNORE_LIB_WARNINGS = f"(?:{'|'.join(['Hash', 'Update'])})"
FILTER_PLATFORMIO_LINES = [
r"Verbose mode can be enabled via `-v, --verbose` option.*",
@@ -100,6 +129,7 @@ def run_platformio_cli(*args, **kwargs) -> str | int:
import platformio.__main__
patch_structhash()
patch_file_downloader()
return run_external_command(platformio.__main__.main, *cmd, **kwargs)

View File

@@ -15,6 +15,8 @@ from esphome.const import (
from esphome.core import CORE, EsphomeError
from esphome.helpers import (
copy_file_if_changed,
get_str_env,
is_ha_addon,
read_file,
walk_files,
write_file_if_changed,
@@ -338,16 +340,21 @@ def clean_build():
def clean_all(configuration: list[str]):
import shutil
# Clean entire build dir
for dir in configuration:
build_dir = Path(dir) / ".esphome"
if build_dir.is_dir():
_LOGGER.info("Cleaning %s", build_dir)
# Don't remove storage as it will cause the dashboard to regenerate all configs
for item in build_dir.iterdir():
if item.is_file():
data_dirs = [Path(dir) / ".esphome" for dir in configuration]
if is_ha_addon():
data_dirs.append(Path("/data"))
if "ESPHOME_DATA_DIR" in os.environ:
data_dirs.append(Path(get_str_env("ESPHOME_DATA_DIR", None)))
# Clean build dir
for dir in data_dirs:
if dir.is_dir():
_LOGGER.info("Cleaning %s", dir)
# Don't remove storage or .json files which are needed by the dashboard
for item in dir.iterdir():
if item.is_file() and not item.name.endswith(".json"):
item.unlink()
elif item.name != "storage" and item.is_dir():
elif item.is_dir() and item.name != "storage":
shutil.rmtree(item)
# Clean PlatformIO project files

View File

@@ -1,3 +1,4 @@
[build]
command = "script/build-api-docs"
publish = "api-docs"
environment = { PYTHON_VERSION = "3.13" }

View File

@@ -125,9 +125,9 @@ extra_scripts = post:esphome/components/esp8266/post_build.py.script
; These are common settings for the ESP32 (all variants) using Arduino.
[common:esp32-arduino]
extends = common:arduino
platform = https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21-2/platform-espressif32.zip
platform = https://github.com/pioarduino/platform-espressif32/releases/download/55.03.31-1/platform-espressif32.zip
platform_packages =
pioarduino/framework-arduinoespressif32@https://github.com/espressif/arduino-esp32/releases/download/3.2.1/esp32-3.2.1.zip
pioarduino/framework-arduinoespressif32@https://github.com/espressif/arduino-esp32/releases/download/3.3.2/esp32-3.3.2.zip
framework = arduino, espidf ; Arduino as an ESP-IDF component
lib_deps =
@@ -161,9 +161,9 @@ extra_scripts = post:esphome/components/esp32/post_build.py.script
; These are common settings for the ESP32 (all variants) using IDF.
[common:esp32-idf]
extends = common:idf
platform = https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21-2/platform-espressif32.zip
platform = https://github.com/pioarduino/platform-espressif32/releases/download/55.03.31-1/platform-espressif32.zip
platform_packages =
pioarduino/framework-espidf@https://github.com/pioarduino/esp-idf/releases/download/v5.4.2/esp-idf-v5.4.2.zip
pioarduino/framework-espidf@https://github.com/pioarduino/esp-idf/releases/download/v5.5.1/esp-idf-v5.5.1.zip
framework = espidf
lib_deps =

View File

@@ -11,22 +11,18 @@ pyserial==3.5
platformio==6.1.18 # When updating platformio, also update /docker/Dockerfile
esptool==5.1.0
click==8.1.7
esphome-dashboard==20251009.0
aioesphomeapi==41.13.0
esphome-dashboard==20251013.0
aioesphomeapi==41.18.0
zeroconf==0.148.0
puremagic==1.30
ruamel.yaml==0.18.15 # dashboard_import
ruamel.yaml.clib==0.2.12 # dashboard_import
ruamel.yaml.clib==0.2.14 # dashboard_import
esphome-glyphsets==0.2.0
pillow==10.4.0
pillow==11.3.0
cairosvg==2.8.2
freetype-py==2.5.1
jinja2==3.1.6
# esp-idf requires this, but doesn't bundle it by default
# https://github.com/espressif/esp-idf/blob/220590d599e134d7a5e7f1e683cc4550349ffbf8/requirements.txt#L24
kconfiglib==13.7.1
# esp-idf >= 5.0 requires this
pyparsing >= 3.0

View File

@@ -1,4 +1,4 @@
pylint==3.3.9
pylint==4.0.1
flake8==7.3.0 # also change in .pre-commit-config.yaml when updating
ruff==0.14.0 # also change in .pre-commit-config.yaml when updating
pyupgrade==3.21.0 # also change in .pre-commit-config.yaml when updating

525
script/analyze_component_buses.py Executable file
View File

@@ -0,0 +1,525 @@
#!/usr/bin/env python3
"""Analyze component test files to detect which common bus configs they use.
This script scans component test files and extracts which common bus configurations
(i2c, spi, uart, etc.) are included via the packages mechanism. This information
is used to group components that can be tested together.
Components can only be grouped together if they use the EXACT SAME set of common
bus configurations, ensuring that merged configs are compatible.
Example output:
{
"component1": {
"esp32-ard": ["i2c", "uart_19200"],
"esp32-idf": ["i2c", "uart_19200"]
},
"component2": {
"esp32-ard": ["spi"],
"esp32-idf": ["spi"]
}
}
"""
from __future__ import annotations
import argparse
from functools import lru_cache
import json
from pathlib import Path
import re
import sys
from typing import Any
# Add esphome to path
sys.path.insert(0, str(Path(__file__).parent.parent))
from esphome import yaml_util
from esphome.config_helpers import Extend, Remove
# Path to common bus configs
COMMON_BUS_PATH = Path("tests/test_build_components/common")
# Package dependencies - maps packages to the packages they include
# When a component uses a package on the left, it automatically gets
# the packages on the right as well
PACKAGE_DEPENDENCIES = {
"modbus": ["uart"], # modbus packages include uart packages
# Add more package dependencies here as needed
}
# Bus types that can be defined directly in config files
# Components defining these directly cannot be grouped (they create unique bus IDs)
DIRECT_BUS_TYPES = ("i2c", "spi", "uart", "modbus")
# Signature for components with no bus requirements
# These components can be merged with any other group
NO_BUSES_SIGNATURE = "no_buses"
# Base bus components - these ARE the bus implementations and should not
# be flagged as needing migration since they are the platform/base components
BASE_BUS_COMPONENTS = {
"i2c",
"spi",
"uart",
"modbus",
"canbus",
}
# Components that must be tested in isolation (not grouped or batched with others)
# These have known build issues that prevent grouping
# NOTE: This should be kept in sync with both test_build_components and split_components_for_ci.py
ISOLATED_COMPONENTS = {
"animation": "Has display lambda in common.yaml that requires existing display platform - breaks when merged without display",
"esphome": "Defines devices/areas in esphome: section that are referenced in other sections - breaks when merged",
"ethernet": "Defines ethernet: which conflicts with wifi: used by most components",
"ethernet_info": "Related to ethernet component which conflicts with wifi",
"lvgl": "Defines multiple SDL displays on host platform that conflict when merged with other display configs",
"openthread": "Conflicts with wifi: used by most components",
"openthread_info": "Conflicts with wifi: used by most components",
"matrix_keypad": "Needs isolation due to keypad",
"mcp4725": "no YAML config to specify i2c bus id",
"mcp47a1": "no YAML config to specify i2c bus id",
"modbus_controller": "Defines multiple modbus buses for testing client/server functionality - conflicts with package modbus bus",
"neopixelbus": "RMT type conflict with ESP32 Arduino/ESP-IDF headers (enum vs struct rmt_channel_t)",
"packages": "cannot merge packages",
}
@lru_cache(maxsize=1)
def get_common_bus_packages() -> frozenset[str]:
"""Get the list of common bus package names.
Reads from tests/test_build_components/common/ directory
and caches the result. All bus types support component grouping
for config validation since --testing-mode bypasses runtime conflicts.
Returns:
Frozenset of common bus package names (i2c, spi, uart, etc.)
"""
if not COMMON_BUS_PATH.exists():
return frozenset()
# List all directories in common/ - these are the bus package names
return frozenset(d.name for d in COMMON_BUS_PATH.iterdir() if d.is_dir())
def uses_local_file_references(component_dir: Path) -> bool:
"""Check if a component uses local file references via $component_dir.
Components that reference local files cannot be grouped because each needs
a unique component_dir path pointing to their specific directory.
Args:
component_dir: Path to the component's test directory
Returns:
True if the component uses $component_dir for local file references
"""
common_yaml = component_dir / "common.yaml"
if not common_yaml.exists():
return False
try:
content = common_yaml.read_text()
except Exception: # pylint: disable=broad-exception-caught
return False
# Pattern to match $component_dir or ${component_dir} references
# These indicate local file usage that prevents grouping
return bool(re.search(r"\$\{?component_dir\}?", content))
def is_platform_component(component_dir: Path) -> bool:
"""Check if a component is a platform component (abstract base class).
Platform components have IS_PLATFORM_COMPONENT = True and cannot be
instantiated without a platform-specific implementation. These components
define abstract methods and cause linker errors if compiled standalone.
Examples: canbus, mcp23x08_base, mcp23x17_base
Args:
component_dir: Path to the component's test directory
Returns:
True if this is a platform component
"""
# Check in the actual component source, not tests
# tests/components/X -> tests/components -> tests -> repo root
repo_root = component_dir.parent.parent.parent
comp_init = (
repo_root / "esphome" / "components" / component_dir.name / "__init__.py"
)
if not comp_init.exists():
return False
try:
content = comp_init.read_text()
return "IS_PLATFORM_COMPONENT = True" in content
except Exception: # pylint: disable=broad-exception-caught
return False
def _contains_extend_or_remove(data: Any) -> bool:
"""Recursively check if data contains Extend or Remove objects.
Args:
data: Parsed YAML data structure
Returns:
True if any Extend or Remove objects are found
"""
if isinstance(data, (Extend, Remove)):
return True
if isinstance(data, dict):
for value in data.values():
if _contains_extend_or_remove(value):
return True
if isinstance(data, list):
for item in data:
if _contains_extend_or_remove(item):
return True
return False
def analyze_yaml_file(yaml_file: Path) -> dict[str, Any]:
"""Load a YAML file once and extract all needed information.
This loads the YAML file a single time and extracts all information needed
for component analysis, avoiding multiple file reads.
Args:
yaml_file: Path to the YAML file to analyze
Returns:
Dictionary with keys:
- buses: set of common bus package names
- has_extend_remove: bool indicating if Extend/Remove objects are present
- has_direct_bus_config: bool indicating if buses are defined directly (not via packages)
- loaded: bool indicating if file was successfully loaded
"""
result = {
"buses": set(),
"has_extend_remove": False,
"has_direct_bus_config": False,
"loaded": False,
}
if not yaml_file.exists():
return result
try:
data = yaml_util.load_yaml(yaml_file)
result["loaded"] = True
except Exception: # pylint: disable=broad-exception-caught
return result
# Check for Extend/Remove objects
result["has_extend_remove"] = _contains_extend_or_remove(data)
# Check if buses are defined directly (not via packages)
# Components that define i2c, spi, uart, or modbus directly in test files
# cannot be grouped because they create unique bus IDs
if isinstance(data, dict):
for bus_type in DIRECT_BUS_TYPES:
if bus_type in data:
result["has_direct_bus_config"] = True
break
# Extract common bus packages
if not isinstance(data, dict) or "packages" not in data:
return result
packages = data["packages"]
if not isinstance(packages, dict):
return result
valid_buses = get_common_bus_packages()
for pkg_name in packages:
if pkg_name not in valid_buses:
continue
result["buses"].add(pkg_name)
# Add any package dependencies (e.g., modbus includes uart)
if pkg_name not in PACKAGE_DEPENDENCIES:
continue
for dep in PACKAGE_DEPENDENCIES[pkg_name]:
if dep not in valid_buses:
continue
result["buses"].add(dep)
return result
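# Editor's note: an illustrative example of what analyze_yaml_file() reports. For a hypothetical
# test file whose packages section contains a "modbus" entry, e.g.
#
#   packages:
#     modbus: !include ../../test_build_components/common/modbus/esp32-ard.yaml
#
# the returned "buses" set would be {"modbus", "uart"}: "modbus" matches a directory under
# tests/test_build_components/common/, and PACKAGE_DEPENDENCIES then adds "uart" as well (assuming
# a "uart" directory also exists there). Only the package keys matter; the include path shown here
# is a placeholder.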
def analyze_component(component_dir: Path) -> tuple[dict[str, list[str]], bool, bool]:
"""Analyze a component directory to find which buses each platform uses.
Args:
component_dir: Path to the component's test directory
Returns:
Tuple of:
- Dictionary mapping platform to list of bus configs
Example: {"esp32-ard": ["i2c", "spi"], "esp32-idf": ["i2c"]}
- Boolean indicating if component uses !extend or !remove
- Boolean indicating if component defines buses directly (not via packages)
"""
if not component_dir.is_dir():
return {}, False, False
platform_buses = {}
has_extend_remove = False
has_direct_bus_config = False
# Analyze all YAML files in the component directory
for yaml_file in component_dir.glob("*.yaml"):
analysis = analyze_yaml_file(yaml_file)
# Track if any file uses extend/remove
if analysis["has_extend_remove"]:
has_extend_remove = True
# Track if any file defines buses directly
if analysis["has_direct_bus_config"]:
has_direct_bus_config = True
# For test.*.yaml files, extract platform and buses
if yaml_file.name.startswith("test.") and yaml_file.suffix == ".yaml":
# Extract platform name (e.g., test.esp32-ard.yaml -> esp32-ard)
platform = yaml_file.stem.replace("test.", "")
# Always add platform, even if it has no buses (empty list)
# This allows grouping components that don't use any shared buses
platform_buses[platform] = (
sorted(analysis["buses"]) if analysis["buses"] else []
)
return platform_buses, has_extend_remove, has_direct_bus_config
def analyze_all_components(
tests_dir: Path | None = None,
) -> tuple[dict[str, dict[str, list[str]]], set[str], set[str]]:
"""Analyze all component test directories.
Args:
tests_dir: Path to tests/components directory (defaults to auto-detect)
Returns:
Tuple of:
- Dictionary mapping component name to platform->buses mapping
- Set of component names that cannot be grouped
- Set of component names that define buses directly (need migration warning)
"""
if tests_dir is None:
tests_dir = Path("tests/components")
if not tests_dir.exists():
print(f"Error: {tests_dir} does not exist", file=sys.stderr)
return {}, set(), set()
components = {}
non_groupable = set()
direct_bus_components = set()
for component_dir in sorted(tests_dir.iterdir()):
if not component_dir.is_dir():
continue
component_name = component_dir.name
platform_buses, has_extend_remove, has_direct_bus_config = analyze_component(
component_dir
)
if platform_buses:
components[component_name] = platform_buses
# Note: Components using $component_dir are now groupable because the merge
# script rewrites these to absolute paths with component-specific substitutions
# Check if component is explicitly isolated
# These have known issues that prevent grouping with other components
if component_name in ISOLATED_COMPONENTS:
non_groupable.add(component_name)
# Check if component is a base bus component
# These ARE the bus platform implementations and define buses directly for testing
# They cannot be grouped with components that use bus packages (causes ID conflicts)
if component_name in BASE_BUS_COMPONENTS:
non_groupable.add(component_name)
# Check if component uses !extend or !remove directives
# These rely on specific config structure and cannot be merged with other components
# The directives work within a component's own package hierarchy but break when
# merging independent components together
if has_extend_remove:
non_groupable.add(component_name)
# Check if component defines buses directly in test files
# These create unique bus IDs and cause conflicts when merged
# Exclude base bus components (i2c, spi, uart, etc.) since they ARE the platform
if has_direct_bus_config and component_name not in BASE_BUS_COMPONENTS:
non_groupable.add(component_name)
direct_bus_components.add(component_name)
return components, non_groupable, direct_bus_components
def create_grouping_signature(
platform_buses: dict[str, list[str]], platform: str
) -> str:
"""Create a signature string for grouping components.
Components with the same signature can be grouped together for testing.
All valid bus types can be grouped since --testing-mode bypasses runtime
conflicts during config validation.
Args:
platform_buses: Mapping of platform to list of buses
platform: The specific platform to create signature for
Returns:
Signature string (e.g., "i2c" or "uart") or empty if no valid buses
"""
buses = platform_buses.get(platform, [])
if not buses:
return ""
# Only include valid bus types in signature
common_buses = get_common_bus_packages()
valid_buses = [b for b in buses if b in common_buses]
if not valid_buses:
return ""
return "+".join(sorted(valid_buses))
def group_components_by_signature(
components: dict[str, dict[str, list[str]]], platform: str
) -> dict[str, list[str]]:
"""Group components by their bus signature for a specific platform.
Args:
components: Component analysis results from analyze_all_components()
platform: Platform to group for (e.g., "esp32-ard")
Returns:
Dictionary mapping signature to list of component names
Example: {"i2c+uart_19200": ["comp1", "comp2"], "spi": ["comp3"]}
"""
signature_groups: dict[str, list[str]] = {}
for component_name, platform_buses in components.items():
if platform not in platform_buses:
continue
signature = create_grouping_signature(platform_buses, platform)
if not signature:
continue
if signature not in signature_groups:
signature_groups[signature] = []
signature_groups[signature].append(component_name)
return signature_groups
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Analyze component test files to detect common bus usage"
)
parser.add_argument(
"--components",
"-c",
nargs="+",
help="Specific components to analyze (default: all)",
)
parser.add_argument(
"--platform",
"-p",
help="Show grouping for a specific platform",
)
parser.add_argument(
"--json",
action="store_true",
help="Output as JSON",
)
parser.add_argument(
"--group",
action="store_true",
help="Show component groupings by bus signature",
)
args = parser.parse_args()
# Analyze components
tests_dir = Path("tests/components")
if args.components:
# Analyze only specified components
components = {}
non_groupable = set()
direct_bus_components = set()
for comp in args.components:
comp_dir = tests_dir / comp
platform_buses, has_extend_remove, has_direct_bus_config = (
analyze_component(comp_dir)
)
if platform_buses:
components[comp] = platform_buses
# Note: Components using $component_dir are now groupable
if comp in ISOLATED_COMPONENTS:
non_groupable.add(comp)
if comp in BASE_BUS_COMPONENTS:
non_groupable.add(comp)
if has_direct_bus_config and comp not in BASE_BUS_COMPONENTS:
non_groupable.add(comp)
direct_bus_components.add(comp)
else:
# Analyze all components
components, non_groupable, direct_bus_components = analyze_all_components(
tests_dir
)
# Output results
if args.group and args.platform:
# Show groupings for a specific platform
groups = group_components_by_signature(components, args.platform)
if args.json:
print(json.dumps(groups, indent=2))
else:
print(f"Component groupings for {args.platform}:")
print()
for signature, comp_list in sorted(groups.items()):
print(f" {signature}:")
for comp in sorted(comp_list):
print(f" - {comp}")
print()
elif args.json:
# JSON output
print(json.dumps(components, indent=2))
else:
# Human-readable output
for component, platform_buses in sorted(components.items()):
non_groupable_marker = (
" [NON-GROUPABLE]" if component in non_groupable else ""
)
print(f"{component}{non_groupable_marker}:")
for platform, buses in sorted(platform_buses.items()):
bus_str = ", ".join(buses)
print(f" {platform}: {bus_str}")
print()
print(f"Total components analyzed: {len(components)}")
if non_groupable:
print(f"Non-groupable components (use local files): {len(non_groupable)}")
for comp in sorted(non_groupable):
print(f" - {comp}")
if __name__ == "__main__":
main()

Some files were not shown because too many files have changed in this diff.