mirror of https://github.com/esphome/esphome.git synced 2026-02-08 00:31:58 +00:00
Commit Graph

21884 Commits

Author SHA1 Message Date
J. Nick Koston
c139aff8d9 Merge remote-tracking branch 'upstream/libretiny_1120' into integration 2026-02-07 18:05:38 -06:00
J. Nick Koston
98f900183d update boards as well 2026-02-07 18:04:41 -06:00
J. Nick Koston
059087ed21 [libretiny] Update LibreTiny to v1.12.0 2026-02-07 18:01:52 -06:00
J. Nick Koston
d3778af3e8 [libretiny] Update LibreTiny to v1.12.0 2026-02-07 18:01:19 -06:00
J. Nick Koston
663151821f Merge branch 'cswitch_sdk' into integration 2026-02-07 18:00:55 -06:00
J. Nick Koston
8c4a732eb7 copilot edge cases 2026-02-07 18:00:30 -06:00
J. Nick Koston
2d0b1db3dd Merge branch 'cswitch_sdk' into integration 2026-02-07 17:53:31 -06:00
J. Nick Koston
4e3ccb4fc5 [analyze-memory] Attribute CSWTCH symbols from SDK archives 2026-02-07 17:52:20 -06:00
J. Nick Koston
66ab62b3fb Merge branch 'deprecate_set_retry' into integration 2026-02-07 17:26:28 -06:00
J. Nick Koston
2a6e20dd32 [core] Deprecate set_retry, cancel_retry, and RetryResult
set_retry does a std::make_shared<RetryArgs>() heap allocation on every
invocation. No core component needs this pattern - all callers have been
migrated to set_timeout or set_interval in prior PRs. The feature wastes
flash and RAM on every firmware for a pattern that set_interval covers
better, and the hidden heap allocation is a footgun for component authors.

Deprecated in 2026.2.0, removal in 2026.8.0.

Depends on:
- #13841 [lps22] Replace set_retry with set_interval
- #13842 [ms8607] Replace set_retry with set_timeout chain
- #13843 [speaker] Replace set_retry with set_interval
- #13844 [esp32_hosted] Replace set_retry with set_interval
2026-02-07 17:25:59 -06:00
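
For reference, a minimal sketch of the pattern being deprecated, assuming the Component::set_retry() signature from esphome/core/component.h (named slot, initial wait time, max attempts, callback returning RetryResult, optional backoff factor). The component and helper names below are hypothetical:

```cpp
#include "esphome/core/component.h"

// Hypothetical component showing the deprecated pattern; every set_retry()
// call heap-allocates a RetryArgs object behind the scenes.
class ExampleSensor : public esphome::Component {
 public:
  void setup() override {
    this->set_retry("init", /* initial_wait_time= */ 100, /* max_attempts= */ 5,
                    [this](uint8_t /* attempt counter */) -> esphome::RetryResult {
                      return this->try_init_() ? esphome::RetryResult::DONE
                                               : esphome::RetryResult::RETRY;
                    },
                    /* backoff_increase_factor= */ 2.0f);
  }

 protected:
  bool try_init_() { return true; }  // placeholder for real device init
};
```
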
J. Nick Koston
7516e418f2 Merge branch 'ms8607_remove_set_retry' into integration 2026-02-07 17:23:25 -06:00
J. Nick Koston
3864f06a15 Merge branch 'esp32_hosted_remove_set_retry' into integration 2026-02-07 17:23:21 -06:00
J. Nick Koston
98dcea6e7d Merge branch 'speaker_media_player_remove_set_retry' into integration 2026-02-07 17:23:15 -06:00
J. Nick Koston
9ee51b06fa Merge branch 'deprecate_set_retry' into integration 2026-02-07 17:23:12 -06:00
J. Nick Koston
4efca40457 Merge branch 'lps22_remove_set_retry' into integration 2026-02-07 17:23:06 -06:00
J. Nick Koston
a43e3e5948 [dashboard] Close WebSocket after process exit to prevent zombie connections (#13834) 2026-02-07 15:19:20 -06:00
J. Nick Koston
f64f71b9ac Merge remote-tracking branch 'upstream/dev' into integration 2026-02-07 15:13:45 -06:00
J. Nick Koston
60298f67b8 [ms8607] Replace set_retry with set_timeout chain to avoid heap allocation
set_retry internally does a std::make_shared<RetryArgs>() heap allocation
on every invocation. Replace with a try_reset_() method that chains
set_timeout calls with manual backoff, preserving the same timing
(immediate, +5ms, +25ms).
2026-02-07 14:53:18 -06:00
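
A minimal sketch of that set_timeout() chain, assuming Component::set_timeout(name, delay_ms, callback); try_reset_() and send_reset_() are illustrative stand-ins rather than the actual ms8607 code:

```cpp
#include <cstdint>
#include "esphome/core/component.h"

class ExampleDevice : public esphome::Component {
 public:
  void setup() override { this->try_reset_(0); }

 protected:
  // Chain set_timeout() calls with a manual backoff (immediate, +5 ms, +25 ms)
  // instead of set_retry(), avoiding the per-invocation heap allocation.
  void try_reset_(uint8_t attempt) {
    static const uint32_t BACKOFF_MS[3] = {0, 5, 25};
    this->set_timeout("reset", BACKOFF_MS[attempt], [this, attempt]() {
      if (this->send_reset_())
        return;  // success, the chain ends here
      if (attempt + 1 < 3) {
        this->try_reset_(attempt + 1);  // schedule the next, longer wait
      } else {
        this->mark_failed();
      }
    });
  }

  bool send_reset_() { return true; }  // placeholder for the real reset command
};
```
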
J. Nick Koston
4d2354da2e [esp32_hosted] Replace set_retry with set_interval to avoid heap allocation
set_retry internally does a std::make_shared<RetryArgs>() heap allocation
on every invocation. Replace with set_interval + countdown counter which
avoids this entirely. The original code used fixed-interval polling
(no backoff), making set_interval a direct fit.
2026-02-07 14:51:45 -06:00
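
A minimal sketch of the set_interval() + countdown replacement, assuming Component::set_interval()/cancel_interval() with a named slot; the interval, retry budget, and helper names are illustrative:

```cpp
#include <cstdint>
#include "esphome/core/component.h"

class ExampleChip : public esphome::Component {
 public:
  void setup() override {
    // Fixed-interval polling with a countdown counter: same cadence as the old
    // set_retry() call, but no RetryArgs heap allocation per invocation.
    this->retries_left_ = 10;
    this->set_interval("poll_ready", 50, [this]() {
      if (this->chip_ready_()) {
        this->cancel_interval("poll_ready");
        this->finish_setup_();
      } else if (--this->retries_left_ == 0) {
        this->cancel_interval("poll_ready");
        this->mark_failed();
      }
    });
  }

 protected:
  bool chip_ready_() { return true; }  // placeholder for the real readiness check
  void finish_setup_() {}              // placeholder for post-init work
  uint8_t retries_left_{0};
};
```
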
J. Nick Koston
6a3da67a1e [speaker] Replace set_retry with set_interval to avoid heap allocation
set_retry internally does a std::make_shared<RetryArgs>() heap allocation
on every invocation. Replace with set_interval + countdown counter which
avoids this entirely. All 3 call sites used fixed-interval polling
(no backoff), making set_interval a direct fit.
2026-02-07 14:48:34 -06:00
J. Nick Koston
6ebafa8a9e [core] Deprecate set_retry, cancel_retry, and RetryResult
set_retry does a std::make_shared<RetryArgs>() heap allocation on every
invocation. No core component needs this pattern - all callers have been
migrated to set_timeout or set_interval in prior PRs. The feature wastes
flash and RAM on every firmware for a pattern that set_interval covers
better, and the hidden heap allocation is a footgun for component authors.

Deprecated in 2026.2.0, removal in 2026.8.0.

Depends on:
- #13841 [lps22] Replace set_retry with set_interval
- #13842 [ms8607] Replace set_retry with set_timeout chain
- #13843 [speaker] Replace set_retry with set_interval
- #13844 [esp32_hosted] Replace set_retry with set_interval
2026-02-07 14:44:09 -06:00
J. Nick Koston
3ba7e48615 [lps22] Replace set_retry with set_interval to avoid heap allocation
set_retry internally does a std::make_shared<RetryArgs>() heap allocation
on every invocation. Replace with set_interval + countdown counter which
avoids this entirely.
2026-02-07 14:27:10 -06:00
schrob
9de91539e6 [epaper_spi] Add Waveshare 1.54-G (#13758) 2026-02-08 06:24:57 +11:00
J. Nick Koston
51b0661d9d Merge branch 'scheduler-inplace-cleanup' into integration 2026-02-07 19:56:42 +01:00
J. Nick Koston
3c85ff4744 try to avoid bloat 2026-02-07 19:56:20 +01:00
J. Nick Koston
6a383a62b8 Merge branch 'scheduler-inplace-cleanup' into integration 2026-02-07 19:52:42 +01:00
J. Nick Koston
0fa7050b1c remove temp test 2026-02-07 10:01:57 +01:00
J. Nick Koston
fa1554cac0 [scheduler] Eliminate heap allocation in full_cleanup_removed_items_
Replace the temporary std::vector copy with in-place compaction using a
read/write pointer pattern. This avoids a heap allocation+deallocation
cycle during scheduler cleanup, reducing heap fragmentation on
long-running ESP devices.

The new approach compacts valid items forward in the existing vector,
recycles removed items as they are encountered, then resizes the vector
(no reallocation since size only shrinks). Same O(n) complexity, same
behavior, zero allocations.
2026-02-07 09:54:43 +01:00
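
A generic sketch of that read/write-pointer compaction, shown on a plain std::vector<std::unique_ptr<...>> rather than the scheduler's real item and recycle-pool types:

```cpp
#include <cstdio>
#include <memory>
#include <vector>

struct Item {
  int id;
  bool remove;  // marked for removal by a cancel/cleanup pass
};

// Compact in place: live items are shifted forward over removed ones, so no
// temporary vector (and no heap allocation) is needed. resize() only shrinks,
// so the existing capacity is reused.
void compact_removed(std::vector<std::unique_ptr<Item>> &items) {
  size_t write = 0;
  for (size_t read = 0; read < items.size(); read++) {
    if (items[read]->remove) {
      items[read].reset();  // release (or recycle) the removed item right away
      continue;
    }
    if (write != read)
      items[write] = std::move(items[read]);  // shift the live item forward
    write++;
  }
  items.resize(write);  // shrink the logical size; capacity is untouched
}

int main() {
  std::vector<std::unique_ptr<Item>> items;
  for (int i = 0; i < 6; i++)
    items.push_back(std::make_unique<Item>(Item{i, i % 2 == 0}));
  compact_removed(items);
  for (const auto &item : items)
    std::printf("kept id=%d\n", item->id);  // prints ids 1, 3, 5
}
```
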
J. Nick Koston
14071086bb Merge branch 'logger_thread_name_cleanup' into integration 2026-02-07 09:02:06 +01:00
J. Nick Koston
30f9bfaf83 [logger] Resolve thread name once and pass through logging chain
Eliminate redundant xTaskGetCurrentTaskHandle() and pcTaskGetName()
calls on the hot path by resolving the thread name once in log_vprintf_
and passing it through as const char* to all downstream functions.

- Main task fast path passes nullptr (no task handle lookup needed)
- Non-main thread path resolves name once, passes to both ring buffer
  and emergency console fallback
- Unify log_vprintf_non_main_thread_ to a single signature across platforms
- Change send_message_thread_safe() on all platforms from TaskHandle_t
  to const char* thread_name
- Add TaskHandle_t overload for get_thread_name_ as primary on
  ESP32/LibreTiny, with no-arg convenience wrapper
- Use std::span<char> for Host/Zephyr get_thread_name_ buffer parameter
- Document Zephyr single-task path thread safety limitation
2026-02-07 07:47:00 +01:00
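
A condensed sketch of the resolve-once idea, assuming a FreeRTOS target where pcTaskGetName(nullptr) returns the calling task's name; the function names below are illustrative, not the logger's actual internals:

```cpp
#include <cstdio>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static void write_to_ring_buffer(const char *thread_name, const char *msg) {
  // ... enqueue msg (tagged with thread_name, if any) for the logger task ...
}

static void write_to_console(const char *thread_name, const char *msg) {
  std::printf("[%s] %s\n", thread_name != nullptr ? thread_name : "main", msg);
}

void log_line(bool is_main_task, const char *msg) {
  // Resolve the thread name once at the top of the hot path instead of calling
  // xTaskGetCurrentTaskHandle() + pcTaskGetName() again in every consumer.
  const char *thread_name = nullptr;  // main task: skip the lookup entirely
  if (!is_main_task)
    thread_name = pcTaskGetName(nullptr);  // nullptr queries the calling task
  write_to_ring_buffer(thread_name, msg);
  write_to_console(thread_name, msg);
}
```
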
J. Nick Koston
daebc2cc39 Merge branch 'dashboard-ws-close-on-exit' into integration 2026-02-07 06:15:23 +01:00
J. Nick Koston
6b089a611c [dashboard] Close WebSocket after process exit to prevent zombie connections
When a subprocess exited, _proc_on_exit sent the exit event but never
closed the server-side WebSocket. This left zombie connections open
until the client eventually disconnected.
2026-02-07 06:14:44 +01:00
J. Nick Koston
8a2c5407d8 Merge branch 'ld2450_batch_read' into integration 2026-02-06 23:40:24 +01:00
J. Nick Koston
52a039585d Merge branch 'ld2410_batch_read' into integration 2026-02-06 23:40:20 +01:00
J. Nick Koston
fd6bd7fb67 Merge branch 'ld2412_batch_read' into integration 2026-02-06 23:40:15 +01:00
J. Nick Koston
b544cf2ffe [ld2410] Batch UART reads to reduce loop overhead 2026-02-06 23:39:31 +01:00
J. Nick Koston
6d1281301f [ld2412] Batch UART reads to reduce loop overhead
Read all available bytes in batches via read_array() instead of
byte-at-a-time read() calls. Each read() internally chains through
read_byte -> read_array(1) -> check_read_timeout_ -> available(),
resulting in 3 UART calls per byte. Batching reduces this
significantly.
2026-02-06 23:36:01 +01:00
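
A minimal sketch of the batching pattern, assuming ESPHome's UARTDevice available()/read_array() API; the chunk size and parse_byte_() hook are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include "esphome/components/uart/uart.h"
#include "esphome/core/component.h"

class ExampleUartSensor : public esphome::Component, public esphome::uart::UARTDevice {
 public:
  void loop() override {
    // Drain whatever the UART has buffered in chunked read_array() calls
    // instead of one read() (three internal UART calls) per byte.
    uint8_t buf[64];
    int avail;
    while ((avail = this->available()) > 0) {
      size_t to_read = std::min(static_cast<size_t>(avail), sizeof(buf));
      if (!this->read_array(buf, to_read))
        break;  // read failed or timed out; retry on the next loop()
      for (size_t i = 0; i < to_read; i++)
        this->parse_byte_(buf[i]);  // the byte-wise frame parser is unchanged
    }
  }

 protected:
  void parse_byte_(uint8_t b) {}  // placeholder for the protocol state machine
};
```
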
J. Nick Koston
901192cca1 [ld2450] Batch UART reads to reduce loop overhead
Read all available bytes in batches via read_array() instead of
byte-at-a-time read() calls. Each read() internally chains through
read_byte -> read_array(1) -> check_read_timeout_ -> available(),
resulting in 3 UART calls per byte. At 256000 baud with ~235 bytes
per loop iteration, this was ~706 UART operations per loop call.
Batching reduces this to ~12.

Measured 33% reduction in loop time (2348ms -> 1577ms per 60s).
2026-02-06 23:33:21 +01:00
J. Nick Koston
3478c68af7 Merge branch 'cse7766_batch_read' into integration 2026-02-06 23:12:14 +01:00
J. Nick Koston
67e7ba4812 handle unlikely 2026-02-06 23:12:00 +01:00
J. Nick Koston
981c132cf4 Merge branch 'cse7766_batch_read' into integration 2026-02-06 23:07:21 +01:00
J. Nick Koston
572376091e loop 2026-02-06 23:07:02 +01:00
J. Nick Koston
803e73fdec Merge branch 'cse7766_batch_read' into integration 2026-02-06 22:59:59 +01:00
J. Nick Koston
e7c9808b87 [cse7766] Batch UART reads to reduce loop overhead 2026-02-06 22:53:31 +01:00
J. Nick Koston
82eb8e3492 Merge branch 'ssd1306-progmem-tables' into integration 2026-02-06 21:39:50 +01:00
J. Nick Koston
21a5c2891e Merge branch 'i2c-arduino-cswtch' into integration 2026-02-06 21:39:46 +01:00
J. Nick Koston
96289775f2 [i2c] Replace switch with if-else to avoid CSWTCH table in RAM
Replace the Wire status-to-ErrorCode switch with if-else to prevent
the compiler from generating a 6-byte lookup table in DRAM on ESP8266.
2026-02-06 21:38:41 +01:00
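
A generic illustration of the technique (the error-code names here are hypothetical, not the real esphome::i2c::ErrorCode values): a dense switch over the Arduino Wire endTransmission() status codes can make GCC emit a CSWTCH lookup table that lands in DRAM on ESP8266, while an equivalent if-else chain generally compiles to flash-resident branches instead:

```cpp
#include <cstdint>

// Hypothetical error codes for illustration only.
enum class ErrorCode : uint8_t { OK, DATA_TOO_LONG, NACK, TIMEOUT, UNKNOWN };

// Maps Wire endTransmission() status codes (0 = success, 1 = data too long,
// 2/3 = NACK, 5 = timeout) with an if-else chain rather than a switch, so the
// compiler has no dense lookup table to materialize in RAM.
ErrorCode map_wire_status(uint8_t status) {
  if (status == 0)
    return ErrorCode::OK;
  if (status == 1)
    return ErrorCode::DATA_TOO_LONG;
  if (status == 2 || status == 3)
    return ErrorCode::NACK;
  if (status == 5)
    return ErrorCode::TIMEOUT;
  return ErrorCode::UNKNOWN;
}
```
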
J. Nick Koston
3e4269d32a Address review: add SSD1306_MODEL_COUNT sentinel and bounds checks
- Add SSD1306_MODEL_COUNT sentinel to enum for compile-time table size validation
- Replace 14 individual static_asserts with table size checks against SSD1306_MODEL_COUNT
- Add bounds checks in get_height_internal()/get_width_internal() to preserve default return 0
2026-02-06 21:37:51 +01:00
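
A minimal sketch of the sentinel pattern with placeholder model names and heights:

```cpp
#include <cstdint>

// Placeholder model list; the trailing COUNT sentinel tracks how many real
// values precede it, so every lookup table can be size-checked at compile time.
enum SSD1306Model : uint8_t {
  MODEL_A = 0,
  MODEL_B,
  MODEL_C,
  SSD1306_MODEL_COUNT,  // sentinel, not a real model
};

static const uint8_t MODEL_HEIGHTS[] = {64, 32, 48};
static_assert(sizeof(MODEL_HEIGHTS) == SSD1306_MODEL_COUNT,
              "MODEL_HEIGHTS must have one entry per SSD1306Model value");

uint8_t get_height(SSD1306Model model) {
  if (model >= SSD1306_MODEL_COUNT)
    return 0;  // out-of-range input keeps the previous "return 0" default
  return MODEL_HEIGHTS[model];
}
```
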
J. Nick Koston
bd6d43de52 Merge branch 'ssd1306-progmem-tables' into integration 2026-02-06 21:28:11 +01:00
J. Nick Koston
8da986d41a [ssd1306_base] Move switch tables to PROGMEM with lookup tables
Replace three compiler-generated switch tables (CSWTCH) with PROGMEM
lookup tables, saving 84 bytes of DRAM on ESP8266.

- model_str_(): 56B string pointer table → PROGMEM_STRING_TABLE
- get_height_internal(): 14B byte table → PROGMEM struct array
- get_width_internal(): 14B byte table → PROGMEM struct array

Width and height use a single ModelDimensions struct array for
maintainability. All 14 enum values verified with static_assert.
2026-02-06 21:26:10 +01:00
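
A host-compilable sketch of a PROGMEM struct-array lookup standing in for a compiler-generated switch table; the models, dimensions, and the non-ESP8266 fallback macros exist only so the example builds anywhere and are not the actual ssd1306_base code:

```cpp
#include <cstdint>

#ifdef ARDUINO_ARCH_ESP8266
#include <pgmspace.h>  // real PROGMEM / pgm_read_byte on ESP8266
#else
#define PROGMEM  // fallbacks so the sketch also compiles on a host toolchain
#define pgm_read_byte(addr) (*reinterpret_cast<const uint8_t *>(addr))
#endif

enum Model : uint8_t { MODEL_128X64 = 0, MODEL_128X32, MODEL_96X16, MODEL_COUNT };

struct ModelDimensions {
  uint8_t width;
  uint8_t height;
};

// One PROGMEM table replaces two switch statements; on ESP8266 the data stays
// in flash instead of occupying DRAM as a CSWTCH table.
static const ModelDimensions MODEL_DIMENSIONS[] PROGMEM = {
    {128, 64},  // MODEL_128X64
    {128, 32},  // MODEL_128X32
    {96, 16},   // MODEL_96X16
};
static_assert(sizeof(MODEL_DIMENSIONS) / sizeof(MODEL_DIMENSIONS[0]) == MODEL_COUNT,
              "dimension table must cover every model");

int get_width(Model model) {
  if (model >= MODEL_COUNT)
    return 0;  // preserve the old default for unknown models
  // pgm_read_byte fetches the byte from flash-resident storage.
  return pgm_read_byte(&MODEL_DIMENSIONS[model].width);
}

int get_height(Model model) {
  if (model >= MODEL_COUNT)
    return 0;
  return pgm_read_byte(&MODEL_DIMENSIONS[model].height);
}
```
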