QEMU and Neovim for esp32 dev

…where we complicate the workflow to learn more about cross-compilation, qemu and neovim’s lua api.

“[…] the Linux philosophy is ‘Laugh in the face of danger’. Oops. Wrong One. ‘Do it yourself’. Yes, that’s it.”

 — Linus Torvalds

This post seeks to find answers to these thoughts:

  1. how to mock sensors (device drivers) on QEMU?
  2. how to adapt platformio-core from vscode for neovim with the platformio-nvim plugin?
  3. since platformio-nvim depends on platformio-core, which has python as a dependency, it would be better to install it in an OCI container.
  4. but how to adapt the code of platformio-nvim, or ANY luajit-nvim code, to call a dependency inside a container?
  5. what are the types of mocking you can do with the available tooling today?

changelog:

17/05/2024

01/06/2024

1. Neovim and lua

From last to first: neovim can use ~/.config/nvim/init.vim or ~/.config/nvim/init.lua as the default config file, and the init.vim format is compatible with ~/.vimrc. The use of lua here is because nvim has an embedded luajit runtime. You can learn more in neovim’s lua docs.
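
As a minimal sketch of the lua side (the option values here are arbitrary; the API calls are stock nvim):

-- ~/.config/nvim/init.lua
vim.g.mapleader = " "              -- `let g:mapleader = " "` in init.vim terms
vim.opt.number = true              -- `set number`
vim.opt.expandtab = true           -- `set expandtab`
-- vimscript can still be embedded while migrating from ~/.vimrc:
vim.cmd([[filetype plugin indent on]])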

For project scaffolding, platformio-nvim could be used. It wraps around telescope and other cool plugins to provide a faster, more lightweight interface than the vscode version, but it uses platformio-core, which has Python dependencies. Now, that is not a problem and it could still be used like that, but let’s complicate things a bit so we can get there:

To add this dependency without versioning infrastructure (as IaC) would be bloated from an organizational standpoint: it can break in n different ways because it is not modular enough from a provisioning standpoint. Knowing OCI containers and minimal images, why not containerize the platformio-core installation and point the plugin’s path calls at the running container? This way you don’t need to install global dependencies, nor manage virtualenvs in who-knows-which relative $PATH for every new local python environment placed on the system. The point is not how easy they are to find, but having control over when to plug in a resource that would otherwise be needless bloat.
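
As a sketch of that idea (the podman runtime, the platformio-core image tag and the wrapper path are assumptions, not something any plugin ships): a thin host-side wrapper can stand in for the pio binary and forward every call into the container:

#!/bin/sh
# ~/.local/bin/pio -- hypothetical wrapper: forwards every pio invocation to
# a container that has platformio-core installed, mounting the current
# project directory as the working directory
exec podman run --rm -it \
    -v "$PWD":/workspace -w /workspace \
    platformio-core pio "$@"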

The other point is that containerizing applications gives us autonomy to manage infrastructure as code, with versioning, which helps a lot for debugging purposes in the future.

But how to adapt the plugin to call a $PATH inside the container? Wouldn’t it break everything? Not really: the way is just to understand how nvim’s lua require() deals with imports. Per the luaref, require() first checks the package.loaded cache and only then walks the search templates in package.path and package.cpath, so a plugin’s imports can be inspected and redirected without patching nvim itself.
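
A quick way to inspect this from inside nvim (nothing here is plugin-specific, it is plain luajit):

-- run with :lua or :luafile %
print(package.path)     -- search templates for lua sources (?.lua, ?/init.lua)
print(package.cpath)    -- search templates for C libraries
local ok = pcall(require, "telescope")           -- protected import
print(ok, package.loaded["telescope"] ~= nil)    -- cached after the first require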

2. About Lua error handling

zzzz zzz

zzz
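
A minimal sketch of the primitives involved, pcall and xpcall (the error message and module name are made up):

-- pcall runs a function in protected mode: instead of aborting, it returns
-- false plus the error value
local ok, err = pcall(function()
    error("container runtime not found")
end)
if not ok then
    vim.notify("pio wrapper failed: " .. tostring(err), vim.log.levels.WARN)
end

-- xpcall also takes a handler, useful to attach a traceback to the failure
local ok2, res = xpcall(function()
    return require("module_that_may_not_exist")
end, debug.traceback)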

3. Changing plugin calls and OCI containers

  1. create a container volume mount
  2. on neovim’s init.lua, set the new python host provider (that’s python3_host_prog). Here I first inspected the provider-related global variables with :let g: in command-line mode. The docs say this must be set before any call to has("python3"), meaning you have to set it in init.lua or in a file it imports, not after neovim startup. It normally shows up like this if you set up a virtualenv lookup; for a container, just change the path (see the sketch after this list).
let g:python3_host_prog = expand("$HOME/.config/nvim/venv_nvim/neovim3/bin/python")
  3. The new path will be the location of the python binary: either the container distro’s global installation or a virtualenv set up inside the container.
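
In lua, pointed at a container instead, it could look like the sketch below; the wrapper path is hypothetical and would exec python3 inside the running platformio container:

-- init.lua: must run before anything calls has("python3")
vim.g.python3_host_prog = vim.fn.expand("~/.local/bin/container-python3")
-- verify after startup with :checkhealth provider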

4. Project scaffolding

Below, the directory structure is as follows:

; tree -C
.
├── include
│   └── README
├── lib
│   └── README
├── platformio.ini
├── src
└── test
    └── README

It’s already a good start. I’ve added more files to do CI later:

; tree -C -L 1
.
├── assets
├── CMakeLists.txt
├── compose.yml
├── deploy.yml
├── Dockerfile
├── include
├── lib
├── LICENSE
├── Makefile
├── platformio.ini
├── README.md
├── scripts
├── src
└── test

5. Prepare a QEMU distro

Note that I used the Arch distro while doing this, but the software build is distro-agnostic. Check the references for the PKGBUILD.

Choose between qemu-system-xtensa (mainline qemu, where you configure the build with the xtensa target option) and qemu-esp-xtensa (Espressif’s fork).

I chose the second one, then got the PKGBUILD:

; wget -O ./PKGBUILD "https://aur.archlinux.org/cgit/aur.git/plain/PKGBUILD?h=qemu-esp-xtensa-git"
# disable asan (AddressSanitizer); edit in place -- redirecting sed's output
# onto the file it is still reading would truncate it
; sed -i -e "/--enable-sanitizers/{h;d;}" -e "/--disable-gtk/{G;}" -e 's/\(--enable-sanitizers\)/#\1/g' -e 's/disable-gtk/& \\ /' ./PKGBUILD
; makepkg
# copy the package to /opt/ and create an absolute symbolic link to the
# binary in /usr/bin, making it available system-wide (a relative link
# target would resolve against /usr/bin and dangle)
; sudo cp -r ./pkg/qemu-esp-xtensa-git/opt/qemu-esp-xtensa-git/ /opt/
; sudo ln -s /opt/qemu-esp-xtensa-git/bin/qemu-system-xtensa /usr/bin/
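
To verify the install, list the machines known to the binary; the esp32 machine type should show up:

; qemu-system-xtensa -machine help | grep -i esp32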

6. Different ways to ship it on QEMU

  1. The somewhat hard way: read the platformio unit test guide. It offers QEMU, Renode and others as options. You build the project, generate the ELF and boot the VM with it.
  2. A friendlier way: put the QEMU directive in the platformio.ini project file. When you run the tests, it will automatically simulate in userspace QEMU (see the sketch after this list).
  3. Just build the project, then merge the second-stage bootloader, partition table and firmware into a single binary image.
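
For the second option, a sketch of what that directive could look like, assuming PlatformIO Core 6’s test_testing_command option (the platform/framework lines are assumptions; the env name matches the build paths used below):

[env:upesy_wroom]
platform = espressif32
board = upesy_wroom
framework = arduino
test_testing_command =
    qemu-system-xtensa
    -nographic
    -machine
    esp32
    -drive
    file=${platformio.build_dir}/${this.__env__}/firmware.bin,if=mtd,format=raw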

I chose the third option, to understand how esptool works. In neovim, build with the Piorun build command. After building the application, run it on QEMU:

mkdir -p ./scripts/
cat << "EOF" > ./scripts/qemu-xtensa-myifup.sh
#!/bin/sh
#
# qemu-esp-xtensa-git.git
qemu-system-xtensa -nographic \
    -machine esp32 \
    -drive file=./.pio/build/upesy_wroom/firmware.bin,if=mtd,format=raw
EOF

chmod +x ./scripts/qemu-xtensa-myifup.sh

7. Troubleshooting and esptool

Now you may come across this error:

qemu-system-xtensa: Error: only 2, 4, 8, 16 MB flash images are supported

To get the firmware binary working, use Espressif’s esptool to merge the firmware, partition table and second-stage bootloader into a single image padded to a supported flash size.

Create a Python virtualenv to store the esptool dependencies using get-esptool.sh:

; mkdir -p ./assets/
; cd ./assets/ || return
; git clone git@github.com:espressif/esptool.git
; python3 -m venv venv
; cd - || return
; . ./assets/venv/bin/activate
; pip3 install --upgrade pip
; pip3 install -r ./assets/esptool/requirements.txt
; deactivate

Merge the second-stage bootloader, partition table and firmware into the image, then boot it in QEMU, using flash.sh:

cat << "EOF" > ./scripts/flash.sh
#!/bin/sh
#
# source(POSIX sh) venv relative to the repo root directory
. ./assets/venv/bin/activate

./assets/esptool/esptool.py --chip ESP32 merge_bin \
    -o merged-flash.bin \
    --flash_mode dio \
    --flash_size 4MB \
    0x1000 .pio/build/upesy_wroom/bootloader.bin \
    0x8000 .pio/build/upesy_wroom/partitions.bin \
    0x10000 .pio/build/upesy_wroom/firmware.bin \
    --fill-flash-size 4MB

deactivate
EOF

chmod +x ./scripts/flash.sh
./scripts/flash.sh

Now point the -drive file= in ./scripts/qemu-xtensa-myifup.sh at the merged image (./merged-flash.bin instead of the bare firmware.bin) and run the script:

. ./scripts/qemu-xtensa-myifup.sh

8. mockme.txt

The mocking can be handled in different ways. In this reddit comment, they state that “the HAL accesses registers at addresses that are hardcoded in the CMSIS”. So AFAIK the options are the ones listed at the end of this section.

Simulating can be difficult, but it allows users to deploy projects with unit or integration tests, Continuous Integration, or other types of software automation without having hardware available, or when designing your own hardware, for example. Simulation in this context is actually achieved by emulation (userspace process virtualization using a register/stack virtual machine) of a different ISA. Projects like Tinkercad or Cisco Packet Tracer are different because they don’t virtualize at all: they just simulate the logic in a GUI, so you can’t actually deploy anything with these proprietary tools.

To simulate esp32 applications with virtualized emulation, wokwi or qemu can be used. On vscode there is Wokwi.wokwi-vscode, which uses a webview to spawn a screen with the virtual board, where you can tinker with it. On Linux, QEMU emulates the ISA and you can mock even the device drivers.

Now, some specifics change based on which tooling you are using:

On ESP-IDF, the application is cross-compiled on a host machine, flashed to the ESP chip’s flash memory and monitored by the host via UART/USB, or even gdb over Ethernet or Wi-Fi. For development and testing scenarios, there are various benefits:

The tools for testing:

The HAL (Hardware Abstraction Layer) is the interface between the hardware and the library calling its functionalities. Generally speaking, it can be present in the system in these ways:

Although it isn’t used on the device itself, it seems MATLAB Simulink [12] can also be used as a tool to design and simulate control systems, DSP or RF filters, similar to KiCad/ngspice [13].

“Once you’ve designed a mathematical model of the system using these tools you can implement the model in hardware and software.”, says this comment on reddit. It seems the low-code interface also generates C code. More open source options could lie in javascript emulation or wasm cross-compiling/transpiling alternatives, such as Bellard’s other virtual machine works besides QEMU: TinyEMU [14] and JSLinux [15], its port to javascript/wasm using emscripten [16], an LLVM toolchain for wasm/webassembly compilation.

….

  • cmock
  • HAL mocking
  • valgrind
  • the other simulator (FreeRTOS POSIX/Linux simulator)
  • qemu virtual device drivers (simulation)
  • wokwi

9. The rust low-level dev approach

Freestanding binaries for bare-metal scenarios (physical or virtual), usually embedded hardware; the compilation bits of Rust crates and the security vs build time trade-off.

From the embedded guides, we get that:

feature                                         no_std   std
heap (dynamic memory)                           *        yes
collections (Vec, BTreeMap, etc)                **       yes
stack overflow protection                       no       yes
runs init code before main                      no       yes
libstd available                                no       yes
libcore available                               yes      yes
writing firmware, kernel, or bootloader code    yes      no

*  only with the alloc crate and a suitable allocator
** only with the collections crate and a configured global default allocator

Table excerpt from “The Embedded Rust Book”.
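
As a minimal sketch of what no_std means in practice (this skeleton is generic bare-metal rust, not esp32-specific; the real entry point comes from the target’s runtime crate):

#![no_std]   // link against libcore only, no libstd
#![no_main]  // opt out of the default Rust entry point

use core::panic::PanicInfo;

// without libstd there is no default panic machinery, so a freestanding
// binary must provide its own panic handler
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// the runtime/HAL crate for the target normally provides the real entry
// point and jumps into code like this
#[no_mangle]
pub extern "C" fn main() -> ! {
    loop {}
}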


10. low level C libraries

newlib vs musl vs uclibc vs glibc

11. esp32 vs esp8266 backwards compatibility

on setting up environments that talk to both archs.

The rust esp library project for esp8266 has archived its repositories on github as of this writing.

Basically, to write rust for the esp8266 you seem to be left with those archived crates and the esp-rs fork of the compiler for Xtensa targets.

Wokwi projects also don’t directly support the esp8266.

12. virtual networking setup

qemu, wokwi etc. For an in-depth guide about host networking, read the previous post.

zzz zz zzzz

From eulab-poc:

The tricky part is the networking. As I wrote here, there are 4 main points to grasp it fully:

  • The relation between devices and interfaces under Linux (specifically for networking)
  • The different packages to achieve this: the old net-tools and iproute2. This guide tries to use iproute2 tooling (see the sketch after this list).
  • Distro-specific scripts that wrap around the tooling for virtual network setups. An example is Debian’s ifup(8) and ifdown(8), which are used by LFS (Linux From Scratch) and referenced by the QEMU networking docs. These are cited by a lot of QEMU guides for bridging, with some later versions distributing similar scripts.
  • Deprecated tools that were used in tutorials over the last 15 years or so, like brctl from bridge-utils (superseded by iproute2’s bridge(8)).
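
A minimal iproute2 sketch of the bridge-plus-tap setup those scripts automate (the br0/tap0 names are arbitrary, and the -nic line is just one possible way to attach a guest):

#!/bin/sh
# create a bridge, create a tap device owned by the current user, then
# enslave the tap to the bridge
sudo ip link add name br0 type bridge
sudo ip link set br0 up
sudo ip tuntap add dev tap0 mode tap user "$(id -un)"
sudo ip link set tap0 master br0
sudo ip link set tap0 up
# point QEMU at the tap, skipping ifup/ifdown scripts:
# qemu-system-xtensa ... -nic tap,ifname=tap0,script=no,downscript=no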


References