…where we complicate the workflow to learn more about cross-compilation, QEMU and Neovim's Lua API.
“[…] the Linux philosophy is ‘Laugh in the face of danger’. Oops. Wrong One. ‘Do it yourself’. Yes, that’s it.”
— Linus Torvalds
This post seeks to answer these questions:
- how to mock sensors (device drivers) on QEMU?
- how to adapt platformio-core from vscode for neovim with the nvim-platformio plugin?
- since nvim-platformio wraps platformio, which has Python as a dependency, it would be better to install it in an OCI container.
- but how to adapt the code of nvim-platformio, or ANY Lua neovim plugin, to call a dependency inside a container?
- what are the types of mocking you can do with the available tooling today?
changelog:
- 17/05/2024
- 01/06/2024
1. Neovim and Lua
From last to first: neovim can use ~/.config/nvim/init.vim or ~/.config/nvim/init.lua as its default config file, and these dotfiles are compatible with ~/.vimrc. Lua comes in because nvim has an embedded LuaJIT runtime. You can learn more in the neovim lua docs [1].
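As a minimal sketch of what a Lua config looks like (standard neovim options, the default config path):
-- ~/.config/nvim/init.lua
-- vim.opt maps to :set options, vim.g to g: globals
vim.opt.number = true      -- show line numbers
vim.opt.expandtab = true   -- insert spaces instead of tabs
vim.g.mapleader = " "      -- leader key used by plugin mappings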
For project scaffolding, nvim-platformio could be used. It wraps around telescope and other cool plugins to provide a faster and more lightweight interface than the vscode version, but it uses platformio-core, which has Python dependencies. Now, that is not a problem in itself, and it could still be used like that, but let's complicate things a bit so we can get there:
Adding this dependency without versioned infrastructure (IaC) would be bloated from an organizational standpoint: it can break in n different ways because it is not modular enough for provisioning. Knowing OCI containers and minimal images, why not containerize the platformio-core installation and point the plugin's path calls at the running container? This way, you don't need to install global dependencies, nor manage virtualenvs in who-knows-which relative $PATH for every new local python environment placed on the system; the point is not how easy it would be to find them, but to have control over when to plug in a resource that is otherwise needless bloat.
The other point is that containerizing applications gives us autonomy to manage infrastructure as code with versioning, which helps a lot for debugging purposes in the future.
But how to adapt the plugin to call a $PATH inside the container? Wouldn't it break everything? Yeah, the way is just to understand how require() from nvim's embedded lua deals with imports. From the luaref, require() loads a module, which is a lua table:
- If this table is already in package.loaded[name], that table is returned (modules are cached).
- Otherwise, require runs through the loaders in package.loaders (LuaJIT follows Lua 5.1; later versions call it package.searchers), which look the module up in package.preload, then along package.path and package.cpath.
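That caching is the hook we can use: if we preload a table under the plugin's module name before the plugin requires it, our table is what require() returns. A sketch, assuming a hypothetical module name pio_backend and a running container named platformio-core:
-- preloading into package.loaded makes later require() calls
-- return our container-aware table instead of the original one
package.loaded["pio_backend"] = {
  -- run pio inside the container instead of on the host
  run = function(args)
    return vim.fn.system({ "docker", "exec", "-i", "platformio-core", "pio", unpack(args) })
  end,
}
local pio = require("pio_backend") -- returns the table above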
2. About Lua error handling
Once plugin code starts calling out to external processes, Lua's protected calls are the error-handling primitive: pcall(f, ...) runs f and returns a boolean status plus either the results or the error value, while xpcall additionally takes a message handler such as debug.traceback. Neovim plugins use these to fail gracefully instead of aborting on startup.
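A small sketch of that pattern (the container check is an assumption for this workflow, and platformio-core is a placeholder container name):
-- guard an external call with pcall so a missing container
-- doesn't abort the whole plugin
local ok, result = pcall(function()
  local out = vim.fn.system({ "docker", "inspect", "platformio-core" })
  if vim.v.shell_error ~= 0 then
    error("platformio-core container is not running")
  end
  return out
end)
if not ok then
  vim.notify(result, vim.log.levels.WARN)
end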
3. Changing plugin calls and OCI containers
- create a container volume mount
- on neovim's init.lua, set the new python host provider (that's python3_host_prog). I first checked the global variables of the current provider with :let g: in command-line mode. The docs say this must be set before any call to has("python3"), meaning you have to set it in init.lua or in a file it imports, not after neovim startup. With a virtualenv lookup it normally shows up like this; for a container, just change the path:
let g:python3_host_prog = "${HOME}/.config/nvim/venv_nvim/neovim3/bin/python"
- The new path will be the location of the python binary: either the container distro's global installation or a virtualenv set inside the container. A sketch of such a wrapper follows below.
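One way to bridge the two, sketched under assumptions (a running container named platformio-core with python3 and pynvim installed; docker, though podman works the same), is a host-side wrapper that forwards the provider's stdio into the container:
cat << "EOF" > ~/.local/bin/nvim-container-python
#!/bin/sh
# hypothetical wrapper: exec python3 inside the running container;
# the provider talks msgpack-rpc over stdio, so -i is required
exec docker exec -i platformio-core python3 "$@"
EOF
chmod +x ~/.local/bin/nvim-container-python
Then point the provider at it from init.lua:
vim.g.python3_host_prog = vim.fn.expand("~/.local/bin/nvim-container-python")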
4. Project scaffolding
- On neovim, type :Pioinit. Choose your board, then the template. Here I've chosen the "uPesy ESP32 Wroom DevKit" for espressif32, i.e. board = upesy_wroom in the platformio.ini file (see the sketch below).
- On the next screen, I've chosen Arduino as the framework.
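The resulting env section should look roughly like this (a sketch; only the board id comes from this project's build paths):
[env:upesy_wroom]
platform = espressif32
board = upesy_wroom
framework = arduino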
Below, the directory structure is as follows:
- The lib directory can be used for the project specific (private) libraries. More details are located in lib/README file.
- “platformio.ini” is the Project Configuration File
- src directory is where you should place source code (*.h, *.c, *.cpp, *.S, *.ino, etc.)
; tree -C
.
├── include
│ └── README
├── lib
│ └── README
├── platformio.ini
├── src
└── test
└── README
It’s already a good start. I’ve added more files to do CI later:
; tree -C -L 1
.
├── assets
├── CMakeLists.txt
├── compose.yml
├── deploy.yml
├── Dockerfile
├── include
├── lib
├── LICENSE
├── Makefile
├── platformio.ini
├── README.md
├── scripts
├── src
└── test
5. Prepare a QEMU build
Note that I used the Arch distro while doing this, but the software build is distro-agnostic. Check the references for the PKGBUILD.
Choose between qemu-system-xtensa (mainline qemu, where you configure the build with the xtensa option) and qemu-esp-xtensa (the Espressif fork). I chose the second one, then got the PKGBUILD:
; wget -O ./PKGBUILD https://aur.archlinux.org/cgit/aur.git/plain/PKGBUILD?h=qemu-esp-xtensa-git
# disable asan, AddressSanitizer
; sed -e "/--enable-sanitizers/{h;d;}" -e "/--disable-gtk/{G;}" -e 's/\(--enable-sanitizers\)/#\1/g' -e 's/disable-gtk/& \\ /' ./PKGBUILD > ./PKGBUILD.new && mv ./PKGBUILD.new ./PKGBUILD
; makepkg
# copy package to /opt/ and create a symbolic link of the binary
# to /usr/bin making it available system-wide
; sudo cp -r ./pkg/qemu-esp-xtensa-git/opt/qemu-esp-xtensa-git/ /opt/
; sudo ln -s /opt/qemu-esp-xtensa-git/bin/qemu-system-xtensa /usr/bin/
6. Different ways to ship it on QEMU
- The somewhat hard way: read the platformio unit test guide. It offers QEMU, Renode and others as options. You build it, generate the ELF and boot the VM with it.
- A friendly way: put the QEMU directive in the platformio.ini project file. When you test, it will automatically simulate on userspace QEMU.
- Just build the project, then merge the second-stage bootloader, partitions and firmware into a single binary image.
I chose the third option to understand how esptool works. On neovim, build with :Piorun build. After building the application, run it on QEMU:
mkdir -p ./scripts/
cat << "EOF" > ./scripts/qemu-xtensa-myifup.sh
#!/bin/sh
#
# qemu-esp-xtensa-git.git
qemu-system-xtensa -nographic \
-machine esp32 \
-drive file=./.pio/build/upesy_wroom/firmware.bin,if=mtd,format=raw
EOF
chmod +x ./scripts/qemu-xtensa-myifup.sh
7. Troubleshooting and esptool
Now you may come across this error:
qemu-system-xtensa: Error: only 2, 4, 8, 16 MB flash images are supported
To get the firmware binary working, use espressif's esptool to merge the firmware with the second-stage bootloader and partition table into a single image, padded to a supported flash size.
Create a Python virtualenv for the esptool dependencies using get-esptool.sh:
; mkdir -p ./assets/
; cd ./assets/ || return
; git clone git@github.com:espressif/esptool.git
; python3 -m venv venv
; cd - || return
; . ./assets/venv/bin/activate
; pip3 install --upgrade pip
; pip3 install -r ./assets/esptool/requirements.txt
; deactivate
Merge the second-stage bootloader, partition table and firmware into a padded image using flash.sh, then boot it into QEMU:
cat << "EOF" > ./scripts/flash.sh
#!/bin/sh
#
# source(POSIX sh) venv relative to the repo root directory
. ./assets/venv/bin/activate
./assets/esptool/esptool.py --chip ESP32 merge_bin \
-o merged-flash.bin \
--flash_mode dio \
--flash_size 4MB \
0x1000 .pio/build/upesy_wroom/bootloader.bin \
0x8000 .pio/build/upesy_wroom/partitions.bin \
0x10000 .pio/build/upesy_wroom/firmware.bin \
--fill-flash-size 4MB
deactivate
EOF
chmod +x ./scripts/flash.sh
./scripts/flash.sh
Now point the -drive file= path in qemu-xtensa-myifup.sh at ./merged-flash.bin (the padded image) and run the script:
. ./scripts/qemu-xtensa-myifup.sh
8. mockme.txt
Mocking can be handled in different ways. In this reddit comment, they state that "the HAL accesses registers at addresses that are hardcoded in the CMSIS". So AFAIK there are these options:
- The mock that is presented by the platformio unit test guide.
- Are you still sane? Mock the HAL (hardware abstraction layer): copy the HAL file, make your changes, and replace the unit test calls to the HAL with your file.
- Craziness: actually create a Virtual Device Driver and plug it into QEMU.
Simulating can be difficult, but it allows users to deploy projects with unit or integration tests, Continuous Integration or other types of software automation without having hardware available, or when designing your own hardware, for example. Simulation in this context is actually achieved by emulation (userspace process virtualization using a register/stack virtual machine) of a different ISA. Projects like Tinkercad or Cisco Packet Tracer are different because they don't virtualize at all: they just simulate the logic in a GUI, so you can't actually deploy anything with these proprietary tools.
To simulate esp32 applications with virtualized emulation, wokwi or qemu can be used. On vscode there is Wokwi.wokwi-vscode, which uses a webview to spawn a screen with the virtual board so you can tinker with it. On Linux, QEMU emulates the ISA and you can mock even the device drivers.
Now, some specifics change based on which tooling you are using:
- ESP-IDF (Espressif’s official IoT Development Framework)
- PlatformIO, the ‘pio’ python library
- esp-rs, rust package for espressif boards development
On ESP-IDF, the application is cross-compiled on a host machine, flashed to the ESP chip's flash memory and monitored by the host via UART/USB, or even gdb via ethernet or Wi-Fi. Emulating on the host instead brings various benefits for development and testing scenarios:
- no need to upload to a target.
- faster execution on a host machine compared to running on an ESP chip, not only because logical resources can sometimes be passed directly to the emulator, but also because of paravirtualization solutions such as KVM for QEMU (which only applies when guest and host share the same ISA).
- there is no need to check hardware requirements, which are already met if using tools like esptool to generate a valid compacted image format.
- the infrastructure provisioning for the testing can be automated.
- tools for code and runtime analysis can be used, such as FRIDA, Valgrind, radare2.
The tools for testing:
- CMock (mock all dependencies and run code in isolation)
- the FreeRTOS POSIX/Linux simulator to mock the FreeRTOS scheduling
- QEMU
  - to emulate just the ISA and test the code live
  - to also test the sensors using a Virtual Device Driver (VDD)
- wokwi, to tinker with a GUI which reflects on the code.
The HAL (Hardware Abstraction Layer) is the interface between the hardware and the library calling its functionality. Generally speaking, it can be present on a system in these ways:
- as low-level hardware access, like registers
- exposed by the operating system, like sysfs under linux
- passed as an Adapter Pattern, in the form of a function used to mock the types (typing system) expected by the interface in unit testing
- via a Device Driver (which can also be virtual, for scenarios like QEMU) for hardware adapters like an I2C multiplexer or GPIO expander
…
Despite it isn’t used on the device itself, it seems Matlab Simulink [12] can also it can be used as tool to design and simulate control systems, DSP or RF Filters, similar to Kicad/ngspice [13].
“Once you’ve designed a mathematical model of the system using these tools you can implement the model in hardware and software.”, says this comment on reddit. It seems the low code interface also generates C code too. More open source options could lay on javascript emulation or wasm cross-compiling/transpiling alternatives such as Bellard’s other virtual machine works aside from QEMU, such as the TinyEMU[14] and JSLinux[15], its port to javascript/wasm using emscripten[16], a LLVM toolchain for wasm/webassembly compilation.
….
To recap the mocking toolbox: CMock, HAL mocking, Valgrind, the FreeRTOS POSIX/Linux simulator, QEMU with a virtual device driver, and wokwi.
9. The rust low-level dev approach
Freestanding binaries for bare-metal scenarios (physical or virtual), usually embedded hardware; the compilation bits of Rust crates and the security vs build time trade-off.
From the embedded guides, we get that:
- Hosted environments give you a system interface, e.g. POSIX, which lets you interact with networking, filesystems, threads, memory management, etc. Standard libraries depend on these as well.
- Bare Metal environments don’t have a kernel, so there is nothing loaded before your program, meaning you only have the hardware (bare metal) to run everything.
- #![no_std] use cases are DIY freestanding software, which can be firmware; it works on top of bare metal and doesn't link the standard library.
- libcore exposes the platform-agnostic parts of the standard library and can be compiled under #![no_std]; imports then come from core:: instead of std::.
- the ESP-IDF used by C, C++ and others refers to a hosted environment
- the arduino set of libraries is used for overall compatibility and is backwards-compatible with the esp8266 (see also: about the arduino vs esp-idf on the pio python package)
- the esp-rs project organizes crates like this:
  - Repositories starting with esp- are focused on the no_std approach, for example esp-hal. no_std works on top of bare metal, where esp- refers to the Espressif chip itself.
  - Repositories starting with esp-idf- are focused on the std approach, for example esp-idf-hal. std, apart from bare metal, also needs an additional layer, which is esp-idf.
- The support for esp8266 by esp-rs was deprecated on Feb 06, 2024 by archiving its Hardware Abstraction Layer repository, esp-rs/esp8266-hal
feature | no_std | std |
---|---|---|
heap (dynamic memory) | * | ✓ |
collections (Vec, BTreeMap, etc) | ** | ✓ |
stack overflow protection | ✘ | ✓ |
runs init code before main | ✘ | ✓ |
libstd available | ✘ | ✓ |
libcore available | ✓ | ✓ |
writing firmware, kernel, or bootloader code | ✓ | ✘ |
* only if you use the alloc crate and a suitable allocator
** only if you use the collections crate and configure a global default allocator
Table excerpt from "The Embedded Rust Book".
10. Low-level C libraries
newlib vs musl vs uclibc vs glibc
11. esp32 vs esp8266 backwards compatibility
on setting up environments that talk to both archs.
The rust esp library project for esp8266 has archived its repository on github as of this date. Basically, to write rust for the esp8266 you are left with the archived, unmaintained crates. Wokwi projects also don't directly support the esp8266.
12. virtual networking setup
qemu, wokwi etc. For an in-depth guide about host networking, read the previous post.
From eulab-poc:
The tricky part is the networking. As I've written here, there are 4 main points to grasp it fully:
- The relation between devices and interfaces under Linux (specifically for networking)
- The different packages to achieve this: the old net-tools and iproute2. This guide tries to use iproute2 tooling.
- Distro-specific scripts that wrap around the tooling for virtual network setups. An example is Debian's ifup(8) and ifdown(8), which are used by LFS (Linux From Scratch) and referenced by the QEMU networking docs. These are cited by a lot of QEMU guides for bridging, and some late versions distribute similar scripts under /var.
- Deprecated tools that were used in tutorials over the last 15 years or so, like brctl (from bridge-utils, a contemporary of net-tools). A sketch of the iproute2 equivalents follows below.
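For reference, a minimal sketch of the iproute2 equivalents of the old brctl workflow (interface names are placeholders):
; ip link add name br0 type bridge               # brctl addbr br0
; ip link set br0 up
; ip tuntap add dev tap0 mode tap user "$USER"   # tap device for QEMU
; ip link set tap0 master br0                    # brctl addif br0 tap0
; ip link set tap0 up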
References
- [1] neovim’s lua docs
- [2] qemu-system-xtensa (mainline qemu) PKGBUILD and official mirror repo
- [3] qemu-esp-xtensa-git PKGBUILD and repo, the Git version of Espressif's fork of QEMU (with Espressif patches) supporting ESP32 xtensa boards
- [4] nvim-platformio lua plugin, a PlatformIO wrapper for neovim written in lua
- [5] platformio-core repo, whose (Python) dependencies we containerize; this makes the project more modular.
- [6] docker-platformio-core, a repo that helped to understand how to containerize platformio.
- [7] Writing a custom device for QEMU, a blog post about how to create mock functions simulating sensors (a device) and how to plug it into QEMU.
- [8] for vscode users, check this video: “ESP32 Emulation with QEMU”
- [9] The ESP-IDF Programming Guide (v5.2.1), docs
- [10] The Embedded Rust book, docs
- [11] The Rust on ESP book, docs
- [12] Linkers and Loaders book by John R. Levine
- [13] Kicad/ngspice pcb design and simulation wiki
- [14] TinyEMU readme.txt
- [15] JSLinux’s technical notes
- [16] emscripten docs
- [17] OS in Rust, https://os.phil-opp.com/vga-text-mode/
- [18] mcyoung’s Everything You Never Wanted To Know About Linker Script blog post
- [19] Felix’s Linking Rust Crates blog post
- [20] Gankra’s Compact Unwinding amazing blog post
- [21] maskray’s blog posts about all about PLT and Stack Unwinding
- [22] IME USP’s Flusp website blog post about using QEMU to play with the linux kernel
- [23] xilinx guide for Networking in QEMU
- [24] The Rust libcore crate docs
- [25] osdev’s Porting Newlib page