Compare commits

...

80 Commits

Author SHA1 Message Date
Aiden McClelland
68c3d87c5e Merge pull request #3133 from Start9Labs/bugfix/alpha.20
Bugfixes for alpha.20
2026-03-17 15:19:24 -06:00
Matt Hill
24c1f47886 move skip button to left 2026-03-17 15:12:22 -06:00
Aiden McClelland
1b9fcaad2b fix: use proper mount types for proc/sysfs/efivarfs in chroot scripts
Replace bind mounts with typed mounts (mount -t proc, mount -t sysfs,
mount -t efivarfs) for /proc, /sys, and efivars in chroot environments.
2026-03-17 15:11:22 -06:00
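The distinction, as a shell sketch (the chroot path is hypothetical; these commands require root):

```shell
TARGET=/mnt/target   # hypothetical chroot root

# Before: bind mounts alias the host's instances
#   mount --bind /proc "$TARGET/proc"
# After: typed mounts give the chroot fresh kernel-backed instances
mount -t proc proc "$TARGET/proc"
mount -t sysfs sys "$TARGET/sys"
mount -t efivarfs efivarfs "$TARGET/sys/firmware/efi/efivars"
```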
Aiden McClelland
900d86ab83 feat: preserve volumes on failed install + migrate ext4 to btrfs
- COW snapshot (cp --reflink=always) of package volumes before
  install/update; restore on failure, remove on success
- Automatic ext4→btrfs conversion via btrfs-convert during disk attach
  with e2fsck pre-check and post-conversion defrag
- Probe package-data filesystem during setup.disk.list (on both disk
  and partition level) so the UI can warn about ext4 conversion
- Setup wizard preserve-overwrite dialog shows ext4 warning with
  backup acknowledgment checkbox before allowing preserve
2026-03-17 15:11:16 -06:00
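The snapshot/restore step can be sketched in shell (paths and `run_install` are hypothetical stand-ins; `--reflink=always` requires a CoW-capable filesystem such as btrfs, which is why the ext4→btrfs conversion matters here):

```shell
VOL=/media/startos/volumes/myapp        # hypothetical volume path
SNAP="$VOL.pre-install"

# O(1) copy-on-write snapshot of the volume
cp -a --reflink=always "$VOL" "$SNAP"

if run_install; then                    # run_install is a stand-in
    rm -rf "$SNAP"                      # success: discard the snapshot
else
    rm -rf "$VOL" && mv "$SNAP" "$VOL"  # failure: roll the volume back
fi
```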
Matt Hill
c1a328e5ca remove redundant success message 2026-03-17 11:51:16 -06:00
Matt Hill
2903b949ea fix notification display in marketplace 2026-03-17 08:55:50 -06:00
Aiden McClelland
8ac8dae6fd fix: set /media/startos permissions to 750 root:startos in image build 2026-03-16 21:29:01 -06:00
Aiden McClelland
0e8dd82910 chore: re-add raspberrypi to CI 2026-03-16 20:16:07 -06:00
Aiden McClelland
873922d9e3 chore: fix mac build 2026-03-16 20:10:51 -06:00
Aiden McClelland
c9ce2c57b3 chore: bump sdk to 0.4.0-beta.61 2026-03-16 20:10:09 -06:00
Aiden McClelland
6c9cbebe9c fix: correct argument order in asn1_time_to_system_time
The diff() method computes `compare - self`, not `self - compare`.
The reversed arguments caused all cert expiration times to resolve
to before the unix epoch, making getSslCertificate callbacks fire
immediately and infinitely on every registration.
2026-03-16 20:10:02 -06:00
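The sign error is easy to model. A minimal Python sketch of the same semantics (the real code is Rust; `diff` here is an illustrative stand-in for the library method):

```python
from datetime import datetime, timedelta, timezone

def diff(self_time, compare_time):
    # Mirrors the semantics described above: compare - self
    return compare_time - self_time

now = datetime.now(timezone.utc)
expiry = now + timedelta(days=90)

# Reversed arguments: the expiry appears ~90 days in the past,
# so every certificate looks long-expired.
assert diff(expiry, now) == timedelta(days=-90)

# Correct argument order: expiry is 90 days in the future.
assert diff(now, expiry) == timedelta(days=90)
```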
Aiden McClelland
dd9837b9b2 refactor: convert service callbacks to DbWatch pattern
Convert getServiceInterface, listServiceInterfaces, getSystemSmtp, and
getServiceManifest from manual callback triggers to DbWatchedCallbacks.
getServiceManifest now always returns the installed manifest.
2026-03-16 20:09:10 -06:00
Aiden McClelland
7313693a9e fix: use lazy umount in chroot-and-upgrade 2026-03-16 20:08:59 -06:00
Aiden McClelland
66a606c14e fix: prevent consts from triggering after leaving effect context 2026-03-16 20:07:59 -06:00
Matt Hill
7352602f58 fix styling for table headers and show alert for language change 2026-03-16 18:55:15 -06:00
Matt Hill
4ab51c4570 Merge branch 'bugfix/alpha.20' of github.com:Start9Labs/start-os into bugfix/alpha.20 2026-03-16 15:55:33 -06:00
Aiden McClelland
1c1ae11241 chore: bump to v0.4.0-alpha.21 2026-03-16 13:54:59 -06:00
Aiden McClelland
cc6a134a32 chore: enable debug features and improve graceful shutdown for unstable builds
Adds stack overflow backtraces, debug info compilation, and SSH password
auth for development. Reduces shutdown timeouts from 60s to 100ms for
faster iteration. Fixes race condition in NetService cleanup.
2026-03-16 13:40:14 -06:00
Aiden McClelland
3ae24e63e2 perf: add O_DIRECT uploads and stabilize RPC continuation shutdown
Implements DirectIoFile for faster package uploads by bypassing page cache.
Refactors RpcContinuations to support graceful WebSocket shutdown via
broadcast signal, improving stability during daemon restart.
2026-03-16 13:40:13 -06:00
Aiden McClelland
8562e1e19d refactor: change kiosk parameter from Option<bool> to bool
Simplifies the setup API by making kiosk mandatory at the protocol level,
with platform-specific filtering applied at the database layer.
2026-03-16 13:40:13 -06:00
Aiden McClelland
90d8d39adf feat: migrate tor onion keys during v0.3.6a0 to v0.4.0a20 upgrade
Preserves tor service onion addresses by extracting keys from old
database tables and preparing them for inclusion in the new tor service.
2026-03-16 13:40:12 -06:00
Aiden McClelland
9f7bc74a1e feat: add bundled tor s9pk download and build infrastructure 2026-03-16 13:40:12 -06:00
Aiden McClelland
65e1c9e5d8 chore: bump sdk to beta.60 2026-03-16 13:40:11 -06:00
Matt Hill
5a6b2a5588 Merge branch 'bugfix/alpha.20' of github.com:Start9Labs/start-os into bugfix/alpha.20 2026-03-16 12:24:06 -06:00
Aiden McClelland
e86b06c2cd fix: register callbacks for getStatus, getServiceManifest, getContainerIp, getSslCertificate
These effects were passing the raw JS callback function through rpcRound
without converting it to a CallbackId via context.callbacks.addCallback().
Since functions are dropped by JSON.stringify, the Rust side never received
a callback, breaking the const() reactive pattern.
2026-03-16 10:45:27 -06:00
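The failure mode is inherent to JSON serialization, and the fix pattern can be sketched in a few lines of TypeScript (`addCallback` and the id scheme are simplified stand-ins for the real `context.callbacks` API):

```typescript
// Functions are silently dropped by JSON.stringify:
const raw = { method: "getStatus", callback: () => {} }
console.log(JSON.stringify(raw)) // {"method":"getStatus"} — no callback field

// Fix (sketch): register the function locally, send a serializable id instead.
const callbacks = new Map<number, Function>()
let nextId = 0
function addCallback(fn: Function): number {
  const id = nextId++
  callbacks.set(id, fn)
  return id
}

const wired = { method: "getStatus", callback: addCallback(() => {}) }
console.log(JSON.stringify(wired)) // {"method":"getStatus","callback":0}
```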
waterplea
7b8bb92d60 chore: fix 2026-03-16 09:57:46 +04:00
Matt Hill
ebb7916ecd docs: update ARCHITECTURE.md and CLAUDE.md for Angular 21 + Taiga UI 5
Update version references from Angular 20 to Angular 21 and Taiga UI to
Taiga UI 5 across architecture docs. Update web/CLAUDE.md with improved
Taiga golden rules: prioritize MCP server for docs, remove hardcoded
component examples in favor of live doc lookups.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 21:43:34 -06:00
Matt Hill
b5ac0b5200 Merge branch 'next/major' of github.com:Start9Labs/start-os into bugfix/alpha.20 2026-03-15 16:17:02 -06:00
Alex Inkin
a90b96cddd chore: update Taiga to 5 (#3136)
* chore: update Taiga to 5

* chore: fix
2026-03-15 09:51:50 -06:00
Matt Hill
d1b80cffb8 fix bug with non-fresh install 2026-03-14 14:26:55 -06:00
Matt Hill
ae5fe88a40 Merge branch 'bugfix/alpha.20' of github.com:Start9Labs/start-os into bugfix/alpha.20 2026-03-14 14:26:34 -06:00
Aiden McClelland
fc4b887b71 fix: raspberry pi image build improvements
- Move firmware config files to boot/firmware/ to match raspi-firmware
  package layout in Debian Trixie
- Use nested mounts (firmware and efi inside boot) so squashfs boot
  files land on the correct partitions without manual splitting
- Pre-calculate root partition size from squashfs instead of creating
  oversized btrfs and shrinking (avoids ioctl failure on loop devices)
- Use named loop devices (/dev/startos-loop-*) with automatic cleanup
  of stale devices from previous failed builds
- Use --rbind for /boot in upgrade scripts so nested mounts (efi,
  firmware) are automatically carried into the chroot
2026-03-13 12:09:14 -06:00
Aiden McClelland
a81b1aa5a6 feat: wait for db commit after tunnel add/remove
Add a typed DbWatch at the end of add_tunnel and remove_tunnel that
waits up to 15s for the sync loop to commit the gateway state change
to patch-db before returning.
2026-03-13 12:09:13 -06:00
Aiden McClelland
d8663cd3ae fix: use ip route replace to avoid connectivity gap on gateway changes
Replace the flush+add cycle in apply_policy_routing with ip route
replace for each desired route, then delete stale routes. This
eliminates the window where the per-interface routing table is empty,
which caused temporary connectivity loss on other gateways.
2026-03-13 12:09:13 -06:00
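The pattern, sketched as plain `ip` commands (table number, gateway, and routes are placeholders; requires root):

```shell
TABLE=101                 # hypothetical per-interface table
GW=192.168.1.1            # placeholder gateway
DEV=eth0

# 'replace' upserts each route atomically, so the table never goes empty
ip route replace default via "$GW" dev "$DEV" table "$TABLE"
ip route replace 192.168.1.0/24 dev "$DEV" table "$TABLE"

# only then prune routes that are no longer desired
ip route del 10.9.0.0/24 table "$TABLE" 2>/dev/null || true
```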
Matt Hill
9f36bc5b5d always show package id 2026-03-13 10:09:05 -06:00
Matt Hill
e2804f9b88 update workflows 2026-03-12 23:16:59 -06:00
Matt Hill
3cf9dbc6d2 update docs links 2026-03-12 17:35:25 -06:00
Matt Hill
0fa069126b mok ux, autofill device and pf forms, docs for st, docs for start-sdk 2026-03-12 14:15:45 -06:00
Matt Hill
50004da782 Merge branch 'next/major' into bugfix/alpha.20 2026-03-12 14:00:47 -06:00
Aiden McClelland
517bf80fc8 feat: update start-tunnel web app for typed tunnel API
- Use generated TS types for tunnel API params and data models
- Simplify API service methods to use typed RPC calls
- Update port forward UI for optional labels
2026-03-12 13:39:16 -06:00
Aiden McClelland
6091314981 chore: simplify SDK Makefile js/dts copy with rsync 2026-03-12 13:39:15 -06:00
Aiden McClelland
c485edfa12 feat: tunnel TS exports, port forward labels, and db migrations
- Add TS derive and type annotations to all tunnel API param structs
- Export tunnel bindings to a tunnel/ subdirectory with index generation
- Change port forward label from String to Option<String>
- Add TunnelDatabase::init() with default subnet creation
- Add tunnel migration framework with m_00_port_forward_entry migration
  to convert legacy string-only port forwards to the new entry format
2026-03-12 13:39:15 -06:00
Aiden McClelland
fd54e9ca91 fix: use raspberrypi-archive-keyring for sqv-compatible GPG key
The old raspberrypi.gpg.key has SHA1-only UID binding signatures,
which sqv (Sequoia PGP) on Trixie rejects as of 2026-02-01. Fetch the
key from the raspberrypi-archive-keyring package instead, which has
re-signed bindings using SHA-256/512.
2026-03-12 13:39:06 -06:00
Matt Hill
d1444b1175 ST port labels and move logout to settings (#3134)
* chore: update packages (#3132)

* chore: update packages

* start tunnel messaging

* chore: standalone

* pbpaste instead

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* port labels and move logout to settings

* enable-disable forwards

* Fix docs URLs in start-tunnel installer output (#3135)

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: gStart9 <106188942+gStart9@users.noreply.github.com>
2026-03-12 12:02:38 -06:00
Aiden McClelland
3024db2654 feat: add GRUB installer USB boot detection via configfile
Install a /etc/grub.d/07_startos_installer script that searches for a
.startos-installer marker file at boot. When found, it creates a
"StartOS Installer" menu entry that loads the USB's own grub.cfg via
configfile, making it the default with a 5-second timeout.

Uses configfile instead of chainloader because on hybrid ISOs the
.startos-installer marker and /boot/grub/grub.cfg are on the ISO9660
root partition, while the EFI binary lives on a separate embedded ESP.
chainloader would look for the EFI binary on the wrong partition.
2026-03-12 11:12:42 -06:00
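A sketch of the kind of entry such a script could emit (device variable and paths are illustrative, not the actual generated config):

```
if search --no-floppy --file --set=usbdev /.startos-installer; then
  menuentry "StartOS Installer" {
    # Load the USB's own grub.cfg rather than chainloading its EFI binary,
    # since the marker and grub.cfg live on the ISO9660 root partition
    # while the EFI binary is on a separate embedded ESP.
    configfile ($usbdev)/boot/grub/grub.cfg
  }
  set default="StartOS Installer"
  set timeout=5
fi
```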
Aiden McClelland
dba1cb93c1 feat: raspberry pi U-Boot + GPT + btrfs boot chain
Switch Raspberry Pi builds from proprietary firmware direct-boot to a
firmware → U-Boot → GRUB → kernel chain using GPT partitioning:

- GPT partition layout with fixed UUIDs matching os_install: firmware
  (128MB), ESP (100MB), boot (2GB FAT32), root (btrfs)
- U-Boot as the kernel in config.txt, chainloading GRUB EFI
- Pi-specific GRUB config overrides (console, USB quirks, cgroup)
- Btrfs root with shrink-to-minimum for image compression
- init_resize.sh updated for GPT (sgdisk -e) and btrfs resize
- Removed os-partitions from config.yaml (now derived from fstab)
2026-03-12 11:12:04 -06:00
Aiden McClelland
d12b278a84 feat: switch os_install root filesystem from ext4 to btrfs 2026-03-12 11:11:14 -06:00
Aiden McClelland
0070a8e692 refactor: derive OsPartitionInfo from fstab instead of config.yaml
Replace the serialized os_partitions field in ServerConfig with runtime
fstab parsing. OsPartitionInfo::from_fstab() resolves PARTUUID/UUID/LABEL
device specs via blkid and discovers the BIOS boot partition by scanning
for its GPT type GUID via lsblk.

Also removes the efibootmgr-based boot order management (replaced by
GRUB-based USB detection in a subsequent commit) and adds a dedicated
bios: Option<PathBuf> field for the unformatted BIOS boot partition.
2026-03-12 11:10:24 -06:00
Aiden McClelland
efc12691bd chore: reformat SDK utility files 2026-03-12 11:09:15 -06:00
Aiden McClelland
effcec7e2e feat: add Secure Boot MOK key enrollment and module signing
Generate DKMS MOK key pair during OS install, sign all unsigned kernel
modules, and enroll the MOK certificate using the user's master password.
On reboot, MokManager prompts the user to complete enrollment. Re-enrolls
on every boot if the key exists but isn't enrolled yet. Adds setup wizard
dialog to inform the user about the MokManager prompt.
2026-03-11 15:18:46 -06:00
Aiden McClelland
10a5bc0280 fix: add restart_again flag to DesiredStatus::Restarting
When a restart is requested while the service is already restarting
(stopped but not yet started), set restart_again so the actor will
perform another stop→start cycle after the current one completes.
2026-03-11 15:18:46 -06:00
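A compilable sketch of the coalescing rule (the enum variant and field come from the commit; everything around them is assumed):

```rust
#[derive(Debug, PartialEq)]
enum DesiredStatus {
    Running,
    Stopped,
    Restarting { restart_again: bool },
}

fn request_restart(current: DesiredStatus) -> DesiredStatus {
    match current {
        // Already mid-restart: remember to run one more stop→start cycle
        // after the current one completes.
        DesiredStatus::Restarting { .. } => DesiredStatus::Restarting { restart_again: true },
        _ => DesiredStatus::Restarting { restart_again: false },
    }
}

fn main() {
    let mid_restart = DesiredStatus::Restarting { restart_again: false };
    assert_eq!(
        request_restart(mid_restart),
        DesiredStatus::Restarting { restart_again: true }
    );
}
```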
Aiden McClelland
90b73dd320 feat: support multiple echoip URLs with fallback
Rename ifconfig_url to echoip_urls and iterate through configured URLs,
falling back to the next one on failure. Reduces timeout per attempt
from 10s to 5s.
2026-03-11 15:18:45 -06:00
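The fallback loop can be sketched in Python (URLs and the fetch function are stand-ins; the real implementation is Rust):

```python
# Hypothetical endpoints; the real list comes from configuration.
ECHOIP_URLS = ["https://echoip.example/a", "https://echoip.example/b"]

def public_ip(fetch, urls=ECHOIP_URLS, timeout=5):
    last_err = None
    for url in urls:
        try:
            return fetch(url, timeout=timeout)
        except Exception as e:   # fall through to the next URL
            last_err = e
    raise last_err

def flaky_fetch(url, timeout):
    # Simulates the first endpoint being down.
    if url.endswith("/a"):
        raise TimeoutError("first endpoint down")
    return "203.0.113.7"

assert public_ip(flaky_fetch) == "203.0.113.7"
```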
Aiden McClelland
324f9d17cd fix: use z.union instead of z.intersection for health check schema 2026-03-11 15:18:45 -06:00
Aiden McClelland
a782cb270b refactor: consolidate SDK Watchable with generic map/eq and rename call to fetch 2026-03-11 15:18:44 -06:00
Aiden McClelland
c59c619e12 chore: update CLAUDE.md docs for commit signing and i18n rules 2026-03-11 15:18:44 -06:00
Aiden McClelland
00eecf3704 fix: treat all private IPs as private traffic, not just same-subnet
Previously, traffic was only classified as private if the source IP was
in a known interface subnet. This prevented private access from VPNs on
different VLANs. Now all RFC 1918 IPv4 and ULA/link-local IPv6 addresses
are treated as private, and DNS resolution for private domains works for
these sources by returning IPs from all interfaces.
2026-03-11 15:18:43 -06:00
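Python's stdlib `ipaddress` module expresses the same classification concisely (the real implementation is Rust; this is an illustrative sketch):

```python
import ipaddress

def is_private_source(ip: str) -> bool:
    # RFC 1918 IPv4 and ULA IPv6 report is_private;
    # link-local is checked explicitly for clarity.
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_link_local

assert is_private_source("10.8.0.2")        # e.g. a VPN on a different VLAN
assert is_private_source("192.168.1.10")    # RFC 1918
assert is_private_source("fd12:3456::1")    # IPv6 ULA
assert is_private_source("fe80::1")         # IPv6 link-local
assert not is_private_source("8.8.8.8")     # public
```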
Aiden McClelland
b67e554e76 bump sdk 2026-03-11 15:18:43 -06:00
Aiden McClelland
36b8fda6db fix: gracefully handle mount failure in legacy dependenciesAutoconfig
Non-legacy dependencies don't have an "embassy" volume, so the mount
fails. Catch the error and skip autoconfig instead of crashing.
2026-03-10 02:55:05 -06:00
Aiden McClelland
d2f12a7efc fix: run apt-get update before installing registry deb in Docker image 2026-03-10 00:14:27 -06:00
Aiden McClelland
8dd50eb9c0 fix: move unpack progress completion after rename and reformat 2026-03-10 00:14:27 -06:00
Aiden McClelland
73c6696873 refactor: simplify AddPackageSignerParams merge field from Option<bool> to bool 2026-03-10 00:14:26 -06:00
Aiden McClelland
2586f841b8 fix: make unmount idempotent by ignoring "not mounted" errors 2026-03-10 00:14:26 -06:00
Aiden McClelland
ccf6fa34b1 fix: set correct binary name and version on all CLI commands 2026-03-10 00:14:26 -06:00
Aiden McClelland
9546fc9ce0 fix: make GRUB serial console conditional on hardware availability
Unconditionally enabling serial terminal broke gfxterm on EFI systems
without a serial port. Now installs a /etc/grub.d/01_serial script
that probes for the serial port before enabling it. Also copies
unicode.pf2 font to boot partition for GRUB graphical mode.
2026-03-10 00:14:26 -06:00
Aiden McClelland
3441d4d6d6 fix: update patch-db (ciborium revert) and create /media/startos as 750
- Update patch-db submodule: fixes DB null-nuke caused by ciborium's
  broken deserialize_str, and stack overflow from recursive apply_patches
- Create /media/startos with mode 750 in initramfs before subdirectories
2026-03-10 00:14:18 -06:00
Aiden McClelland
7b05a7c585 fix: add network dependency to start-tunneld and rename web reset to uninit
Add After/Wants network-online.target to prevent race where
start-tunneld starts before the network interface is up, causing
missing MASQUERADE rules. Rename `web reset` to `web uninit` for
clarity.
2026-03-09 23:03:17 -06:00
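The ordering described above corresponds to a unit-file fragment like this (sketch; the actual unit may carry more directives):

```ini
[Unit]
Wants=network-online.target
After=network-online.target
```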
Aiden McClelland
76de6be7de refactor: extract Watchable<T> base class for SDK effect wrappers
Eliminates boilerplate across 7 wrapper classes (GetContainerIp,
GetHostInfo, GetOutboundGateway, GetServiceManifest, GetSslCertificate,
GetStatus, GetSystemSmtp) by moving shared const/once/watch/onChange/
waitFor logic into an abstract Watchable<T> base class.
2026-03-09 15:54:02 -06:00
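The shape of the refactor, as a minimal TypeScript sketch (method bodies and the polling strategy are assumed, not the actual SDK code):

```typescript
// Shared logic lives once in the abstract base class.
abstract class Watchable<T> {
  protected abstract fetch(): Promise<T>

  async once(): Promise<T> {
    return this.fetch()
  }

  async waitFor(pred: (value: T) => boolean, pollMs = 10): Promise<T> {
    for (;;) {
      const value = await this.fetch()
      if (pred(value)) return value
      await new Promise((r) => setTimeout(r, pollMs))
    }
  }
}

// Each wrapper only supplies its fetch; once/waitFor logic is inherited.
class GetStatus extends Watchable<string> {
  private calls = 0
  protected async fetch(): Promise<string> {
    return ++this.calls < 3 ? "starting" : "running"
  }
}

const status = new GetStatus()
status.waitFor((s) => s === "running").then((s) => console.log(s)) // "running"
```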
Aiden McClelland
c52fcf5087 feat: add DbWatchedCallbacks abstraction, TypedDbWatch-based callbacks, and SDK watchable wrappers
- Extract DbWatchedCallbacks<K> abstraction in callbacks.rs using SyncMutex
  for the repeated patchdb subscribe-wait-fire-remove callback pattern
- Move get_host_info and get_status callbacks to use TypedDbWatch instead of
  raw db.subscribe, eliminating race conditions between reading and watching
- Make getStatus return Option<StatusInfo> to handle uninstalled packages
- Add getStatus .const/.once/.watch/.onChange wrapper in container-runtime
  for legacy SystemForEmbassy adapter
- Add SDK watchable wrapper classes for all callback-enabled effects:
  GetStatus, GetServiceManifest, GetHostInfo, GetContainerIp, GetSslCertificate
2026-03-09 15:24:56 -06:00
Alex Inkin
be921b7865 chore: update packages (#3132)
* chore: update packages

* start tunnel messaging

* chore: standalone

* pbpaste instead

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2026-03-09 09:53:47 -06:00
Aiden McClelland
43e514f9ee fix: improve NVIDIA driver build in image recipe
Move enable-kiosk earlier (before NVIDIA hook), add pkg-config to
NVIDIA build deps, clean up .run installer after use, blacklist
nouveau, and rebuild initramfs after NVIDIA driver installation.
2026-03-08 21:43:52 -06:00
Aiden McClelland
8ef4ef4895 fix: add ca-certificates dependency to registry-deb 2026-03-08 21:43:51 -06:00
Aiden McClelland
36bf55c133 chore: restructure release signatures into subdirectory
Moves GPG signatures and keys into a signatures/ subdirectory
before packing into signatures.tar.gz, preventing glob collisions.
2026-03-08 21:43:51 -06:00
Aiden McClelland
f56262b845 chore: remove --unhandled-rejections=warn from container-runtime 2026-03-08 21:43:50 -06:00
Aiden McClelland
5316d6ea68 chore: update patch-db submodule
Updates patch-db submodule and adjusts Cargo.toml path from
patch-db/patch-db to patch-db/core. Switches from serde_cbor
to ciborium.
2026-03-08 21:43:50 -06:00
Aiden McClelland
ea8a7c0a57 feat: add s9pk inspect commitment subcommand 2026-03-08 21:43:49 -06:00
Aiden McClelland
68ae365897 feat: add bridge filter kind to service interface
Adds 'bridge' as a FilterKind to exclude LXC bridge interface
hostnames from non-local service interfaces.
2026-03-08 21:43:49 -06:00
Aiden McClelland
ba71f205dd fix: mark private domain hostnames as non-public 2026-03-08 21:43:48 -06:00
Aiden McClelland
95a519cbe8 feat: improve service version migration and data version handling
Extract get_data_version into a shared function used by both effects
and service_map. Use the actual data version (instead of the previous
package version) when computing migration targets, and skip migrations
when the target range is unsatisfiable. Also detect install vs update
based on the presence of a data version file rather than load
disposition alone.
2026-03-08 21:43:48 -06:00
Aiden McClelland
efd90d3bdf refactor: add delete_dir utility and use across codebase
Adds a delete_dir helper that ignores NotFound errors (matching
the existing delete_file pattern) and replaces the repeated
metadata-check-then-remove_dir_all pattern throughout the codebase.
2026-03-08 21:43:39 -06:00
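A self-contained Rust sketch of the helper as described (name from the commit; exact signature assumed):

```rust
use std::{fs, io, path::Path};

/// Remove a directory tree, treating "already gone" as success
/// (mirrors the delete_file pattern mentioned above).
fn delete_dir(path: &Path) -> io::Result<()> {
    match fs::remove_dir_all(path) {
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(()),
        res => res,
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("delete_dir_demo");
    fs::create_dir_all(&dir)?;
    delete_dir(&dir)?; // removes the directory
    delete_dir(&dir)?; // second call is a no-op, not an error
    Ok(())
}
```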
Matt Hill
a4bae73592 rpc/v1 for polling 2026-03-06 11:25:03 -07:00
428 changed files with 8888 additions and 6436 deletions

View File

@@ -54,11 +54,11 @@ runs:
   - name: Set up Python
     if: inputs.setup-python == 'true'
-    uses: actions/setup-python@v5
+    uses: actions/setup-python@v6
     with:
       python-version: "3.x"
-  - uses: actions/setup-node@v4
+  - uses: actions/setup-node@v6
     with:
       node-version: ${{ inputs.nodejs-version }}
       cache: npm
@@ -66,15 +66,15 @@ runs:
   - name: Set up Docker QEMU
     if: inputs.setup-docker == 'true'
-    uses: docker/setup-qemu-action@v3
+    uses: docker/setup-qemu-action@v4
   - name: Set up Docker Buildx
     if: inputs.setup-docker == 'true'
-    uses: docker/setup-buildx-action@v3
+    uses: docker/setup-buildx-action@v4
   - name: Configure sccache
     if: inputs.setup-sccache == 'true'
-    uses: actions/github-script@v7
+    uses: actions/github-script@v8
     with:
       script: |
         core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');

View File

@@ -68,7 +68,7 @@ jobs:
   - name: Mount tmpfs
     if: ${{ github.event.inputs.runner == 'fast' }}
     run: sudo mount -t tmpfs tmpfs .
-  - uses: actions/checkout@v4
+  - uses: actions/checkout@v6
     with:
       submodules: recursive
   - uses: ./.github/actions/setup-build
@@ -82,7 +82,7 @@ jobs:
       SCCACHE_GHA_ENABLED: on
       SCCACHE_GHA_VERSION: 0
-  - uses: actions/upload-artifact@v4
+  - uses: actions/upload-artifact@v7
     with:
       name: start-cli_${{ matrix.triple }}
       path: core/target/${{ matrix.triple }}/release/start-cli

View File

@@ -64,7 +64,7 @@ jobs:
   - name: Mount tmpfs
     if: ${{ github.event.inputs.runner == 'fast' }}
     run: sudo mount -t tmpfs tmpfs .
-  - uses: actions/checkout@v4
+  - uses: actions/checkout@v6
     with:
       submodules: recursive
   - uses: ./.github/actions/setup-build
@@ -78,7 +78,7 @@ jobs:
       SCCACHE_GHA_ENABLED: on
       SCCACHE_GHA_VERSION: 0
-  - uses: actions/upload-artifact@v4
+  - uses: actions/upload-artifact@v7
     with:
       name: start-registry_${{ matrix.arch }}.deb
       path: results/start-registry-*_${{ matrix.arch }}.deb
@@ -102,13 +102,13 @@ jobs:
     if: ${{ github.event.inputs.runner == 'fast' }}
   - name: Set up docker QEMU
-    uses: docker/setup-qemu-action@v3
+    uses: docker/setup-qemu-action@v4
   - name: Set up Docker Buildx
-    uses: docker/setup-buildx-action@v3
+    uses: docker/setup-buildx-action@v4
   - name: "Login to GitHub Container Registry"
-    uses: docker/login-action@v3
+    uses: docker/login-action@v4
     with:
       registry: ghcr.io
       username: ${{github.actor}}
@@ -116,14 +116,14 @@ jobs:
   - name: Docker meta
     id: meta
-    uses: docker/metadata-action@v5
+    uses: docker/metadata-action@v6
     with:
       images: ghcr.io/Start9Labs/startos-registry
       tags: |
         type=raw,value=${{ github.ref_name }}
   - name: Download debian package
-    uses: actions/download-artifact@v4
+    uses: actions/download-artifact@v8
     with:
       pattern: start-registry_*.deb
@@ -162,7 +162,7 @@ jobs:
 ADD *.deb .
-RUN apt-get install -y ./*_$(uname -m).deb && rm *.deb
+RUN apt-get update && apt-get install -y ./*_$(uname -m).deb && rm -rf *.deb /var/lib/apt/lists/*
 VOLUME /var/lib/startos

View File

@@ -64,7 +64,7 @@ jobs:
   - name: Mount tmpfs
     if: ${{ github.event.inputs.runner == 'fast' }}
     run: sudo mount -t tmpfs tmpfs .
-  - uses: actions/checkout@v4
+  - uses: actions/checkout@v6
     with:
       submodules: recursive
   - uses: ./.github/actions/setup-build
@@ -78,7 +78,7 @@ jobs:
       SCCACHE_GHA_ENABLED: on
       SCCACHE_GHA_VERSION: 0
-  - uses: actions/upload-artifact@v4
+  - uses: actions/upload-artifact@v7
     with:
       name: start-tunnel_${{ matrix.arch }}.deb
       path: results/start-tunnel-*_${{ matrix.arch }}.deb

View File

@@ -100,7 +100,7 @@ jobs:
   - name: Mount tmpfs
     if: ${{ github.event.inputs.runner == 'fast' }}
     run: sudo mount -t tmpfs tmpfs .
-  - uses: actions/checkout@v4
+  - uses: actions/checkout@v6
     with:
       submodules: recursive
   - uses: ./.github/actions/setup-build
@@ -114,7 +114,7 @@ jobs:
       SCCACHE_GHA_ENABLED: on
       SCCACHE_GHA_VERSION: 0
-  - uses: actions/upload-artifact@v4
+  - uses: actions/upload-artifact@v7
     with:
       name: compiled-${{ matrix.arch }}.tar
       path: compiled-${{ matrix.arch }}.tar
@@ -124,14 +124,13 @@ jobs:
   strategy:
     fail-fast: false
     matrix:
-      # TODO: re-add "raspberrypi" to the platform list below
       platform: >-
         ${{
           fromJson(
             format(
               '[
                 ["{0}"],
-                ["x86_64", "x86_64-nonfree", "x86_64-nvidia", "aarch64", "aarch64-nonfree", "aarch64-nvidia", "riscv64", "riscv64-nonfree"]
+                ["x86_64", "x86_64-nonfree", "x86_64-nvidia", "aarch64", "aarch64-nonfree", "aarch64-nvidia", "raspberrypi", "riscv64", "riscv64-nonfree"]
               ]',
               github.event.inputs.platform || 'ALL'
             )
@@ -209,14 +208,14 @@ jobs:
     run: sudo mkdir -p /opt/hostedtoolcache && sudo chown $USER:$USER /opt/hostedtoolcache
   - name: Set up docker QEMU
-    uses: docker/setup-qemu-action@v3
+    uses: docker/setup-qemu-action@v4
-  - uses: actions/checkout@v4
+  - uses: actions/checkout@v6
     with:
       submodules: recursive
   - name: Download compiled artifacts
-    uses: actions/download-artifact@v4
+    uses: actions/download-artifact@v8
     with:
       name: compiled-${{ env.ARCH }}.tar
@@ -253,18 +252,18 @@ jobs:
     run: PLATFORM=${{ matrix.platform }} make img
     if: ${{ matrix.platform == 'raspberrypi' }}
-  - uses: actions/upload-artifact@v4
+  - uses: actions/upload-artifact@v7
     with:
       name: ${{ matrix.platform }}.squashfs
       path: results/*.squashfs
-  - uses: actions/upload-artifact@v4
+  - uses: actions/upload-artifact@v7
     with:
       name: ${{ matrix.platform }}.iso
      path: results/*.iso
     if: ${{ matrix.platform != 'raspberrypi' }}
-  - uses: actions/upload-artifact@v4
+  - uses: actions/upload-artifact@v7
     with:
       name: ${{ matrix.platform }}.img
       path: results/*.img

View File

@@ -24,7 +24,7 @@ jobs:
     if: github.event.pull_request.draft != true
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
        with:
          submodules: recursive
      - uses: ./.github/actions/setup-build

.gitignore
View File

@@ -22,3 +22,4 @@ secrets.db
 tmp
 web/.i18n-checked
 docs/USER.md
+*.s9pk

View File

@@ -5,7 +5,7 @@ StartOS is an open-source Linux distribution for running personal servers. It ma
 ## Tech Stack
 - Backend: Rust (async/Tokio, Axum web framework)
-- Frontend: Angular 20 + TypeScript + TaigaUI
+- Frontend: Angular 21 + TypeScript + Taiga UI 5
 - Container runtime: Node.js/TypeScript with LXC
 - Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
 - API: JSON-RPC via rpc-toolkit (see `core/rpc-toolkit.md`)
@@ -30,7 +30,7 @@ StartOS is an open-source Linux distribution for running personal servers. It ma
 - **`core/`** — Rust backend daemon. Produces a single binary `startbox` that is symlinked as `startd` (main daemon), `start-cli` (CLI), `start-container` (runs inside LXC containers), `registrybox` (package registry), and `tunnelbox` (VPN/tunnel). Handles all backend logic: RPC API, service lifecycle, networking (DNS, ACME, WiFi, Tor, WireGuard), backups, and database state management. See [core/ARCHITECTURE.md](core/ARCHITECTURE.md).
-- **`web/`** — Angular 20 + TypeScript workspace using Taiga UI. Contains three applications (admin UI, setup wizard, VPN management) and two shared libraries (common components/services, marketplace). Communicates with the backend exclusively via JSON-RPC. See [web/ARCHITECTURE.md](web/ARCHITECTURE.md).
+- **`web/`** — Angular 21 + TypeScript workspace using Taiga UI 5. Contains three applications (admin UI, setup wizard, VPN management) and two shared libraries (common components/services, marketplace). Communicates with the backend exclusively via JSON-RPC. See [web/ARCHITECTURE.md](web/ARCHITECTURE.md).
 - **`container-runtime/`** — Node.js runtime that runs inside each service's LXC container. Loads the service's JavaScript from its S9PK package and manages subcontainers. Communicates with the host daemon via JSON-RPC over Unix socket. See [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md).

View File

@@ -31,6 +31,7 @@ make test-core # Run Rust tests
 - Check component-level CLAUDE.md files for component-specific conventions. ALWAYS read it before operating on that component.
 - Follow existing patterns before inventing new ones
 - Always use `make` recipes when they exist for testing builds rather than manually invoking build commands
+- **Commit signing:** Never push unsigned commits. Before pushing, check all unpushed commits for signatures with `git log --show-signature @{upstream}..HEAD`. If any are unsigned, prompt the user to sign them with `git rebase --exec 'git commit --amend -S --no-edit' @{upstream}`.
 ## Supplementary Documentation
@@ -50,7 +51,6 @@ On startup:
 1. **Check for `docs/USER.md`** - If it doesn't exist, prompt the user for their name/identifier and create it. This file is gitignored since it varies per developer.
 2. **Check `docs/TODO.md` for relevant tasks** - Show TODOs that either:
    - Have no `@username` tag (relevant to everyone)
    - Are tagged with the current user's identifier


@@ -15,7 +15,8 @@ IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo
 WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html
 COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html
 FIRMWARE_ROMS := build/lib/firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./build/lib/firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
-BUILD_SRC := $(call ls-files, build/lib) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
+TOR_S9PK := build/lib/tor_$(ARCH).s9pk
+BUILD_SRC := $(call ls-files, build/lib) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS) $(TOR_S9PK)
 IMAGE_RECIPE_SRC := $(call ls-files, build/image-recipe/)
 STARTD_SRC := core/startd.service $(BUILD_SRC)
 CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
@@ -155,7 +156,7 @@ results/$(BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/startos) $(
 registry-deb: results/$(REGISTRY_BASENAME).deb
 results/$(REGISTRY_BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS)
-	PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian ./build/os-compat/run-compat.sh ./debian/dpkg-build.sh
+	PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=ca-certificates ./build/os-compat/run-compat.sh ./debian/dpkg-build.sh
 tunnel-deb: results/$(TUNNEL_BASENAME).deb
@@ -188,6 +189,9 @@ install: $(STARTOS_TARGETS)
 	$(call mkdir,$(DESTDIR)/lib/systemd/system)
 	$(call cp,core/startd.service,$(DESTDIR)/lib/systemd/system/startd.service)
+	if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then \
+		sed -i '/^Environment=/a Environment=RUST_BACKTRACE=full' $(DESTDIR)/lib/systemd/system/startd.service; \
+	fi
 	$(call mkdir,$(DESTDIR)/usr/lib)
 	$(call rm,$(DESTDIR)/usr/lib/startos)
@@ -283,6 +287,10 @@ core/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
 	rm -rf core/bindings
 	./core/build/build-ts.sh
 	ls core/bindings/*.ts | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/bindings/index.ts
+	if [ -d core/bindings/tunnel ]; then \
+		ls core/bindings/tunnel/*.ts | sed 's/core\/bindings\/tunnel\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' > core/bindings/tunnel/index.ts; \
+		echo 'export * as Tunnel from "./tunnel";' >> core/bindings/index.ts; \
+	fi
 	npm --prefix sdk/base exec -- prettier --config=./sdk/base/package.json -w './core/bindings/**/*.ts'
 	touch core/bindings/index.ts
@@ -308,6 +316,9 @@ build/lib/depends build/lib/conflicts: $(ENVIRONMENT_FILE) $(PLATFORM_FILE) $(sh
 $(FIRMWARE_ROMS): build/lib/firmware.json ./build/download-firmware.sh $(PLATFORM_FILE)
 	./build/download-firmware.sh $(PLATFORM)
+$(TOR_S9PK): ./build/download-tor-s9pk.sh
+	./build/download-tor-s9pk.sh $(ARCH)
 core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox: $(CORE_SRC) $(COMPRESSED_WEB_UIS) web/patchdb-ui-seed.json $(ENVIRONMENT_FILE)
 	ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-startbox.sh
 	touch core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox

build/download-tor-s9pk.sh (new executable file, 14 lines added)

@@ -0,0 +1,14 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
ARCH=$1
if [ -z "$ARCH" ]; then
>&2 echo "usage: $0 <ARCH>"
exit 1
fi
curl --fail -L -o "./lib/tor_${ARCH}.s9pk" "https://s9pks.nyc3.cdn.digitaloceanspaces.com/tor_${ARCH}.s9pk"


@@ -11,6 +11,7 @@ cifs-utils
 conntrack
 cryptsetup
 curl
+dkms
 dmidecode
 dnsutils
 dosfstools
@@ -36,6 +37,7 @@ lvm2
 lxc
 magic-wormhole
 man-db
+mokutil
 ncdu
 net-tools
 network-manager


@@ -1,5 +1,6 @@
-- grub-efi
++ gdisk
 + parted
++ u-boot-rpi
 + raspberrypi-net-mods
 + raspberrypi-sys-mods
 + raspi-config


@@ -23,6 +23,8 @@ RUN apt-get update && \
 squashfs-tools \
 rsync \
 b3sum \
+btrfs-progs \
+gdisk \
 dpkg-dev


@@ -1,7 +1,6 @@
 #!/bin/bash
 set -e
-MAX_IMG_LEN=$((4 * 1024 * 1024 * 1024)) # 4GB
 echo "==== StartOS Image Build ===="
@@ -132,6 +131,15 @@ ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters
 EOT
+if [[ "${IB_OS_ENV}" =~ (^|-)dev($|-) ]]; then
+mkdir -p config/includes.chroot/etc/ssh/sshd_config.d
+echo "PasswordAuthentication yes" > config/includes.chroot/etc/ssh/sshd_config.d/dev-password-auth.conf
+fi
+# Installer marker file (used by installed GRUB to detect the live USB)
+mkdir -p config/includes.binary
+touch config/includes.binary/.startos-installer
 if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
 mkdir -p config/includes.chroot
 git clone --depth=1 --branch=stable https://github.com/raspberrypi/rpi-firmware.git config/includes.chroot/boot
@@ -172,7 +180,13 @@ sed -i -e '2i set timeout=5' config/bootloaders/grub-pc/config.cfg
 mkdir -p config/archives
 if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
-curl -fsSL https://archive.raspberrypi.com/debian/raspberrypi.gpg.key | gpg --dearmor -o config/archives/raspi.key
+# Fetch the keyring package (not the old raspberrypi.gpg.key, which has
+# SHA1-only binding signatures that sqv on Trixie rejects).
+KEYRING_DEB=$(mktemp)
+curl -fsSL -o "$KEYRING_DEB" https://archive.raspberrypi.com/debian/pool/main/r/raspberrypi-archive-keyring/raspberrypi-archive-keyring_2025.1+rpt1_all.deb
+dpkg-deb -x "$KEYRING_DEB" "$KEYRING_DEB.d"
+cp "$KEYRING_DEB.d/usr/share/keyrings/raspberrypi-archive-keyring.gpg" config/archives/raspi.key
+rm -rf "$KEYRING_DEB" "$KEYRING_DEB.d"
 echo "deb [arch=${IB_TARGET_ARCH} signed-by=/etc/apt/trusted.gpg.d/raspi.key.gpg] https://archive.raspberrypi.com/debian/ ${IB_SUITE} main" > config/archives/raspi.list
 fi
@@ -209,6 +223,10 @@ cat > config/hooks/normal/9000-install-startos.hook.chroot << EOF
 set -e
+if [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
+/usr/lib/startos/scripts/enable-kiosk
+fi
 if [ "${NVIDIA}" = "1" ]; then
 # install a specific NVIDIA driver version
@@ -236,7 +254,7 @@ if [ "${NVIDIA}" = "1" ]; then
 echo "[nvidia-hook] Target kernel version: \${KVER}" >&2
 # Ensure kernel headers are present
-TEMP_APT_DEPS=(build-essential)
+TEMP_APT_DEPS=(build-essential pkg-config)
 if [ ! -e "/lib/modules/\${KVER}/build" ]; then
 TEMP_APT_DEPS+=(linux-headers-\${KVER})
 fi
@@ -279,12 +297,32 @@ if [ "${NVIDIA}" = "1" ]; then
 echo "[nvidia-hook] NVIDIA \${NVIDIA_DRIVER_VERSION} installation complete for kernel \${KVER}" >&2
+echo "[nvidia-hook] Removing .run installer..." >&2
+rm -f "\${RUN_PATH}"
+echo "[nvidia-hook] Blacklisting nouveau..." >&2
+echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
+echo "options nouveau modeset=0" >> /etc/modprobe.d/blacklist-nouveau.conf
+echo "[nvidia-hook] Rebuilding initramfs..." >&2
+update-initramfs -u -k "\${KVER}"
 echo "[nvidia-hook] Removing build dependencies..." >&2
 apt-get purge -y nvidia-depends
 apt-get autoremove -y
 echo "[nvidia-hook] Removed build dependencies." >&2
 fi
+# Install linux-kbuild for sign-file (Secure Boot module signing)
+KVER_ALL="\$(ls -1t /boot/vmlinuz-* 2>/dev/null | head -n1 | sed 's|.*/vmlinuz-||')"
+if [ -n "\${KVER_ALL}" ]; then
+KBUILD_VER="\$(echo "\${KVER_ALL}" | grep -oP '^\d+\.\d+')"
+if [ -n "\${KBUILD_VER}" ]; then
+echo "[build] Installing linux-kbuild-\${KBUILD_VER} for Secure Boot support" >&2
+apt-get install -y "linux-kbuild-\${KBUILD_VER}" || echo "[build] WARNING: linux-kbuild-\${KBUILD_VER} not available" >&2
+fi
+fi
 cp /etc/resolv.conf /etc/resolv.conf.bak
 if [ "${IB_SUITE}" = trixie ] && [ "${IB_TARGET_ARCH}" != riscv64 ]; then
@@ -298,9 +336,10 @@ fi
 if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
 ln -sf /usr/bin/pi-beep /usr/local/bin/beep
-KERNEL_VERSION=${RPI_KERNEL_VERSION} sh /boot/config.sh > /boot/config.txt
+sh /boot/firmware/config.sh > /boot/firmware/config.txt
 mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-v8 ${RPI_KERNEL_VERSION}-rpi-v8
 mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-2712 ${RPI_KERNEL_VERSION}-rpi-2712
+cp /usr/lib/u-boot/rpi_arm64/u-boot.bin /boot/firmware/u-boot.bin
 fi
 useradd --shell /bin/bash -G startos -m start9
@@ -310,14 +349,14 @@ usermod -aG systemd-journal start9
 echo "start9 ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee "/etc/sudoers.d/010_start9-nopasswd"
-if [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
-/usr/lib/startos/scripts/enable-kiosk
-fi
 if ! [[ "${IB_OS_ENV}" =~ (^|-)dev($|-) ]]; then
 passwd -l start9
 fi
+mkdir -p /media/startos
+chmod 750 /media/startos
+chown root:startos /media/startos
 EOF
 SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(date '+%s')}"
@@ -370,38 +409,85 @@ if [ "${IMAGE_TYPE}" = iso ]; then
 elif [ "${IMAGE_TYPE}" = img ]; then
 SECTOR_LEN=512
-BOOT_START=$((1024 * 1024)) # 1MiB
-BOOT_LEN=$((512 * 1024 * 1024)) # 512MiB
+FW_START=$((1024 * 1024)) # 1MiB (sector 2048) — Pi-specific
+FW_LEN=$((128 * 1024 * 1024)) # 128MiB (Pi firmware + U-Boot + DTBs)
+FW_END=$((FW_START + FW_LEN - 1))
+ESP_START=$((FW_END + 1)) # 100MB EFI System Partition (matches os_install)
+ESP_LEN=$((100 * 1024 * 1024))
+ESP_END=$((ESP_START + ESP_LEN - 1))
+BOOT_START=$((ESP_END + 1)) # 2GB /boot (matches os_install)
+BOOT_LEN=$((2 * 1024 * 1024 * 1024))
 BOOT_END=$((BOOT_START + BOOT_LEN - 1))
 ROOT_START=$((BOOT_END + 1))
-ROOT_LEN=$((MAX_IMG_LEN - ROOT_START))
-ROOT_END=$((MAX_IMG_LEN - 1))
+# Size root partition to fit the squashfs + 256MB overhead for btrfs
+# metadata and config overlay, avoiding the need for btrfs resize
+SQUASHFS_SIZE=$(stat -c %s $prep_results_dir/binary/live/filesystem.squashfs)
+ROOT_LEN=$(( SQUASHFS_SIZE + 256 * 1024 * 1024 ))
+# Align to sector boundary
+ROOT_LEN=$(( (ROOT_LEN + SECTOR_LEN - 1) / SECTOR_LEN * SECTOR_LEN ))
+# Total image: partitions + GPT backup header (34 sectors)
+IMG_LEN=$((ROOT_START + ROOT_LEN + 34 * SECTOR_LEN))
+# Fixed GPT partition UUIDs (deterministic, based on old MBR disk ID cb15ae4d)
+FW_UUID=cb15ae4d-0001-4000-8000-000000000001
+ESP_UUID=cb15ae4d-0002-4000-8000-000000000002
+BOOT_UUID=cb15ae4d-0003-4000-8000-000000000003
+ROOT_UUID=cb15ae4d-0004-4000-8000-000000000004
 TARGET_NAME=$prep_results_dir/${IMAGE_BASENAME}.img
-truncate -s $MAX_IMG_LEN $TARGET_NAME
+truncate -s $IMG_LEN $TARGET_NAME
 sfdisk $TARGET_NAME <<-EOF
-label: dos
-label-id: 0xcb15ae4d
-unit: sectors
-sector-size: 512
-${TARGET_NAME}1 : start=$((BOOT_START / SECTOR_LEN)), size=$((BOOT_LEN / SECTOR_LEN)), type=c, bootable
-${TARGET_NAME}2 : start=$((ROOT_START / SECTOR_LEN)), size=$((ROOT_LEN / SECTOR_LEN)), type=83
+label: gpt
+${TARGET_NAME}1 : start=$((FW_START / SECTOR_LEN)), size=$((FW_LEN / SECTOR_LEN)), type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=${FW_UUID}, name="firmware"
+${TARGET_NAME}2 : start=$((ESP_START / SECTOR_LEN)), size=$((ESP_LEN / SECTOR_LEN)), type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=${ESP_UUID}, name="efi"
+${TARGET_NAME}3 : start=$((BOOT_START / SECTOR_LEN)), size=$((BOOT_LEN / SECTOR_LEN)), type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=${BOOT_UUID}, name="boot"
+${TARGET_NAME}4 : start=$((ROOT_START / SECTOR_LEN)), size=$((ROOT_LEN / SECTOR_LEN)), type=B921B045-1DF0-41C3-AF44-4C6F280D3FAE, uuid=${ROOT_UUID}, name="root"
 EOF
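The root-partition sizing in this hunk can be exercised in isolation. A minimal sketch with a made-up squashfs size (`SQUASHFS_SIZE` here is hypothetical; the script reads the real value with `stat -c %s`):

```shell
SECTOR_LEN=512
SQUASHFS_SIZE=$((900 * 1024 * 1024 + 123))          # hypothetical ~900MiB squashfs
ROOT_LEN=$(( SQUASHFS_SIZE + 256 * 1024 * 1024 ))   # add 256MiB btrfs overhead
# Round up to the next 512-byte sector boundary
ROOT_LEN=$(( (ROOT_LEN + SECTOR_LEN - 1) / SECTOR_LEN * SECTOR_LEN ))
echo $(( ROOT_LEN % SECTOR_LEN ))   # → 0 (sector-aligned)
echo "$ROOT_LEN"                    # → 1212154368
```

The round-up idiom `(x + n - 1) / n * n` relies on shell integer division truncating toward zero, so the result is the smallest multiple of `SECTOR_LEN` that is at least `x`.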
-BOOT_DEV=$(losetup --show -f --offset $BOOT_START --sizelimit $BOOT_LEN $TARGET_NAME)
-ROOT_DEV=$(losetup --show -f --offset $ROOT_START --sizelimit $ROOT_LEN $TARGET_NAME)
+# Create named loop device nodes (high minor numbers to avoid conflicts)
+# and detach any stale ones from previous failed builds
+FW_DEV=/dev/startos-loop-fw
+ESP_DEV=/dev/startos-loop-esp
+BOOT_DEV=/dev/startos-loop-boot
+ROOT_DEV=/dev/startos-loop-root
+for dev in $FW_DEV:200 $ESP_DEV:201 $BOOT_DEV:202 $ROOT_DEV:203; do
+name=${dev%:*}
+minor=${dev#*:}
+[ -e $name ] || mknod $name b 7 $minor
+losetup -d $name 2>/dev/null || true
+done
+losetup $FW_DEV --offset $FW_START --sizelimit $FW_LEN $TARGET_NAME
+losetup $ESP_DEV --offset $ESP_START --sizelimit $ESP_LEN $TARGET_NAME
+losetup $BOOT_DEV --offset $BOOT_START --sizelimit $BOOT_LEN $TARGET_NAME
+losetup $ROOT_DEV --offset $ROOT_START --sizelimit $ROOT_LEN $TARGET_NAME
-mkfs.vfat -F32 $BOOT_DEV
-mkfs.ext4 $ROOT_DEV
+mkfs.vfat -F32 -n firmware $FW_DEV
+mkfs.vfat -F32 -n efi $ESP_DEV
+mkfs.vfat -F32 -n boot $BOOT_DEV
+mkfs.btrfs -f -L rootfs $ROOT_DEV
 TMPDIR=$(mktemp -d)
+# Extract boot files from squashfs to staging area
+BOOT_STAGING=$(mktemp -d)
+unsquashfs -n -f -d $BOOT_STAGING $prep_results_dir/binary/live/filesystem.squashfs boot
+# Mount partitions (nested: firmware and efi inside boot)
 mkdir -p $TMPDIR/boot $TMPDIR/root
-mount $ROOT_DEV $TMPDIR/root
 mount $BOOT_DEV $TMPDIR/boot
-unsquashfs -n -f -d $TMPDIR $prep_results_dir/binary/live/filesystem.squashfs boot
+mkdir -p $TMPDIR/boot/firmware $TMPDIR/boot/efi
+mount $FW_DEV $TMPDIR/boot/firmware
+mount $ESP_DEV $TMPDIR/boot/efi
+mount $ROOT_DEV $TMPDIR/root
+# Copy boot files — nested mounts route firmware/* to the firmware partition
+cp -a $BOOT_STAGING/boot/. $TMPDIR/boot/
+rm -rf $BOOT_STAGING
 mkdir $TMPDIR/root/images $TMPDIR/root/config
 B3SUM=$(b3sum $prep_results_dir/binary/live/filesystem.squashfs | head -c 16)
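The `name:minor` pairs in the loop-device setup above are split with plain parameter expansion. A standalone sketch (device names copied from the script; nothing is actually created here):

```shell
for dev in /dev/startos-loop-fw:200 /dev/startos-loop-esp:201; do
  name=${dev%:*}    # drop the shortest ":…" suffix → device path
  minor=${dev#*:}   # drop the shortest "…:" prefix → minor number
  echo "$name $minor"
done
```

This prints `/dev/startos-loop-fw 200` and `/dev/startos-loop-esp 201`; in the real script the pair feeds `mknod $name b 7 $minor` (block device, loop major 7).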
@@ -414,40 +500,46 @@ elif [ "${IMAGE_TYPE}" = img ]; then
 mount -t overlay -o lowerdir=$TMPDIR/lower,workdir=$TMPDIR/root/config/work,upperdir=$TMPDIR/root/config/overlay overlay $TMPDIR/next
 if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
-sed -i 's| boot=startos| boot=startos init=/usr/lib/startos/scripts/init_resize\.sh|' $TMPDIR/boot/cmdline.txt
 rsync -a $SOURCE_DIR/raspberrypi/img/ $TMPDIR/next/
+# Install GRUB: ESP at /boot/efi (Part 2), /boot (Part 3)
+mkdir -p $TMPDIR/next/boot \
+$TMPDIR/next/dev $TMPDIR/next/proc $TMPDIR/next/sys $TMPDIR/next/media/startos/root
+mount --rbind $TMPDIR/boot $TMPDIR/next/boot
+mount --bind /dev $TMPDIR/next/dev
+mount -t proc proc $TMPDIR/next/proc
+mount -t sysfs sysfs $TMPDIR/next/sys
+mount --bind $TMPDIR/root $TMPDIR/next/media/startos/root
+chroot $TMPDIR/next grub-install --target=arm64-efi --removable --efi-directory=/boot/efi --boot-directory=/boot --no-nvram
+chroot $TMPDIR/next update-grub
+umount $TMPDIR/next/media/startos/root
+umount $TMPDIR/next/sys
+umount $TMPDIR/next/proc
+umount $TMPDIR/next/dev
+umount -l $TMPDIR/next/boot
+# Fix root= in grub.cfg: update-grub sees loop devices, but the
+# real device uses a fixed GPT PARTUUID for root (Part 4).
+sed -i "s|root=[^ ]*|root=PARTUUID=${ROOT_UUID}|g" $TMPDIR/boot/grub/grub.cfg
+# Inject first-boot resize script into GRUB config
+sed -i 's| boot=startos| boot=startos init=/usr/lib/startos/scripts/init_resize\.sh|' $TMPDIR/boot/grub/grub.cfg
 fi
 umount $TMPDIR/next
 umount $TMPDIR/lower
+umount $TMPDIR/boot/firmware
+umount $TMPDIR/boot/efi
 umount $TMPDIR/boot
 umount $TMPDIR/root
-e2fsck -fy $ROOT_DEV
-resize2fs -M $ROOT_DEV
-BLOCK_COUNT=$(dumpe2fs -h $ROOT_DEV | awk '/^Block count:/ { print $3 }')
-BLOCK_SIZE=$(dumpe2fs -h $ROOT_DEV | awk '/^Block size:/ { print $3 }')
-ROOT_LEN=$((BLOCK_COUNT * BLOCK_SIZE))
 losetup -d $ROOT_DEV
 losetup -d $BOOT_DEV
+losetup -d $ESP_DEV
+losetup -d $FW_DEV
-# Recreate partition 2 with the new size using sfdisk
-sfdisk $TARGET_NAME <<-EOF
-label: dos
-label-id: 0xcb15ae4d
-unit: sectors
-sector-size: 512
-${TARGET_NAME}1 : start=$((BOOT_START / SECTOR_LEN)), size=$((BOOT_LEN / SECTOR_LEN)), type=c, bootable
-${TARGET_NAME}2 : start=$((ROOT_START / SECTOR_LEN)), size=$((ROOT_LEN / SECTOR_LEN)), type=83
-EOF
-TARGET_SIZE=$((ROOT_START + ROOT_LEN))
-truncate -s $TARGET_SIZE $TARGET_NAME
 mv $TARGET_NAME $RESULTS_DIR/$IMAGE_BASENAME.img


@@ -1,2 +1,4 @@
-/dev/mmcblk0p1 /boot vfat umask=0077 0 2
-/dev/mmcblk0p2 / ext4 defaults 0 1
+PARTUUID=cb15ae4d-0001-4000-8000-000000000001 /boot/firmware vfat umask=0077 0 2
+PARTUUID=cb15ae4d-0002-4000-8000-000000000002 /boot/efi vfat umask=0077 0 1
+PARTUUID=cb15ae4d-0003-4000-8000-000000000003 /boot vfat umask=0077 0 2
+PARTUUID=cb15ae4d-0004-4000-8000-000000000004 / btrfs defaults 0 1


@@ -12,15 +12,16 @@ get_variables () {
 BOOT_DEV_NAME=$(echo /sys/block/*/"${BOOT_PART_NAME}" | cut -d "/" -f 4)
 BOOT_PART_NUM=$(cat "/sys/block/${BOOT_DEV_NAME}/${BOOT_PART_NAME}/partition")
-OLD_DISKID=$(fdisk -l "$ROOT_DEV" | sed -n 's/Disk identifier: 0x\([^ ]*\)/\1/p')
 ROOT_DEV_SIZE=$(cat "/sys/block/${ROOT_DEV_NAME}/size")
-if [ "$ROOT_DEV_SIZE" -le 67108864 ]; then
-TARGET_END=$((ROOT_DEV_SIZE - 1))
+# GPT backup header/entries occupy last 33 sectors
+USABLE_END=$((ROOT_DEV_SIZE - 34))
+if [ "$USABLE_END" -le 67108864 ]; then
+TARGET_END=$USABLE_END
 else
 TARGET_END=$((33554432 - 1))
 DATA_PART_START=33554432
-DATA_PART_END=$((ROOT_DEV_SIZE - 1))
+DATA_PART_END=$USABLE_END
 fi
 PARTITION_TABLE=$(parted -m "$ROOT_DEV" unit s print | tr -d 's')
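The arithmetic in this hunk reserves room for the backup GPT: the secondary header plus the partition-entry array occupy the last 33 sectors of the disk, so the last usable LBA is `size - 34`. A sketch with a hypothetical SD-card size:

```shell
ROOT_DEV_SIZE=62333952              # hypothetical card size in 512-byte sectors (~29.7 GiB)
USABLE_END=$((ROOT_DEV_SIZE - 34))  # last usable LBA in front of the backup GPT
echo "$USABLE_END"                  # → 62333918
# Below the 32 GiB threshold (67108864 sectors), root gets the whole remainder
[ "$USABLE_END" -le 67108864 ] && TARGET_END=$USABLE_END
echo "$TARGET_END"                  # → 62333918
```

On cards above that threshold the script instead caps root at 16 GiB (sector 33554432) and gives the rest to a data partition ending at the same `USABLE_END`.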
@@ -57,37 +58,30 @@ check_variables () {
 main () {
 get_variables
+# Fix GPT backup header first — the image was built with a tight root
+# partition, so the backup GPT is not at the end of the SD card. parted
+# will prompt interactively if this isn't fixed before we use it.
+sgdisk -e "$ROOT_DEV" 2>/dev/null || true
 if ! check_variables; then
 return 1
 fi
-# if [ "$ROOT_PART_END" -eq "$TARGET_END" ]; then
-# reboot_pi
-# fi
 if ! echo Yes | parted -m --align=optimal "$ROOT_DEV" ---pretend-input-tty u s resizepart "$ROOT_PART_NUM" "$TARGET_END" ; then
 FAIL_REASON="Root partition resize failed"
 return 1
 fi
 if [ -n "$DATA_PART_START" ]; then
-if ! parted -ms --align=optimal "$ROOT_DEV" u s mkpart primary "$DATA_PART_START" "$DATA_PART_END"; then
+if ! parted -ms --align=optimal "$ROOT_DEV" u s mkpart data "$DATA_PART_START" "$DATA_PART_END"; then
 FAIL_REASON="Data partition creation failed"
 return 1
 fi
 fi
-(
-echo x
-echo i
-echo "0xcb15ae4d"
-echo r
-echo w
-) | fdisk $ROOT_DEV
 mount / -o remount,rw
-resize2fs $ROOT_PART_DEV
+btrfs filesystem resize max /media/startos/root
 if ! systemd-machine-id-setup --root=/media/startos/config/overlay/; then
 FAIL_REASON="systemd-machine-id-setup failed"
@@ -111,7 +105,7 @@ mount / -o remount,ro
 beep
 if main; then
-sed -i 's| init=/usr/lib/startos/scripts/init_resize\.sh||' /boot/cmdline.txt
+sed -i 's| init=/usr/lib/startos/scripts/init_resize\.sh||' /boot/grub/grub.cfg
 echo "Resized root filesystem. Rebooting in 5 seconds..."
 sleep 5
 else


@@ -1 +0,0 @@
-usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u console=serial0,115200 console=tty1 root=PARTUUID=cb15ae4d-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory boot=startos


@@ -27,20 +27,18 @@ disable_overscan=1
 # (e.g. for USB device mode) or if USB support is not required.
 otg_mode=1
-[all]
 [pi4]
 # Run as fast as firmware / board allows
 arm_boost=1
-kernel=vmlinuz-${KERNEL_VERSION}-rpi-v8
-initramfs initrd.img-${KERNEL_VERSION}-rpi-v8 followkernel
-[pi5]
-kernel=vmlinuz-${KERNEL_VERSION}-rpi-2712
-initramfs initrd.img-${KERNEL_VERSION}-rpi-2712 followkernel
 [all]
 gpu_mem=16
 dtoverlay=pwm-2chan,disable-bt
+# Enable UART for U-Boot and serial console
+enable_uart=1
+# Load U-Boot as the bootloader (GRUB is chainloaded from U-Boot)
+kernel=u-boot.bin
 EOF


@@ -84,4 +84,8 @@ arm_boost=1
 gpu_mem=16
 dtoverlay=pwm-2chan,disable-bt
-auto_initramfs=1
+# Enable UART for U-Boot and serial console
+enable_uart=1
+# Load U-Boot as the bootloader (GRUB is chainloaded from U-Boot)
+kernel=u-boot.bin


@@ -0,0 +1,4 @@
# Raspberry Pi-specific GRUB overrides
# Overrides GRUB_CMDLINE_LINUX from /etc/default/grub with Pi-specific
# console devices and hardware quirks.
GRUB_CMDLINE_LINUX="boot=startos console=serial0,115200 console=tty1 usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"


@@ -1,6 +1,3 @@
-os-partitions:
-  boot: /dev/mmcblk0p1
-  root: /dev/mmcblk0p2
 ethernet-interface: end0
 wifi-interface: wlan0
 disable-encryption: true


@@ -34,7 +34,7 @@ set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
 if [ -z "$NO_SYNC" ]; then
 echo 'Syncing...'
-umount -R /media/startos/next 2> /dev/null
+umount -l /media/startos/next 2> /dev/null
 umount /media/startos/upper 2> /dev/null
 rm -rf /media/startos/upper /media/startos/next
 mkdir /media/startos/upper
@@ -58,13 +58,13 @@ mkdir -p /media/startos/next/media/startos/root
 mount --bind /run /media/startos/next/run
 mount --bind /tmp /media/startos/next/tmp
 mount --bind /dev /media/startos/next/dev
-mount --bind /sys /media/startos/next/sys
-mount --bind /proc /media/startos/next/proc
+mount -t sysfs sysfs /media/startos/next/sys
+mount -t proc proc /media/startos/next/proc
 mount --bind /boot /media/startos/next/boot
 mount --bind /media/startos/root /media/startos/next/media/startos/root
 if mountpoint /sys/firmware/efi/efivars 2>&1 > /dev/null; then
-mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
+mount -t efivarfs efivarfs /media/startos/next/sys/firmware/efi/efivars
 fi
 if [ -z "$*" ]; then
@@ -111,6 +111,6 @@ if [ "$CHROOT_RES" -eq 0 ]; then
 reboot
 fi
-umount /media/startos/next
-umount /media/startos/upper
+umount -l /media/startos/next
+umount -l /media/startos/upper
 rm -rf /media/startos/upper /media/startos/next


@@ -0,0 +1,76 @@
#!/bin/bash
# sign-unsigned-modules [--source <dir> --dest <dir>] [--sign-file <path>]
# [--mok-key <path>] [--mok-pub <path>]
#
# Signs all unsigned kernel modules using the DKMS MOK key.
#
# Default (install) mode:
# Run inside a chroot. Finds and signs unsigned modules in /lib/modules in-place.
# sign-file and MOK key are auto-detected from standard paths.
#
# Overlay mode (--source/--dest):
# Finds unsigned modules in <source>, copies to <dest>, signs the copies.
# Clears old signed modules in <dest> first. Used during upgrades where the
# overlay upper is tmpfs and writes would be lost.
set -e
SOURCE=""
DEST=""
SIGN_FILE=""
MOK_KEY="/var/lib/dkms/mok.key"
MOK_PUB="/var/lib/dkms/mok.pub"
while [[ $# -gt 0 ]]; do
case $1 in
--source) SOURCE="$2"; shift 2;;
--dest) DEST="$2"; shift 2;;
--sign-file) SIGN_FILE="$2"; shift 2;;
--mok-key) MOK_KEY="$2"; shift 2;;
--mok-pub) MOK_PUB="$2"; shift 2;;
*) echo "Unknown option: $1" >&2; exit 1;;
esac
done
# Auto-detect sign-file if not specified
if [ -z "$SIGN_FILE" ]; then
SIGN_FILE="$(ls -1 /usr/lib/linux-kbuild-*/scripts/sign-file 2>/dev/null | head -1)"
fi
if [ -z "$SIGN_FILE" ] || [ ! -x "$SIGN_FILE" ]; then
exit 0
fi
if [ ! -f "$MOK_KEY" ] || [ ! -f "$MOK_PUB" ]; then
exit 0
fi
COUNT=0
if [ -n "$SOURCE" ] && [ -n "$DEST" ]; then
# Overlay mode: find unsigned in source, copy to dest, sign in dest
rm -rf "${DEST}"/lib/modules
for ko in $(find "${SOURCE}"/lib/modules -name '*.ko' 2>/dev/null); do
if ! modinfo "$ko" 2>/dev/null | grep -q '^sig_id:'; then
rel_path="${ko#${SOURCE}}"
mkdir -p "${DEST}$(dirname "$rel_path")"
cp "$ko" "${DEST}${rel_path}"
"$SIGN_FILE" sha256 "$MOK_KEY" "$MOK_PUB" "${DEST}${rel_path}"
COUNT=$((COUNT + 1))
fi
done
else
# In-place mode: sign modules directly
for ko in $(find /lib/modules -name '*.ko' 2>/dev/null); do
if ! modinfo "$ko" 2>/dev/null | grep -q '^sig_id:'; then
"$SIGN_FILE" sha256 "$MOK_KEY" "$MOK_PUB" "$ko"
COUNT=$((COUNT + 1))
fi
done
fi
if [ $COUNT -gt 0 ]; then
echo "[sign-modules] Signed $COUNT unsigned kernel modules"
fi


@@ -104,6 +104,7 @@ local_mount_root()
 -olowerdir=/startos/config/overlay:/lower,upperdir=/upper/data,workdir=/upper/work \
 overlay ${rootmnt}
+mkdir -m 750 -p ${rootmnt}/media/startos
 mkdir -p ${rootmnt}/media/startos/config
 mount --bind /startos/config ${rootmnt}/media/startos/config
 mkdir -p ${rootmnt}/media/startos/images


@@ -24,7 +24,7 @@ fi
 unsquashfs -f -d / $1 boot
-umount -R /media/startos/next 2> /dev/null || true
+umount -l /media/startos/next 2> /dev/null || true
 umount /media/startos/upper 2> /dev/null || true
 umount /media/startos/lower 2> /dev/null || true
@@ -45,18 +45,13 @@ mkdir -p /media/startos/next/media/startos/root
 mount --bind /run /media/startos/next/run
 mount --bind /tmp /media/startos/next/tmp
 mount --bind /dev /media/startos/next/dev
-mount --bind /sys /media/startos/next/sys
-mount --bind /proc /media/startos/next/proc
-mount --bind /boot /media/startos/next/boot
+mount -t sysfs sysfs /media/startos/next/sys
+mount -t proc proc /media/startos/next/proc
+mount --rbind /boot /media/startos/next/boot
 mount --bind /media/startos/root /media/startos/next/media/startos/root
-if mountpoint /boot/efi 2>&1 > /dev/null; then
-mkdir -p /media/startos/next/boot/efi
-mount --bind /boot/efi /media/startos/next/boot/efi
-fi
 if mountpoint /sys/firmware/efi/efivars 2>&1 > /dev/null; then
-mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
+mount -t efivarfs efivarfs /media/startos/next/sys/firmware/efi/efivars
 fi
 chroot /media/startos/next bash -e << "EOF"
@@ -68,24 +63,18 @@ fi
EOF EOF
# Promote the USB installer boot entry back to first in EFI boot order. # Sign unsigned kernel modules for Secure Boot
# The entry number was saved during initial OS install. SIGN_FILE="$(ls -1 /media/startos/next/usr/lib/linux-kbuild-*/scripts/sign-file 2>/dev/null | head -1)"
if [ -d /sys/firmware/efi ] && [ -f /media/startos/config/efi-installer-entry ]; then /media/startos/next/usr/lib/startos/scripts/sign-unsigned-modules \
USB_ENTRY=$(cat /media/startos/config/efi-installer-entry) --source /media/startos/lower \
if [ -n "$USB_ENTRY" ]; then --dest /media/startos/config/overlay \
CURRENT_ORDER=$(efibootmgr | grep BootOrder | sed 's/BootOrder: //') --sign-file "$SIGN_FILE" \
OTHER_ENTRIES=$(echo "$CURRENT_ORDER" | tr ',' '\n' | grep -v "$USB_ENTRY" | tr '\n' ',' | sed 's/,$//') --mok-key /media/startos/config/overlay/var/lib/dkms/mok.key \
if [ -n "$OTHER_ENTRIES" ]; then --mok-pub /media/startos/config/overlay/var/lib/dkms/mok.pub
efibootmgr -o "$USB_ENTRY,$OTHER_ENTRIES"
else
efibootmgr -o "$USB_ENTRY"
fi
fi
fi
sync sync
umount -Rl /media/startos/next umount -l /media/startos/next
umount /media/startos/upper umount /media/startos/upper
umount /media/startos/lower umount /media/startos/lower
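The EFI BootOrder promotion logic removed in the hunk above rebuilds the boot-order list with a plain `tr`/`grep`/`sed` pipeline. As a standalone sketch (the entry IDs here are hypothetical; `efibootmgr` normally supplies them), the pipeline moves one entry to the front of a comma-separated list:

```shell
#!/bin/bash
# Hypothetical boot entry values for illustration.
USB_ENTRY="0003"
CURRENT_ORDER="0001,0003,0002"

# Split on commas, drop the USB entry, re-join, strip the trailing comma.
OTHER_ENTRIES=$(echo "$CURRENT_ORDER" | tr ',' '\n' | grep -v "$USB_ENTRY" | tr '\n' ',' | sed 's/,$//')

# Prepend the USB entry so it would boot first.
echo "$USB_ENTRY,$OTHER_ENTRIES"   # 0003,0001,0002
```

Note `grep -v` matches by substring, which is fine for fixed-width entry IDs but would also drop any entry containing the USB entry's ID.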


@@ -198,20 +198,22 @@ cmd_sign() {
   enter_release_dir
   resolve_gh_user
+  mkdir -p signatures
   for file in $(release_files); do
-    gpg -u $START9_GPG_KEY --detach-sign --armor -o "${file}.start9.asc" "$file"
+    gpg -u $START9_GPG_KEY --detach-sign --armor -o "signatures/${file}.start9.asc" "$file"
     if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
-      gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "${file}.${GH_USER}.asc" "$file"
+      gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "signatures/${file}.${GH_USER}.asc" "$file"
     fi
   done
-  gpg --export -a $START9_GPG_KEY > start9.key.asc
+  gpg --export -a $START9_GPG_KEY > signatures/start9.key.asc
   if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
-    gpg --export -a "$GH_GPG_KEY" > "${GH_USER}.key.asc"
+    gpg --export -a "$GH_GPG_KEY" > "signatures/${GH_USER}.key.asc"
   else
     >&2 echo 'Warning: could not determine GitHub user or GPG signing key, skipping personal signature'
   fi
-  tar -czvf signatures.tar.gz *.asc
+  tar -czvf signatures.tar.gz -C signatures .
   gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
 }
@@ -229,17 +231,18 @@ cmd_cosign() {
   echo "Downloading existing signatures..."
   gh release download -R $REPO "v$VERSION" -p "signatures.tar.gz" -D "$(pwd)" --clobber
-  tar -xzf signatures.tar.gz
+  mkdir -p signatures
+  tar -xzf signatures.tar.gz -C signatures
   echo "Adding personal signatures as $GH_USER..."
   for file in $(release_files); do
-    gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "${file}.${GH_USER}.asc" "$file"
+    gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "signatures/${file}.${GH_USER}.asc" "$file"
   done
-  gpg --export -a "$GH_GPG_KEY" > "${GH_USER}.key.asc"
+  gpg --export -a "$GH_GPG_KEY" > "signatures/${GH_USER}.key.asc"
   echo "Re-packing signatures..."
-  tar -czvf signatures.tar.gz *.asc
+  tar -czvf signatures.tar.gz -C signatures .
   gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
   echo "Done. Personal signatures for $GH_USER added to v$VERSION."
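The switch from `tar -czvf signatures.tar.gz *.asc` to `tar -czvf signatures.tar.gz -C signatures .` changes how archive members are stored: `-C` changes into the directory before archiving, so members carry no `signatures/` prefix and the archive unpacks flat into whatever `-C` target the extractor chooses. A minimal sketch (directory and file names hypothetical):

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir -p "$tmp/signatures"
echo sig > "$tmp/signatures/file.start9.asc"

# -C enters signatures/ first, so the member is stored as ./file.start9.asc
tar -czf "$tmp/signatures.tar.gz" -C "$tmp/signatures" .

# List members: no signatures/ prefix appears
tar -tzf "$tmp/signatures.tar.gz"

rm -r "$tmp"
```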


@@ -5,7 +5,7 @@ OnFailure=container-runtime-failure.service
 [Service]
 Type=simple
 Environment=RUST_LOG=startos=debug
-ExecStart=/usr/bin/node --experimental-detect-module --trace-warnings --unhandled-rejections=warn /usr/lib/startos/init/index.js
+ExecStart=/usr/bin/node --experimental-detect-module --trace-warnings /usr/lib/startos/init/index.js
 Restart=no
 [Install]


@@ -37,7 +37,7 @@
     },
     "../sdk/dist": {
       "name": "@start9labs/start-sdk",
-      "version": "0.4.0-beta.58",
+      "version": "0.4.0-beta.61",
       "license": "MIT",
       "dependencies": {
         "@iarna/toml": "^3.0.0",


@@ -187,9 +187,10 @@ export function makeEffects(context: EffectContext): Effects {
     getServiceManifest(
       ...[options]: Parameters<T.Effects["getServiceManifest"]>
     ) {
-      return rpcRound("get-service-manifest", options) as ReturnType<
-        T.Effects["getServiceManifest"]
-      >
+      return rpcRound("get-service-manifest", {
+        ...options,
+        callback: context.callbacks?.addCallback(options.callback) || null,
+      }) as ReturnType<T.Effects["getServiceManifest"]>
     },
     subcontainer: {
       createFs(options: { imageId: string; name: string }) {
@@ -211,9 +212,10 @@ export function makeEffects(context: EffectContext): Effects {
         >
       }) as Effects["exportServiceInterface"],
     getContainerIp(...[options]: Parameters<T.Effects["getContainerIp"]>) {
-      return rpcRound("get-container-ip", options) as ReturnType<
-        T.Effects["getContainerIp"]
-      >
+      return rpcRound("get-container-ip", {
+        ...options,
+        callback: context.callbacks?.addCallback(options.callback) || null,
+      }) as ReturnType<T.Effects["getContainerIp"]>
     },
     getOsIp(...[]: Parameters<T.Effects["getOsIp"]>) {
       return rpcRound("get-os-ip", {}) as ReturnType<T.Effects["getOsIp"]>
@@ -244,9 +246,10 @@ export function makeEffects(context: EffectContext): Effects {
       >
     },
     getSslCertificate(options: Parameters<T.Effects["getSslCertificate"]>[0]) {
-      return rpcRound("get-ssl-certificate", options) as ReturnType<
-        T.Effects["getSslCertificate"]
-      >
+      return rpcRound("get-ssl-certificate", {
+        ...options,
+        callback: context.callbacks?.addCallback(options.callback) || null,
+      }) as ReturnType<T.Effects["getSslCertificate"]>
     },
     getSslKey(options: Parameters<T.Effects["getSslKey"]>[0]) {
       return rpcRound("get-ssl-key", options) as ReturnType<
@@ -308,7 +311,10 @@ export function makeEffects(context: EffectContext): Effects {
     },
     getStatus(...[o]: Parameters<T.Effects["getStatus"]>) {
-      return rpcRound("get-status", o) as ReturnType<T.Effects["getStatus"]>
+      return rpcRound("get-status", {
+        ...o,
+        callback: context.callbacks?.addCallback(o.callback) || null,
+      }) as ReturnType<T.Effects["getStatus"]>
     },
     /// DEPRECATED
     setMainStatus(o: { status: "running" | "stopped" }): Promise<null> {


@@ -298,13 +298,10 @@
       }
       case "stop": {
         const { id } = stopType.parse(input)
+        this.callbacks?.removeChild("main")
         return handleRpc(
           id,
-          this.system.stop().then((result) => {
-            this.callbacks?.removeChild("main")
-            return { result }
-          }),
+          this.system.stop().then((result) => ({ result })),
         )
       }
       case "exit": {


@@ -42,6 +42,74 @@ function todo(): never {
   throw new Error("Not implemented")
 }
+function getStatus(
+  effects: Effects,
+  options: Omit<Parameters<Effects["getStatus"]>[0], "callback"> = {},
+) {
+  async function* watch(abort?: AbortSignal) {
+    const resolveCell = { resolve: () => {} }
+    effects.onLeaveContext(() => {
+      resolveCell.resolve()
+    })
+    abort?.addEventListener("abort", () => resolveCell.resolve())
+    while (effects.isInContext && !abort?.aborted) {
+      let callback: () => void = () => {}
+      const waitForNext = new Promise<void>((resolve) => {
+        callback = resolve
+        resolveCell.resolve = resolve
+      })
+      yield await effects.getStatus({ ...options, callback })
+      await waitForNext
+    }
+  }
+  return {
+    const: () =>
+      effects.getStatus({
+        ...options,
+        callback:
+          effects.constRetry &&
+          (() => effects.constRetry && effects.constRetry()),
+      }),
+    once: () => effects.getStatus(options),
+    watch: (abort?: AbortSignal) => {
+      const ctrl = new AbortController()
+      abort?.addEventListener("abort", () => ctrl.abort())
+      return watch(ctrl.signal)
+    },
+    onChange: (
+      callback: (
+        value: T.StatusInfo | null,
+        error?: Error,
+      ) => { cancel: boolean } | Promise<{ cancel: boolean }>,
+    ) => {
+      ;(async () => {
+        const ctrl = new AbortController()
+        for await (const value of watch(ctrl.signal)) {
+          try {
+            const res = await callback(value)
+            if (res.cancel) {
+              ctrl.abort()
+              break
+            }
+          } catch (e) {
+            console.error(
+              "callback function threw an error @ getStatus.onChange",
+              e,
+            )
+          }
+        }
+      })()
+        .catch((e) => callback(null, e as Error))
+        .catch((e) =>
+          console.error(
+            "callback function threw an error @ getStatus.onChange",
+            e,
+          ),
+        )
+    },
+  }
+}
 /**
  * Local type for procedure values from the manifest.
  * The manifest's zod schemas use ZodTypeAny casts that produce `unknown` in zod v4.
@@ -1046,16 +1114,26 @@ export class SystemForEmbassy implements System {
     timeoutMs: number | null,
   ): Promise<void> {
     // TODO: docker
-    await effects.mount({
-      location: `/media/embassy/${id}`,
-      target: {
-        packageId: id,
-        volumeId: "embassy",
-        subpath: null,
-        readonly: true,
-        idmap: [],
-      },
-    })
+    const status = await getStatus(effects, { packageId: id }).const()
+    if (!status) return
+    try {
+      await effects.mount({
+        location: `/media/embassy/${id}`,
+        target: {
+          packageId: id,
+          volumeId: "embassy",
+          subpath: null,
+          readonly: true,
+          idmap: [],
+        },
+      })
+    } catch (e) {
+      console.error(
+        `Failed to mount dependency volume for ${id}, skipping autoconfig:`,
+        e,
+      )
+      return
+    }
     configFile
       .withPath(`/media/embassy/${id}/config.json`)
       .read()
@@ -1204,6 +1282,11 @@ async function updateConfig(
     if (specValue.target === "config") {
       const jp = require("jsonpath")
       const depId = specValue["package-id"]
+      const depStatus = await getStatus(effects, { packageId: depId }).const()
+      if (!depStatus) {
+        mutConfigValue[key] = null
+        continue
+      }
       await effects.mount({
         location: `/media/embassy/${depId}`,
         target: {


@@ -10,6 +10,11 @@ const matchJsProcedure = z.object({
 const matchProcedure = z.union([matchDockerProcedure, matchJsProcedure])
 export type Procedure = z.infer<typeof matchProcedure>
+const healthCheckFields = {
+  name: z.string(),
+  "success-message": z.string().nullable().optional(),
+}
 const matchAction = z.object({
   name: z.string(),
   description: z.string(),
@@ -32,13 +37,10 @@ export const matchManifest = z.object({
     .optional(),
   "health-checks": z.record(
     z.string(),
-    z.intersection(
-      matchProcedure,
-      z.object({
-        name: z.string(),
-        "success-message": z.string().nullable().optional(),
-      }),
-    ),
+    z.union([
+      matchDockerProcedure.extend(healthCheckFields),
+      matchJsProcedure.extend(healthCheckFields),
+    ]),
   ),
   config: z
     .object({


@@ -71,7 +71,7 @@ export class SystemForStartOs implements System {
     this.starting = true
     effects.constRetry = utils.once(() => {
       console.debug(".const() triggered")
-      effects.restart()
+      if (effects.isInContext) effects.restart()
     })
     let mainOnTerm: () => Promise<void> | undefined
     const daemons = await (


@@ -22,7 +22,7 @@ cd sdk && make baseDist dist # Rebuild SDK after ts-bindings
 - Always run `cargo check -p start-os` after modifying Rust code
 - When adding RPC endpoints, follow the patterns in [rpc-toolkit.md](rpc-toolkit.md)
 - When modifying `#[ts(export)]` types, regenerate bindings and rebuild the SDK (see [ARCHITECTURE.md](../ARCHITECTURE.md#build-pipeline))
-- When adding i18n keys, add all 5 locales in `core/locales/i18n.yaml` (see [i18n-patterns.md](i18n-patterns.md))
+- **i18n is mandatory** — any user-facing string must go in `core/locales/i18n.yaml` with all 5 locales (`en_US`, `de_DE`, `es_ES`, `fr_FR`, `pl_PL`). This includes CLI subcommand descriptions (`about.<name>`), CLI arg help (`help.arg.<name>`), error messages (`error.<name>`), notifications, setup messages, and any other text shown to users. Entries are alphabetically ordered within their section. See [i18n-patterns.md](i18n-patterns.md)
 - When using DB watches, follow the `TypedDbWatch<T>` patterns in [patchdb.md](patchdb.md)
 - **Always use `.invoke(ErrorKind::...)` instead of `.status()` when running CLI commands** via `tokio::process::Command`. The `Invoke` trait (from `crate::util::Invoke`) captures stdout/stderr and checks exit codes properly. Using `.status()` leaks stderr directly to system logs, creating noise. For check-then-act patterns (e.g. `iptables -C`), use `.invoke(...).await.is_ok()` / `.is_err()` instead of `.status().await.map_or(false, |s| s.success())`.
 - Always use file utils in util::io instead of tokio::fs when available

core/Cargo.lock (generated)

@@ -3376,6 +3376,15 @@
  "serde_json",
 ]
+[[package]]
+name = "keccak"
+version = "0.1.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cb26cec98cce3a3d96cbb7bced3c4b16e3d13f27ec56dbd62cbc8f39cfb9d653"
+dependencies = [
+ "cpufeatures",
+]
 [[package]]
 name = "kv"
 version = "0.24.0"
@@ -4355,7 +4364,7 @@
  "nix 0.30.1",
  "patch-db-macro",
  "serde",
- "serde_cbor 0.11.1",
+ "serde_cbor_2",
  "thiserror 2.0.18",
  "tokio",
  "tracing",
@@ -5377,7 +5386,7 @@
  "pin-project",
  "reqwest",
  "serde",
- "serde_cbor 0.11.2",
+ "serde_cbor",
  "serde_json",
  "thiserror 2.0.18",
  "tokio",
@@ -5785,19 +5794,20 @@
 [[package]]
 name = "serde_cbor"
-version = "0.11.1"
+version = "0.11.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2bef2ebfde456fb76bbcf9f59315333decc4fda0b2b44b420243c11e0f5ec1f5"
 dependencies = [
  "half 1.8.3",
  "serde",
 ]
 [[package]]
-name = "serde_cbor"
-version = "0.11.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2bef2ebfde456fb76bbcf9f59315333decc4fda0b2b44b420243c11e0f5ec1f5"
+name = "serde_cbor_2"
+version = "0.13.0"
+source = "git+https://github.com/dr-bonez/cbor.git#2ce7fe5a5ca5700aa095668b5ba67154b7f213a4"
 dependencies = [
- "half 1.8.3",
+ "half 2.7.1",
  "serde",
 ]
@@ -5984,6 +5994,16 @@
  "digest 0.10.7",
 ]
+[[package]]
+name = "sha3"
+version = "0.10.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "75872d278a8f37ef87fa0ddbda7802605cb18344497949862c0d4dcb291eba60"
+dependencies = [
+ "digest 0.10.7",
+ "keccak",
+]
 [[package]]
 name = "sharded-slab"
 version = "0.1.7"
@@ -6414,7 +6434,7 @@
 [[package]]
 name = "start-os"
-version = "0.4.0-alpha.20"
+version = "0.4.0-alpha.21"
 dependencies = [
  "aes",
  "async-acme",
@@ -6518,6 +6538,7 @@
  "serde_yml",
  "sha-crypt",
  "sha2 0.10.9",
+ "sha3",
  "signal-hook",
  "socket2 0.6.2",
  "socks5-impl",


@@ -15,7 +15,7 @@ license = "MIT"
 name = "start-os"
 readme = "README.md"
 repository = "https://github.com/Start9Labs/start-os"
-version = "0.4.0-alpha.20" # VERSION_BUMP
+version = "0.4.0-alpha.21" # VERSION_BUMP
 [lib]
 name = "startos"
@@ -170,9 +170,7 @@ once_cell = "1.19.0"
 openssh-keys = "0.6.2"
 openssl = { version = "0.10.57", features = ["vendored"] }
 p256 = { version = "0.13.2", features = ["pem"] }
-patch-db = { version = "*", path = "../patch-db/patch-db", features = [
-    "trace",
-] }
+patch-db = { version = "*", path = "../patch-db/core", features = ["trace"] }
 pbkdf2 = "0.12.2"
 pin-project = "1.1.3"
 pkcs8 = { version = "0.10.2", features = ["std"] }
@@ -202,6 +200,7 @@ serde_toml = { package = "toml", version = "0.9.9+spec-1.0.0" }
 serde_yaml = { package = "serde_yml", version = "0.0.12" }
 sha-crypt = "0.5.0"
 sha2 = "0.10.2"
+sha3 = "0.10"
 signal-hook = "0.3.17"
 socket2 = { version = "0.6.0", features = ["all"] }
 socks5-impl = { version = "0.7.2", features = ["client", "server"] }


@@ -67,6 +67,10 @@ if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then
   RUSTFLAGS="--cfg tokio_unstable"
 fi
+if [[ "${ENVIRONMENT:-}" =~ (^|-)unstable($|-) ]]; then
+  RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
+fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-cli --target=$TARGET
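The new `(^|-)unstable($|-)` test (like the existing `console` test above it) treats `ENVIRONMENT` as a dash-separated list: the regex matches `unstable` only as a whole dash-delimited segment, never as a substring of another word. A bash sketch of the matching behavior (example values hypothetical):

```shell
#!/bin/bash
matches_unstable() {
  [[ "$1" =~ (^|-)unstable($|-) ]]
}

matches_unstable "unstable" && echo "match: whole value"
matches_unstable "dev-unstable-console" && echo "match: middle segment"
matches_unstable "veryunstable" || echo "no match: substring only"
```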


@@ -38,6 +38,10 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
   RUSTFLAGS="--cfg tokio_unstable"
 fi
+if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+  RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
+fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin registrybox --target=$RUST_ARCH-unknown-linux-musl


@@ -38,6 +38,10 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
   RUSTFLAGS="--cfg tokio_unstable"
 fi
+if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+  RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
+fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-container --target=$RUST_ARCH-unknown-linux-musl


@@ -38,6 +38,10 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
   RUSTFLAGS="--cfg tokio_unstable"
 fi
+if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+  RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
+fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin startbox --target=$RUST_ARCH-unknown-linux-musl


@@ -38,6 +38,10 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
   RUSTFLAGS="--cfg tokio_unstable"
 fi
+if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+  RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
+fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin tunnelbox --target=$RUST_ARCH-unknown-linux-musl


@@ -857,6 +857,13 @@ error.set-sys-info:
   fr_FR: "Erreur de Définition des Infos Système"
   pl_PL: "Błąd Ustawiania Informacji o Systemie"
+error.bios:
+  en_US: "BIOS/UEFI Error"
+  de_DE: "BIOS/UEFI-Fehler"
+  es_ES: "Error de BIOS/UEFI"
+  fr_FR: "Erreur BIOS/UEFI"
+  pl_PL: "Błąd BIOS/UEFI"
 # disk/main.rs
 disk.main.disk-not-found:
   en_US: "StartOS disk not found."
@@ -1248,6 +1255,13 @@ backup.bulk.leaked-reference:
   fr_FR: "référence fuitée vers BackupMountGuard"
   pl_PL: "wyciekła referencja do BackupMountGuard"
+backup.bulk.service-not-ready:
+  en_US: "Cannot create a backup of a service that is still initializing or in an error state"
+  de_DE: "Es kann keine Sicherung eines Dienstes erstellt werden, der noch initialisiert wird oder sich im Fehlerzustand befindet"
+  es_ES: "No se puede crear una copia de seguridad de un servicio que aún se está inicializando o está en estado de error"
+  fr_FR: "Impossible de créer une sauvegarde d'un service encore en cours d'initialisation ou en état d'erreur"
+  pl_PL: "Nie można utworzyć kopii zapasowej usługi, która jest jeszcze inicjalizowana lub znajduje się w stanie błędu"
 # backup/restore.rs
 backup.restore.package-error:
   en_US: "Error restoring package %{id}: %{error}"
@@ -1372,6 +1386,21 @@ net.tor.client-error:
   fr_FR: "Erreur du client Tor : %{error}"
   pl_PL: "Błąd klienta Tor: %{error}"
+# net/tunnel.rs
+net.tunnel.timeout-waiting-for-add:
+  en_US: "timed out waiting for gateway %{gateway} to appear in database"
+  de_DE: "Zeitüberschreitung beim Warten auf das Erscheinen von Gateway %{gateway} in der Datenbank"
+  es_ES: "se agotó el tiempo esperando que la puerta de enlace %{gateway} aparezca en la base de datos"
+  fr_FR: "délai d'attente dépassé pour l'apparition de la passerelle %{gateway} dans la base de données"
+  pl_PL: "upłynął limit czasu oczekiwania na pojawienie się bramy %{gateway} w bazie danych"
+net.tunnel.timeout-waiting-for-remove:
+  en_US: "timed out waiting for gateway %{gateway} to be removed from database"
+  de_DE: "Zeitüberschreitung beim Warten auf das Entfernen von Gateway %{gateway} aus der Datenbank"
+  es_ES: "se agotó el tiempo esperando que la puerta de enlace %{gateway} sea eliminada de la base de datos"
+  fr_FR: "délai d'attente dépassé pour la suppression de la passerelle %{gateway} de la base de données"
+  pl_PL: "upłynął limit czasu oczekiwania na usunięcie bramy %{gateway} z bazy danych"
 # net/wifi.rs
 net.wifi.ssid-no-special-characters:
   en_US: "SSID may not have special characters"
@@ -1585,6 +1614,13 @@ net.gateway.cannot-delete-without-connection:
   fr_FR: "Impossible de supprimer l'appareil sans connexion active"
   pl_PL: "Nie można usunąć urządzenia bez aktywnego połączenia"
+net.gateway.no-configured-echoip-urls:
+  en_US: "No configured echoip URLs"
+  de_DE: "Keine konfigurierten EchoIP-URLs"
+  es_ES: "No hay URLs de echoip configuradas"
+  fr_FR: "Aucune URL echoip configurée"
+  pl_PL: "Brak skonfigurowanych adresów URL echoip"
 # net/dns.rs
 net.dns.timeout-updating-catalog:
   en_US: "timed out waiting to update dns catalog"
@@ -2746,6 +2782,13 @@ help.arg.download-directory:
   fr_FR: "Chemin du répertoire de téléchargement"
   pl_PL: "Ścieżka katalogu do pobrania"
+help.arg.echoip-urls:
+  en_US: "Echo IP service URLs for external IP detection"
+  de_DE: "Echo-IP-Dienst-URLs zur externen IP-Erkennung"
+  es_ES: "URLs del servicio Echo IP para detección de IP externa"
+  fr_FR: "URLs du service Echo IP pour la détection d'IP externe"
+  pl_PL: "Adresy URL usługi Echo IP do wykrywania zewnętrznego IP"
 help.arg.emulate-missing-arch:
   en_US: "Emulate missing architecture using this one"
   de_DE: "Fehlende Architektur mit dieser emulieren"
@@ -2914,6 +2957,13 @@ help.arg.log-limit:
   fr_FR: "Nombre maximum d'entrées de journal"
   pl_PL: "Maksymalna liczba wpisów logu"
+help.arg.merge:
+  en_US: "Merge with existing version range instead of replacing"
+  de_DE: "Mit vorhandenem Versionsbereich zusammenführen statt ersetzen"
+  es_ES: "Combinar con el rango de versiones existente en lugar de reemplazar"
+  fr_FR: "Fusionner avec la plage de versions existante au lieu de remplacer"
+  pl_PL: "Połącz z istniejącym zakresem wersji zamiast zastępować"
 help.arg.mirror-url:
   en_US: "URL of the mirror"
   de_DE: "URL des Spiegels"
@@ -5204,12 +5254,12 @@ about.reset-user-interface-password:
   fr_FR: "Réinitialiser le mot de passe de l'interface utilisateur"
   pl_PL: "Zresetuj hasło interfejsu użytkownika"
-about.reset-webserver:
-  en_US: "Reset the webserver"
-  de_DE: "Den Webserver zurücksetzen"
-  es_ES: "Restablecer el servidor web"
-  fr_FR: "initialiser le serveur web"
-  pl_PL: "Zresetuj serwer internetowy"
+about.uninitialize-webserver:
+  en_US: "Uninitialize the webserver"
+  de_DE: "Den Webserver deinitialisieren"
+  es_ES: "Desinicializar el servidor web"
+  fr_FR: "Désinitialiser le serveur web"
+  pl_PL: "Zdezinicjalizuj serwer internetowy"
 about.restart-server:
   en_US: "Restart the server"
@@ -5246,6 +5296,13 @@ about.set-country:
   fr_FR: "Définir le pays"
   pl_PL: "Ustaw kraj"
+about.set-echoip-urls:
+  en_US: "Set the Echo IP service URLs"
+  de_DE: "Die Echo-IP-Dienst-URLs festlegen"
+  es_ES: "Establecer las URLs del servicio Echo IP"
+  fr_FR: "Définir les URLs du service Echo IP"
+  pl_PL: "Ustaw adresy URL usługi Echo IP"
 about.set-hostname:
   en_US: "Set the server hostname"
   de_DE: "Den Server-Hostnamen festlegen"


@@ -300,6 +300,15 @@ async fn perform_backup(
                 error: backup_result,
             },
         );
+    } else {
+        backup_report.insert(
+            id.clone(),
+            PackageBackupReport {
+                error: Some(
+                    t!("backup.bulk.service-not-ready").to_string(),
+                ),
+            },
+        );
     }
 }
@@ -323,9 +332,7 @@ async fn perform_backup(
 os_backup_file.save().await?;
 let luks_folder_old = backup_guard.path().join("luks.old");
-if tokio::fs::metadata(&luks_folder_old).await.is_ok() {
-    tokio::fs::remove_dir_all(&luks_folder_old).await?;
-}
+crate::util::io::delete_dir(&luks_folder_old).await?;
 let luks_folder_bak = backup_guard.path().join("luks");
 if tokio::fs::metadata(&luks_folder_bak).await.is_ok() {
     tokio::fs::rename(&luks_folder_bak, &luks_folder_old).await?;


@@ -10,6 +10,7 @@ use tracing::instrument;
 use ts_rs::TS;
 use super::target::BackupTargetId;
+use crate::PackageId;
 use crate::backup::os::OsBackup;
 use crate::context::setup::SetupResult;
 use crate::context::{RpcContext, SetupContext};
@@ -26,7 +27,6 @@ use crate::service::service_map::DownloadInstallFuture;
 use crate::setup::SetupExecuteProgress;
 use crate::system::{save_language, sync_kiosk};
 use crate::util::serde::{IoFormat, Pem};
-use crate::{PLATFORM, PackageId};
 #[derive(Deserialize, Serialize, Parser, TS)]
 #[serde(rename_all = "camelCase")]
@@ -90,7 +90,7 @@ pub async fn recover_full_server(
     recovery_source: TmpMountGuard,
     server_id: &str,
     recovery_password: &str,
-    kiosk: Option<bool>,
+    kiosk: bool,
     hostname: Option<ServerHostnameInfo>,
     SetupExecuteProgress {
         init_phases,
@@ -123,7 +123,6 @@ pub async fn recover_full_server(
         os_backup.account.hostname = h;
     }
-    let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
     sync_kiosk(kiosk).await?;
     let language = ctx.language.peek(|a| a.clone());


@@ -7,10 +7,6 @@ use crate::service::cli::{ContainerCliContext, ContainerClientConfig};
use crate::util::logger::LOGGER;
use crate::version::{Current, VersionT};
-lazy_static::lazy_static! {
-    static ref VERSION_STRING: String = Current::default().semver().to_string();
-}
pub fn main(args: impl IntoIterator<Item = OsString>) {
    LOGGER.enable();
    if let Err(e) = CliApp::new(
@@ -18,6 +14,10 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
        crate::service::effects::handler(),
    )
    .mutate_command(super::translate_cli)
+    .mutate_command(|cmd| {
+        cmd.name("start-container")
+            .version(Current::default().semver().to_string())
+    })
    .run(args)
    {
        match e.data {


@@ -149,6 +149,11 @@ impl MultiExecutable {
    }
    pub fn execute(&self) {
+        #[cfg(feature = "backtrace-on-stack-overflow")]
+        unsafe {
+            backtrace_on_stack_overflow::enable()
+        };
        set_locale_from_env();
        let mut popped = Vec::with_capacity(2);


@@ -8,6 +8,7 @@ use tokio::signal::unix::signal;
use tracing::instrument;
use crate::context::CliContext;
+use crate::version::{Current, VersionT};
use crate::context::config::ClientConfig;
use crate::net::web_server::{Acceptor, WebServer};
use crate::prelude::*;
@@ -101,6 +102,10 @@ pub fn cli(args: impl IntoIterator<Item = OsString>) {
        crate::registry::registry_api(),
    )
    .mutate_command(super::translate_cli)
+    .mutate_command(|cmd| {
+        cmd.name("start-registry")
+            .version(Current::default().semver().to_string())
+    })
    .run(args)
    {
        match e.data {


@@ -8,10 +8,6 @@ use crate::context::config::ClientConfig;
use crate::util::logger::LOGGER;
use crate::version::{Current, VersionT};
-lazy_static::lazy_static! {
-    static ref VERSION_STRING: String = Current::default().semver().to_string();
-}
pub fn main(args: impl IntoIterator<Item = OsString>) {
    LOGGER.enable();
@@ -20,6 +16,10 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
        crate::main_api(),
    )
    .mutate_command(super::translate_cli)
+    .mutate_command(|cmd| {
+        cmd.name("start-cli")
+            .version(Current::default().semver().to_string())
+    })
    .run(args)
    {
        match e.data {


@@ -190,7 +190,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
        }
    }
});
-rt.shutdown_timeout(Duration::from_secs(60));
+rt.shutdown_timeout(Duration::from_millis(100));
res
};


@@ -13,6 +13,7 @@ use tracing::instrument;
use visit_rs::Visit;
use crate::context::CliContext;
+use crate::version::{Current, VersionT};
use crate::context::config::ClientConfig;
use crate::net::tls::TlsListener;
use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
@@ -186,6 +187,10 @@ pub fn cli(args: impl IntoIterator<Item = OsString>) {
        crate::tunnel::api::tunnel_api(),
    )
    .mutate_command(super::translate_cli)
+    .mutate_command(|cmd| {
+        cmd.name("start-tunnel")
+            .version(Current::default().semver().to_string())
+    })
    .run(args)
    {
        match e.data {


@@ -9,7 +9,6 @@ use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use crate::MAIN_DATA;
-use crate::disk::OsPartitionInfo;
use crate::prelude::*;
use crate::util::serde::IoFormat;
use crate::version::VersionT;
@@ -120,8 +119,6 @@ impl ClientConfig {
pub struct ServerConfig {
    #[arg(short, long, help = "help.arg.config-file-path")]
    pub config: Option<PathBuf>,
-    #[arg(skip)]
-    pub os_partitions: Option<OsPartitionInfo>,
    #[arg(long, help = "help.arg.socks-listen-address")]
    pub socks_listen: Option<SocketAddr>,
    #[arg(long, help = "help.arg.revision-cache-size")]
@@ -138,7 +135,6 @@ impl ContextConfig for ServerConfig {
        self.config.take()
    }
    fn merge_with(&mut self, other: Self) {
-        self.os_partitions = self.os_partitions.take().or(other.os_partitions);
        self.socks_listen = self.socks_listen.take().or(other.socks_listen);
        self.revision_cache_size = self
            .revision_cache_size


@@ -39,7 +39,7 @@ impl DiagnosticContext {
            shutdown,
            disk_guid,
            error: Arc::new(error.into()),
-            rpc_continuations: RpcContinuations::new(),
+            rpc_continuations: RpcContinuations::new(None),
        })))
    }
}


@@ -32,7 +32,7 @@ impl InitContext {
            error: watch::channel(None).0,
            progress,
            shutdown,
-            rpc_continuations: RpcContinuations::new(),
+            rpc_continuations: RpcContinuations::new(None),
        })))
    }
}


@@ -62,8 +62,8 @@ pub struct RpcContextSeed {
    pub db: TypedPatchDb<Database>,
    pub sync_db: watch::Sender<u64>,
    pub account: SyncRwLock<AccountInfo>,
+    pub net_controller: Arc<NetController>,
    pub os_net_service: NetService,
-    pub net_controller: Arc<NetController>,
    pub s9pk_arch: Option<&'static str>,
    pub services: ServiceMap,
    pub cancellable_installs: SyncMutex<BTreeMap<PackageId, oneshot::Sender<()>>>,
@@ -327,12 +327,7 @@ impl RpcContext {
        let seed = Arc::new(RpcContextSeed {
            is_closed: AtomicBool::new(false),
-            os_partitions: config.os_partitions.clone().ok_or_else(|| {
-                Error::new(
-                    eyre!("{}", t!("context.rpc.os-partition-info-missing")),
-                    ErrorKind::Filesystem,
-                )
-            })?,
+            os_partitions: OsPartitionInfo::from_fstab().await?,
            wifi_interface: wifi_interface.clone(),
            ethernet_interface: find_eth_iface().await?,
            disk_guid,
@@ -351,10 +346,10 @@ impl RpcContext {
            services,
            cancellable_installs: SyncMutex::new(BTreeMap::new()),
            metrics_cache,
+            rpc_continuations: RpcContinuations::new(Some(shutdown.clone())),
            shutdown,
            lxc_manager: Arc::new(LxcManager::new()),
            open_authed_continuations: OpenAuthedContinuations::new(),
-            rpc_continuations: RpcContinuations::new(),
            wifi_manager: Arc::new(RwLock::new(wifi_interface.clone().map(|i| WpaCli::init(i)))),
            current_secret: Arc::new(
                Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).map_err(|e| {


@@ -85,7 +85,7 @@ impl SetupContext {
            result: OnceCell::new(),
            disk_guid: OnceCell::new(),
            shutdown,
-            rpc_continuations: RpcContinuations::new(),
+            rpc_continuations: RpcContinuations::new(None),
            install_rootfs: SyncMutex::new(None),
            language: SyncMutex::new(None),
            keyboard: SyncMutex::new(None),


@@ -31,7 +31,7 @@ pub struct Database {
impl Database {
    pub fn init(
        account: &AccountInfo,
-        kiosk: Option<bool>,
+        kiosk: bool,
        language: Option<InternedString>,
        keyboard: Option<KeyboardOptions>,
    ) -> Result<Self, Error> {


@@ -49,7 +49,7 @@ pub struct Public {
impl Public {
    pub fn init(
        account: &AccountInfo,
-        kiosk: Option<bool>,
+        kiosk: bool,
        language: Option<InternedString>,
        keyboard: Option<KeyboardOptions>,
    ) -> Result<Self, Error> {
@@ -146,10 +146,10 @@ impl Public {
                zram: true,
                governor: None,
                smtp: None,
-                ifconfig_url: default_ifconfig_url(),
+                echoip_urls: default_echoip_urls(),
                ram: 0,
                devices: Vec::new(),
-                kiosk,
+                kiosk: Some(kiosk).filter(|_| &*PLATFORM != "raspberrypi"),
                language,
                keyboard,
            },
@@ -168,8 +168,11 @@ fn get_platform() -> InternedString {
    (&*PLATFORM).into()
}
-pub fn default_ifconfig_url() -> Url {
-    "https://ifconfig.co".parse().unwrap()
+pub fn default_echoip_urls() -> Vec<Url> {
+    vec![
+        "https://ipconfig.io".parse().unwrap(),
+        "https://ifconfig.co".parse().unwrap(),
+    ]
}
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
@@ -206,9 +209,9 @@ pub struct ServerInfo {
    pub zram: bool,
    pub governor: Option<Governor>,
    pub smtp: Option<SmtpValue>,
-    #[serde(default = "default_ifconfig_url")]
-    #[ts(type = "string")]
-    pub ifconfig_url: Url,
+    #[serde(default = "default_echoip_urls")]
+    #[ts(type = "string[]")]
+    pub echoip_urls: Vec<Url>,
    #[ts(type = "number")]
    pub ram: u64,
    pub devices: Vec<LshwDevice>,
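The public model persists `kiosk` as `Option<bool>`, setting it to `None` on raspberrypi via `Some(kiosk).filter(|_| &*PLATFORM != "raspberrypi")` even though the API now takes a plain `bool`. A minimal sketch of that persistence rule (the `persisted_kiosk` helper is hypothetical; only the filter expression comes from the diff):

```rust
// Mirror of `Some(kiosk).filter(|_| &*PLATFORM != "raspberrypi")`:
// the flag is only persisted on platforms other than raspberrypi.
fn persisted_kiosk(kiosk: bool, platform: &str) -> Option<bool> {
    Some(kiosk).filter(|_| platform != "raspberrypi")
}

fn main() {
    assert_eq!(persisted_kiosk(true, "x86_64-nonfree"), Some(true));
    assert_eq!(persisted_kiosk(false, "x86_64-nonfree"), Some(false));
    assert_eq!(persisted_kiosk(true, "raspberrypi"), None);
}
```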


@@ -25,20 +25,28 @@ pub enum RepairStrategy {
    Preen,
    Aggressive,
}
/// Detects the filesystem type of a block device using `grub-probe`.
/// Returns e.g. `"ext2"` (for ext4), `"btrfs"`, etc.
pub async fn detect_filesystem(
logicalname: impl AsRef<Path> + std::fmt::Debug,
) -> Result<String, Error> {
Ok(String::from_utf8(
Command::new("grub-probe")
.arg("-d")
.arg(logicalname.as_ref())
.invoke(crate::ErrorKind::DiskManagement)
.await?,
)?
.trim()
.to_owned())
}
impl RepairStrategy {
    pub async fn fsck(
        &self,
        logicalname: impl AsRef<Path> + std::fmt::Debug,
    ) -> Result<RequiresReboot, Error> {
-        match &*String::from_utf8(
-            Command::new("grub-probe")
-                .arg("-d")
-                .arg(logicalname.as_ref())
-                .invoke(crate::ErrorKind::DiskManagement)
-                .await?,
-        )?
-        .trim()
-        {
+        match &*detect_filesystem(&logicalname).await? {
            "ext2" => self.e2fsck(logicalname).await,
            "btrfs" => self.btrfs_check(logicalname).await,
            fs => {
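As the `detect_filesystem` doc comment notes, grub-probe reports ext4 filesystems as `"ext2"` (its ext2 driver covers ext2/3/4), so the fsck dispatch keys on that string. A small sketch of the dispatch; the `Checker` enum and `checker_for` helper are illustrative, not the actual StartOS API:

```rust
// Sketch: map a grub-probe style filesystem string to the checker to run.
#[derive(Debug, PartialEq)]
enum Checker {
    E2fsck,     // ext2/3/4 — grub-probe reports all three as "ext2"
    BtrfsCheck, // btrfs
}

fn checker_for(fs: &str) -> Option<Checker> {
    match fs.trim() {
        "ext2" => Some(Checker::E2fsck),
        "btrfs" => Some(Checker::BtrfsCheck),
        _ => None, // unknown filesystem: caller decides how to error
    }
}

fn main() {
    assert_eq!(checker_for("ext2\n"), Some(Checker::E2fsck));
    assert_eq!(checker_for("btrfs"), Some(Checker::BtrfsCheck));
    assert_eq!(checker_for("vfat"), None);
}
```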


@@ -7,7 +7,7 @@ use rust_i18n::t;
use tokio::process::Command;
use tracing::instrument;
-use super::fsck::{RepairStrategy, RequiresReboot};
+use super::fsck::{RepairStrategy, RequiresReboot, detect_filesystem};
use super::util::pvscan;
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::{FileSystem, ReadWrite};
@@ -301,6 +301,37 @@ pub async fn mount_fs<P: AsRef<Path>>(
            .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
        blockdev_path = Path::new("/dev/mapper").join(&full_name);
    }
// Convert ext4 → btrfs on the package-data partition if needed
let fs_type = detect_filesystem(&blockdev_path).await?;
if fs_type == "ext2" {
tracing::info!("Running e2fsck before converting {name} from ext4 to btrfs");
Command::new("e2fsck")
.arg("-fy")
.arg(&blockdev_path)
.invoke(ErrorKind::DiskManagement)
.await?;
tracing::info!("Converting {name} from ext4 to btrfs");
Command::new("btrfs-convert")
.arg("--no-progress")
.arg(&blockdev_path)
.invoke(ErrorKind::DiskManagement)
.await?;
// Defragment after conversion for optimal performance
let tmp_mount = datadir.as_ref().join(format!("{name}.convert-tmp"));
tokio::fs::create_dir_all(&tmp_mount).await?;
BlockDev::new(&blockdev_path)
.mount(&tmp_mount, ReadWrite)
.await?;
Command::new("btrfs")
.args(["filesystem", "defragment", "-r"])
.arg(&tmp_mount)
.invoke(ErrorKind::DiskManagement)
.await?;
unmount(&tmp_mount, false).await?;
tokio::fs::remove_dir(&tmp_mount).await?;
}
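The conversion above runs three external commands in order: `e2fsck -fy` to verify the source filesystem, `btrfs-convert --no-progress` to convert in place, then a recursive `btrfs filesystem defragment -r` on a temporary mount. A sketch that builds that sequence as data (the `plan_conversion` helper is hypothetical; the command names and flags are the ones in the diff):

```rust
// Build the ext4 -> btrfs conversion steps as a command plan, mirroring
// the diff: fsck first, convert in place, then defragment on a tmp mount.
fn plan_conversion(blockdev: &str, tmp_mount: &str) -> Vec<Vec<String>> {
    let s = |v: &[&str]| v.iter().map(|x| x.to_string()).collect::<Vec<_>>();
    vec![
        s(&["e2fsck", "-fy", blockdev]),
        s(&["btrfs-convert", "--no-progress", blockdev]),
        // after mounting blockdev read-write at tmp_mount:
        s(&["btrfs", "filesystem", "defragment", "-r", tmp_mount]),
    ]
}

fn main() {
    let plan = plan_conversion("/dev/mapper/x_package-data", "/tmp/convert");
    assert_eq!(plan.len(), 3);
    assert_eq!(plan[0][0], "e2fsck");
    println!("{plan:?}");
}
```

Keeping the plan as data makes the step order easy to assert in tests before any real `Command` is spawned.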
    let reboot = repair.fsck(&blockdev_path).await?;
    if !guid.ends_with("_UNENC") {
if !guid.ends_with("_UNENC") { if !guid.ends_with("_UNENC") {
@@ -342,3 +373,99 @@ pub async fn mount_all_fs<P: AsRef<Path>>(
    reboot |= mount_fs(guid, &datadir, "package-data", repair, password).await?;
    Ok(reboot)
}
/// Temporarily activates a VG and opens LUKS to probe the `package-data`
/// filesystem type. Returns `None` if probing fails (e.g. LV doesn't exist).
#[instrument(skip_all)]
pub async fn probe_package_data_fs(guid: &str) -> Result<Option<String>, Error> {
// Import and activate the VG
match Command::new("vgimport")
.arg(guid)
.invoke(ErrorKind::DiskManagement)
.await
{
Ok(_) => {}
Err(e)
if format!("{}", e.source)
.lines()
.any(|l| l.trim() == format!("Volume group \"{}\" is not exported", guid)) =>
{
// Already imported, that's fine
}
Err(e) => {
tracing::warn!("Could not import VG {guid} for filesystem probe: {e}");
return Ok(None);
}
}
if let Err(e) = Command::new("vgchange")
.arg("-ay")
.arg(guid)
.invoke(ErrorKind::DiskManagement)
.await
{
tracing::warn!("Could not activate VG {guid} for filesystem probe: {e}");
return Ok(None);
}
let mut opened_luks = false;
let result = async {
let lv_path = Path::new("/dev").join(guid).join("package-data");
if tokio::fs::metadata(&lv_path).await.is_err() {
return Ok(None);
}
let blockdev_path = if !guid.ends_with("_UNENC") {
let full_name = format!("{guid}_package-data");
let password = DEFAULT_PASSWORD;
if let Some(parent) = Path::new(PASSWORD_PATH).parent() {
tokio::fs::create_dir_all(parent).await?;
}
tokio::fs::write(PASSWORD_PATH, password)
.await
.with_ctx(|_| (ErrorKind::Filesystem, PASSWORD_PATH))?;
Command::new("cryptsetup")
.arg("-q")
.arg("luksOpen")
.arg("--allow-discards")
.arg(format!("--key-file={PASSWORD_PATH}"))
.arg(format!("--keyfile-size={}", password.len()))
.arg(&lv_path)
.arg(&full_name)
.invoke(ErrorKind::DiskManagement)
.await?;
let _ = tokio::fs::remove_file(PASSWORD_PATH).await;
opened_luks = true;
PathBuf::from(format!("/dev/mapper/{full_name}"))
} else {
lv_path.clone()
};
detect_filesystem(&blockdev_path).await.map(Some)
}
.await;
// Always clean up: close LUKS, deactivate VG, export VG
if opened_luks {
let full_name = format!("{guid}_package-data");
Command::new("cryptsetup")
.arg("-q")
.arg("luksClose")
.arg(&full_name)
.invoke(ErrorKind::DiskManagement)
.await
.log_err();
}
Command::new("vgchange")
.arg("-an")
.arg(guid)
.invoke(ErrorKind::DiskManagement)
.await
.log_err();
Command::new("vgexport")
.arg(guid)
.invoke(ErrorKind::DiskManagement)
.await
.log_err();
result
}
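`probe_package_data_fs` must undo its side effects (close LUKS, deactivate and re-export the VG) on every exit path, which the code above does with an explicit cleanup section after the inner `async` block. A drop-guard is another common shape for this pattern; the `Cleanup` type and `probe` function below are a hypothetical sketch, not the StartOS implementation:

```rust
use std::cell::Cell;

// RAII guard: runs its closure when dropped, even on an early return.
struct Cleanup<F: FnMut()>(F);
impl<F: FnMut()> Drop for Cleanup<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

// Stand-in for the probe: `cleaned` models `vgchange -an` + `vgexport`.
fn probe(fail: bool, cleaned: &Cell<bool>) -> Result<&'static str, &'static str> {
    let _guard = Cleanup(|| cleaned.set(true));
    if fail {
        return Err("probe failed");
    }
    Ok("btrfs")
}

fn main() {
    let cleaned = Cell::new(false);
    assert!(probe(true, &cleaned).is_err());
    assert!(cleaned.get()); // cleanup ran despite the early return
}
```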


@@ -1,13 +1,17 @@
+use std::collections::BTreeMap;
use std::path::{Path, PathBuf};
use itertools::Itertools;
use lazy_format::lazy_format;
use rpc_toolkit::{CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
+use tokio::process::Command;
-use crate::Error;
+use crate::{Error, ErrorKind};
use crate::context::{CliContext, RpcContext};
use crate::disk::util::DiskInfo;
+use crate::prelude::*;
+use crate::util::Invoke;
use crate::util::serde::{HandlerExtSerde, WithIoFormat, display_serializable};
pub mod fsck;
@@ -21,27 +25,143 @@ pub const REPAIR_DISK_PATH: &str = "/media/startos/config/repair-disk";
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct OsPartitionInfo {
-    pub efi: Option<PathBuf>,
    pub bios: Option<PathBuf>,
    pub boot: PathBuf,
    pub root: PathBuf,
-    #[serde(skip)] // internal use only
+    #[serde(default)]
+    pub extra_boot: BTreeMap<String, PathBuf>,
+    #[serde(skip)]
    pub data: Option<PathBuf>,
}
impl OsPartitionInfo {
    pub fn contains(&self, logicalname: impl AsRef<Path>) -> bool {
-        self.efi
-            .as_ref()
-            .map(|p| p == logicalname.as_ref())
-            .unwrap_or(false)
-            || self
-                .bios
-                .as_ref()
-                .map(|p| p == logicalname.as_ref())
-                .unwrap_or(false)
-            || &*self.boot == logicalname.as_ref()
-            || &*self.root == logicalname.as_ref()
+        let p = logicalname.as_ref();
+        self.bios.as_deref() == Some(p)
+            || p == &*self.boot
+            || p == &*self.root
+            || self.extra_boot.values().any(|v| v == p)
    }
/// Build partition info by parsing /etc/fstab and resolving device specs,
/// then discovering the BIOS boot partition (which is never mounted).
pub async fn from_fstab() -> Result<Self, Error> {
let fstab = tokio::fs::read_to_string("/etc/fstab")
.await
.with_ctx(|_| (ErrorKind::Filesystem, "/etc/fstab"))?;
let mut boot = None;
let mut root = None;
let mut extra_boot = BTreeMap::new();
for line in fstab.lines() {
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
continue;
}
let mut fields = line.split_whitespace();
let Some(source) = fields.next() else {
continue;
};
let Some(target) = fields.next() else {
continue;
};
let dev = match resolve_fstab_source(source).await {
Ok(d) => d,
Err(e) => {
tracing::warn!("Failed to resolve fstab source {source}: {e}");
continue;
}
};
match target {
"/" => root = Some(dev),
"/boot" => boot = Some(dev),
t if t.starts_with("/boot/") => {
if let Some(name) = t.strip_prefix("/boot/") {
extra_boot.insert(name.to_string(), dev);
}
}
_ => {}
}
}
let boot = boot.unwrap_or_default();
let bios = if !boot.as_os_str().is_empty() {
find_bios_boot_partition(&boot).await.ok().flatten()
} else {
None
};
Ok(Self {
bios,
boot,
root: root.unwrap_or_default(),
extra_boot,
data: None,
})
}
}
const BIOS_BOOT_TYPE_GUID: &str = "21686148-6449-6e6f-744e-656564726548";
/// Find the BIOS boot partition on the same disk as `known_part`.
async fn find_bios_boot_partition(known_part: &Path) -> Result<Option<PathBuf>, Error> {
let output = Command::new("lsblk")
.args(["-n", "-l", "-o", "NAME,PKNAME,PARTTYPE"])
.arg(known_part)
.invoke(ErrorKind::DiskManagement)
.await?;
let text = String::from_utf8(output)?;
let parent_disk = text.lines().find_map(|line| {
let mut fields = line.split_whitespace();
let _name = fields.next()?;
let pkname = fields.next()?;
(!pkname.is_empty()).then(|| pkname.to_string())
});
let Some(parent_disk) = parent_disk else {
return Ok(None);
};
let output = Command::new("lsblk")
.args(["-n", "-l", "-o", "NAME,PARTTYPE"])
.arg(format!("/dev/{parent_disk}"))
.invoke(ErrorKind::DiskManagement)
.await?;
let text = String::from_utf8(output)?;
for line in text.lines() {
let mut fields = line.split_whitespace();
let Some(name) = fields.next() else { continue };
let Some(parttype) = fields.next() else {
continue;
};
if parttype.eq_ignore_ascii_case(BIOS_BOOT_TYPE_GUID) {
return Ok(Some(PathBuf::from(format!("/dev/{name}"))));
}
}
Ok(None)
}
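`find_bios_boot_partition` scans `lsblk -n -l -o NAME,PARTTYPE` output for the BIOS boot partition type GUID (`21686148-6449-6e6f-744e-656564726548`). The parsing step can be exercised on canned output; the `bios_boot_partition` helper below is a sketch of that inner loop, not the full two-step lsblk lookup:

```rust
const BIOS_BOOT_TYPE_GUID: &str = "21686148-6449-6e6f-744e-656564726548";

// Find the first partition whose PARTTYPE matches the BIOS boot GUID in
// `lsblk -n -l -o NAME,PARTTYPE` style output (case-insensitive match).
fn bios_boot_partition(lsblk: &str) -> Option<String> {
    for line in lsblk.lines() {
        let mut fields = line.split_whitespace();
        let (Some(name), Some(parttype)) = (fields.next(), fields.next()) else {
            continue;
        };
        if parttype.eq_ignore_ascii_case(BIOS_BOOT_TYPE_GUID) {
            return Some(format!("/dev/{name}"));
        }
    }
    None
}

fn main() {
    let out = "sda1 c12a7328-f81f-11d2-ba4b-00a0c93ec93b\n\
               sda2 21686148-6449-6E6F-744E-656564726548\n\
               sda3 0fc63daf-8483-4772-8e79-3d69d8477de4\n";
    assert_eq!(bios_boot_partition(out), Some("/dev/sda2".to_string()));
}
```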
/// Resolve an fstab device spec (e.g. /dev/sda1, PARTUUID=..., UUID=...) to a
/// canonical device path.
async fn resolve_fstab_source(source: &str) -> Result<PathBuf, Error> {
if source.starts_with('/') {
return Ok(
tokio::fs::canonicalize(source)
.await
.unwrap_or_else(|_| PathBuf::from(source)),
);
}
// PARTUUID=, UUID=, LABEL= — resolve via blkid
let output = Command::new("blkid")
.args(["-o", "device", "-t", source])
.invoke(ErrorKind::DiskManagement)
.await?;
Ok(PathBuf::from(String::from_utf8(output)?.trim()))
}
pub fn disk<C: Context>() -> ParentHandler<C> {
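`from_fstab` classifies each fstab entry by its mount target: `/` becomes `root`, `/boot` becomes `boot`, and `/boot/<name>` lands in `extra_boot`, skipping blanks and comments. A std-only sketch of that classification (the `Mounts` struct is illustrative, and device-spec resolution via blkid is omitted):

```rust
use std::collections::BTreeMap;

#[derive(Debug, Default)]
struct Mounts {
    root: Option<String>,
    boot: Option<String>,
    extra_boot: BTreeMap<String, String>,
}

// Classify fstab lines by mount target, skipping blanks and comments.
fn parse_fstab(fstab: &str) -> Mounts {
    let mut m = Mounts::default();
    for line in fstab.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        let mut fields = line.split_whitespace();
        let (Some(source), Some(target)) = (fields.next(), fields.next()) else {
            continue;
        };
        match target {
            "/" => m.root = Some(source.to_string()),
            "/boot" => m.boot = Some(source.to_string()),
            t => {
                if let Some(name) = t.strip_prefix("/boot/") {
                    m.extra_boot.insert(name.to_string(), source.to_string());
                }
            }
        }
    }
    m
}

fn main() {
    let fstab = "# static file system information\n\
                 /dev/sda3  /         btrfs  defaults  0 1\n\
                 /dev/sda2  /boot     vfat   defaults  0 2\n\
                 /dev/sda4  /boot/efi vfat   defaults  0 2\n";
    let m = parse_fstab(fstab);
    assert_eq!(m.root.as_deref(), Some("/dev/sda3"));
    assert_eq!(m.extra_boot.get("efi").map(String::as_str), Some("/dev/sda4"));
}
```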


@@ -53,9 +53,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
            })?,
        )?
    } else {
-        if tokio::fs::metadata(&crypt_path).await.is_ok() {
-            tokio::fs::remove_dir_all(&crypt_path).await?;
-        }
+        crate::util::io::delete_dir(&crypt_path).await?;
        Default::default()
    };
    let enc_key = if let (Some(hash), Some(wrapped_key)) = (


@@ -52,13 +52,19 @@ pub async fn bind<P0: AsRef<Path>, P1: AsRef<Path>>(
pub async fn unmount<P: AsRef<Path>>(mountpoint: P, lazy: bool) -> Result<(), Error> {
    tracing::debug!("Unmounting {}.", mountpoint.as_ref().display());
    let mut cmd = tokio::process::Command::new("umount");
+    cmd.env("LANG", "C.UTF-8");
    if lazy {
        cmd.arg("-l");
    }
-    cmd.arg(mountpoint.as_ref())
-        .invoke(crate::ErrorKind::Filesystem)
-        .await?;
-    Ok(())
+    match cmd
+        .arg(mountpoint.as_ref())
+        .invoke(crate::ErrorKind::Filesystem)
+        .await
+    {
+        Ok(_) => Ok(()),
+        Err(e) if e.to_string().contains("not mounted") => Ok(()),
+        Err(e) => Err(e),
+    }
}
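The change above makes unmounting idempotent: a "not mounted" failure is treated as success, and pinning `LANG=C.UTF-8` keeps umount's message in English so the substring match is reliable. The classification step in isolation (the `umount_ok` helper is a sketch, using `String` errors in place of the crate's `Error` type):

```rust
// Treat a "not mounted" umount failure as success so teardown can run
// repeatedly; any other failure propagates.
fn umount_ok(result: Result<(), String>) -> Result<(), String> {
    match result {
        Ok(()) => Ok(()),
        Err(e) if e.contains("not mounted") => Ok(()),
        Err(e) => Err(e),
    }
}

fn main() {
    assert_eq!(umount_ok(Ok(())), Ok(()));
    assert_eq!(
        umount_ok(Err("umount: /tmp/x: not mounted.".to_string())),
        Ok(())
    );
    assert!(umount_ok(Err("umount: /tmp/x: target is busy.".into())).is_err());
}
```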
/// Unmounts all mountpoints under (and including) the given path, in reverse


@@ -41,6 +41,7 @@ pub struct DiskInfo {
    pub partitions: Vec<PartitionInfo>,
    pub capacity: u64,
    pub guid: Option<InternedString>,
+    pub filesystem: Option<String>,
}
#[derive(Clone, Debug, Deserialize, Serialize, ts_rs::TS)]
@@ -55,6 +56,7 @@ pub struct PartitionInfo {
    pub used: Option<u64>,
    pub start_os: BTreeMap<String, StartOsRecoveryInfo>,
    pub guid: Option<InternedString>,
+    pub filesystem: Option<String>,
}
#[derive(Clone, Debug, Default, Deserialize, Serialize, ts_rs::TS)]
@@ -374,6 +376,15 @@ pub async fn list(os: &OsPartitionInfo) -> Result<Vec<DiskInfo>, Error> {
            disk_info.capacity = part_info.capacity;
            if let Some(g) = disk_guids.get(&disk_info.logicalname) {
                disk_info.guid = g.clone();
if let Some(guid) = g {
disk_info.filesystem =
crate::disk::main::probe_package_data_fs(guid)
.await
.unwrap_or_else(|e| {
tracing::warn!("Failed to probe filesystem for {guid}: {e}");
None
});
}
            } else {
                disk_info.partitions = vec![part_info];
            }
@@ -384,11 +395,31 @@ pub async fn list(os: &OsPartitionInfo) -> Result<Vec<DiskInfo>, Error> {
        disk_info.partitions = Vec::with_capacity(index.parts.len());
        if let Some(g) = disk_guids.get(&disk_info.logicalname) {
            disk_info.guid = g.clone();
if let Some(guid) = g {
disk_info.filesystem =
crate::disk::main::probe_package_data_fs(guid)
.await
.unwrap_or_else(|e| {
tracing::warn!("Failed to probe filesystem for {guid}: {e}");
None
});
}
        } else {
            for part in index.parts {
                let mut part_info = part_info(part).await;
                if let Some(g) = disk_guids.get(&part_info.logicalname) {
                    part_info.guid = g.clone();
if let Some(guid) = g {
part_info.filesystem =
crate::disk::main::probe_package_data_fs(guid)
.await
.unwrap_or_else(|e| {
tracing::warn!(
"Failed to probe filesystem for {guid}: {e}"
);
None
});
}
                }
                disk_info.partitions.push(part_info);
            }
@@ -461,6 +492,7 @@ async fn disk_info(disk: PathBuf) -> DiskInfo {
        partitions: Vec::new(),
        capacity,
        guid: None,
+        filesystem: None,
    }
}
@@ -544,6 +576,7 @@ async fn part_info(part: PathBuf) -> PartitionInfo {
        used,
        start_os,
        guid: None,
+        filesystem: None,
    }
}


@@ -101,6 +101,7 @@ pub enum ErrorKind {
    UpdateFailed = 77,
    Smtp = 78,
    SetSysInfo = 79,
+    Bios = 80,
}
impl ErrorKind {
    pub fn as_str(&self) -> String {
@@ -185,6 +186,7 @@ impl ErrorKind {
            UpdateFailed => t!("error.update-failed"),
            Smtp => t!("error.smtp"),
            SetSysInfo => t!("error.set-sys-info"),
+            Bios => t!("error.bios"),
        }
        .to_string()
    }


@@ -173,6 +173,13 @@ pub async fn init(
    RpcContext::init_auth_cookie().await?;
    local_auth.complete();
+    // Re-enroll MOK on every boot if Secure Boot key exists but isn't enrolled yet
+    if let Err(e) =
+        crate::util::mok::enroll_mok(std::path::Path::new(crate::util::mok::DKMS_MOK_PUB)).await
+    {
+        tracing::warn!("MOK enrollment failed: {e}");
+    }
    load_database.start();
    let db = cfg.db().await?;
    crate::version::Current::default().pre_init(&db).await?;
@@ -291,21 +298,15 @@ pub async fn init(
    init_tmp.start();
    let tmp_dir = Path::new(PACKAGE_DATA).join("tmp");
-    if tokio::fs::metadata(&tmp_dir).await.is_ok() {
-        tokio::fs::remove_dir_all(&tmp_dir).await?;
-    }
+    crate::util::io::delete_dir(&tmp_dir).await?;
    if tokio::fs::metadata(&tmp_dir).await.is_err() {
        tokio::fs::create_dir_all(&tmp_dir).await?;
    }
    let tmp_var = Path::new(PACKAGE_DATA).join("tmp/var");
-    if tokio::fs::metadata(&tmp_var).await.is_ok() {
-        tokio::fs::remove_dir_all(&tmp_var).await?;
-    }
+    crate::util::io::delete_dir(&tmp_var).await?;
    crate::disk::mount::util::bind(&tmp_var, "/var/tmp", false).await?;
    let downloading = Path::new(PACKAGE_DATA).join("archive/downloading");
-    if tokio::fs::metadata(&downloading).await.is_ok() {
-        tokio::fs::remove_dir_all(&downloading).await?;
-    }
+    crate::util::io::delete_dir(&downloading).await?;
    let tmp_docker = Path::new(PACKAGE_DATA).join("tmp").join(*CONTAINER_TOOL);
    crate::disk::mount::util::bind(&tmp_docker, *CONTAINER_DATADIR, false).await?;
    init_tmp.complete();
@@ -370,7 +371,7 @@ pub async fn init(
    enable_zram.complete();
    update_server_info.start();
-    sync_kiosk(server_info.as_kiosk().de()?).await?;
+    sync_kiosk(server_info.as_kiosk().de()?.unwrap_or(false)).await?;
    let ram = get_mem_info().await?.total.0 as u64 * 1024 * 1024;
    let devices = lshw().await?;
    let status_info = ServerStatus {
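The init changes replace the metadata-check-then-`remove_dir_all` pattern with a `crate::util::io::delete_dir` helper. A plausible synchronous sketch of such a helper treats `NotFound` as success, which also removes the check-then-act race of the old code (the real helper is async; this shape is an assumption):

```rust
use std::io;
use std::path::Path;

// Remove a directory tree if it exists; a missing directory is not an error.
// Sketch of what a `delete_dir` helper can look like.
fn delete_dir(path: &Path) -> io::Result<()> {
    match std::fs::remove_dir_all(path) {
        Ok(()) => Ok(()),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(()),
        Err(e) => Err(e),
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("delete_dir_demo");
    std::fs::create_dir_all(dir.join("nested"))?;
    delete_dir(&dir)?; // removes the tree
    delete_dir(&dir)?; // second call is a no-op, not an error
    assert!(!dir.exists());
    Ok(())
}
```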


@@ -400,10 +400,10 @@ pub fn server<C: Context>() -> ParentHandler<C> {
        .with_call_remote::<CliContext>(),
)
.subcommand(
-    "set-ifconfig-url",
-    from_fn_async(system::set_ifconfig_url)
+    "set-echoip-urls",
+    from_fn_async(system::set_echoip_urls)
        .no_display()
-        .with_about("about.set-ifconfig-url")
+        .with_about("about.set-echoip-urls")
        .with_call_remote::<CliContext>(),
)
.subcommand(

View File

@@ -32,6 +32,7 @@ use crate::context::{CliContext, RpcContext};
use crate::db::model::Database;
use crate::db::model::public::NetworkInterfaceInfo;
use crate::net::gateway::NetworkInterfaceWatcher;
+use crate::net::utils::is_private_ip;
use crate::prelude::*;
use crate::util::future::NonDetachingJoinHandle;
use crate::util::io::file_string_stream;
@@ -400,6 +401,18 @@ impl Resolver {
            })
        }) {
            return Some(res);
} else if is_private_ip(src) {
// Source is a private IP not in any known subnet (e.g. VPN on a different VLAN).
// Return all private IPs from all interfaces.
let res: Vec<IpAddr> = self.net_iface.peek(|i| {
i.values()
.filter_map(|i| i.ip_info.as_ref())
.flat_map(|ip_info| ip_info.subnets.iter().map(|s| s.addr()))
.collect()
});
if !res.is_empty() {
return Some(res);
}
        } else {
            tracing::warn!(
                "{}",


@@ -205,7 +205,7 @@ pub async fn check_port(
    CheckPortParams { port, gateway }: CheckPortParams,
) -> Result<CheckPortRes, Error> {
    let db = ctx.db.peek().await;
-    let base_url = db.as_public().as_server_info().as_ifconfig_url().de()?;
+    let base_urls = db.as_public().as_server_info().as_echoip_urls().de()?;
    let gateways = db
        .as_public()
        .as_server_info()
@@ -240,22 +240,41 @@ pub async fn check_port(
let client = reqwest::Client::builder(); let client = reqwest::Client::builder();
#[cfg(target_os = "linux")] #[cfg(target_os = "linux")]
let client = client.interface(gateway.as_str()); let client = client.interface(gateway.as_str());
let url = base_url let client = client.build()?;
.join(&format!("/port/{port}"))
.with_kind(ErrorKind::ParseUrl)?; let mut res = None;
let IfconfigPortRes { for base_url in base_urls {
let url = base_url
.join(&format!("/port/{port}"))
.with_kind(ErrorKind::ParseUrl)?;
res = Some(
async {
client
.get(url)
.timeout(Duration::from_secs(5))
.send()
.await?
.error_for_status()?
.json()
.await
}
.await,
);
if res.as_ref().map_or(false, |r| r.is_ok()) {
break;
}
}
let Some(IfconfigPortRes {
ip, ip,
port, port,
reachable: open_externally, reachable: open_externally,
} = client }) = res.transpose()?
.build()? else {
.get(url) return Err(Error::new(
.timeout(Duration::from_secs(10)) eyre!("{}", t!("net.gateway.no-configured-echoip-urls")),
.send() ErrorKind::Network,
.await? ));
.error_for_status()? };
.json()
.await?;
let hairpinning = tokio::time::timeout( let hairpinning = tokio::time::timeout(
Duration::from_secs(5), Duration::from_secs(5),
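The `check_port` hunk above replaces a single hardcoded ifconfig URL with a list of echoip URLs tried in order, keeping the last result and stopping at the first success. A generic std-only sketch of that loop (`fetch` stands in for the reqwest call; names here are illustrative, not the real code):

```rust
// Try each URL in order; the first Ok wins. None means no URLs were
// configured, Some(Err(_)) carries the most recent failure for the caller
// to surface — matching the `res.transpose()?` pattern in the diff.
fn first_success<T, E>(
    urls: &[&str],
    mut fetch: impl FnMut(&str) -> Result<T, E>,
) -> Option<Result<T, E>> {
    let mut res = None;
    for url in urls.iter().copied() {
        res = Some(fetch(url));
        if res.as_ref().map_or(false, |r| r.is_ok()) {
            break;
        }
    }
    res
}
```

Note the empty-list case falls through to `None`, which the real code maps to a "no configured echoip URLs" error.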
@@ -761,7 +780,7 @@ async fn get_wan_ipv4(iface: &str, base_url: &Url) -> Result<Option<Ipv4Addr>, E
let text = client let text = client
.build()? .build()?
.get(url) .get(url)
.timeout(Duration::from_secs(10)) .timeout(Duration::from_secs(5))
.send() .send()
.await? .await?
.error_for_status()? .error_for_status()?
@@ -857,7 +876,7 @@ async fn watch_ip(
.fuse() .fuse()
}); });
let mut prev_attempt: Option<Instant> = None; let mut echoip_ratelimit_state: BTreeMap<Url, Instant> = BTreeMap::new();
loop { loop {
until until
@@ -967,7 +986,7 @@ async fn watch_ip(
&dhcp4_proxy, &dhcp4_proxy,
&policy_guard, &policy_guard,
&iface, &iface,
&mut prev_attempt, &mut echoip_ratelimit_state,
db, db,
write_to, write_to,
device_type, device_type,
@@ -999,18 +1018,16 @@ async fn apply_policy_routing(
}) })
.copied(); .copied();
// Flush and rebuild per-interface routing table. // Rebuild per-interface routing table using `ip route replace` to avoid
// Clone all non-default routes from the main table so that LAN IPs on // the connectivity gap that a flush+add cycle would create. We replace
// other subnets remain reachable when the priority-75 catch-all overrides // every desired route in-place (each replace is atomic in the kernel),
// default routing, then replace the default route with this interface's. // then delete any stale routes that are no longer in the desired set.
Command::new("ip")
.arg("route") // Collect the set of desired non-default route prefixes (the first
.arg("flush") // whitespace-delimited token of each `ip route show` line is the
.arg("table") // destination prefix, e.g. "192.168.1.0/24" or "10.0.0.0/8").
.arg(&table_str) let mut desired_prefixes = BTreeSet::<String>::new();
.invoke(ErrorKind::Network)
.await
.log_err();
if let Ok(main_routes) = Command::new("ip") if let Ok(main_routes) = Command::new("ip")
.arg("route") .arg("route")
.arg("show") .arg("show")
@@ -1025,11 +1042,14 @@ async fn apply_policy_routing(
if line.is_empty() || line.starts_with("default") { if line.is_empty() || line.starts_with("default") {
continue; continue;
} }
if let Some(prefix) = line.split_whitespace().next() {
desired_prefixes.insert(prefix.to_owned());
}
let mut cmd = Command::new("ip"); let mut cmd = Command::new("ip");
cmd.arg("route").arg("add"); cmd.arg("route").arg("replace");
for part in line.split_whitespace() { for part in line.split_whitespace() {
// Skip status flags that appear in route output but // Skip status flags that appear in route output but
// are not valid for `ip route add`. // are not valid for `ip route replace`.
if part == "linkdown" || part == "dead" { if part == "linkdown" || part == "dead" {
continue; continue;
} }
@@ -1039,10 +1059,11 @@ async fn apply_policy_routing(
cmd.invoke(ErrorKind::Network).await.log_err(); cmd.invoke(ErrorKind::Network).await.log_err();
} }
} }
// Add default route via this interface's gateway
// Replace the default route via this interface's gateway.
{ {
let mut cmd = Command::new("ip"); let mut cmd = Command::new("ip");
cmd.arg("route").arg("add").arg("default"); cmd.arg("route").arg("replace").arg("default");
if let Some(gw) = ipv4_gateway { if let Some(gw) = ipv4_gateway {
cmd.arg("via").arg(gw.to_string()); cmd.arg("via").arg(gw.to_string());
} }
@@ -1056,6 +1077,40 @@ async fn apply_policy_routing(
cmd.invoke(ErrorKind::Network).await.log_err(); cmd.invoke(ErrorKind::Network).await.log_err();
} }
// Delete stale routes: any non-default route in the per-interface table
// whose prefix is not in the desired set.
if let Ok(existing_routes) = Command::new("ip")
.arg("route")
.arg("show")
.arg("table")
.arg(&table_str)
.invoke(ErrorKind::Network)
.await
.and_then(|b| String::from_utf8(b).with_kind(ErrorKind::Utf8))
{
for line in existing_routes.lines() {
let line = line.trim();
if line.is_empty() || line.starts_with("default") {
continue;
}
let Some(prefix) = line.split_whitespace().next() else {
continue;
};
if desired_prefixes.contains(prefix) {
continue;
}
Command::new("ip")
.arg("route")
.arg("del")
.arg(prefix)
.arg("table")
.arg(&table_str)
.invoke(ErrorKind::Network)
.await
.log_err();
}
}
// Ensure global CONNMARK restore rules in mangle PREROUTING (forwarded // Ensure global CONNMARK restore rules in mangle PREROUTING (forwarded
// packets) and OUTPUT (locally-generated replies). Both are needed: // packets) and OUTPUT (locally-generated replies). Both are needed:
// PREROUTING handles DNAT-forwarded traffic, OUTPUT handles replies from // PREROUTING handles DNAT-forwarded traffic, OUTPUT handles replies from
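The routing-table rewrite above avoids the connectivity gap of flush+add by replacing routes in place and then deleting only stale ones. The reconciliation reduces to set arithmetic over destination prefixes (the first whitespace-delimited token of each `ip route show` line). A std-only sketch of that bookkeeping, with the `ip` invocations elided:

```rust
use std::collections::BTreeSet;

// Extract non-default destination prefixes from `ip route show` output.
fn route_prefixes(ip_route_show: &str) -> BTreeSet<String> {
    ip_route_show
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty() && !l.starts_with("default"))
        .filter_map(|l| l.split_whitespace().next())
        .map(str::to_owned)
        .collect()
}

// Routes present in the per-interface table but absent from the desired set
// (cloned from the main table) are stale and get `ip route del`-ed.
fn stale_prefixes(desired: &BTreeSet<String>, existing: &BTreeSet<String>) -> Vec<String> {
    existing.difference(desired).cloned().collect()
}
```

Each `ip route replace` is atomic in the kernel, so the table never lacks a route that both the old and new desired sets contain.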
@@ -1174,7 +1229,7 @@ async fn poll_ip_info(
dhcp4_proxy: &Option<Dhcp4ConfigProxy<'_>>, dhcp4_proxy: &Option<Dhcp4ConfigProxy<'_>>,
policy_guard: &Option<PolicyRoutingCleanup>, policy_guard: &Option<PolicyRoutingCleanup>,
iface: &GatewayId, iface: &GatewayId,
prev_attempt: &mut Option<Instant>, echoip_ratelimit_state: &mut BTreeMap<Url, Instant>,
db: Option<&TypedPatchDb<Database>>, db: Option<&TypedPatchDb<Database>>,
write_to: &Watch<OrdMap<GatewayId, NetworkInterfaceInfo>>, write_to: &Watch<OrdMap<GatewayId, NetworkInterfaceInfo>>,
device_type: Option<NetworkInterfaceType>, device_type: Option<NetworkInterfaceType>,
@@ -1221,43 +1276,49 @@ async fn poll_ip_info(
apply_policy_routing(guard, iface, &lan_ip).await?; apply_policy_routing(guard, iface, &lan_ip).await?;
} }
let ifconfig_url = if let Some(db) = db { let echoip_urls = if let Some(db) = db {
db.peek() db.peek()
.await .await
.as_public() .as_public()
.as_server_info() .as_server_info()
.as_ifconfig_url() .as_echoip_urls()
.de() .de()
.unwrap_or_else(|_| crate::db::model::public::default_ifconfig_url()) .unwrap_or_else(|_| crate::db::model::public::default_echoip_urls())
} else { } else {
crate::db::model::public::default_ifconfig_url() crate::db::model::public::default_echoip_urls()
}; };
let wan_ip = if prev_attempt.map_or(true, |i| i.elapsed() > Duration::from_secs(300)) let mut wan_ip = None;
&& !subnets.is_empty() for echoip_url in echoip_urls {
&& !matches!( let wan_ip = if echoip_ratelimit_state
device_type, .get(&echoip_url)
Some(NetworkInterfaceType::Bridge | NetworkInterfaceType::Loopback) .map_or(true, |i| i.elapsed() > Duration::from_secs(300))
) { && !subnets.is_empty()
let res = match get_wan_ipv4(iface.as_str(), &ifconfig_url).await { && !matches!(
Ok(a) => a, device_type,
Err(e) => { Some(NetworkInterfaceType::Bridge | NetworkInterfaceType::Loopback)
tracing::error!( ) {
"{}", match get_wan_ipv4(iface.as_str(), &echoip_url).await {
t!( Ok(a) => {
"net.gateway.failed-to-determine-wan-ip", wan_ip = a;
iface = iface.to_string(), }
error = e.to_string() Err(e) => {
) tracing::error!(
); "{}",
tracing::debug!("{e:?}"); t!(
None "net.gateway.failed-to-determine-wan-ip",
iface = iface.to_string(),
error = e.to_string()
)
);
tracing::debug!("{e:?}");
}
};
echoip_ratelimit_state.insert(echoip_url, Instant::now());
if wan_ip.is_some() {
break;
} }
}; };
*prev_attempt = Some(Instant::now()); }
res
} else {
None
};
let mut ip_info = IpInfo { let mut ip_info = IpInfo {
name: name.clone(), name: name.clone(),
scope_id, scope_id,

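The `poll_ip_info` hunk replaces the single `prev_attempt: Option<Instant>` with a per-URL map, so each echoip endpoint is rate-limited independently at one attempt per 300 seconds. A std-only sketch of that state (keys are plain strings here; the real code keys on `Url`):

```rust
use std::collections::BTreeMap;
use std::time::{Duration, Instant};

struct EchoipRatelimit {
    state: BTreeMap<String, Instant>,
    interval: Duration,
}

impl EchoipRatelimit {
    fn new(interval: Duration) -> Self {
        Self { state: BTreeMap::new(), interval }
    }

    /// True if `url` may be queried now; records the attempt timestamp if so.
    fn check(&mut self, url: &str) -> bool {
        let allowed = self
            .state
            .get(url)
            .map_or(true, |last| last.elapsed() > self.interval);
        if allowed {
            self.state.insert(url.to_owned(), Instant::now());
        }
        allowed
    }
}
```

With per-URL state, a failure against one endpoint does not suppress a retry against the next one in the fallback list.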

@@ -283,7 +283,7 @@ impl Model<Host> {
}; };
available.insert(HostnameInfo { available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl), ssl: opt.secure.map_or(false, |s| s.ssl),
public: true, public: false,
hostname: domain.clone(), hostname: domain.clone(),
port: Some(port), port: Some(port),
metadata: HostnameMetadata::PrivateDomain { gateways }, metadata: HostnameMetadata::PrivateDomain { gateways },
@@ -300,7 +300,7 @@ impl Model<Host> {
} }
available.insert(HostnameInfo { available.insert(HostnameInfo {
ssl: true, ssl: true,
public: true, public: false,
hostname: domain, hostname: domain,
port: Some(port), port: Some(port),
metadata: HostnameMetadata::PrivateDomain { metadata: HostnameMetadata::PrivateDomain {
@@ -314,7 +314,7 @@ impl Model<Host> {
{ {
available.insert(HostnameInfo { available.insert(HostnameInfo {
ssl: true, ssl: true,
public: true, public: false,
hostname: domain, hostname: domain,
port: Some(opt.preferred_external_port), port: Some(opt.preferred_external_port),
metadata: HostnameMetadata::PrivateDomain { metadata: HostnameMetadata::PrivateDomain {


@@ -820,7 +820,6 @@ impl NetService {
break; break;
} }
} }
self.shutdown = true;
Ok(()) Ok(())
} }
@@ -832,6 +831,7 @@ impl NetService {
impl Drop for NetService { impl Drop for NetService {
fn drop(&mut self) { fn drop(&mut self) {
if !self.shutdown { if !self.shutdown {
self.shutdown = true;
let svc = std::mem::replace(self, Self::dummy()); let svc = std::mem::replace(self, Self::dummy());
tokio::spawn(async move { svc.remove_all().await.log_err() }); tokio::spawn(async move { svc.remove_all().await.log_err() });
} }
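The `NetService` change moves `self.shutdown = true` into `Drop`, before the `mem::replace`. Without it, the value moved out by the replace still has `shutdown == false`, so its own eventual drop would recurse into cleanup again. A std-only sketch of the pattern (synchronous `remove_all` and a counter stand in for the spawned async cleanup):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static CLEANUPS: AtomicUsize = AtomicUsize::new(0);

struct NetService {
    shutdown: bool,
}

impl NetService {
    fn dummy() -> Self {
        // The dummy starts already shut down, so dropping it is a no-op.
        NetService { shutdown: true }
    }
    fn remove_all(self) {
        CLEANUPS.fetch_add(1, Ordering::SeqCst);
        // `self` drops here with shutdown == true: no recursion.
    }
}

impl Drop for NetService {
    fn drop(&mut self) {
        if !self.shutdown {
            // Mark shut down BEFORE swapping, so the moved-out value's drop
            // is a no-op instead of recursing into cleanup again.
            self.shutdown = true;
            let svc = std::mem::replace(self, Self::dummy());
            svc.remove_all();
        }
    }
}
```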


@@ -145,9 +145,10 @@ pub struct GatewayInfo {
pub public: bool, pub public: bool,
} }
#[derive(Clone, Debug, Deserialize, Serialize, TS)] #[derive(Clone, Debug, Deserialize, Serialize, HasModel, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
#[ts(export)]
pub struct ServiceInterface { pub struct ServiceInterface {
pub id: ServiceInterfaceId, pub id: ServiceInterfaceId,
pub name: String, pub name: String,


@@ -188,7 +188,7 @@ lazy_static::lazy_static! {
} }
fn asn1_time_to_system_time(time: &Asn1TimeRef) -> Result<SystemTime, Error> { fn asn1_time_to_system_time(time: &Asn1TimeRef) -> Result<SystemTime, Error> {
let diff = time.diff(&**ASN1_UNIX_EPOCH)?; let diff = ASN1_UNIX_EPOCH.diff(time)?;
let mut res = UNIX_EPOCH; let mut res = UNIX_EPOCH;
if diff.days >= 0 { if diff.days >= 0 {
res += Duration::from_secs(diff.days as u64 * 86400); res += Duration::from_secs(diff.days as u64 * 86400);

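The one-line fix above hinges on the direction of OpenSSL's `Asn1TimeRef::diff`: `self.diff(compare)` returns `compare - self`. To get "certificate time minus epoch" the epoch must therefore be the receiver. A sketch of that arithmetic with plain integers (`TimeDiff` is a stand-in for openssl's `{ days, secs }` result; the `>= 0` guards mirror the snippet above, where a negative diff leaves the result at the epoch):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

struct TimeDiff {
    days: i32,
    secs: i32,
}

// diff(a, b) models `a.diff(b)`, i.e. b - a, in whole days plus leftover secs.
fn diff(a: i64, b: i64) -> TimeDiff {
    let delta = b - a;
    TimeDiff {
        days: (delta / 86400) as i32,
        secs: (delta % 86400) as i32,
    }
}

fn to_system_time(diff: TimeDiff) -> SystemTime {
    let mut res = UNIX_EPOCH;
    if diff.days >= 0 {
        res += Duration::from_secs(diff.days as u64 * 86400);
    }
    if diff.secs >= 0 {
        res += Duration::from_secs(diff.secs as u64);
    }
    res
}
```

With the arguments reversed, every future expiry produced a negative diff and collapsed to the epoch, which is why `getSslCertificate` callbacks fired immediately and repeatedly.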

@@ -1,3 +1,5 @@
use std::time::Duration;
use clap::Parser; use clap::Parser;
use imbl_value::InternedString; use imbl_value::InternedString;
use patch_db::json_ptr::JsonPointer; use patch_db::json_ptr::JsonPointer;
@@ -8,7 +10,9 @@ use ts_rs::TS;
use crate::GatewayId; use crate::GatewayId;
use crate::context::{CliContext, RpcContext}; use crate::context::{CliContext, RpcContext};
use crate::db::model::public::{GatewayType, NetworkInterfaceInfo, NetworkInterfaceType}; use crate::db::model::public::{
GatewayType, NetworkInfo, NetworkInterfaceInfo, NetworkInterfaceType,
};
use crate::net::host::all_hosts; use crate::net::host::all_hosts;
use crate::prelude::*; use crate::prelude::*;
use crate::util::Invoke; use crate::util::Invoke;
@@ -139,6 +143,34 @@ pub async fn add_tunnel(
.result?; .result?;
} }
// Wait for the sync loop to fully commit gateway state (addresses, hosts)
// to the database, with a 15-second timeout.
if tokio::time::timeout(Duration::from_secs(15), async {
let mut watch = ctx
.db
.watch("/public/serverInfo/network".parse::<JsonPointer>().unwrap())
.await
.typed::<NetworkInfo>();
loop {
if watch
.peek()?
.as_gateways()
.as_idx(&iface)
.and_then(|g| g.as_ip_info().transpose_ref())
.is_some()
{
break;
}
watch.changed().await?;
}
Ok::<_, Error>(())
})
.await
.is_err()
{
tracing::warn!("{}", t!("net.tunnel.timeout-waiting-for-add", gateway = iface.as_str()));
}
Ok(iface) Ok(iface)
} }
@@ -224,5 +256,27 @@ pub async fn remove_tunnel(
.await .await
.result?; .result?;
// Wait for the sync loop to fully commit gateway removal to the database,
// with a 15-second timeout.
if tokio::time::timeout(Duration::from_secs(15), async {
let mut watch = ctx
.db
.watch("/public/serverInfo/network".parse::<JsonPointer>().unwrap())
.await
.typed::<NetworkInfo>();
loop {
if watch.peek()?.as_gateways().as_idx(&id).is_none() {
break;
}
watch.changed().await?;
}
Ok::<_, Error>(())
})
.await
.is_err()
{
tracing::warn!("{}", t!("net.tunnel.timeout-waiting-for-remove", gateway = id.as_str()));
}
Ok(()) Ok(())
} }

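Both tunnel commands above follow the same shape: block until a predicate over shared state holds, give up after 15 seconds, and log a warning on timeout instead of failing. The real code watches a patch-db pointer with tokio; a std-only `Condvar` sketch of the same wait-then-warn pattern:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::time::Duration;

// Block until `committed(state)` is true or `timeout` elapses.
// Returns true on commit, false on timeout (the caller only warns).
fn wait_for_commit<T>(
    state: &Arc<(Mutex<T>, Condvar)>,
    mut committed: impl FnMut(&T) -> bool,
    timeout: Duration,
) -> bool {
    let (lock, cvar) = &**state;
    let guard = lock.lock().unwrap();
    let (_guard, res) = cvar
        .wait_timeout_while(guard, timeout, |s| !committed(s))
        .unwrap();
    !res.timed_out()
}
```

Treating the timeout as a warning keeps `add_tunnel`/`remove_tunnel` from failing an operation that has already been accepted and will converge shortly after.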

@@ -66,6 +66,13 @@ pub fn ipv6_is_local(addr: Ipv6Addr) -> bool {
addr.is_loopback() || (addr.segments()[0] & 0xfe00) == 0xfc00 || ipv6_is_link_local(addr) addr.is_loopback() || (addr.segments()[0] & 0xfe00) == 0xfc00 || ipv6_is_link_local(addr)
} }
pub fn is_private_ip(addr: IpAddr) -> bool {
match addr {
IpAddr::V4(v4) => v4.is_private() || v4.is_loopback() || v4.is_link_local(),
IpAddr::V6(v6) => ipv6_is_local(v6),
}
}
fn parse_iface_ip(output: &str) -> Result<Vec<&str>, Error> { fn parse_iface_ip(output: &str) -> Result<Vec<&str>, Error> {
let output = output.trim(); let output = output.trim();
if output.is_empty() { if output.is_empty() {

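The new `is_private_ip` helper builds on the existing `ipv6_is_local` check. A self-contained version of both (the bit masks select fc00::/7 unique-local and fe80::/10 link-local ranges, matching the code above):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

fn ipv6_is_link_local(addr: Ipv6Addr) -> bool {
    // fe80::/10
    (addr.segments()[0] & 0xffc0) == 0xfe80
}

fn ipv6_is_local(addr: Ipv6Addr) -> bool {
    // loopback, unique-local (fc00::/7), or link-local
    addr.is_loopback() || (addr.segments()[0] & 0xfe00) == 0xfc00 || ipv6_is_link_local(addr)
}

fn is_private_ip(addr: IpAddr) -> bool {
    match addr {
        IpAddr::V4(v4) => v4.is_private() || v4.is_loopback() || v4.is_link_local(),
        IpAddr::V6(v6) => ipv6_is_local(v6),
    }
}
```

This is what lets the resolver and the public/private classifier treat a VPN client on another VLAN (private source, but outside every known subnet) as private rather than internet-facing.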

@@ -38,7 +38,7 @@ use crate::net::ssl::{CertStore, RootCaTlsHandler};
use crate::net::tls::{ use crate::net::tls::{
ChainedHandler, TlsHandlerAction, TlsHandlerWrapper, TlsListener, TlsMetadata, WrapTlsHandler, ChainedHandler, TlsHandlerAction, TlsHandlerWrapper, TlsListener, TlsMetadata, WrapTlsHandler,
}; };
use crate::net::utils::ipv6_is_link_local; use crate::net::utils::{ipv6_is_link_local, is_private_ip};
use crate::net::web_server::{Accept, AcceptStream, ExtractVisitor, TcpMetadata, extract}; use crate::net::web_server::{Accept, AcceptStream, ExtractVisitor, TcpMetadata, extract};
use crate::prelude::*; use crate::prelude::*;
use crate::util::collections::EqSet; use crate::util::collections::EqSet;
@@ -732,8 +732,9 @@ where
}; };
let src = tcp.peer_addr.ip(); let src = tcp.peer_addr.ip();
// Public: source is outside all known subnets (direct internet) // Private: source is in a known subnet or is a private IP (e.g. VPN on a different VLAN)
let is_public = !ip_info.subnets.iter().any(|s| s.contains(&src)); let is_public =
!ip_info.subnets.iter().any(|s| s.contains(&src)) && !is_private_ip(src);
if is_public { if is_public {
self.public.contains(&gw.id) self.public.contains(&gw.id)


@@ -509,7 +509,7 @@ where
drop(queue_cell.replace(None)); drop(queue_cell.replace(None));
if !runner.is_empty() { if !runner.is_empty() {
tokio::time::timeout(Duration::from_secs(60), runner) tokio::time::timeout(Duration::from_millis(100), runner)
.await .await
.log_err(); .log_err();
} }


@@ -1,3 +1,3 @@
{boot} /boot vfat umask=0077 0 2 {boot} /boot vfat umask=0077 0 2
{efi} /boot/efi vfat umask=0077 0 1 {efi} /boot/efi vfat umask=0077 0 1
{root} / ext4 defaults 0 1 {root} / btrfs defaults 0 1


@@ -197,11 +197,19 @@ pub async fn partition(
.invoke(crate::ErrorKind::DiskManagement) .invoke(crate::ErrorKind::DiskManagement)
.await?; .await?;
let mut extra_boot = std::collections::BTreeMap::new();
let bios;
if efi {
extra_boot.insert("efi".to_string(), partition_for(&disk_path, 1));
bios = None;
} else {
bios = Some(partition_for(&disk_path, 1));
}
Ok(OsPartitionInfo { Ok(OsPartitionInfo {
efi: efi.then(|| partition_for(&disk_path, 1)), bios,
bios: (!efi).then(|| partition_for(&disk_path, 1)),
boot: partition_for(&disk_path, 2), boot: partition_for(&disk_path, 2),
root: partition_for(&disk_path, 3), root: partition_for(&disk_path, 3),
extra_boot,
data: data_part, data: data_part,
}) })
} }

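The GPT partitioning change above moves the EFI system partition into a new `extra_boot` map (keyed `"efi"`) while BIOS installs keep the dedicated `bios` field; the MBR path in the next hunk just leaves `extra_boot` empty. A small sketch of that layout decision, with `partition_for` modeled as plain string formatting (the real helper handles nvme-style `p` suffixes and similar):

```rust
use std::collections::BTreeMap;

// Hypothetical illustration: first partition is either the ESP (in the map)
// or a BIOS boot partition (in the dedicated field), never both.
fn boot_layout(disk: &str, efi: bool) -> (Option<String>, BTreeMap<String, String>) {
    let partition_for = |n: u32| format!("{disk}{n}");
    let mut extra_boot = BTreeMap::new();
    let bios;
    if efi {
        extra_boot.insert("efi".to_string(), partition_for(1));
        bios = None;
    } else {
        bios = Some(partition_for(1));
    }
    (bios, extra_boot)
}
```

Downstream code then looks partitions up with `part_info.extra_boot.get("efi")`, as the install hunks below do.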

@@ -164,10 +164,10 @@ pub async fn partition(
.await?; .await?;
Ok(OsPartitionInfo { Ok(OsPartitionInfo {
efi: None,
bios: None, bios: None,
boot: partition_for(&disk_path, 1), boot: partition_for(&disk_path, 1),
root: partition_for(&disk_path, 2), root: partition_for(&disk_path, 2),
extra_boot: Default::default(),
data: data_part, data: data_part,
}) })
} }


@@ -21,69 +21,12 @@ use crate::prelude::*;
use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile; use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile;
use crate::setup::SetupInfo; use crate::setup::SetupInfo;
use crate::util::Invoke; use crate::util::Invoke;
use crate::util::io::{TmpDir, delete_file, open_file, write_file_atomic}; use crate::util::io::{TmpDir, delete_dir, delete_file, open_file, write_file_atomic};
use crate::util::serde::IoFormat; use crate::util::serde::IoFormat;
mod gpt; mod gpt;
mod mbr; mod mbr;
/// Get the EFI BootCurrent entry number (the entry firmware used to boot).
/// Returns None on non-EFI systems or if BootCurrent is not set.
async fn get_efi_boot_current() -> Result<Option<String>, Error> {
let efi_output = String::from_utf8(
Command::new("efibootmgr")
.invoke(ErrorKind::Grub)
.await?,
)
.map_err(|e| Error::new(eyre!("efibootmgr output not valid UTF-8: {e}"), ErrorKind::Grub))?;
Ok(efi_output
.lines()
.find(|line| line.starts_with("BootCurrent:"))
.and_then(|line| line.strip_prefix("BootCurrent:"))
.map(|s| s.trim().to_string()))
}
/// Promote a specific boot entry to first in the EFI boot order.
async fn promote_efi_entry(entry: &str) -> Result<(), Error> {
let efi_output = String::from_utf8(
Command::new("efibootmgr")
.invoke(ErrorKind::Grub)
.await?,
)
.map_err(|e| Error::new(eyre!("efibootmgr output not valid UTF-8: {e}"), ErrorKind::Grub))?;
let current_order = efi_output
.lines()
.find(|line| line.starts_with("BootOrder:"))
.and_then(|line| line.strip_prefix("BootOrder:"))
.map(|s| s.trim())
.unwrap_or("");
if current_order.is_empty() || current_order.starts_with(entry) {
return Ok(());
}
let other_entries: Vec<&str> = current_order
.split(',')
.filter(|e| e.trim() != entry)
.collect();
let new_order = if other_entries.is_empty() {
entry.to_string()
} else {
format!("{},{}", entry, other_entries.join(","))
};
Command::new("efibootmgr")
.arg("-o")
.arg(&new_order)
.invoke(ErrorKind::Grub)
.await?;
Ok(())
}
/// Probe a squashfs image to determine its target architecture /// Probe a squashfs image to determine its target architecture
async fn probe_squashfs_arch(squashfs_path: &Path) -> Result<InternedString, Error> { async fn probe_squashfs_arch(squashfs_path: &Path) -> Result<InternedString, Error> {
let output = String::from_utf8( let output = String::from_utf8(
@@ -182,6 +125,7 @@ struct DataDrive {
pub struct InstallOsResult { pub struct InstallOsResult {
pub part_info: OsPartitionInfo, pub part_info: OsPartitionInfo,
pub rootfs: TmpMountGuard, pub rootfs: TmpMountGuard,
pub mok_enrolled: bool,
} }
pub async fn install_os_to( pub async fn install_os_to(
@@ -199,7 +143,7 @@ pub async fn install_os_to(
let part_info = partition(disk_path, capacity, partition_table, protect, use_efi).await?; let part_info = partition(disk_path, capacity, partition_table, protect, use_efi).await?;
if let Some(efi) = &part_info.efi { if let Some(efi) = part_info.extra_boot.get("efi") {
Command::new("mkfs.vfat") Command::new("mkfs.vfat")
.arg(efi) .arg(efi)
.invoke(crate::ErrorKind::DiskManagement) .invoke(crate::ErrorKind::DiskManagement)
@@ -230,6 +174,7 @@ pub async fn install_os_to(
delete_file(guard.path().join("config/upgrade")).await?; delete_file(guard.path().join("config/upgrade")).await?;
delete_file(guard.path().join("config/overlay/etc/hostname")).await?; delete_file(guard.path().join("config/overlay/etc/hostname")).await?;
delete_file(guard.path().join("config/disk.guid")).await?; delete_file(guard.path().join("config/disk.guid")).await?;
delete_dir(guard.path().join("config/lib/modules")).await?;
Command::new("cp") Command::new("cp")
.arg("-r") .arg("-r")
.arg(guard.path().join("config")) .arg(guard.path().join("config"))
@@ -265,9 +210,7 @@ pub async fn install_os_to(
let config_path = rootfs.path().join("config"); let config_path = rootfs.path().join("config");
if tokio::fs::metadata("/tmp/config.bak").await.is_ok() { if tokio::fs::metadata("/tmp/config.bak").await.is_ok() {
if tokio::fs::metadata(&config_path).await.is_ok() { crate::util::io::delete_dir(&config_path).await?;
tokio::fs::remove_dir_all(&config_path).await?;
}
Command::new("cp") Command::new("cp")
.arg("-r") .arg("-r")
.arg("/tmp/config.bak") .arg("/tmp/config.bak")
@@ -317,10 +260,7 @@ pub async fn install_os_to(
tokio::fs::write( tokio::fs::write(
rootfs.path().join("config/config.yaml"), rootfs.path().join("config/config.yaml"),
IoFormat::Yaml.to_vec(&ServerConfig { IoFormat::Yaml.to_vec(&ServerConfig::default())?,
os_partitions: Some(part_info.clone()),
..Default::default()
})?,
) )
.await?; .await?;
@@ -339,7 +279,7 @@ pub async fn install_os_to(
ReadWrite, ReadWrite,
) )
.await?; .await?;
let efi = if let Some(efi) = &part_info.efi { let efi = if let Some(efi) = part_info.extra_boot.get("efi") {
Some( Some(
MountGuard::mount( MountGuard::mount(
&BlockDev::new(efi), &BlockDev::new(efi),
@@ -380,8 +320,8 @@ pub async fn install_os_to(
include_str!("fstab.template"), include_str!("fstab.template"),
boot = part_info.boot.display(), boot = part_info.boot.display(),
efi = part_info efi = part_info
.efi .extra_boot
.as_ref() .get("efi")
.map(|p| p.display().to_string()) .map(|p| p.display().to_string())
.unwrap_or_else(|| "# N/A".to_owned()), .unwrap_or_else(|| "# N/A".to_owned()),
root = part_info.root.display(), root = part_info.root.display(),
@@ -402,6 +342,28 @@ pub async fn install_os_to(
.invoke(crate::ErrorKind::OpenSsh) .invoke(crate::ErrorKind::OpenSsh)
.await?; .await?;
// Secure Boot: generate MOK key, sign unsigned modules, enroll MOK
let mut mok_enrolled = false;
if use_efi && crate::util::mok::is_secure_boot_enabled().await {
let new_key = crate::util::mok::ensure_dkms_key(overlay.path()).await?;
tracing::info!(
"DKMS MOK key: {}",
if new_key {
"generated"
} else {
"already exists"
}
);
crate::util::mok::sign_unsigned_modules(overlay.path()).await?;
let mok_pub = overlay.path().join(crate::util::mok::DKMS_MOK_PUB.trim_start_matches('/'));
match crate::util::mok::enroll_mok(&mok_pub).await {
Ok(enrolled) => mok_enrolled = enrolled,
Err(e) => tracing::warn!("MOK enrollment failed: {e}"),
}
}
let mut install = Command::new("chroot"); let mut install = Command::new("chroot");
install.arg(overlay.path()).arg("grub-install"); install.arg(overlay.path()).arg("grub-install");
if !use_efi { if !use_efi {
@@ -443,7 +405,11 @@ pub async fn install_os_to(
tokio::fs::remove_dir_all(&work).await?; tokio::fs::remove_dir_all(&work).await?;
lower.unmount().await?; lower.unmount().await?;
Ok(InstallOsResult { part_info, rootfs }) Ok(InstallOsResult {
part_info,
rootfs,
mok_enrolled,
})
} }
pub async fn install_os( pub async fn install_os(
@@ -486,21 +452,11 @@ pub async fn install_os(
let use_efi = tokio::fs::metadata("/sys/firmware/efi").await.is_ok(); let use_efi = tokio::fs::metadata("/sys/firmware/efi").await.is_ok();
// Save the boot entry we booted from (the USB installer) before grub-install let InstallOsResult {
// overwrites the boot order. part_info,
let boot_current = if use_efi { rootfs,
match get_efi_boot_current().await { mok_enrolled,
Ok(entry) => entry, } = install_os_to(
Err(e) => {
tracing::warn!("Failed to get EFI BootCurrent: {e}");
None
}
}
} else {
None
};
let InstallOsResult { part_info, rootfs } = install_os_to(
"/run/live/medium/live/filesystem.squashfs", "/run/live/medium/live/filesystem.squashfs",
&disk.logicalname, &disk.logicalname,
disk.capacity, disk.capacity,
@@ -511,24 +467,8 @@ pub async fn install_os(
) )
.await?; .await?;
// grub-install prepends its new entry to the EFI boot order, overriding the
// USB-first priority. Promote the USB entry (identified by BootCurrent from
// when we booted the installer) back to first, and persist the entry number
// so the upgrade script can do the same.
if let Some(ref entry) = boot_current {
if let Err(e) = promote_efi_entry(entry).await {
tracing::warn!("Failed to restore EFI boot order: {e}");
}
let efi_entry_path = rootfs.path().join("config/efi-installer-entry");
if let Err(e) = tokio::fs::write(&efi_entry_path, entry).await {
tracing::warn!("Failed to save EFI installer entry number: {e}");
}
}
ctx.config
.mutate(|c| c.os_partitions = Some(part_info.clone()));
let mut setup_info = SetupInfo::default(); let mut setup_info = SetupInfo::default();
setup_info.mok_enrolled = mok_enrolled;
if let Some(data_drive) = data_drive { if let Some(data_drive) = data_drive {
let mut logicalname = &*data_drive.logicalname; let mut logicalname = &*data_drive.logicalname;
@@ -612,7 +552,11 @@ pub async fn cli_install_os(
let use_efi = efi.unwrap_or_else(|| !matches!(partition_table, Some(PartitionTable::Mbr))); let use_efi = efi.unwrap_or_else(|| !matches!(partition_table, Some(PartitionTable::Mbr)));
let InstallOsResult { part_info, rootfs } = install_os_to( let InstallOsResult {
part_info,
rootfs,
mok_enrolled: _,
} = install_os_to(
&squashfs, &squashfs,
&disk, &disk,
capacity, capacity,


@@ -141,7 +141,7 @@ impl RegistryContext {
listen: config.registry_listen.unwrap_or(DEFAULT_REGISTRY_LISTEN), listen: config.registry_listen.unwrap_or(DEFAULT_REGISTRY_LISTEN),
db, db,
datadir, datadir,
rpc_continuations: RpcContinuations::new(), rpc_continuations: RpcContinuations::new(None),
client: Client::builder() client: Client::builder()
.proxy(Proxy::custom(move |url| { .proxy(Proxy::custom(move |url| {
if url.host_str().map_or(false, |h| h.ends_with(".onion")) { if url.host_str().map_or(false, |h| h.ends_with(".onion")) {


@@ -59,8 +59,7 @@ pub struct AddPackageSignerParams {
#[ts(type = "string | null")] #[ts(type = "string | null")]
pub versions: Option<VersionRange>, pub versions: Option<VersionRange>,
#[arg(long, help = "help.arg.merge")] #[arg(long, help = "help.arg.merge")]
#[ts(optional)] pub merge: bool,
pub merge: Option<bool>,
} }
pub async fn add_package_signer( pub async fn add_package_signer(
@@ -89,7 +88,7 @@ pub async fn add_package_signer(
.as_authorized_mut() .as_authorized_mut()
.upsert(&signer, || Ok(VersionRange::None))? .upsert(&signer, || Ok(VersionRange::None))?
.mutate(|existing| { .mutate(|existing| {
*existing = if merge.unwrap_or(false) { *existing = if merge {
VersionRange::or(existing.clone(), versions) VersionRange::or(existing.clone(), versions)
} else { } else {
versions versions


@@ -17,6 +17,7 @@ use ts_rs::TS;
#[allow(unused_imports)] #[allow(unused_imports)]
use crate::prelude::*; use crate::prelude::*;
use crate::shutdown::Shutdown;
use crate::util::future::TimedResource; use crate::util::future::TimedResource;
use crate::util::net::WebSocket; use crate::util::net::WebSocket;
use crate::util::{FromStrParser, new_guid}; use crate::util::{FromStrParser, new_guid};
@@ -98,12 +99,15 @@ pub type RestHandler = Box<dyn FnOnce(Request) -> RestFuture + Send>;
pub struct WebSocketFuture { pub struct WebSocketFuture {
kill: Option<broadcast::Receiver<()>>, kill: Option<broadcast::Receiver<()>>,
shutdown: Option<broadcast::Receiver<Option<Shutdown>>>,
fut: BoxFuture<'static, ()>, fut: BoxFuture<'static, ()>,
} }
impl Future for WebSocketFuture { impl Future for WebSocketFuture {
type Output = (); type Output = ();
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
if self.kill.as_ref().map_or(false, |k| !k.is_empty()) { if self.kill.as_ref().map_or(false, |k| !k.is_empty())
|| self.shutdown.as_ref().map_or(false, |s| !s.is_empty())
{
Poll::Ready(()) Poll::Ready(())
} else { } else {
self.fut.poll_unpin(cx) self.fut.poll_unpin(cx)
@@ -138,6 +142,7 @@ impl RpcContinuation {
RpcContinuation::WebSocket(TimedResource::new( RpcContinuation::WebSocket(TimedResource::new(
Box::new(|ws| WebSocketFuture { Box::new(|ws| WebSocketFuture {
kill: None, kill: None,
shutdown: None,
fut: handler(ws.into()).boxed(), fut: handler(ws.into()).boxed(),
}), }),
timeout, timeout,
@@ -170,6 +175,7 @@ impl RpcContinuation {
RpcContinuation::WebSocket(TimedResource::new( RpcContinuation::WebSocket(TimedResource::new(
Box::new(|ws| WebSocketFuture { Box::new(|ws| WebSocketFuture {
kill, kill,
shutdown: None,
fut: handler(ws.into()).boxed(), fut: handler(ws.into()).boxed(),
}), }),
timeout, timeout,
@@ -183,15 +189,21 @@ impl RpcContinuation {
} }
} }
pub struct RpcContinuations(AsyncMutex<BTreeMap<Guid, RpcContinuation>>); pub struct RpcContinuations {
continuations: AsyncMutex<BTreeMap<Guid, RpcContinuation>>,
shutdown: Option<broadcast::Sender<Option<Shutdown>>>,
}
impl RpcContinuations { impl RpcContinuations {
pub fn new() -> Self { pub fn new(shutdown: Option<broadcast::Sender<Option<Shutdown>>>) -> Self {
RpcContinuations(AsyncMutex::new(BTreeMap::new())) RpcContinuations {
continuations: AsyncMutex::new(BTreeMap::new()),
shutdown,
}
} }
#[instrument(skip_all)] #[instrument(skip_all)]
pub async fn clean(&self) { pub async fn clean(&self) {
let mut continuations = self.0.lock().await; let mut continuations = self.continuations.lock().await;
let mut to_remove = Vec::new(); let mut to_remove = Vec::new();
for (guid, cont) in &*continuations { for (guid, cont) in &*continuations {
if cont.is_timed_out() { if cont.is_timed_out() {
@@ -206,23 +218,28 @@ impl RpcContinuations {
#[instrument(skip_all)] #[instrument(skip_all)]
pub async fn add(&self, guid: Guid, handler: RpcContinuation) { pub async fn add(&self, guid: Guid, handler: RpcContinuation) {
self.clean().await; self.clean().await;
self.0.lock().await.insert(guid, handler); self.continuations.lock().await.insert(guid, handler);
} }
pub async fn get_ws_handler(&self, guid: &Guid) -> Option<WebSocketHandler> { pub async fn get_ws_handler(&self, guid: &Guid) -> Option<WebSocketHandler> {
let mut continuations = self.0.lock().await; let mut continuations = self.continuations.lock().await;
if !matches!(continuations.get(guid), Some(RpcContinuation::WebSocket(_))) { if !matches!(continuations.get(guid), Some(RpcContinuation::WebSocket(_))) {
return None; return None;
} }
let Some(RpcContinuation::WebSocket(x)) = continuations.remove(guid) else { let Some(RpcContinuation::WebSocket(x)) = continuations.remove(guid) else {
return None; return None;
}; };
x.get().await let handler = x.get().await?;
let shutdown = self.shutdown.as_ref().map(|s| s.subscribe());
Some(Box::new(move |ws| {
let mut fut = handler(ws);
fut.shutdown = shutdown;
fut
}))
} }
pub async fn get_rest_handler(&self, guid: &Guid) -> Option<RestHandler> { pub async fn get_rest_handler(&self, guid: &Guid) -> Option<RestHandler> {
let mut continuations: tokio::sync::MutexGuard<'_, BTreeMap<Guid, RpcContinuation>> = let mut continuations = self.continuations.lock().await;
self.0.lock().await;
if !matches!(continuations.get(guid), Some(RpcContinuation::Rest(_))) { if !matches!(continuations.get(guid), Some(RpcContinuation::Rest(_))) {
return None; return None;
} }

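The `WebSocketFuture` change above adds a second early-exit signal: the future now resolves as soon as either the per-connection kill channel or the server-wide shutdown broadcast has a pending message, rather than waiting on the inner handler. A std-only sketch of that polling shape, with the broadcast receivers modeled as `AtomicBool` flags and a minimal no-op waker to drive it synchronously:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct WsFuture<F> {
    kill: Option<Arc<AtomicBool>>,
    shutdown: Option<Arc<AtomicBool>>,
    fut: Pin<Box<F>>,
}

impl<F: Future<Output = ()>> Future for WsFuture<F> {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let signalled =
            |s: &Option<Arc<AtomicBool>>| s.as_ref().map_or(false, |f| f.load(Ordering::SeqCst));
        // Either signal short-circuits the inner handler, as in the diff's
        // `kill ... || shutdown ...` check.
        if signalled(&self.kill) || signalled(&self.shutdown) {
            Poll::Ready(())
        } else {
            self.fut.as_mut().poll(cx)
        }
    }
}

// Minimal no-op waker so the example can poll without an async runtime.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(|_| raw(), |_| {}, |_| {}, |_| {});
    unsafe { Waker::from_raw(raw()) }
}
```

In the real code the shutdown receiver is attached lazily in `get_ws_handler`, which is why `RpcContinuations` now carries the broadcast sender.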

@@ -17,6 +17,7 @@ use crate::s9pk::manifest::{HardwareRequirements, Manifest};
use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile; use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile;
use crate::s9pk::v2::SIG_CONTEXT; use crate::s9pk::v2::SIG_CONTEXT;
use crate::s9pk::v2::pack::ImageConfig; use crate::s9pk::v2::pack::ImageConfig;
use crate::sign::commitment::merkle_archive::MerkleArchiveCommitment;
use crate::util::io::{TmpDir, create_file, open_file}; use crate::util::io::{TmpDir, create_file, open_file};
use crate::util::serde::{HandlerExtSerde, apply_expr}; use crate::util::serde::{HandlerExtSerde, apply_expr};
use crate::util::{Apply, Invoke}; use crate::util::{Apply, Invoke};
@@ -131,6 +132,13 @@ fn inspect() -> ParentHandler<CliContext, S9pkPath> {
.with_display_serializable() .with_display_serializable()
.with_about("about.display-s9pk-manifest"), .with_about("about.display-s9pk-manifest"),
) )
.subcommand(
"commitment",
from_fn_async(inspect_commitment)
.with_inherited(only_parent)
.with_display_serializable()
.with_about("about.display-s9pk-root-sighash-and-maxsize"),
)
} }
#[derive(Deserialize, Serialize, Parser, TS)] #[derive(Deserialize, Serialize, Parser, TS)]
@@ -262,6 +270,15 @@ async fn inspect_manifest(
    Ok(s9pk.as_manifest().clone())
}
+async fn inspect_commitment(
+    _: CliContext,
+    _: Empty,
+    S9pkPath { s9pk: s9pk_path }: S9pkPath,
+) -> Result<MerkleArchiveCommitment, Error> {
+    let s9pk = super::S9pk::open(&s9pk_path, None).await?;
+    s9pk.as_archive().commitment().await
+}
async fn convert(ctx: CliContext, S9pkPath { s9pk: s9pk_path }: S9pkPath) -> Result<(), Error> {
    let mut s9pk = super::load(
        MultiCursorFile::from(open_file(&s9pk_path).await?),


@@ -1,6 +1,6 @@
use std::cmp::min;
use std::collections::{BTreeMap, BTreeSet};
-use std::sync::{Arc, Mutex, Weak};
+use std::sync::{Arc, Weak};
use std::time::{Duration, SystemTime};
use clap::Parser;
@@ -8,185 +8,72 @@ use futures::future::join_all;
use imbl::{OrdMap, Vector, vector};
use imbl_value::InternedString;
use patch_db::TypedDbWatch;
-use patch_db::json_ptr::JsonPointer;
use serde::{Deserialize, Serialize};
use tracing::warn;
use ts_rs::TS;
-use crate::db::model::Database;
+use crate::db::model::package::PackageState;
use crate::db::model::public::NetworkInterfaceInfo;
+use crate::net::host::Host;
+use crate::net::service_interface::ServiceInterface;
use crate::net::ssl::FullchainCertData;
use crate::prelude::*;
use crate::service::effects::context::EffectContext;
use crate::service::effects::net::ssl::Algorithm;
use crate::service::rpc::{CallbackHandle, CallbackId};
use crate::service::{Service, ServiceActorSeed};
+use crate::status::StatusInfo;
use crate::util::collections::EqMap;
use crate::util::future::NonDetachingJoinHandle;
+use crate::util::sync::SyncMutex;
use crate::{GatewayId, HostId, PackageId, ServiceInterfaceId};
-#[derive(Default)]
-pub struct ServiceCallbacks(Mutex<ServiceCallbackMap>);
-
-#[derive(Default)]
-struct ServiceCallbackMap {
-    get_service_interface: BTreeMap<(PackageId, ServiceInterfaceId), Vec<CallbackHandler>>,
-    list_service_interfaces: BTreeMap<PackageId, Vec<CallbackHandler>>,
-    get_system_smtp: Vec<CallbackHandler>,
-    get_host_info:
-        BTreeMap<(PackageId, HostId), (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>,
-    get_ssl_certificate: EqMap<
-        (BTreeSet<InternedString>, FullchainCertData, Algorithm),
-        (NonDetachingJoinHandle<()>, Vec<CallbackHandler>),
-    >,
-    get_status: BTreeMap<PackageId, Vec<CallbackHandler>>,
-    get_container_ip: BTreeMap<PackageId, Vec<CallbackHandler>>,
-    get_service_manifest: BTreeMap<PackageId, Vec<CallbackHandler>>,
-    get_outbound_gateway: BTreeMap<PackageId, (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>,
-}
-
-impl ServiceCallbacks {
-    fn mutate<T>(&self, f: impl FnOnce(&mut ServiceCallbackMap) -> T) -> T {
-        let mut this = self.0.lock().unwrap();
-        f(&mut *this)
-    }
-
-    pub fn gc(&self) {
-        self.mutate(|this| {
-            this.get_service_interface.retain(|_, v| {
-                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-                !v.is_empty()
-            });
-            this.list_service_interfaces.retain(|_, v| {
-                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-                !v.is_empty()
-            });
-            this.get_system_smtp
-                .retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-            this.get_host_info.retain(|_, (_, v)| {
-                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-                !v.is_empty()
-            });
-            this.get_ssl_certificate.retain(|_, (_, v)| {
-                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-                !v.is_empty()
-            });
-            this.get_status.retain(|_, v| {
-                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-                !v.is_empty()
-            });
-            this.get_service_manifest.retain(|_, v| {
-                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-                !v.is_empty()
-            });
-            this.get_outbound_gateway.retain(|_, (_, v)| {
-                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
-                !v.is_empty()
-            });
-        })
-    }
-
-    pub(super) fn add_get_service_interface(
-        &self,
-        package_id: PackageId,
-        service_interface_id: ServiceInterfaceId,
-        handler: CallbackHandler,
-    ) {
-        self.mutate(|this| {
-            this.get_service_interface
-                .entry((package_id, service_interface_id))
-                .or_default()
-                .push(handler);
-        })
-    }
-
-    #[must_use]
-    pub fn get_service_interface(
-        &self,
-        id: &(PackageId, ServiceInterfaceId),
-    ) -> Option<CallbackHandlers> {
-        self.mutate(|this| {
-            Some(CallbackHandlers(
-                this.get_service_interface.remove(id).unwrap_or_default(),
-            ))
-            .filter(|cb| !cb.0.is_empty())
-        })
-    }
-
-    pub(super) fn add_list_service_interfaces(
-        &self,
-        package_id: PackageId,
-        handler: CallbackHandler,
-    ) {
-        self.mutate(|this| {
-            this.list_service_interfaces
-                .entry(package_id)
-                .or_default()
-                .push(handler);
-        })
-    }
-
-    #[must_use]
-    pub fn list_service_interfaces(&self, id: &PackageId) -> Option<CallbackHandlers> {
-        self.mutate(|this| {
-            Some(CallbackHandlers(
-                this.list_service_interfaces.remove(id).unwrap_or_default(),
-            ))
-            .filter(|cb| !cb.0.is_empty())
-        })
-    }
-
-    pub(super) fn add_get_system_smtp(&self, handler: CallbackHandler) {
-        self.mutate(|this| {
-            this.get_system_smtp.push(handler);
-        })
-    }
-
-    #[must_use]
-    pub fn get_system_smtp(&self) -> Option<CallbackHandlers> {
-        self.mutate(|this| {
-            Some(CallbackHandlers(std::mem::take(&mut this.get_system_smtp)))
-                .filter(|cb| !cb.0.is_empty())
-        })
-    }
-
-    pub(super) fn add_get_host_info(
-        self: &Arc<Self>,
-        db: &TypedPatchDb<Database>,
-        package_id: PackageId,
-        host_id: HostId,
-        handler: CallbackHandler,
-    ) {
-        self.mutate(|this| {
-            this.get_host_info
-                .entry((package_id.clone(), host_id.clone()))
-                .or_insert_with(|| {
-                    let ptr: JsonPointer =
-                        format!("/public/packageData/{}/hosts/{}", package_id, host_id)
-                            .parse()
-                            .expect("valid json pointer");
-                    let db = db.clone();
-                    let callbacks = Arc::clone(self);
-                    let key = (package_id, host_id);
-                    (
-                        tokio::spawn(async move {
-                            let mut sub = db.subscribe(ptr).await;
-                            while sub.recv().await.is_some() {
-                                if let Some(cbs) = callbacks.mutate(|this| {
-                                    this.get_host_info
-                                        .remove(&key)
-                                        .map(|(_, handlers)| CallbackHandlers(handlers))
-                                        .filter(|cb| !cb.0.is_empty())
-                                }) {
-                                    if let Err(e) = cbs.call(vector![]).await {
-                                        tracing::error!("Error in host info callback: {e}");
-                                        tracing::debug!("{e:?}");
-                                    }
-                                }
-                            }
-                        })
-                        .into(),
+/// Abstraction for callbacks that are triggered by patchdb subscriptions.
+///
+/// Handles the subscribe-wait-fire-remove pattern: when a callback is first
+/// registered for a key, a patchdb subscription is spawned. When the subscription
+/// fires, all handlers are consumed and invoked, then the subscription stops.
+/// A new subscription is created if a handler is registered again.
+pub struct DbWatchedCallbacks<K: Ord> {
+    label: &'static str,
+    inner: SyncMutex<BTreeMap<K, (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>>,
+}
+
+impl<K: Ord + Clone + Send + Sync + 'static> DbWatchedCallbacks<K> {
+    pub fn new(label: &'static str) -> Self {
+        Self {
+            label,
+            inner: SyncMutex::new(BTreeMap::new()),
+        }
+    }
+
+    pub fn add<T: Send + 'static>(
+        self: &Arc<Self>,
+        key: K,
+        watch: TypedDbWatch<T>,
+        handler: CallbackHandler,
+    ) {
+        self.inner.mutate(|map| {
+            map.entry(key.clone())
+                .or_insert_with(|| {
+                    let this = Arc::clone(self);
+                    let k = key;
+                    let label = self.label;
+                    (
+                        tokio::spawn(async move {
+                            let mut watch = watch.untyped();
+                            while watch.changed().await.is_ok() {
+                                if let Some(cbs) = this.inner.mutate(|map| {
+                                    map.remove(&k)
+                                        .map(|(_, handlers)| CallbackHandlers(handlers))
+                                        .filter(|cb| !cb.0.is_empty())
+                                }) {
+                                    let value = watch.peek_and_mark_seen().unwrap_or_default();
+                                    if let Err(e) = cbs.call(vector![value]).await {
+                                        tracing::error!("Error in {label} callback: {e}");
+                                        tracing::debug!("{e:?}");
+                                    }
+                                }
+                                // entry was removed when we consumed handlers,
+                                // so stop watching — a new subscription will be
+                                // created if the service re-registers
+                                break;
+                            }
+                        })
+                        .into(),
@@ -198,6 +85,113 @@ impl ServiceCallbacks {
        })
    }
+    pub fn gc(&self) {
+        self.inner.mutate(|map| {
+            map.retain(|_, (_, v)| {
+                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
+                !v.is_empty()
+            });
+        })
+    }
+}
+
+pub struct ServiceCallbacks {
+    inner: SyncMutex<ServiceCallbackMap>,
+    get_host_info: Arc<DbWatchedCallbacks<(PackageId, HostId)>>,
+    get_status: Arc<DbWatchedCallbacks<PackageId>>,
+    get_service_interface: Arc<DbWatchedCallbacks<(PackageId, ServiceInterfaceId)>>,
+    list_service_interfaces: Arc<DbWatchedCallbacks<PackageId>>,
+    get_system_smtp: Arc<DbWatchedCallbacks<()>>,
+    get_service_manifest: Arc<DbWatchedCallbacks<PackageId>>,
+}
+
+impl Default for ServiceCallbacks {
+    fn default() -> Self {
+        Self {
+            inner: SyncMutex::new(ServiceCallbackMap::default()),
+            get_host_info: Arc::new(DbWatchedCallbacks::new("host info")),
+            get_status: Arc::new(DbWatchedCallbacks::new("get_status")),
+            get_service_interface: Arc::new(DbWatchedCallbacks::new("get_service_interface")),
+            list_service_interfaces: Arc::new(DbWatchedCallbacks::new("list_service_interfaces")),
+            get_system_smtp: Arc::new(DbWatchedCallbacks::new("get_system_smtp")),
+            get_service_manifest: Arc::new(DbWatchedCallbacks::new("get_service_manifest")),
+        }
+    }
+}
+
+#[derive(Default)]
+struct ServiceCallbackMap {
+    get_ssl_certificate: EqMap<
+        (BTreeSet<InternedString>, FullchainCertData, Algorithm),
+        (NonDetachingJoinHandle<()>, Vec<CallbackHandler>),
+    >,
+    get_container_ip: BTreeMap<PackageId, Vec<CallbackHandler>>,
+    get_outbound_gateway: BTreeMap<PackageId, (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>,
+}
+
+impl ServiceCallbacks {
+    fn mutate<T>(&self, f: impl FnOnce(&mut ServiceCallbackMap) -> T) -> T {
+        self.inner.mutate(f)
+    }
+
+    pub fn gc(&self) {
+        self.mutate(|this| {
+            this.get_ssl_certificate.retain(|_, (_, v)| {
+                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
+                !v.is_empty()
+            });
+            this.get_outbound_gateway.retain(|_, (_, v)| {
+                v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
+                !v.is_empty()
+            });
+        });
+        self.get_host_info.gc();
+        self.get_status.gc();
+        self.get_service_interface.gc();
+        self.list_service_interfaces.gc();
+        self.get_system_smtp.gc();
+        self.get_service_manifest.gc();
+    }
+
+    pub(super) fn add_get_service_interface(
+        &self,
+        package_id: PackageId,
+        service_interface_id: ServiceInterfaceId,
+        watch: TypedDbWatch<ServiceInterface>,
+        handler: CallbackHandler,
+    ) {
+        self.get_service_interface
+            .add((package_id, service_interface_id), watch, handler);
+    }
+
+    pub(super) fn add_list_service_interfaces<T: Send + 'static>(
+        &self,
+        package_id: PackageId,
+        watch: TypedDbWatch<T>,
+        handler: CallbackHandler,
+    ) {
+        self.list_service_interfaces.add(package_id, watch, handler);
+    }
+
+    pub(super) fn add_get_system_smtp<T: Send + 'static>(
+        &self,
+        watch: TypedDbWatch<T>,
+        handler: CallbackHandler,
+    ) {
+        self.get_system_smtp.add((), watch, handler);
+    }
+
+    pub(super) fn add_get_host_info(
+        &self,
+        package_id: PackageId,
+        host_id: HostId,
+        watch: TypedDbWatch<Host>,
+        handler: CallbackHandler,
+    ) {
+        self.get_host_info
+            .add((package_id, host_id), watch, handler);
+    }
    pub(super) fn add_get_ssl_certificate(
        &self,
        ctx: EffectContext,
@@ -256,19 +250,14 @@ impl ServiceCallbacks {
            .push(handler);
        })
    }
-    pub(super) fn add_get_status(&self, package_id: PackageId, handler: CallbackHandler) {
-        self.mutate(|this| this.get_status.entry(package_id).or_default().push(handler))
-    }
-    #[must_use]
-    pub fn get_status(&self, package_id: &PackageId) -> Option<CallbackHandlers> {
-        self.mutate(|this| {
-            if let Some(watched) = this.get_status.remove(package_id) {
-                Some(CallbackHandlers(watched))
-            } else {
-                None
-            }
-            .filter(|cb| !cb.0.is_empty())
-        })
-    }
+    pub(super) fn add_get_status(
+        &self,
+        package_id: PackageId,
+        watch: TypedDbWatch<StatusInfo>,
+        handler: CallbackHandler,
+    ) {
+        self.get_status.add(package_id, watch, handler);
+    }
    pub(super) fn add_get_container_ip(&self, package_id: PackageId, handler: CallbackHandler) {
@@ -345,23 +334,13 @@ impl ServiceCallbacks {
        })
    }
-    pub(super) fn add_get_service_manifest(&self, package_id: PackageId, handler: CallbackHandler) {
-        self.mutate(|this| {
-            this.get_service_manifest
-                .entry(package_id)
-                .or_default()
-                .push(handler)
-        })
-    }
-    #[must_use]
-    pub fn get_service_manifest(&self, package_id: &PackageId) -> Option<CallbackHandlers> {
-        self.mutate(|this| {
-            this.get_service_manifest
-                .remove(package_id)
-                .map(CallbackHandlers)
-                .filter(|cb| !cb.0.is_empty())
-        })
-    }
+    pub(super) fn add_get_service_manifest(
+        &self,
+        package_id: PackageId,
+        watch: TypedDbWatch<PackageState>,
+        handler: CallbackHandler,
+    ) {
+        self.get_service_manifest.add(package_id, watch, handler);
+    }
}
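The `DbWatchedCallbacks` type above generalizes one lifecycle: handlers accumulate under a key, the first change event consumes and fires all of them, and the entry disappears so a later registration starts fresh. A minimal std-only sketch of that consume-on-fire behavior (names here are hypothetical; the real type is keyed by patchdb watches and tokio tasks rather than synchronous calls):

```rust
use std::collections::BTreeMap;
use std::sync::Mutex;

type Handler = Box<dyn FnOnce(&str) + Send>;

/// Handlers accumulate per key; the first "fire" for a key consumes them all.
#[derive(Default)]
struct WatchedCallbacks {
    inner: Mutex<BTreeMap<String, Vec<Handler>>>,
}

impl WatchedCallbacks {
    fn add(&self, key: &str, handler: Handler) {
        self.inner
            .lock()
            .unwrap()
            .entry(key.to_string())
            .or_default()
            .push(handler);
    }

    /// Simulates the subscription firing: the entry is removed, so a later
    /// `add` for the same key starts a fresh "subscription". Returns how
    /// many handlers ran.
    fn fire(&self, key: &str, value: &str) -> usize {
        let handlers = self.inner.lock().unwrap().remove(key).unwrap_or_default();
        let n = handlers.len();
        for h in handlers {
            h(value);
        }
        n
    }
}

fn main() {
    let cbs = WatchedCallbacks::default();
    cbs.add("status", Box::new(|v| println!("status changed: {v}")));
    cbs.add("status", Box::new(|v| println!("also saw: {v}")));
    assert_eq!(cbs.fire("status", "running"), 2);
    // Entry was consumed: a second fire finds no handlers.
    assert_eq!(cbs.fire("status", "stopped"), 0);
}
```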


@@ -80,27 +80,32 @@ pub async fn get_status(
        package_id,
        callback,
    }: GetStatusParams,
-) -> Result<StatusInfo, Error> {
+) -> Result<Option<StatusInfo>, Error> {
    let context = context.deref()?;
    let id = package_id.unwrap_or_else(|| context.seed.id.clone());
-    let db = context.seed.ctx.db.peek().await;
+    let ptr = format!("/public/packageData/{}/statusInfo", id)
+        .parse()
+        .expect("valid json pointer");
+    let mut watch = context
+        .seed
+        .ctx
+        .db
+        .watch(ptr)
+        .await
+        .typed::<StatusInfo>();
+    let status = watch.peek_and_mark_seen()?.de().ok();
    if let Some(callback) = callback {
        let callback = callback.register(&context.seed.persistent_container);
        context.seed.ctx.callbacks.add_get_status(
            id.clone(),
+            watch,
            super::callbacks::CallbackHandler::new(&context, callback),
        );
    }
-    let status = db
-        .as_public()
-        .as_package_data()
-        .as_idx(&id)
-        .or_not_found(&id)?
-        .as_status_info()
-        .de()?;
    Ok(status)
}
@@ -158,7 +163,7 @@ pub async fn set_main_status(
    if prev.is_none() && status == SetMainStatusStatus::Running {
        s.as_desired_mut().map_mutate(|s| {
            Ok(match s {
-                DesiredStatus::Restarting => DesiredStatus::Running,
+                DesiredStatus::Restarting { .. } => DesiredStatus::Running,
                x => x,
            })
        })?;
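A recurring shape in these reworked effect handlers: create the watch first, read the current value via `peek_and_mark_seen`, and only then register the callback, so no write can slip in unnoticed between the read and the subscription. A simplified std-only illustration of why that ordering works (the `Cell`/`Watch` types are stand-ins; patchdb's real watch API differs):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

/// A value cell with a version counter, standing in for a patchdb pointer.
struct Cell {
    version: AtomicU64,
}

/// A "watch" remembers the version it has already seen.
struct Watch {
    cell: Arc<Cell>,
    seen: u64,
}

impl Watch {
    fn new(cell: &Arc<Cell>) -> Self {
        Self { cell: Arc::clone(cell), seen: 0 }
    }

    /// Like peek_and_mark_seen: read the current value and mark this
    /// version as consumed, so only later writes count as "changed".
    fn peek_and_mark_seen(&mut self) -> u64 {
        let v = self.cell.version.load(Ordering::SeqCst);
        self.seen = v;
        v
    }

    /// Has anything changed since we last peeked?
    fn changed(&self) -> bool {
        self.cell.version.load(Ordering::SeqCst) != self.seen
    }
}

fn main() {
    let cell = Arc::new(Cell { version: AtomicU64::new(1) });

    // Watch first, then peek: the snapshot and the subscription baseline
    // are taken atomically with respect to each other.
    let mut watch = Watch::new(&cell);
    assert_eq!(watch.peek_and_mark_seen(), 1);
    assert!(!watch.changed());

    // A write after the peek is not lost: the watch reports it.
    cell.version.fetch_add(1, Ordering::SeqCst);
    assert!(watch.changed());
}
```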


@@ -399,27 +399,38 @@ pub async fn get_service_manifest(
        callback,
    }: GetServiceManifestParams,
) -> Result<Manifest, Error> {
+    use crate::db::model::package::PackageState;
    let context = context.deref()?;
+    let ptr = format!("/public/packageData/{}/stateInfo", package_id)
+        .parse()
+        .expect("valid json pointer");
+    let mut watch = context
+        .seed
+        .ctx
+        .db
+        .watch(ptr)
+        .await
+        .typed::<PackageState>();
+    let manifest = watch
+        .peek_and_mark_seen()?
+        .as_manifest(ManifestPreference::Old)
+        .de()?;
    if let Some(callback) = callback {
        let callback = callback.register(&context.seed.persistent_container);
        context
            .seed
            .ctx
            .callbacks
-            .add_get_service_manifest(package_id.clone(), CallbackHandler::new(&context, callback));
+            .add_get_service_manifest(
+                package_id.clone(),
+                watch,
+                CallbackHandler::new(&context, callback),
+            );
    }
-    let db = context.seed.ctx.db.peek().await;
-    let manifest = db
-        .as_public()
-        .as_package_data()
-        .as_idx(&package_id)
-        .or_not_found(&package_id)?
-        .as_state_info()
-        .as_manifest(ManifestPreference::New)
-        .de()?;
    Ok(manifest)
}


@@ -23,26 +23,30 @@ pub async fn get_host_info(
    }: GetHostInfoParams,
) -> Result<Option<Host>, Error> {
    let context = context.deref()?;
-    let db = context.seed.ctx.db.peek().await;
    let package_id = package_id.unwrap_or_else(|| context.seed.id.clone());
+    let ptr = format!("/public/packageData/{}/hosts/{}", package_id, host_id)
+        .parse()
+        .expect("valid json pointer");
+    let mut watch = context
+        .seed
+        .ctx
+        .db
+        .watch(ptr)
+        .await
+        .typed::<Host>();
+    let res = watch.peek_and_mark_seen()?.de().ok();
    if let Some(callback) = callback {
        let callback = callback.register(&context.seed.persistent_container);
        context.seed.ctx.callbacks.add_get_host_info(
-            &context.seed.ctx.db,
            package_id.clone(),
            host_id.clone(),
+            watch,
            CallbackHandler::new(&context, callback),
        );
    }
-    let res = db
-        .as_public()
-        .as_package_data()
-        .as_idx(&package_id)
-        .and_then(|m| m.as_hosts().as_idx(&host_id))
-        .map(|m| m.de())
-        .transpose()?;
    Ok(res)
}


@@ -1,7 +1,5 @@
use std::collections::BTreeMap;
-use imbl::vector;
use crate::net::service_interface::{AddressInfo, ServiceInterface, ServiceInterfaceType};
use crate::service::effects::callbacks::CallbackHandler;
use crate::service::effects::prelude::*;
@@ -42,7 +40,7 @@ pub async fn export_service_interface(
        interface_type: r#type,
    };
-    let res = context
+    context
        .seed
        .ctx
        .db
@@ -56,27 +54,8 @@ pub async fn export_service_interface(
            ifaces.insert(&id, &service_interface)?;
            Ok(())
        })
-        .await;
-    res.result?;
-    if res.revision.is_some() {
-        if let Some(callbacks) = context
-            .seed
-            .ctx
-            .callbacks
-            .get_service_interface(&(package_id.clone(), id))
-        {
-            callbacks.call(vector![]).await?;
-        }
-        if let Some(callbacks) = context
-            .seed
-            .ctx
-            .callbacks
-            .list_service_interfaces(&package_id)
-        {
-            callbacks.call(vector![]).await?;
-        }
-    }
+        .await
+        .result?;
    Ok(())
}
@@ -101,26 +80,34 @@ pub async fn get_service_interface(
) -> Result<Option<ServiceInterface>, Error> {
    let context = context.deref()?;
    let package_id = package_id.unwrap_or_else(|| context.seed.id.clone());
-    let db = context.seed.ctx.db.peek().await;
+    let ptr = format!(
+        "/public/packageData/{}/serviceInterfaces/{}",
+        package_id, service_interface_id
+    )
+    .parse()
+    .expect("valid json pointer");
+    let mut watch = context
+        .seed
+        .ctx
+        .db
+        .watch(ptr)
+        .await
+        .typed::<ServiceInterface>();
+    let res = watch.peek_and_mark_seen()?.de().ok();
    if let Some(callback) = callback {
        let callback = callback.register(&context.seed.persistent_container);
        context.seed.ctx.callbacks.add_get_service_interface(
            package_id.clone(),
            service_interface_id.clone(),
+            watch,
            CallbackHandler::new(&context, callback),
        );
    }
-    let interface = db
-        .as_public()
-        .as_package_data()
-        .as_idx(&package_id)
-        .and_then(|m| m.as_service_interfaces().as_idx(&service_interface_id))
-        .map(|m| m.de())
-        .transpose()?;
-    Ok(interface)
+    Ok(res)
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
@@ -142,27 +129,23 @@ pub async fn list_service_interfaces(
    let context = context.deref()?;
    let package_id = package_id.unwrap_or_else(|| context.seed.id.clone());
+    let ptr = format!("/public/packageData/{}/serviceInterfaces", package_id)
+        .parse()
+        .expect("valid json pointer");
+    let mut watch = context.seed.ctx.db.watch(ptr).await;
+    let res = imbl_value::from_value(watch.peek_and_mark_seen()?)
+        .unwrap_or_default();
    if let Some(callback) = callback {
        let callback = callback.register(&context.seed.persistent_container);
        context.seed.ctx.callbacks.add_list_service_interfaces(
            package_id.clone(),
+            watch.typed::<BTreeMap<ServiceInterfaceId, ServiceInterface>>(),
            CallbackHandler::new(&context, callback),
        );
    }
-    let res = context
-        .seed
-        .ctx
-        .db
-        .peek()
-        .await
-        .into_public()
-        .into_package_data()
-        .into_idx(&package_id)
-        .map(|m| m.into_service_interfaces().de())
-        .transpose()?
-        .unwrap_or_default();
    Ok(res)
}
@@ -180,52 +163,22 @@ pub async fn clear_service_interfaces(
    let context = context.deref()?;
    let package_id = context.seed.id.clone();
-    let res = context
+    context
        .seed
        .ctx
        .db
        .mutate(|db| {
-            let mut removed = Vec::new();
            db.as_public_mut()
                .as_package_data_mut()
                .as_idx_mut(&package_id)
                .or_not_found(&package_id)?
                .as_service_interfaces_mut()
                .mutate(|s| {
-                    Ok(s.retain(|id, _| {
-                        if except.contains(id) {
-                            true
-                        } else {
-                            removed.push(id.clone());
-                            false
-                        }
-                    }))
-                })?;
-            Ok(removed)
+                    Ok(s.retain(|id, _| except.contains(id)))
+                })
        })
-        .await;
-    let removed = res.result?;
-    if res.revision.is_some() {
-        for id in removed {
-            if let Some(callbacks) = context
-                .seed
-                .ctx
-                .callbacks
-                .get_service_interface(&(package_id.clone(), id))
-            {
-                callbacks.call(vector![]).await?;
-            }
-        }
-        if let Some(callbacks) = context
-            .seed
-            .ctx
-            .callbacks
-            .list_service_interfaces(&package_id)
-        {
-            callbacks.call(vector![]).await?;
-        }
-    }
+        .await
+        .result?;
    Ok(())
}


@@ -16,25 +16,25 @@ pub async fn get_system_smtp(
) -> Result<Option<SmtpValue>, Error> {
    let context = context.deref()?;
+    let ptr = "/public/serverInfo/smtp"
+        .parse()
+        .expect("valid json pointer");
+    let mut watch = context.seed.ctx.db.watch(ptr).await;
+    let res = imbl_value::from_value(watch.peek_and_mark_seen()?)
+        .with_kind(ErrorKind::Deserialization)?;
    if let Some(callback) = callback {
        let callback = callback.register(&context.seed.persistent_container);
        context
            .seed
            .ctx
            .callbacks
-            .add_get_system_smtp(CallbackHandler::new(&context, callback));
+            .add_get_system_smtp(
+                watch.typed::<Option<SmtpValue>>(),
+                CallbackHandler::new(&context, callback),
+            );
    }
-    let res = context
-        .seed
-        .ctx
-        .db
-        .peek()
-        .await
-        .into_public()
-        .into_server_info()
-        .into_smtp()
-        .de()?;
    Ok(res)
}


@@ -2,7 +2,7 @@ use std::path::Path;
use crate::DATA_DIR;
use crate::service::effects::prelude::*;
-use crate::util::io::{delete_file, maybe_read_file_to_string, write_file_atomic};
+use crate::util::io::{delete_file, write_file_atomic};
use crate::volume::PKG_VOLUME_DIR;
#[derive(Debug, Clone, Serialize, Deserialize, TS, Parser)]
@@ -36,11 +36,5 @@ pub async fn set_data_version(
#[instrument(skip_all)]
pub async fn get_data_version(context: EffectContext) -> Result<Option<String>, Error> {
    let context = context.deref()?;
-    let package_id = &context.seed.id;
-    let path = Path::new(DATA_DIR)
-        .join(PKG_VOLUME_DIR)
-        .join(package_id)
-        .join("data")
-        .join(".version");
-    maybe_read_file_to_string(path).await
+    crate::service::get_data_version(&context.seed.id).await
}


@@ -46,12 +46,14 @@ use crate::service::uninstall::cleanup;
use crate::util::Never;
use crate::util::actor::concurrent::ConcurrentActor;
use crate::util::future::NonDetachingJoinHandle;
-use crate::util::io::{AsyncReadStream, AtomicFile, TermSize, delete_file};
+use crate::util::io::{
+    AsyncReadStream, AtomicFile, TermSize, delete_file, maybe_read_file_to_string,
+};
use crate::util::net::WebSocket;
use crate::util::serde::Pem;
use crate::util::sync::SyncMutex;
use crate::util::tui::choose;
-use crate::volume::data_dir;
+use crate::volume::{PKG_VOLUME_DIR, data_dir};
use crate::{ActionId, CAP_1_KiB, DATA_DIR, ImageId, PackageId};
pub mod action;
@@ -81,6 +83,17 @@ pub enum LoadDisposition {
    Undo,
}
+/// Read the data version file for a service from disk.
+/// Returns `Ok(None)` if the file does not exist (fresh install).
+pub async fn get_data_version(id: &PackageId) -> Result<Option<String>, Error> {
+    let path = Path::new(DATA_DIR)
+        .join(PKG_VOLUME_DIR)
+        .join(id)
+        .join("data")
+        .join(".version");
+    maybe_read_file_to_string(&path).await
+}
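The `get_data_version` helper above treats a missing `.version` file as "fresh install" rather than an error, which is what lets the callers below pick `InitKind::Install` versus `InitKind::Update`. A plausible std-only shape for the underlying read (`maybe_read_file_to_string` is StartOS's own helper in `crate::util::io`; this sketch and its trimming behavior are assumptions):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Read a file to a string, mapping "not found" to Ok(None) so a missing
/// .version file reads as a fresh install rather than an error.
fn maybe_read_to_string(path: &Path) -> io::Result<Option<String>> {
    match fs::read_to_string(path) {
        // Trimming the trailing newline is an illustrative choice here.
        Ok(s) => Ok(Some(s.trim().to_string())),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir();

    // Missing file: Ok(None), not an error.
    let missing = dir.join("definitely-missing.version");
    assert_eq!(maybe_read_to_string(&missing)?, None);

    // Present file: Ok(Some(contents)).
    let present = dir.join("present.version");
    fs::write(&present, "1.2.3\n")?;
    assert_eq!(maybe_read_to_string(&present)?.as_deref(), Some("1.2.3"));
    fs::remove_file(&present)?;
    Ok(())
}
```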
struct RootCommand(pub String);
#[derive(Clone, Debug, Serialize, Deserialize, Default, TS)]
@@ -390,12 +403,17 @@ impl Service {
            tracing::error!("Error opening s9pk for install: {e}");
            tracing::debug!("{e:?}")
        }) {
+            let init_kind = if get_data_version(id).await.ok().flatten().is_some() {
+                InitKind::Update
+            } else {
+                InitKind::Install
+            };
            if let Ok(service) = Self::install(
                ctx.clone(),
                s9pk,
                &s9pk_path,
                &None,
-                InitKind::Install,
+                init_kind,
                None::<Never>,
                None,
            )
@@ -404,11 +422,15 @@ impl Service {
                tracing::error!("Error installing service: {e}");
                tracing::debug!("{e:?}")
            }) {
+                crate::volume::remove_install_backup(id).await.log_err();
                return Ok(Some(service));
            }
        }
    }
    cleanup(ctx, id, false).await.log_err();
+    crate::volume::restore_volumes_from_install_backup(id)
+        .await
+        .log_err();
    ctx.db
        .mutate(|v| v.as_public_mut().as_package_data_mut().remove(id))
        .await
@@ -424,12 +446,17 @@ impl Service {
            tracing::error!("Error opening s9pk for update: {e}");
            tracing::debug!("{e:?}")
        }) {
+            let init_kind = if get_data_version(id).await.ok().flatten().is_some() {
+                InitKind::Update
+            } else {
+                InitKind::Install
+            };
            if let Ok(service) = Self::install(
                ctx.clone(),
                s9pk,
                &s9pk_path,
                &None,
-                InitKind::Update,
+                init_kind,
                None::<Never>,
                None,
            )
@@ -438,37 +465,60 @@ impl Service {
                tracing::error!("Error installing service: {e}");
                tracing::debug!("{e:?}")
            }) {
+                crate::volume::remove_install_backup(id).await.log_err();
                return Ok(Some(service));
            }
        }
    }
-    let s9pk = S9pk::open(s9pk_path, Some(id)).await?;
-    ctx.db
-        .mutate({
-            |db| {
-                db.as_public_mut()
-                    .as_package_data_mut()
-                    .as_idx_mut(id)
-                    .or_not_found(id)?
-                    .as_state_info_mut()
-                    .map_mutate(|s| {
-                        if let PackageState::Updating(UpdatingState {
-                            manifest, ..
-                        }) = s
-                        {
-                            Ok(PackageState::Installed(InstalledState { manifest }))
-                        } else {
-                            Err(Error::new(
-                                eyre!("{}", t!("service.mod.race-condition-detected")),
-                                ErrorKind::Database,
-                            ))
-                        }
-                    })
-            }
-        })
-        .await
-        .result?;
-    handle_installed(s9pk).await
+    match async {
+        let s9pk = S9pk::open(s9pk_path, Some(id)).await?;
+        ctx.db
+            .mutate({
+                |db| {
+                    db.as_public_mut()
+                        .as_package_data_mut()
+                        .as_idx_mut(id)
+                        .or_not_found(id)?
+                        .as_state_info_mut()
+                        .map_mutate(|s| {
+                            if let PackageState::Updating(UpdatingState {
+                                manifest,
+                                ..
+                            }) = s
+                            {
+                                Ok(PackageState::Installed(InstalledState { manifest }))
+                            } else {
+                                Err(Error::new(
+                                    eyre!(
+                                        "{}",
+                                        t!("service.mod.race-condition-detected")
+                                    ),
+                                    ErrorKind::Database,
+                                ))
+                            }
+                        })
+                }
+            })
+            .await
+            .result?;
+        handle_installed(s9pk).await
+    }
+    .await
+    {
+        Ok(service) => {
+            crate::volume::remove_install_backup(id).await.log_err();
+            Ok(service)
+        }
+        Err(e) => {
+            tracing::error!(
+                "Update rollback failed for {id}, restoring volume snapshot: {e}"
+            );
+            crate::volume::restore_volumes_from_install_backup(id)
+                .await
+                .log_err();
+            Err(e)
+        }
+    }
}
PackageStateMatchModelRef::Removing(_) | PackageStateMatchModelRef::Restoring(_) => {
    if let Ok(s9pk) = S9pk::open(s9pk_path, Some(id)).await.map_err(|e| {
@@ -617,17 +667,6 @@ impl Service {
        tokio::task::yield_now().await;
    }
-    // Trigger manifest callbacks after successful installation
-    let manifest = service.seed.persistent_container.s9pk.as_manifest();
-    if let Some(callbacks) = ctx.callbacks.get_service_manifest(&manifest.id) {
-        let manifest_value =
-            serde_json::to_value(manifest).with_kind(ErrorKind::Serialization)?;
-        callbacks
-            .call(imbl::vector![manifest_value.into()])
-            .await
-            .log_err();
-    }
    Ok(service)
}


@@ -107,6 +107,12 @@ impl ExitParams {
        target: Some(InternedString::from_display(range)),
    }
}
+pub fn target_str(s: &str) -> Self {
+    Self {
+        id: Guid::new(),
+        target: Some(InternedString::intern(s)),
+    }
+}
pub fn uninstall() -> Self {
    Self {
        id: Guid::new(),


@@ -1,7 +1,6 @@
use std::sync::Arc;
use std::time::Duration;
-use imbl::vector;
use patch_db::TypedDbWatch;
use super::ServiceActorSeed;
@@ -99,19 +98,12 @@ async fn service_actor_loop<'a>(
    seed: &'a Arc<ServiceActorSeed>,
    transition: &mut Option<Transition<'a>>,
) -> Result<(), Error> {
-    let id = &seed.id;
    let status_model = watch.peek_and_mark_seen()?;
    let status = status_model.de()?;
-    if let Some(callbacks) = seed.ctx.callbacks.get_status(id) {
-        callbacks
-            .call(vector![patch_db::ModelExt::into_value(status_model)])
-            .await?;
-    }
    match status {
        StatusInfo {
-            desired: DesiredStatus::Running | DesiredStatus::Restarting,
+            desired: DesiredStatus::Running | DesiredStatus::Restarting { .. },
            started: None,
            ..
        } => {
@@ -122,7 +114,7 @@ async fn service_actor_loop<'a>(
        }
        StatusInfo {
            desired:
-                DesiredStatus::Stopped | DesiredStatus::Restarting | DesiredStatus::BackingUp { .. },
+                DesiredStatus::Stopped | DesiredStatus::Restarting { .. } | DesiredStatus::BackingUp { .. },
            started: Some(_),
            ..
        } => {
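Across these hunks `DesiredStatus::Restarting` changes from a unit variant to a struct variant, so every pattern has to switch to the `{ .. }` rest syntax. A standalone illustration (the `attempt` field is hypothetical; the diff only ever shows `Restarting { .. }`):

```rust
#[derive(Debug, Clone, PartialEq)]
enum DesiredStatus {
    Running,
    Stopped,
    // Was a unit variant; now carries data. The field is hypothetical,
    // invented here just to make the variant non-unit.
    Restarting { attempt: u32 },
}

fn is_transitioning(s: &DesiredStatus) -> bool {
    // `{ .. }` matches the variant while ignoring all of its fields,
    // so this match keeps compiling even if more fields are added later.
    matches!(s, DesiredStatus::Restarting { .. })
}

fn main() {
    assert!(is_transitioning(&DesiredStatus::Restarting { attempt: 1 }));
    assert!(!is_transitioning(&DesiredStatus::Running));
}
```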


@@ -28,7 +28,7 @@ use crate::s9pk::S9pk;
use crate::s9pk::manifest::PackageId;
use crate::s9pk::merkle_archive::source::FileSource;
use crate::service::rpc::{ExitParams, InitKind};
-use crate::service::{LoadDisposition, Service, ServiceRef};
+use crate::service::{LoadDisposition, Service, ServiceRef, get_data_version};
use crate::sign::commitment::merkle_archive::MerkleArchiveCommitment;
use crate::status::{DesiredStatus, StatusInfo};
use crate::util::future::NonDetachingJoinHandle;
@@ -243,12 +243,7 @@ impl ServiceMap {
PackageState::Installing(installing) PackageState::Installing(installing)
}, },
s9pk: installed_path, s9pk: installed_path,
status_info: StatusInfo { status_info: StatusInfo::default(),
error: None,
health: BTreeMap::new(),
started: None,
desired: DesiredStatus::Stopped,
},
registry, registry,
developer_key: Pem::new(developer_key), developer_key: Pem::new(developer_key),
icon, icon,
@@ -299,10 +294,11 @@ impl ServiceMap {
s9pk.serialize(&mut progress_writer, true).await?; s9pk.serialize(&mut progress_writer, true).await?;
let (file, mut unpack_progress) = progress_writer.into_inner(); let (file, mut unpack_progress) = progress_writer.into_inner();
file.sync_all().await?; file.sync_all().await?;
unpack_progress.complete();
crate::util::io::rename(&download_path, &installed_path).await?; crate::util::io::rename(&download_path, &installed_path).await?;
unpack_progress.complete();
Ok::<_, Error>(sync_progress_task) Ok::<_, Error>(sync_progress_task)
}) })
.await?; .await?;
@@ -310,36 +306,52 @@ impl ServiceMap {
.handle_last(async move { .handle_last(async move {
finalization_progress.start(); finalization_progress.start();
let s9pk = S9pk::open(&installed_path, Some(&id)).await?; let s9pk = S9pk::open(&installed_path, Some(&id)).await?;
let data_version = get_data_version(&id).await?;
// Snapshot existing volumes before install/update modifies them
crate::volume::snapshot_volumes_for_install(&id).await?;
let prev = if let Some(service) = service.take() { let prev = if let Some(service) = service.take() {
ensure_code!( ensure_code!(
recovery_source.is_none(), recovery_source.is_none(),
ErrorKind::InvalidRequest, ErrorKind::InvalidRequest,
"cannot restore over existing package" "cannot restore over existing package"
); );
let prev_version = service let uninit = if let Some(ref data_ver) = data_version {
.seed let prev_can_migrate_to = &service
.persistent_container .seed
.s9pk .persistent_container
.as_manifest() .s9pk
.version .as_manifest()
.clone(); .can_migrate_to;
let prev_can_migrate_to = &service let next_version = &s9pk.as_manifest().version;
.seed let next_can_migrate_from = &s9pk.as_manifest().can_migrate_from;
.persistent_container if let Ok(data_ver_ev) = data_ver.parse::<exver::ExtendedVersion>() {
.s9pk if data_ver_ev.satisfies(next_can_migrate_from) {
.as_manifest() ExitParams::target_str(data_ver)
.can_migrate_to; } else if next_version.satisfies(prev_can_migrate_to) {
let next_version = &s9pk.as_manifest().version; ExitParams::target_version(&s9pk.as_manifest().version)
let next_can_migrate_from = &s9pk.as_manifest().can_migrate_from; } else {
let uninit = if prev_version.satisfies(next_can_migrate_from) { ExitParams::target_range(&VersionRange::and(
ExitParams::target_version(&*prev_version) prev_can_migrate_to.clone(),
} else if next_version.satisfies(prev_can_migrate_to) { next_can_migrate_from.clone(),
ExitParams::target_version(&s9pk.as_manifest().version) ))
}
} else if let Ok(data_ver_range) = data_ver.parse::<VersionRange>() {
ExitParams::target_range(&VersionRange::and(
data_ver_range,
next_can_migrate_from.clone(),
))
} else if next_version.satisfies(prev_can_migrate_to) {
ExitParams::target_version(&s9pk.as_manifest().version)
} else {
ExitParams::target_range(&VersionRange::and(
prev_can_migrate_to.clone(),
next_can_migrate_from.clone(),
))
}
} else { } else {
ExitParams::target_range(&VersionRange::and( ExitParams::target_version(
prev_can_migrate_to.clone(), &*service.seed.persistent_container.s9pk.as_manifest().version,
next_can_migrate_from.clone(), )
))
}; };
let cleanup = service.uninstall(uninit, false, false).await?; let cleanup = service.uninstall(uninit, false, false).await?;
progress.complete(); progress.complete();
@@ -354,7 +366,7 @@ impl ServiceMap {
&registry, &registry,
if recovery_source.is_some() { if recovery_source.is_some() {
InitKind::Restore InitKind::Restore
} else if prev.is_some() { } else if data_version.is_some() {
InitKind::Update InitKind::Update
} else { } else {
InitKind::Install InitKind::Install
@@ -372,6 +384,8 @@ impl ServiceMap {
cleanup.await?; cleanup.await?;
} }
crate::volume::remove_install_backup(&id).await.log_err();
drop(service); drop(service);
sync_progress_task.await.map_err(|_| { sync_progress_task.await.map_err(|_| {
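The uninstall-target selection in the hunk above layers several fallbacks: an exact data version the new package can migrate from, a data version that is itself a range, the old package's `can_migrate_to` promise, and finally the intersection of both migration windows. The branch order can be condensed into a toy model, with `u32` versions and inclusive ranges standing in for `exver::ExtendedVersion` and `VersionRange` (the names `uninstall_target`, `Target`, and `DataVer` are illustrative, not from the codebase):

```rust
// Toy model of the uninstall-target precedence from the hunk above.
// u32 versions + inclusive ranges stand in for exver types (assumption:
// this mirrors only the branch order, not real version semantics).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Range { lo: u32, hi: u32 }

impl Range {
    fn contains(&self, v: u32) -> bool { self.lo <= v && v <= self.hi }
    // Analogue of VersionRange::and: intersection of two ranges.
    fn and(self, other: Range) -> Range {
        Range { lo: self.lo.max(other.lo), hi: self.hi.min(other.hi) }
    }
}

// What ExitParams ends up targeting.
#[derive(Debug, PartialEq)]
enum Target { Exact(u32), Within(Range) }

// What get_data_version produced, after parsing.
enum DataVer { Version(u32), Range(Range), Unparseable }

fn uninstall_target(
    data_ver: Option<DataVer>,
    prev_version: u32,
    prev_can_migrate_to: Range,
    next_version: u32,
    next_can_migrate_from: Range,
) -> Target {
    match data_ver {
        // data version is an exact version the new package can migrate from
        Some(DataVer::Version(v)) if next_can_migrate_from.contains(v) => Target::Exact(v),
        // data version is itself a range: narrow it by can_migrate_from
        Some(DataVer::Range(r)) => Target::Within(r.and(next_can_migrate_from)),
        // old package promises migration up to the new version
        Some(_) if prev_can_migrate_to.contains(next_version) => Target::Exact(next_version),
        // last resort: intersection of both migration windows
        Some(_) => Target::Within(prev_can_migrate_to.and(next_can_migrate_from)),
        // no recorded data version: fall back to the previous manifest version
        None => Target::Exact(prev_version),
    }
}

fn main() {
    let to = Range { lo: 0, hi: 5 };
    let from = Range { lo: 3, hi: 9 };
    assert_eq!(uninstall_target(Some(DataVer::Version(3)), 2, to, 4, from), Target::Exact(3));
    assert_eq!(uninstall_target(Some(DataVer::Unparseable), 2, to, 4, from), Target::Exact(4));
    assert_eq!(uninstall_target(None, 2, to, 4, from), Target::Exact(2));
}
```

The key change visible in the diff is that the decision now keys off the persisted data version rather than the previous manifest version, so a failed install that restored snapshotted volumes still picks the right migration target.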

View File

@@ -1,8 +1,6 @@
 use std::collections::BTreeSet;
 use std::path::Path;
-use imbl::vector;

 use crate::context::RpcContext;
 use crate::db::model::package::{InstalledState, InstallingInfo, InstallingState, PackageState};
 use crate::net::host::all_hosts;
@@ -94,20 +92,13 @@ pub async fn cleanup(ctx: &RpcContext, id: &PackageId, soft: bool) -> Result<(),
             ));
         }
     };
-    // Trigger manifest callbacks with null to indicate uninstall
-    if let Some(callbacks) = ctx.callbacks.get_service_manifest(&manifest.id) {
-        callbacks.call(vector![Value::Null]).await.log_err();
-    }
     if !soft {
         let path = Path::new(DATA_DIR).join(PKG_VOLUME_DIR).join(&manifest.id);
-        if tokio::fs::metadata(&path).await.is_ok() {
-            tokio::fs::remove_dir_all(&path).await?;
-        }
-        let logs_dir = Path::new(PACKAGE_DATA).join("logs").join(&manifest.id);
-        if tokio::fs::metadata(&logs_dir).await.is_ok() {
-            #[cfg(not(feature = "dev"))]
-            tokio::fs::remove_dir_all(&logs_dir).await?;
+        crate::util::io::delete_dir(&path).await?;
+        #[cfg(not(feature = "dev"))]
+        {
+            let logs_dir = Path::new(PACKAGE_DATA).join("logs").join(&manifest.id);
+            crate::util::io::delete_dir(&logs_dir).await?;
         }
     }
 },
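Several hunks in this diff replace the metadata-probe-then-`remove_dir_all` pattern with a single `crate::util::io::delete_dir` call. The helper's body is not shown here, but the call sites only make sense if it treats a missing directory as success; a hypothetical reimplementation of that contract (synchronous `std::fs` used instead of tokio for brevity) might look like:

```rust
use std::io;
use std::path::Path;

// Hypothetical sketch of a delete_dir helper: removing a directory that
// does not exist is treated as success, so callers need no metadata probe.
fn delete_dir(path: impl AsRef<Path>) -> io::Result<()> {
    match std::fs::remove_dir_all(path) {
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(()),
        other => other,
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("delete_dir_demo");
    std::fs::create_dir_all(&dir)?;
    delete_dir(&dir)?; // removes the existing directory
    delete_dir(&dir)?; // second call is a no-op, not an error
    assert!(!dir.exists());
    Ok(())
}
```

Folding the existence check into the helper also removes a small race window between `metadata` and `remove_dir_all`.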

View File

@@ -95,8 +95,8 @@ const LIVE_MEDIUM_PATH: &str = "/run/live/medium";
 pub async fn list_disks(ctx: SetupContext) -> Result<Vec<DiskInfo>, Error> {
     let mut disks = crate::disk::util::list(
-        &ctx.config
-            .peek(|c| c.os_partitions.clone())
+        &crate::disk::OsPartitionInfo::from_fstab()
+            .await
             .unwrap_or_default(),
     )
     .await?;
@@ -115,7 +115,7 @@ pub async fn list_disks(ctx: SetupContext) -> Result<Vec<DiskInfo>, Error> {
 async fn setup_init(
     ctx: &SetupContext,
     password: Option<String>,
-    kiosk: Option<bool>,
+    kiosk: bool,
     hostname: Option<ServerHostnameInfo>,
     init_phases: InitPhases,
 ) -> Result<(AccountInfo, InitResult), Error> {
@@ -137,9 +137,8 @@ async fn setup_init(
     account.save(m)?;
     let info = m.as_public_mut().as_server_info_mut();
     info.as_password_hash_mut().ser(&account.password)?;
-    if let Some(kiosk) = kiosk {
-        info.as_kiosk_mut().ser(&Some(kiosk))?;
-    }
+    info.as_kiosk_mut()
+        .ser(&Some(kiosk).filter(|_| &*PLATFORM != "raspberrypi"))?;
     if let Some(language) = language.clone() {
         info.as_language_mut().ser(&Some(language))?;
     }
@@ -174,8 +173,7 @@
 pub struct AttachParams {
     pub password: Option<EncryptedWire>,
     pub guid: InternedString,
-    #[ts(optional)]
-    pub kiosk: Option<bool>,
+    pub kiosk: bool,
 }

 #[instrument(skip_all)]
@@ -279,6 +277,7 @@ pub enum SetupStatusRes {
 pub struct SetupInfo {
     pub guid: Option<InternedString>,
     pub attach: bool,
+    pub mok_enrolled: bool,
 }

 #[derive(Debug, Deserialize, Serialize, TS)]
@@ -410,8 +409,7 @@ pub struct SetupExecuteParams {
     guid: InternedString,
     password: Option<EncryptedWire>,
     recovery_source: Option<RecoverySource<EncryptedWire>>,
-    #[ts(optional)]
-    kiosk: Option<bool>,
+    kiosk: bool,
     name: Option<InternedString>,
     hostname: Option<InternedString>,
 }
@@ -548,7 +546,7 @@ pub async fn execute_inner(
     guid: InternedString,
     password: Option<String>,
     recovery_source: Option<RecoverySource<String>>,
-    kiosk: Option<bool>,
+    kiosk: bool,
     hostname: Option<ServerHostnameInfo>,
 ) -> Result<(SetupResult, RpcContext), Error> {
     let progress = &ctx.progress;
@@ -621,7 +619,7 @@ async fn fresh_setup(
     ctx: &SetupContext,
     guid: InternedString,
     password: &str,
-    kiosk: Option<bool>,
+    kiosk: bool,
     hostname: Option<ServerHostnameInfo>,
     SetupExecuteProgress {
         init_phases,
@@ -630,8 +628,8 @@ async fn fresh_setup(
     }: SetupExecuteProgress,
 ) -> Result<(SetupResult, RpcContext), Error> {
     let account = AccountInfo::new(password, root_ca_start_time().await, hostname)?;
     let db = ctx.db().await?;
-    let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
     sync_kiosk(kiosk).await?;
     let language = ctx.language.peek(|a| a.clone());
@@ -682,7 +680,7 @@ async fn recover(
     recovery_source: BackupTargetFS,
     server_id: String,
     recovery_password: String,
-    kiosk: Option<bool>,
+    kiosk: bool,
     hostname: Option<ServerHostnameInfo>,
     progress: SetupExecuteProgress,
 ) -> Result<(SetupResult, RpcContext), Error> {
@@ -707,7 +705,7 @@ async fn migrate(
     guid: InternedString,
     old_guid: &str,
     password: Option<String>,
-    kiosk: Option<bool>,
+    kiosk: bool,
     hostname: Option<ServerHostnameInfo>,
     SetupExecuteProgress {
         init_phases,
@@ -738,9 +736,7 @@ async fn migrate(
     );
     let tmpdir = Path::new(package_data_transfer_args.0).join("tmp");
-    if tokio::fs::metadata(&tmpdir).await.is_ok() {
-        tokio::fs::remove_dir_all(&tmpdir).await?;
-    }
+    crate::util::io::delete_dir(&tmpdir).await?;
     let ordering = std::sync::atomic::Ordering::Relaxed;

View File

@@ -38,7 +38,17 @@ impl Model<StatusInfo> {
             .map_mutate(|s| Ok(Some(s.unwrap_or_else(|| Utc::now()))))?;
         self.as_desired_mut().map_mutate(|s| {
             Ok(match s {
-                DesiredStatus::Restarting => DesiredStatus::Running,
+                DesiredStatus::Restarting {
+                    restart_again: true,
+                } => {
+                    // Clear the flag but stay Restarting so actor will stop→start again
+                    DesiredStatus::Restarting {
+                        restart_again: false,
+                    }
+                }
+                DesiredStatus::Restarting {
+                    restart_again: false,
+                } => DesiredStatus::Running,
                 a => a,
             })
         })?;
@@ -55,7 +65,9 @@ impl Model<StatusInfo> {
         Ok(())
     }
     pub fn restart(&mut self) -> Result<(), Error> {
-        self.as_desired_mut().map_mutate(|s| Ok(s.restart()))?;
+        let started = self.as_started().transpose_ref().is_some();
+        self.as_desired_mut()
+            .map_mutate(|s| Ok(s.restart(started)))?;
         self.as_health_mut().ser(&Default::default())?;
         Ok(())
     }
@@ -69,7 +81,7 @@ impl Model<StatusInfo> {
                 DesiredStatus::BackingUp {
                     on_complete: StartStop::Stop,
                 } => DesiredStatus::Stopped,
-                DesiredStatus::Restarting => DesiredStatus::Running,
+                DesiredStatus::Restarting { .. } => DesiredStatus::Running,
                 x => x,
             })
         })?;
@@ -84,9 +96,14 @@
 #[serde(rename_all_fields = "camelCase")]
 pub enum DesiredStatus {
     Stopped,
-    Restarting,
+    Restarting {
+        #[serde(default)]
+        restart_again: bool,
+    },
     Running,
-    BackingUp { on_complete: StartStop },
+    BackingUp {
+        on_complete: StartStop,
+    },
 }
 impl Default for DesiredStatus {
     fn default() -> Self {
@@ -97,7 +114,7 @@ impl DesiredStatus {
     pub fn running(&self) -> bool {
         match self {
             Self::Running
-            | Self::Restarting
+            | Self::Restarting { .. }
             | Self::BackingUp {
                 on_complete: StartStop::Start,
             } => true,
@@ -140,10 +157,15 @@ impl DesiredStatus {
         }
     }
-    pub fn restart(&self) -> Self {
+    pub fn restart(&self, started: bool) -> Self {
         match self {
-            Self::Running => Self::Restarting,
-            x => *x, // no-op: restart is meaningless in any other state
+            Self::Running => Self::Restarting {
+                restart_again: false,
+            },
+            Self::Restarting { .. } if !started => Self::Restarting {
+                restart_again: true,
+            },
+            x => *x,
         }
     }
 }
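The `restart_again` field added above effectively turns `DesiredStatus::Restarting` into a small state machine: a restart requested while a restart is already in flight is queued instead of lost. A minimal standalone sketch of those transitions (variants reduced and the model-layer plumbing omitted; `on_started` is an illustrative name for the on-start mutation shown in the first hunk):

```rust
// Simplified model of the DesiredStatus restart state machine from the diff.
// `restart_again` queues a second restart requested while one is in flight.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum DesiredStatus {
    Stopped,
    Restarting { restart_again: bool },
    Running,
}

impl DesiredStatus {
    // Mirrors DesiredStatus::restart(&self, started: bool): a restart
    // requested before the service is back up sets restart_again.
    fn restart(&self, started: bool) -> Self {
        match self {
            Self::Running => Self::Restarting { restart_again: false },
            Self::Restarting { .. } if !started => Self::Restarting { restart_again: true },
            x => *x,
        }
    }

    // Mirrors the on-start transition: clear the flag but stay Restarting
    // (so the actor stops and starts again), or settle into Running.
    fn on_started(&self) -> Self {
        match self {
            Self::Restarting { restart_again: true } => Self::Restarting { restart_again: false },
            Self::Restarting { restart_again: false } => Self::Running,
            x => *x,
        }
    }
}

fn main() {
    let s = DesiredStatus::Running.restart(true);
    assert_eq!(s, DesiredStatus::Restarting { restart_again: false });
    // A second restart arrives before the service has started again:
    let s = s.restart(false);
    assert_eq!(s, DesiredStatus::Restarting { restart_again: true });
    // First restart completes: flag cleared, still Restarting.
    let s = s.on_started();
    assert_eq!(s, DesiredStatus::Restarting { restart_again: false });
    // Second restart completes: finally Running.
    assert_eq!(s.on_started(), DesiredStatus::Running);
}
```

The `#[serde(default)]` on `restart_again` in the diff lets previously serialized unit-style `Restarting` values deserialize with the flag unset.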

View File

@@ -6,7 +6,6 @@ use chrono::Utc;
 use clap::Parser;
 use color_eyre::eyre::eyre;
 use futures::FutureExt;
-use imbl::vector;
 use imbl_value::InternedString;
 use rpc_toolkit::{Context, Empty, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Deserializer, Serialize, Serializer};
@@ -319,13 +318,11 @@ pub fn kernel_logs<C: Context + AsRef<RpcContinuations>>() -> ParentHandler<C, L
 const DISABLE_KIOSK_PATH: &str =
     "/media/startos/config/overlay/etc/systemd/system/getty@tty1.service.d/autologin.conf";

-pub async fn sync_kiosk(kiosk: Option<bool>) -> Result<(), Error> {
-    if let Some(kiosk) = kiosk {
-        if kiosk {
-            enable_kiosk().await?;
-        } else {
-            disable_kiosk().await?;
-        }
+pub async fn sync_kiosk(kiosk: bool) -> Result<(), Error> {
+    if kiosk {
+        enable_kiosk().await?;
+    } else {
+        disable_kiosk().await?;
     }
     Ok(())
 }
@@ -1150,9 +1147,6 @@ pub async fn set_system_smtp(ctx: RpcContext, smtp: SmtpValue) -> Result<(), Err
         })
         .await
         .result?;
-    if let Some(callbacks) = ctx.callbacks.get_system_smtp() {
-        callbacks.call(vector![to_value(&smtp)?]).await?;
-    }
     Ok(())
 }

 pub async fn clear_system_smtp(ctx: RpcContext) -> Result<(), Error> {
@@ -1165,28 +1159,25 @@ pub async fn clear_system_smtp(ctx: RpcContext) -> Result<(), Error> {
         })
         .await
         .result?;
-    if let Some(callbacks) = ctx.callbacks.get_system_smtp() {
-        callbacks.call(vector![Value::Null]).await?;
-    }
     Ok(())
 }

 #[derive(Debug, Clone, Deserialize, Serialize, Parser)]
-pub struct SetIfconfigUrlParams {
-    #[arg(help = "help.arg.ifconfig-url")]
-    pub url: url::Url,
+pub struct SetEchoipUrlsParams {
+    #[arg(help = "help.arg.echoip-urls")]
+    pub urls: Vec<url::Url>,
 }

-pub async fn set_ifconfig_url(
+pub async fn set_echoip_urls(
     ctx: RpcContext,
-    SetIfconfigUrlParams { url }: SetIfconfigUrlParams,
+    SetEchoipUrlsParams { urls }: SetEchoipUrlsParams,
 ) -> Result<(), Error> {
     ctx.db
         .mutate(|db| {
             db.as_public_mut()
                 .as_server_info_mut()
-                .as_ifconfig_url_mut()
-                .ser(&url)
+                .as_echoip_urls_mut()
+                .ser(&urls)
         })
         .await
         .result

Some files were not shown because too many files have changed in this diff.