Compare commits

...

19 Commits

Author SHA1 Message Date
Matt Hill
fe28a812a4 fix ssh, undeprecate wifi 2026-02-10 13:41:54 -07:00
Aiden McClelland
b6262c8e13 Fix PackageInfoShort to handle LocaleString on releaseNotes (#3112)
* Fix PackageInfoShort to handle LocaleString on releaseNotes

* fix: filter by target_version in get_matching_models and pass otherVersions from install

* chore: add exver documentation for ai agents
2026-02-09 19:42:03 +00:00
Matt Hill
ba740a9ee2 Multiple (#3111)
* fix alerts i18n, fix status display, better, remove usb media, hide shutdown for install complete

* trigger chnage detection for localize pipe and round out implementing localize pipe for consistency even though not needed
2026-02-09 12:41:29 -07:00
Aiden McClelland
f2142f0bb3 add documentation for ai agents (#3115)
* add documentation for ai agents

* docs: consolidate CLAUDE.md and CONTRIBUTING.md, add style guidelines

- Refactor CLAUDE.md to reference CONTRIBUTING.md for build/test/format info
- Expand CONTRIBUTING.md with comprehensive build targets, env vars, and testing
- Add code style guidelines section with conventional commits
- Standardize SDK prettier config to use single quotes (matching web)
- Add project-level Claude Code settings to disable co-author attribution

* style(sdk): apply prettier with single quotes

Run prettier across sdk/base and sdk/package to apply the
standardized quote style (single quotes matching web).

* docs: add USER.md for per-developer TODO filtering

- Add agents/USER.md to .gitignore (contains user identifier)
- Document session startup flow in CLAUDE.md:
  - Create USER.md if missing, prompting for identifier
  - Filter TODOs by @username tags
  - Offer relevant TODOs on session start

* docs: add i18n documentation task to agent TODOs

* docs: document i18n ID patterns in core/

Add agents/i18n-patterns.md covering rust-i18n setup, translation file
format, t!() macro usage, key naming conventions, and locale selection.
Remove completed TODO item and add reference in CLAUDE.md.

* chore: clarify that all builds work on any OS with Docker
2026-02-06 00:10:16 +01:00
gStart9
86ca23c093 Remove redundant https:// strings in start-tunnel installation output (#3114) 2026-02-05 23:22:31 +01:00
Dominion5254
463b6ca4ef propagate Error info (#3116) 2026-02-05 23:21:28 +01:00
Matt Hill
58e0b166cb move comment to safe place 2026-02-02 21:09:19 -07:00
Matt Hill
2a678bb017 fix warning and skip raspberrypi builds for now 2026-02-02 20:16:41 -07:00
Matt Hill
5664456b77 fix for buildjet 2026-02-02 18:51:11 -07:00
Matt Hill
3685b7e57e fix workflows 2026-02-02 18:37:13 -07:00
Matt Hill
989d5f73b1 fix --arch flag to fall back to emulation when native image unavailab… (#3108)
* fix --arch flag to fall back to emulation when native image unavailable, always infer hardware requirement for arch

* better handling of arch filter

* dont cancel in-progress commit workflows and abstract common setup

* cli improvements

fix group handling

* fix cli publish

* alpha.19

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2026-02-03 00:56:59 +00:00
Matt Hill
4f84073cb5 actually eliminate duplicate workflows 2026-01-30 11:28:18 -07:00
Matt Hill
c190295c34 cleaner, also eliminate duplicate workflows 2026-01-30 11:23:40 -07:00
Matt Hill
60875644a1 better i18n checks, better action disabled, fix cert download for ios 2026-01-30 10:59:27 -07:00
Matt Hill
113b09ad01 fix cert download issue in index html 2026-01-29 16:57:12 -07:00
Alex Inkin
2605d0e671 chore: make column shorter (#3107) 2026-01-29 09:54:42 -07:00
Aiden McClelland
d232b91d31 update ota script, rbind for dependency mounts, cli list-ingredients fix, and formatting 2026-01-28 16:09:37 -07:00
Aiden McClelland
c65db31fd9 Feature/consolidate setup (#3092)
* start consolidating

* add start-cli flash-os

* combine install and setup and refactor all

* use http

* undo mock

* fix translation

* translations

* use dialogservice wrapper

* better ST messaging on setup

* only warn on update if breakages (#3097)

* finish setup wizard and ui language-keyboard feature

* fix typo

* wip: localization

* remove start-tunnel readme

* switch to posix strings for language internal

* revert mock

* translate backend strings

* fix missing about text

* help text for args

* feat: add "Add new gateway" option (#3098)

* feat: add "Add new gateway" option

* Update web/projects/ui/src/app/routes/portal/components/form/controls/select.component.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* add translation

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix dns selection

* keyboard keymap also

* ability to shutdown after install

* revert mock

* working setup flow + manifest localization

* (mostly) redundant localization on frontend

* version bump

* omit live medium from disk list and better space management

* ignore missing package archive on 035 migration

* fix device migration

* add i18n helper to sdk

* fix install over 0.3.5.1

* fix grub config

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-27 14:44:41 -08:00
Aiden McClelland
99871805bd hardware acceleration and support for NVIDIA cards on nonfree images (#3089)
* add nvidia packages

* add nvidia deps to nonfree

* gpu_acceleration flag & nvidia hacking

* fix gpu_config & /tmp/lxc.log

* implement hardware acceleration more dynamically

* refactor OpenUI

* use mknod

* registry updates for multi-hardware-requirements

* pluralize

* handle new registry types

* remove log

* migrations and driver fixes

* wip

* misc patches

* handle nvidia-container differently

* chore: comments (#3093)

* chore: comments

* revert some sizing

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* Revert "handle nvidia-container differently"

This reverts commit d708ae53df.

* fix debian containers

* cleanup

* feat: add empty array placeholder in forms (#3095)

* fixes from testing, client side device filtering for better fingerprinting resistance

* fix mac builds

---------

Co-authored-by: Sam Sartor <me@samsartor.com>
Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
2026-01-15 11:42:17 -08:00
571 changed files with 23409 additions and 11128 deletions

.claude/settings.json (new file, +5 lines)

@@ -0,0 +1,5 @@
{
  "attribution": {
    "commit": ""
  }
}

.github/actions/setup-build/action.yml (vendored, new file, +81 lines)

@@ -0,0 +1,81 @@
name: Setup Build Environment
description: Common build environment setup steps
inputs:
  nodejs-version:
    description: Node.js version
    required: true
  setup-python:
    description: Set up Python
    required: false
    default: "false"
  setup-docker:
    description: Set up Docker QEMU and Buildx
    required: false
    default: "true"
  setup-sccache:
    description: Configure sccache for GitHub Actions
    required: false
    default: "true"
  free-space:
    description: Remove unnecessary packages to free disk space
    required: false
    default: "true"
runs:
  using: composite
  steps:
    - name: Free disk space
      if: inputs.free-space == 'true'
      shell: bash
      run: |
        sudo apt-get remove --purge -y azure-cli || true
        sudo apt-get remove --purge -y firefox || true
        sudo apt-get remove --purge -y ghc-* || true
        sudo apt-get remove --purge -y google-cloud-sdk || true
        sudo apt-get remove --purge -y google-chrome-stable || true
        sudo apt-get remove --purge -y powershell || true
        sudo apt-get remove --purge -y php* || true
        sudo apt-get remove --purge -y ruby* || true
        sudo apt-get remove --purge -y mono-* || true
        sudo apt-get autoremove -y
        sudo apt-get clean
        sudo rm -rf /usr/lib/jvm
        sudo rm -rf /usr/local/.ghcup
        sudo rm -rf /usr/local/lib/android
        sudo rm -rf /usr/share/dotnet
        sudo rm -rf /usr/share/swift
        sudo rm -rf "$AGENT_TOOLSDIRECTORY"
    # BuildJet runners lack /opt/hostedtoolcache, which setup-python and setup-qemu expect
    - name: Ensure hostedtoolcache exists
      shell: bash
      run: sudo mkdir -p /opt/hostedtoolcache && sudo chown $USER:$USER /opt/hostedtoolcache
    - name: Set up Python
      if: inputs.setup-python == 'true'
      uses: actions/setup-python@v5
      with:
        python-version: "3.x"
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.nodejs-version }}
        cache: npm
        cache-dependency-path: "**/package-lock.json"
    - name: Set up Docker QEMU
      if: inputs.setup-docker == 'true'
      uses: docker/setup-qemu-action@v3
    - name: Set up Docker Buildx
      if: inputs.setup-docker == 'true'
      uses: docker/setup-buildx-action@v3
    - name: Configure sccache
      if: inputs.setup-sccache == 'true'
      uses: actions/github-script@v7
      with:
        script: |
          core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
          core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
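A calling workflow consumes this composite action roughly as follows. The input names (`nodejs-version`, `setup-python`, etc.) come from the action definition above; the job skeleton and the literal Node version are illustrative, and the real workflows in this change pass `${{ env.NODEJS_VERSION }}` instead:

```yaml
jobs:
  compile:
    runs-on: ubuntu-latest
    steps:
      # Path-style `uses:` resolves against the checked-out repository,
      # so checkout must run before the composite action is referenced.
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: ./.github/actions/setup-build
        with:
          nodejs-version: "24.11.0"
          setup-python: "true" # optional; defaults to "false"
```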

(unnamed workflow file)

@@ -37,6 +37,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -44,6 +48,7 @@ env:
 jobs:
   compile:
     name: Build Debian Package
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -60,50 +65,15 @@ jobs:
       }}
     runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y mono-* \
-            ghc* cabal-install* \
-            dotnet* \
-            php* \
-            ruby* \
-            mysql-* \
-            postgresql-* \
-            azure-cli \
-            powershell \
-            google-cloud-sdk \
-            msodbcsql* mssql-tools* \
-            imagemagick* \
-            libgl1-mesa-dri \
-            google-chrome-stable \
-            firefox
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-      - run: |
-          sudo mount -t tmpfs tmpfs .
-        if: ${{ github.event.inputs.runner == 'fast' }}
+      - name: Mount tmpfs
+        if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
         with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
       - name: Make
         run: TARGET=${{ matrix.triple }} make cli

(unnamed workflow file)

@@ -1,4 +1,4 @@
-name: Start-Registry
+name: start-registry
 on:
   workflow_call:
@@ -35,6 +35,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -42,6 +46,7 @@ env:
 jobs:
   compile:
     name: Build Debian Package
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -56,50 +61,15 @@ jobs:
       }}
     runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y mono-* \
-            ghc* cabal-install* \
-            dotnet* \
-            php* \
-            ruby* \
-            mysql-* \
-            postgresql-* \
-            azure-cli \
-            powershell \
-            google-cloud-sdk \
-            msodbcsql* mssql-tools* \
-            imagemagick* \
-            libgl1-mesa-dri \
-            google-chrome-stable \
-            firefox
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-      - run: |
-          sudo mount -t tmpfs tmpfs .
-        if: ${{ github.event.inputs.runner == 'fast' }}
+      - name: Mount tmpfs
+        if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
         with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
       - name: Make
         run: make registry-deb

(unnamed workflow file)

@@ -1,4 +1,4 @@
-name: Start-Tunnel
+name: start-tunnel
 on:
   workflow_call:
@@ -35,6 +35,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -42,6 +46,7 @@ env:
 jobs:
   compile:
     name: Build Debian Package
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -56,50 +61,15 @@ jobs:
       }}
     runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y mono-* \
-            ghc* cabal-install* \
-            dotnet* \
-            php* \
-            ruby* \
-            mysql-* \
-            postgresql-* \
-            azure-cli \
-            powershell \
-            google-cloud-sdk \
-            msodbcsql* mssql-tools* \
-            imagemagick* \
-            libgl1-mesa-dri \
-            google-chrome-stable \
-            firefox
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-      - run: |
-          sudo mount -t tmpfs tmpfs .
-        if: ${{ github.event.inputs.runner == 'fast' }}
+      - name: Mount tmpfs
+        if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
        with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
       - name: Make
         run: make tunnel-deb

(unnamed workflow file)

@@ -27,7 +27,7 @@ on:
           - x86_64-nonfree
           - aarch64
           - aarch64-nonfree
-          - raspberrypi
+          # - raspberrypi
           - riscv64
       deploy:
         type: choice
@@ -45,6 +45,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -52,6 +56,7 @@ env:
 jobs:
   compile:
     name: Compile Base Binaries
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -86,54 +91,16 @@ jobs:
         )[github.event.inputs.runner == 'fast']
       }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y azure-cli || true
-          sudo apt-get remove --purge -y firefox || true
-          sudo apt-get remove --purge -y ghc-* || true
-          sudo apt-get remove --purge -y google-cloud-sdk || true
-          sudo apt-get remove --purge -y google-chrome-stable || true
-          sudo apt-get remove --purge -y powershell || true
-          sudo apt-get remove --purge -y php* || true
-          sudo apt-get remove --purge -y ruby* || true
-          sudo apt-get remove --purge -y mono-* || true
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-          sudo rm -rf /usr/lib/jvm # All JDKs
-          sudo rm -rf /usr/local/.ghcup # Haskell toolchain
-          sudo rm -rf /usr/local/lib/android # Android SDK/NDK, emulator
-          sudo rm -rf /usr/share/dotnet # .NET SDKs
-          sudo rm -rf /usr/share/swift # Swift toolchain (if present)
-          sudo rm -rf "$AGENT_TOOLSDIRECTORY" # Pre-cached tool cache (Go, Node, etc.)
-      - run: |
-          sudo mount -t tmpfs tmpfs .
-        if: ${{ github.event.inputs.runner == 'fast' }}
+      - name: Mount tmpfs
+        if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - name: Set up Python
-        uses: actions/setup-python@v5
-        with:
-          python-version: "3.x"
-      - uses: actions/setup-node@v4
-        with:
-          node-version: ${{ env.NODEJS_VERSION }}
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
+      - uses: ./.github/actions/setup-build
+        with:
+          nodejs-version: ${{ env.NODEJS_VERSION }}
+          setup-python: "true"
       - name: Make
         run: make ARCH=${{ matrix.arch }} compiled-${{ matrix.arch }}.tar
@@ -151,13 +118,14 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
+        # TODO: re-add "raspberrypi" to the platform list below
        platform: >-
          ${{
            fromJson(
              format(
                '[
                  ["{0}"],
-                  ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64", "raspberrypi"]
+                  ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64"]
                ]',
                github.event.inputs.platform || 'ALL'
              )
@@ -221,6 +189,10 @@ jobs:
           sudo rm -rf "$AGENT_TOOLSDIRECTORY" # Pre-cached tool cache (Go, Node, etc.)
         if: ${{ github.event.inputs.runner != 'fast' }}
+      # BuildJet runners lack /opt/hostedtoolcache, which setup-qemu expects
+      - name: Ensure hostedtoolcache exists
+        run: sudo mkdir -p /opt/hostedtoolcache && sudo chown $USER:$USER /opt/hostedtoolcache
       - name: Set up docker QEMU
         uses: docker/setup-qemu-action@v3
@@ -251,10 +223,8 @@ jobs:
           mkdir -p patch-db/client/dist
           mkdir -p web/.angular
           mkdir -p web/dist/raw/ui
-          mkdir -p web/dist/raw/install-wizard
           mkdir -p web/dist/raw/setup-wizard
           mkdir -p web/dist/static/ui
-          mkdir -p web/dist/static/install-wizard
           mkdir -p web/dist/static/setup-wizard
           PLATFORM=${{ matrix.platform }} make -t compiled-${{ env.ARCH }}.tar

(unnamed workflow file)

@@ -10,6 +10,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: dev-unstable
@@ -17,15 +21,18 @@ env:
 jobs:
   test:
     name: Run Automated Tests
+    if: github.event.pull_request.draft != true
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
         with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
+          free-space: "false"
+          setup-docker: "false"
+          setup-sccache: "false"
       - name: Build And Run Tests
         run: make test

.gitignore (vendored)

@@ -19,4 +19,6 @@ secrets.db
 /compiled.tar
 /compiled-*.tar
 /build/lib/firmware
 tmp
+web/.i18n-checked
+agents/USER.md

CLAUDE.md (new file, +146 lines)

@@ -0,0 +1,146 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
StartOS is an open-source Linux distribution for running personal servers. It manages discovery, installation, network configuration, backups, and health monitoring of self-hosted services.
**Tech Stack:**
- Backend: Rust (async/Tokio, Axum web framework)
- Frontend: Angular 20 + TypeScript + TaigaUI
- Container runtime: Node.js/TypeScript with LXC
- Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
- API: JSON-RPC via rpc-toolkit (see `agents/rpc-toolkit.md`)
- Auth: Password + session cookie, public/private key signatures, local authcookie (see `core/src/middleware/auth/`)
## Build & Development
See [CONTRIBUTING.md](CONTRIBUTING.md) for:
- Environment setup and requirements
- Build commands and make targets
- Testing and formatting commands
- Environment variables
**Quick reference:**
```bash
. ./devmode.sh # Enable dev mode
make update-startbox REMOTE=start9@<ip> # Fastest iteration (binary + UI)
make test-core # Run Rust tests
```
## Architecture
### Core (`/core`)
The Rust backend daemon. Main binaries:
- `startbox` - Main daemon (runs as `startd`)
- `start-cli` - CLI interface
- `start-container` - Runs inside LXC containers; communicates with host and manages subcontainers
- `registrybox` - Registry daemon
- `tunnelbox` - VPN/tunnel daemon
**Key modules:**
- `src/context/` - Context types (RpcContext, CliContext, InitContext, DiagnosticContext)
- `src/service/` - Service lifecycle management with actor pattern (`service_actor.rs`)
- `src/db/model/` - Patch-DB models (`public.rs` synced to frontend, `private.rs` backend-only)
- `src/net/` - Networking (DNS, ACME, WiFi, Tor via Arti, WireGuard)
- `src/s9pk/` - S9PK package format (merkle archive)
- `src/registry/` - Package registry management
**RPC Pattern:** See `agents/rpc-toolkit.md`
### Web (`/web`)
Angular projects sharing common code:
- `projects/ui/` - Main admin interface
- `projects/setup-wizard/` - Initial setup
- `projects/start-tunnel/` - VPN management UI
- `projects/shared/` - Common library (API clients, components)
- `projects/marketplace/` - Service discovery
**Development:**
```bash
cd web
npm ci
npm run start:ui # Dev server with mocks
npm run build:ui # Production build
npm run check # Type check all projects
```
### Container Runtime (`/container-runtime`)
Node.js runtime that manages service containers via RPC. See `RPCSpec.md` for protocol.
**Container Architecture:**
```
LXC Container (uniform base for all services)
└── systemd
└── container-runtime.service
└── Loads /usr/lib/startos/package/index.js (from s9pk javascript.squashfs)
└── Package JS launches subcontainers (from images in s9pk)
```
The container runtime communicates with the host via JSON-RPC over Unix socket. Package JavaScript must export functions conforming to the `ABI` type defined in `sdk/base/lib/types.ts`.
**`/media/startos/` directory (mounted by host into container):**
| Path | Description |
|------|-------------|
| `volumes/<name>/` | Package data volumes (id-mapped, persistent) |
| `assets/` | Read-only assets from s9pk `assets.squashfs` |
| `images/<name>/` | Container images (squashfs, used for subcontainers) |
| `images/<name>.env` | Environment variables for image |
| `images/<name>.json` | Image metadata |
| `backup/` | Backup mount point (mounted during backup operations) |
| `rpc/service.sock` | RPC socket (container runtime listens here) |
| `rpc/host.sock` | Host RPC socket (for effects callbacks to host) |
**S9PK Structure:** See `agents/s9pk-structure.md`
### SDK (`/sdk`)
TypeScript SDK for packaging services (`@start9labs/start-sdk`).
- `base/` - Core types, ABI definitions, effects interface (`@start9labs/start-sdk-base`)
- `package/` - Full SDK for package developers, re-exports base
### Patch-DB (`/patch-db`)
Git submodule providing diff-based state synchronization. Changes to `db/model/public.rs` automatically sync to the frontend.
**Key patterns:**
- `db.peek().await` - Get a read-only snapshot of the database state
- `db.mutate(|db| { ... }).await` - Apply mutations atomically, returns `MutateResult`
- `#[derive(HasModel)]` - Derive macro for types stored in the database, generates typed accessors
**Generated accessor types** (from `HasModel` derive):
- `as_field()` - Immutable reference: `&Model<T>`
- `as_field_mut()` - Mutable reference: `&mut Model<T>`
- `into_field()` - Owned value: `Model<T>`
**`Model<T>` APIs** (from `db/prelude.rs`):
- `.de()` - Deserialize to `T`
- `.ser(&value)` - Serialize from `T`
- `.mutate(|v| ...)` - Deserialize, mutate, reserialize
- For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`
## Supplementary Documentation
The `agents/` directory contains detailed documentation for AI assistants:
- `TODO.md` - Pending tasks for AI agents (check this first, remove items when completed)
- `USER.md` - Current user identifier (gitignored, see below)
- `rpc-toolkit.md` - JSON-RPC patterns and handler configuration
- `core-rust-patterns.md` - Common utilities and patterns for Rust code in `/core` (guard pattern, mount guards, etc.)
- `s9pk-structure.md` - S9PK package format structure
- `i18n-patterns.md` - Internationalization key conventions and usage in `/core`
### Session Startup
On startup:
1. **Check for `agents/USER.md`** - If it doesn't exist, prompt the user for their name/identifier and create it. This file is gitignored since it varies per developer.
2. **Check `agents/TODO.md` for relevant tasks** - Show TODOs that either:
- Have no `@username` tag (relevant to everyone)
- Are tagged with the current user's identifier
Skip TODOs tagged with a different user.
3. **Ask "What would you like to do today?"** - Offer options for each relevant TODO item, plus "Something else" for other requests.

View File

@@ -11,123 +11,190 @@ This guide is for contributing to the StartOS. If you are interested in packagin
```bash ```bash
/ /
├── assets/ ├── assets/ # Screenshots for README
├── container-runtime/ ├── build/ # Auxiliary files and scripts for deployed images
├── core/ ├── container-runtime/ # Node.js program managing package containers
├── build/ ├── core/ # Rust backend: API, daemon (startd), CLI (start-cli)
├── debian/ ├── debian/ # Debian package maintainer scripts
├── web/ ├── image-recipe/ # Scripts for building StartOS images
├── image-recipe/ ├── patch-db/ # (submodule) Diff-based data store for frontend sync
├── patch-db ├── sdk/ # TypeScript SDK for building StartOS packages
└── sdk/ └── web/ # Web UIs (Angular)
``` ```
#### assets See component READMEs for details:
- [`core`](core/README.md)
screenshots for the StartOS README - [`web`](web/README.md)
- [`build`](build/README.md)
#### container-runtime - [`patch-db`](https://github.com/Start9Labs/patch-db)
A NodeJS program that dynamically loads maintainer scripts and communicates with the OS to manage packages
#### core
An API, daemon (startd), and CLI (start-cli) that together provide the core functionality of StartOS.
#### build
Auxiliary files and scripts to include in deployed StartOS images
#### debian
Maintainer scripts for the StartOS Debian package
#### web
Web UIs served under various conditions and used to interact with StartOS APIs.
#### image-recipe
Scripts for building StartOS images
#### patch-db (submodule)
A diff based data store used to synchronize data between the web interfaces and server.
#### sdk
A typescript sdk for building start-os packages
## Environment Setup ## Environment Setup
#### Clone the StartOS repository
```sh ```sh
git clone https://github.com/Start9Labs/start-os.git --recurse-submodules git clone https://github.com/Start9Labs/start-os.git --recurse-submodules
cd start-os cd start-os
``` ```
#### Continue to your project of interest for additional instructions: ### Development Mode
- [`core`](core/README.md) For faster iteration during development:
- [`web-interfaces`](web-interfaces/README.md)
- [`build`](build/README.md) ```sh
- [`patch-db`](https://github.com/Start9Labs/patch-db) . ./devmode.sh
```
This sets `ENVIRONMENT=dev` and `GIT_BRANCH_AS_HASH=1` to prevent rebuilds on every commit.
## Building ## Building
This project uses [GNU Make](https://www.gnu.org/software/make/) to build its components. To build any specific component, simply run `make <TARGET>` replacing `<TARGET>` with the name of the target you'd like to build All builds can be performed on any operating system that can run Docker.
This project uses [GNU Make](https://www.gnu.org/software/make/) to build its components.
### Requirements ### Requirements
- [GNU Make](https://www.gnu.org/software/make/) - [GNU Make](https://www.gnu.org/software/make/)
- [Docker](https://docs.docker.com/get-docker/) - [Docker](https://docs.docker.com/get-docker/) or [Podman](https://podman.io/)
- [NodeJS v20.16.0](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) - [NodeJS v20.16.0](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
- [sed](https://www.gnu.org/software/sed/) - [Rust](https://rustup.rs/) (nightly for formatting)
- [grep](https://www.gnu.org/software/grep/) - [sed](https://www.gnu.org/software/sed/), [grep](https://www.gnu.org/software/grep/), [awk](https://www.gnu.org/software/gawk/)
- [awk](https://www.gnu.org/software/gawk/)
- [jq](https://jqlang.github.io/jq/) - [jq](https://jqlang.github.io/jq/)
- [gzip](https://www.gnu.org/software/gzip/) - [gzip](https://www.gnu.org/software/gzip/), [brotli](https://github.com/google/brotli)
- [brotli](https://github.com/google/brotli)
### Environment variables ### Environment Variables
| Variable | Description |
|----------|-------------|
| `PLATFORM` | Target platform: `x86_64`, `x86_64-nonfree`, `aarch64`, `aarch64-nonfree`, `riscv64`, `raspberrypi` |
| `ENVIRONMENT` | Hyphen-separated feature flags (see below) |
| `PROFILE` | Build profile: `release` (default) or `dev` |
| `GIT_BRANCH_AS_HASH` | Set to `1` to use the git branch name as the version hash (avoids rebuilds on every commit) |

**ENVIRONMENT flags:**
- `dev` - Enables password SSH before setup, skips frontend compression
- `unstable` - Enables assertions and debugging with performance penalty
- `console` - Enables tokio-console for async debugging
**Platform notes:**
- `-nonfree` variants include proprietary firmware and drivers
- `raspberrypi` includes non-free components by necessity
- Platform is remembered between builds if not specified
### Make Targets
#### Building
| Target | Description |
|--------|-------------|
| `iso` | Create full `.iso` image (not for raspberrypi) |
| `img` | Create full `.img` image (raspberrypi only) |
| `deb` | Build Debian package |
| `all` | Build all Rust binaries |
| `uis` | Build all web UIs |
| `ui` | Build main UI only |
| `ts-bindings` | Generate TypeScript bindings from Rust types |
#### Deploying to Device
For devices on the same network:
| Target | Description |
|--------|-------------|
| `update-startbox REMOTE=start9@<ip>` | Deploy binary + UI only (fastest) |
| `update-deb REMOTE=start9@<ip>` | Deploy full Debian package |
| `update REMOTE=start9@<ip>` | OTA-style update |
| `reflash REMOTE=start9@<ip>` | Reflash as if using live ISO |
| `update-overlay REMOTE=start9@<ip>` | Deploy to in-memory overlay (reverts on reboot) |
For devices on different networks (uses [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)):
| Target | Description |
|--------|-------------|
| `wormhole` | Send startbox binary |
| `wormhole-deb` | Send Debian package |
| `wormhole-squashfs` | Send squashfs image |
#### Other
| Target | Description |
|--------|-------------|
| `format` | Run code formatting (Rust nightly required) |
| `test` | Run all automated tests |
| `test-core` | Run Rust tests |
| `test-sdk` | Run SDK tests |
| `test-container-runtime` | Run container runtime tests |
| `clean` | Delete all compiled artifacts |
## Testing
```bash
make test # All tests
make test-core # Rust tests (via ./core/run-tests.sh)
make test-sdk # SDK tests
make test-container-runtime # Container runtime tests
# Run specific Rust test
cd core && cargo test <test_name> --features=test
```
## Code Formatting
```bash
# Rust (requires nightly)
make format
# TypeScript/HTML/SCSS (web)
cd web && npm run format
```
## Code Style Guidelines
### Formatting
Run the formatters before committing. Configuration is handled by `rustfmt.toml` (Rust) and prettier configs (TypeScript).
### Documentation & Comments
**Rust:**
- Add doc comments (`///`) to public APIs, structs, and non-obvious functions
- Use `//` comments sparingly for complex logic that isn't self-evident
- Prefer self-documenting code (clear naming, small functions) over comments
**TypeScript:**
- Document exported functions and complex types with JSDoc
- Keep comments focused on "why" rather than "what"
**General:**
- Don't add comments that just restate the code
- Update or remove comments when code changes
- TODOs should include context: `// TODO(username): reason`
### Commit Messages
Use [Conventional Commits](https://www.conventionalcommits.org/):
```
<type>(<scope>): <description>

[optional body]

[optional footer]
```
**Types:**
- `feat` - New feature
- `fix` - Bug fix
- `docs` - Documentation only
- `style` - Formatting, no code change
- `refactor` - Code change that neither fixes a bug nor adds a feature
- `test` - Adding or updating tests
- `chore` - Build process, dependencies, etc.
**Examples:**
```
feat(web): add dark mode toggle
fix(core): resolve race condition in service startup
docs: update CONTRIBUTING.md with style guidelines
refactor(sdk): simplify package validation logic
```
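A subject-line check following this convention can be sketched as follows (illustrative only; this is not part of the repo's tooling):

```typescript
// Hypothetical validator for a conventional commit subject line:
// <type>(<scope>): <description>, where the scope is optional.
const CONVENTIONAL_SUBJECT =
  /^(feat|fix|docs|style|refactor|test|chore)(\([a-z0-9-]+\))?: .+$/

function isConventional(subject: string): boolean {
  return CONVENTIONAL_SUBJECT.test(subject)
}
```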
#### Target Notes

- `iso` and `img` can only be built from Debian and additionally require [debspawn](https://github.com/lkhq/debspawn).
- `format` and `test` additionally require [Rust](https://rustup.rs/).
- The `update`, `reflash`, and `update-overlay` targets require a `REMOTE` argument giving the ssh address of the device, e.g. `start9@192.168.122.2`.
- `update` deploys the current working project as if through an over-the-air update; `reflash` deploys it as if reflashing from a live `iso` image.
- `update-overlay` deploys to the in-memory overlay without restarting the device.
  - WARNING: changes will be reverted after the device is rebooted.
  - WARNING: changes to `init` will not take effect, as the device is already initialized.
- `wormhole` sends the `startbox` binary using [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole) (which must be installed); when the build is complete, it emits a command to paste into the shell of the device to upgrade it.
- `clean` deletes all compiled artifacts.


```diff
@@ -12,8 +12,8 @@ RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else ech
 REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./build/env/basename.sh)
 TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./build/env/basename.sh)
 IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi)
-WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html web/dist/raw/install-wizard/index.html
+WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html
-COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html web/dist/static/install-wizard/index.html
+COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html
 FIRMWARE_ROMS := build/lib/firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./build/lib/firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
 BUILD_SRC := $(call ls-files, build/lib) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
 IMAGE_RECIPE_SRC := $(call ls-files, build/image-recipe/)
@@ -22,7 +22,6 @@ CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules pat
 WEB_SHARED_SRC := $(call ls-files, web/projects/shared) $(call ls-files, web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
 WEB_UI_SRC := $(call ls-files, web/projects/ui)
 WEB_SETUP_WIZARD_SRC := $(call ls-files, web/projects/setup-wizard)
-WEB_INSTALL_WIZARD_SRC := $(call ls-files, web/projects/install-wizard)
 WEB_START_TUNNEL_SRC := $(call ls-files, web/projects/start-tunnel)
 PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client)
 GZIP_BIN := $(shell which pigz || which gzip)
@@ -325,19 +324,19 @@ web/.angular/.updated: patch-db/client/dist/index.js sdk/baseDist/package.json w
 	mkdir -p web/.angular
 	touch web/.angular/.updated
-web/dist/raw/ui/index.html: $(WEB_UI_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
+web/.i18n-checked: $(WEB_SHARED_SRC) $(WEB_UI_SRC) $(WEB_SETUP_WIZARD_SRC) $(WEB_START_TUNNEL_SRC)
+	npm --prefix web run check:i18n
+	touch web/.i18n-checked
+web/dist/raw/ui/index.html: $(WEB_UI_SRC) $(WEB_SHARED_SRC) web/.angular/.updated web/.i18n-checked
 	npm --prefix web run build:ui
 	touch web/dist/raw/ui/index.html
-web/dist/raw/setup-wizard/index.html: $(WEB_SETUP_WIZARD_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
+web/dist/raw/setup-wizard/index.html: $(WEB_SETUP_WIZARD_SRC) $(WEB_SHARED_SRC) web/.angular/.updated web/.i18n-checked
 	npm --prefix web run build:setup
 	touch web/dist/raw/setup-wizard/index.html
-web/dist/raw/install-wizard/index.html: $(WEB_INSTALL_WIZARD_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
-	npm --prefix web run build:install
-	touch web/dist/raw/install-wizard/index.html
-web/dist/raw/start-tunnel/index.html: $(WEB_START_TUNNEL_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
+web/dist/raw/start-tunnel/index.html: $(WEB_START_TUNNEL_SRC) $(WEB_SHARED_SRC) web/.angular/.updated web/.i18n-checked
 	npm --prefix web run build:tunnel
 	touch web/dist/raw/start-tunnel/index.html
```


@@ -1,95 +0,0 @@
# StartTunnel
A self-hosted WireGuard VPN optimized for creating VLANs and reverse tunneling to personal servers.
You can think of StartTunnel as a "virtual router in the cloud".
Use it for private remote access to self-hosted services running on a personal server, or to expose self-hosted services to the public Internet without revealing the host server's IP address.
## Features
- **Create Subnets**: Each subnet creates a private, virtual local area network (VLAN), similar to the LAN created by a home router.
- **Add Devices**: When you add a device (server, phone, laptop) to a subnet, it receives a LAN IP address on that subnet as well as a unique WireGuard config that must be copied, downloaded, or scanned into the device.
- **Forward Ports**: Forwarding a port creates a "reverse tunnel", exposing a specific port on a specific device to the public Internet.
## Installation
1. Rent a low cost VPS. For most use cases, the cheapest option should be enough.
- It must have a dedicated public IP address.
- For compute (CPU), memory (RAM), and storage (disk), choose the minimum spec.
- For transfer (bandwidth), it depends on (1) your use case and (2) your home Internet's _upload_ speed. Even if you intend to serve large files or stream content from your server, there is no reason to pay for speeds that exceed your home Internet's upload speed.
1. Provision the VPS with the latest version of Debian.
1. Access the VPS via SSH.
1. Run the StartTunnel install script:

       curl -fsSL https://start9labs.github.io/start-tunnel | sh
1. [Initialize the web interface](#web-interface) (recommended)
## Updating
Simply re-run the install command:
```sh
curl -fsSL https://start9labs.github.io/start-tunnel | sh
```
## CLI
By default, StartTunnel is managed via the `start-tunnel` command line interface, which is self-documented.
```
start-tunnel --help
```
## Web Interface
Enable the web interface (recommended in most cases) to access your StartTunnel from the browser or via API.
1. Initialize the web interface:

       start-tunnel web init
1. If your VPS has multiple public IP addresses, you will be prompted to select the IP address at which to host the web interface.
1. When prompted, enter the port at which to host the web interface. The default is 8443, and we recommend using it. If you change the default, choose an uncommon port to avoid future conflicts.
1. To access your StartTunnel web interface securely over HTTPS, you need an SSL certificate. When prompted, select whether to autogenerate a certificate or provide your own. _This is only for accessing your StartTunnel web interface_.
1. You will receive a success message with 3 pieces of information:
- **<https://IP:port>**: the URL where you can reach your personal web interface.
- **Password**: an autogenerated password for your interface. If you lose/forget it, you can reset it using the start-tunnel CLI.
- **Root Certificate Authority**: the Root CA of your StartTunnel instance.
1. If you autogenerated your SSL certificate, visiting the `https://IP:port` URL in the browser will warn you that the website is insecure. This is expected. You have two options for getting past this warning:
- Option 1 (recommended): [Trust your StartTunnel Root CA on your connecting device](#trusting-your-starttunnel-root-ca).
- Option 2: bypass the warning in the browser, creating a one-time security exception.
### Trusting your StartTunnel Root CA
1. Copy the contents of your Root CA (starting with `-----BEGIN CERTIFICATE-----` and ending with `-----END CERTIFICATE-----`).
2. Open a text editor:
- Linux: gedit, nano, or any editor
- Mac: TextEdit
- Windows: Notepad
3. Paste the contents of your Root CA.
4. Save the file with a `.crt` extension (e.g. `start-tunnel.crt`) (make sure it saves as plain text, not rich text).
5. Trust the Root CA on your client device(s):
- [Linux](https://staging.docs.start9.com/device-guides/linux/ca.html)
- [Mac](https://staging.docs.start9.com/device-guides/mac/ca.html)
- [Windows](https://staging.docs.start9.com/device-guides/windows/ca.html)
- [Android/Graphene](https://staging.docs.start9.com/device-guides/android/ca.html)
- [iOS](https://staging.docs.start9.com/device-guides/ios/ca.html)

agents/TODO.md (new file):

@@ -0,0 +1,9 @@
# AI Agent TODOs
Pending tasks for AI agents. Remove items when completed.
## Unreviewed CLAUDE.md Sections
- [ ] Architecture - Web (`/web`) - @MattDHill


@@ -0,0 +1,249 @@
# Utilities & Patterns
This document covers common utilities and patterns used throughout the StartOS codebase.
## Util Module (`core/src/util/`)
The `util` module contains reusable utilities. Key submodules:
| Module | Purpose |
|--------|---------|
| `actor/` | Actor pattern implementation for concurrent state management |
| `collections/` | Custom collection types |
| `crypto.rs` | Cryptographic utilities (encryption, hashing) |
| `future.rs` | Future/async utilities |
| `io.rs` | File I/O helpers (create_file, canonicalize, etc.) |
| `iter.rs` | Iterator extensions |
| `net.rs` | Network utilities |
| `rpc.rs` | RPC helpers |
| `rpc_client.rs` | RPC client utilities |
| `serde.rs` | Serialization helpers (Base64, display/fromstr, etc.) |
| `sync.rs` | Synchronization primitives (SyncMutex, etc.) |
## Command Invocation (`Invoke` trait)
The `Invoke` trait provides a clean way to run external commands with error handling:
```rust
use crate::util::Invoke;
// Simple invocation
tokio::process::Command::new("ls")
.arg("-la")
.invoke(ErrorKind::Filesystem)
.await?;
// With timeout
tokio::process::Command::new("slow-command")
.timeout(Some(Duration::from_secs(30)))
.invoke(ErrorKind::Timeout)
.await?;
// With input
let mut input = Cursor::new(b"input data");
tokio::process::Command::new("cat")
.input(Some(&mut input))
.invoke(ErrorKind::Filesystem)
.await?;
// Piped commands
tokio::process::Command::new("cat")
.arg("file.txt")
.pipe(&mut tokio::process::Command::new("grep").arg("pattern"))
.invoke(ErrorKind::Filesystem)
.await?;
```
## Guard Pattern
Guards ensure cleanup happens when they go out of scope.
### `GeneralGuard` / `GeneralBoxedGuard`
For arbitrary cleanup actions:
```rust
use crate::util::GeneralGuard;
let guard = GeneralGuard::new(|| {
println!("Cleanup runs on drop");
});
// Do work...
// Explicit drop with action
guard.drop();
// Or skip the action
// guard.drop_without_action();
```
### `FileLock`
File-based locking with automatic unlock:
```rust
use crate::util::FileLock;
let lock = FileLock::new("/path/to/lockfile", true).await?; // blocking=true
// Lock held until dropped or explicitly unlocked
lock.unlock().await?;
```
## Mount Guard Pattern (`core/src/disk/mount/guard.rs`)
RAII guards for filesystem mounts. Ensures filesystems are unmounted when guards are dropped.
### `MountGuard`
Basic mount guard:
```rust
use crate::disk::mount::guard::MountGuard;
use crate::disk::mount::filesystem::{MountType, ReadOnly};
let guard = MountGuard::mount(&filesystem, "/mnt/target", ReadOnly).await?;
// Use the mounted filesystem at guard.path()
do_something(guard.path()).await?;
// Explicit unmount (or auto-unmounts on drop)
guard.unmount(false).await?; // false = don't delete mountpoint
```
### `TmpMountGuard`
Reference-counted temporary mount (mounts to `/media/startos/tmp/`):
```rust
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::filesystem::ReadOnly;
// Multiple clones share the same mount
let guard1 = TmpMountGuard::mount(&filesystem, ReadOnly).await?;
let guard2 = guard1.clone();
// Mount stays alive while any guard exists
// Auto-unmounts when last guard is dropped
```
### `GenericMountGuard` trait
All mount guards implement this trait:
```rust
pub trait GenericMountGuard: std::fmt::Debug + Send + Sync + 'static {
fn path(&self) -> &Path;
fn unmount(self) -> impl Future<Output = Result<(), Error>> + Send;
}
```
### `SubPath`
Wraps a mount guard to point to a subdirectory:
```rust
use crate::disk::mount::guard::SubPath;
let mount = TmpMountGuard::mount(&filesystem, ReadOnly).await?;
let subdir = SubPath::new(mount, "data/subdir");
// subdir.path() returns the full path including subdirectory
```
## FileSystem Implementations (`core/src/disk/mount/filesystem/`)
Various filesystem types that can be mounted:
| Type | Description |
|------|-------------|
| `bind.rs` | Bind mounts |
| `block_dev.rs` | Block device mounts |
| `cifs.rs` | CIFS/SMB network shares |
| `ecryptfs.rs` | Encrypted filesystem |
| `efivarfs.rs` | EFI variables |
| `httpdirfs.rs` | HTTP directory as filesystem |
| `idmapped.rs` | ID-mapped mounts |
| `label.rs` | Mount by label |
| `loop_dev.rs` | Loop device mounts |
| `overlayfs.rs` | Overlay filesystem |
## Other Useful Utilities
### `Apply` / `ApplyRef` traits
Fluent method chaining:
```rust
use crate::util::Apply;
let result = some_value
.apply(|v| transform(v))
.apply(|v| another_transform(v));
```
### `Container<T>`
Async-safe optional container:
```rust
use crate::util::Container;
let container = Container::new(None);
container.set(value).await;
let taken = container.take().await;
```
### `HashWriter<H, W>`
Write data while computing hash:
```rust
use crate::util::HashWriter;
use sha2::Sha256;
let writer = HashWriter::new(Sha256::new(), file);
// Write data...
let (hasher, file) = writer.finish();
let hash = hasher.finalize();
```
### `Never` type
Uninhabited type for impossible cases:
```rust
use crate::util::Never;
fn impossible() -> Never {
    loop {} // this function can never return normally
}
let never: Never = impossible();
never.absurd::<String>() // Can convert to any type
```
### `MaybeOwned<'a, T>`
Either borrowed or owned data:
```rust
use crate::util::MaybeOwned;
fn accept_either(data: MaybeOwned<'_, String>) {
// Use &*data to access the value
}
accept_either(MaybeOwned::from(&existing_string));
accept_either(MaybeOwned::from(owned_string));
```
### `new_guid()`
Generate a random GUID:
```rust
use crate::util::new_guid;
let guid = new_guid(); // Returns InternedString
```

agents/exver.md (new file):

@@ -0,0 +1,301 @@
# exver — Extended Versioning
Extended semver supporting **downstream versioning** (wrapper updates independent of upstream) and **flavors** (package fork variants).
Two implementations exist:
- **Rust crate** (`exver`) — used in `core/`. Source: https://github.com/Start9Labs/exver-rs
- **TypeScript** (`sdk/base/lib/exver/index.ts`) — used in `sdk/` and `web/`
Both parse the same string format and agree on `satisfies` semantics.
## Version Format
An **ExtendedVersion** string looks like:
```
[#flavor:]upstream:downstream
```
- **upstream** — the original package version (semver-style: `1.2.3`, `1.2.3-beta.1`)
- **downstream** — the StartOS wrapper version (incremented independently)
- **flavor** — optional lowercase ASCII prefix for fork variants
Examples:
- `1.2.3:0` — upstream 1.2.3, first downstream release
- `1.2.3:2` — upstream 1.2.3, third downstream release
- `#bitcoin:21.0:1` — bitcoin flavor, upstream 21.0, downstream 1
- `1.0.0-rc.1:0` — upstream with prerelease tag
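As an illustration of the string format only (not the real parsers in the `exver` crate or `sdk/base/lib/exver`), a naive parse might look like this; prerelease tags are dropped here for brevity:

```typescript
// Naive sketch of `[#flavor:]upstream:downstream` parsing.
type ParsedExver = {
  flavor: string | null
  upstream: number[]
  downstream: number[]
}

function parseExverSketch(s: string): ParsedExver {
  let flavor: string | null = null
  if (s.startsWith('#')) {
    // `#flavor:` prefix: everything up to the first colon is the flavor
    const flavorSep = s.indexOf(':')
    flavor = s.slice(1, flavorSep)
    s = s.slice(flavorSep + 1)
  }
  // The last colon separates upstream from downstream
  const sep = s.lastIndexOf(':')
  const toDigits = (v: string) => v.split('-')[0].split('.').map(Number)
  return {
    flavor,
    upstream: toDigits(s.slice(0, sep)),
    downstream: toDigits(s.slice(sep + 1)),
  }
}
```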
## Core Types
### `Version`
A semver-style version with arbitrary digit segments and optional prerelease.
**Rust:**
```rust
use exver::Version;
let v = Version::new([1, 2, 3], []); // 1.2.3
let v = Version::new([1, 0], ["beta".into()]); // 1.0-beta
let v: Version = "1.2.3".parse().unwrap();
v.number() // &[1, 2, 3]
v.prerelease() // &[]
```
**TypeScript:**
```typescript
const v = new Version([1, 2, 3], [])
const v = Version.parse("1.2.3")
v.number // number[]
v.prerelease // (string | number)[]
v.compare(other) // 'greater' | 'equal' | 'less'
v.compareForSort(other) // -1 | 0 | 1
```
Default: `0`
### `ExtendedVersion`
The primary version type. Wraps upstream + downstream `Version` plus an optional flavor.
**Rust:**
```rust
use exver::ExtendedVersion;
let ev = ExtendedVersion::new(
Version::new([1, 2, 3], []),
Version::default(), // downstream = 0
);
let ev: ExtendedVersion = "1.2.3:0".parse().unwrap();
ev.flavor() // Option<&str>
ev.upstream() // &Version
ev.downstream() // &Version
// Builder methods (consuming):
ev.with_flavor("bitcoin")
ev.without_flavor()
ev.map_upstream(|v| ...)
ev.map_downstream(|v| ...)
```
**TypeScript:**
```typescript
const ev = new ExtendedVersion(null, upstream, downstream)
const ev = ExtendedVersion.parse("1.2.3:0")
const ev = ExtendedVersion.parseEmver("1.2.3.4") // emver compat
ev.flavor // string | null
ev.upstream // Version
ev.downstream // Version
ev.compare(other) // 'greater' | 'equal' | 'less' | null
ev.equals(other) // boolean
ev.greaterThan(other) // boolean
ev.lessThan(other) // boolean
ev.incrementMajor() // new ExtendedVersion
ev.incrementMinor() // new ExtendedVersion
```
**Ordering:** Versions with different flavors are **not comparable** (`PartialOrd`/`compare` returns `None`/`null`).
Default: `0:0`
### `VersionString` (Rust only, StartOS wrapper)
Defined in `core/src/util/version.rs`. Caches the original string representation alongside the parsed `ExtendedVersion`. Used as the key type in registry version maps.
```rust
use crate::util::VersionString;
let vs: VersionString = "1.2.3:0".parse().unwrap();
let vs = VersionString::from(extended_version);
// Deref to ExtendedVersion:
vs.satisfies(&range);
vs.upstream();
// String access:
vs.as_str(); // &str
AsRef::<str>::as_ref(&vs);
```
`Ord` is implemented with a total ordering — versions with different flavors are ordered by flavor name (unflavored sorts last).
### `VersionRange`
A predicate over `ExtendedVersion`. Supports comparison operators, boolean logic, and flavor constraints.
**Rust:**
```rust
use exver::VersionRange;
// Constructors:
VersionRange::any() // matches everything
VersionRange::none() // matches nothing
VersionRange::exactly(ev) // = ev
VersionRange::anchor(GTE, ev) // >= ev
VersionRange::caret(ev) // ^ev (compatible changes)
VersionRange::tilde(ev) // ~ev (patch-level changes)
// Combinators (smart — eagerly simplify):
VersionRange::and(a, b) // a && b
VersionRange::or(a, b) // a || b
VersionRange::not(a) // !a
// Parsing:
let r: VersionRange = ">=1.0.0:0".parse().unwrap();
let r: VersionRange = "^1.2.3:0".parse().unwrap();
let r: VersionRange = ">=1.0.0 <2.0.0".parse().unwrap(); // implicit AND
let r: VersionRange = ">=1.0.0 || >=2.0.0".parse().unwrap();
let r: VersionRange = "#bitcoin".parse().unwrap(); // flavor match
let r: VersionRange = "*".parse().unwrap(); // any
// Monoid wrappers for folding:
AnyRange // fold with or, empty = None
AllRange // fold with and, empty = Any
```
**TypeScript:**
```typescript
// Constructors:
VersionRange.any()
VersionRange.none()
VersionRange.anchor('=', ev)
VersionRange.anchor('>=', ev)
VersionRange.anchor('^', ev) // ^ and ~ are first-class operators
VersionRange.anchor('~', ev)
VersionRange.flavor(null) // match unflavored versions
VersionRange.flavor("bitcoin") // match #bitcoin versions
// Combinators — static (smart, variadic):
VersionRange.and(a, b, c, ...)
VersionRange.or(a, b, c, ...)
// Combinators — instance (not smart, just wrap):
range.and(other)
range.or(other)
range.not()
// Parsing:
VersionRange.parse(">=1.0.0:0")
VersionRange.parseEmver(">=1.2.3.4") // emver compat
// Analysis (TS only):
range.normalize() // canonical form (see below)
range.satisfiable() // boolean
range.intersects(other) // boolean
```
**Checking satisfaction:**
```rust
// Rust:
version.satisfies(&range) // bool
```
```typescript
// TypeScript:
version.satisfies(range) // boolean
range.satisfiedBy(version) // boolean (convenience)
```
Also available on `Version` (wraps in `ExtendedVersion` with downstream=0).
When no operator is specified in a range string, `^` (caret) is the default.
## Operators
| Syntax | Rust | TS | Meaning |
|--------|------|----|---------|
| `=` | `EQ` | `'='` | Equal |
| `!=` | `NEQ` | `'!='` | Not equal |
| `>` | `GT` | `'>'` | Greater than |
| `>=` | `GTE` | `'>='` | Greater than or equal |
| `<` | `LT` | `'<'` | Less than |
| `<=` | `LTE` | `'<='` | Less than or equal |
| `^` | expanded to `And(GTE, LT)` | `'^'` | Compatible (first non-zero digit unchanged) |
| `~` | expanded to `And(GTE, LT)` | `'~'` | Patch-level (minor unchanged) |
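The expansion mentioned for `^` can be sketched as computing a digit-prefix upper bound; this is illustrative only, and `~` analogously bumps the digit after the minor:

```typescript
// `^1.2.3` means >=1.2.3 and < the version obtained by bumping the
// first non-zero digit and dropping the rest (so < 2, i.e. < 2.0.0).
function caretUpperBound(digits: number[]): number[] {
  const i = digits.findIndex((d) => d !== 0)
  const j = i === -1 ? digits.length - 1 : i
  return [...digits.slice(0, j), digits[j] + 1]
}
```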
## Flavor Rules
- Versions with **different flavors** never satisfy comparison operators (except `!=`, which returns true)
- `VersionRange::Flavor(Some("bitcoin"))` matches only `#bitcoin:*` versions
- `VersionRange::Flavor(None)` matches only unflavored versions
- Flavor constraints compose with `and`/`or`/`not` like any other range
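The first rule can be sketched in terms of a comparison result, where `null` stands for "different flavors, not comparable" (a hypothetical standalone function, not the SDK's API):

```typescript
// With incomparable flavors (cmp === null), every operator fails
// except `!=`, which holds vacuously.
function anchorSatisfied(
  op: '=' | '!=' | '>' | '>=' | '<' | '<=',
  cmp: 'greater' | 'equal' | 'less' | null, // null = flavors differ
): boolean {
  if (cmp === null) return op === '!='
  if (op === '=') return cmp === 'equal'
  if (op === '!=') return cmp !== 'equal'
  if (op === '>') return cmp === 'greater'
  if (op === '>=') return cmp !== 'less'
  if (op === '<') return cmp === 'less'
  return cmp !== 'greater' // '<='
}
```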
## Reduction and Normalization
### Rust: `reduce()` (shallow)
`VersionRange::reduce(self) -> Self` re-applies smart constructor rules to one level of the AST. Useful for simplifying a node that was constructed directly (e.g. deserialized) rather than through the smart constructors.
**Smart constructor rules applied by `and`, `or`, `not`, and `reduce`:**
`and`:
- `and(Any, b) → b`, `and(a, Any) → a`
- `and(None, _) → None`, `and(_, None) → None`
`or`:
- `or(Any, _) → Any`, `or(_, Any) → Any`
- `or(None, b) → b`, `or(a, None) → a`
`not`:
- `not(=v) → !=v`, `not(!=v) → =v`
- `not(and(a, b)) → or(not(a), not(b))` (De Morgan)
- `not(or(a, b)) → and(not(a), not(b))` (De Morgan)
- `not(not(a)) → a`
- `not(Any) → None`, `not(None) → Any`
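The rules above can be sketched over a tiny stand-in AST (illustrative; the real type lives in the exver crate, where anchors replace the plain strings used here):

```typescript
// 'Any' / 'None' are the constants; other strings stand in for anchors.
type R = 'Any' | 'None' | { not: R } | { and: [R, R] } | { or: [R, R] } | string

function and(a: R, b: R): R {
  if (a === 'Any') return b
  if (b === 'Any') return a
  if (a === 'None' || b === 'None') return 'None'
  return { and: [a, b] }
}

function or(a: R, b: R): R {
  if (a === 'Any' || b === 'Any') return 'Any'
  if (a === 'None') return b
  if (b === 'None') return a
  return { or: [a, b] }
}

function not(a: R): R {
  if (a === 'Any') return 'None'
  if (a === 'None') return 'Any'
  if (typeof a === 'object' && 'not' in a) return a.not // double negation
  if (typeof a === 'object' && 'and' in a) return or(not(a.and[0]), not(a.and[1])) // De Morgan
  if (typeof a === 'object' && 'or' in a) return and(not(a.or[0]), not(a.or[1])) // De Morgan
  return { not: a }
}
```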
### TypeScript: `normalize()` (deep, canonical)
`VersionRange.normalize(): VersionRange` in `sdk/base/lib/exver/index.ts` performs full normalization by converting the range AST into a canonical form. This is a deep operation that produces a semantically equivalent but simplified range.
**How it works:**
1. **`tables()`** — Converts the VersionRange AST into truth tables (`VersionRangeTable`). Each table is a number line split at version boundary points, with boolean values for each segment indicating whether versions in that segment satisfy the range. Separate tables are maintained per flavor (and for flavor negations).
2. **`VersionRangeTable.zip(a, b, func)`** — Merges two tables by walking their boundary points in sorted order and applying a boolean function (`&&` or `||`) to combine segment values. Adjacent segments with the same boolean value are collapsed automatically.
3. **`VersionRangeTable.and/or/not`** — Table-level boolean operations. `and` computes the cross-product of flavor tables (since `#a && #b` for different flavors is unsatisfiable). `not` inverts all segment values.
4. **`VersionRangeTable.collapse()`** — Checks if a table is uniformly true or false across all flavors and segments. Returns `true`, `false`, or `null` (mixed).
5. **`VersionRangeTable.minterms()`** — Converts truth tables back into a VersionRange AST in [sum-of-products](https://en.wikipedia.org/wiki/Canonical_normal_form#Minterms) canonical form. Each `true` segment becomes a product term (conjunction of boundary constraints), and all terms are joined with `or`. Adjacent boundary points collapse into `=` anchors.
**Example:** `normalize` can simplify:
- `>=1.0.0:0 && <=1.0.0:0` → `=1.0.0:0`
- `>=2.0.0:0 || >=1.0.0:0` → `>=1.0.0:0`
- `!(!>=1.0.0:0)` → `>=1.0.0:0`
**Also exposes:**
- `satisfiable(): boolean` — returns `true` if there exists any version satisfying the range (checks if `collapse(tables())` is not `false`)
- `intersects(other): boolean` — returns `true` if `and(this, other)` is satisfiable
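The zip-and-collapse idea in step 2 can be sketched with a simplified table over plain numbers standing in for versions; the real `VersionRangeTable` additionally tracks at-point values and per-flavor tables, which this hypothetical version omits:

```typescript
// points[i] separates segment i from segment i+1, so `values` has
// points.length + 1 entries (one per segment of the number line).
type Table = { points: number[]; values: boolean[] }

function zip(a: Table, b: Table, f: (x: boolean, y: boolean) => boolean): Table {
  // Merge boundary points in sorted order
  const points = [...new Set([...a.points, ...b.points])].sort((x, y) => x - y)
  const valueAt = (t: Table, p: number) => {
    let i = 0
    while (i < t.points.length && t.points[i] <= p) i++
    return t.values[i]
  }
  // Sample each merged segment at a representative point
  const values: boolean[] = []
  for (let i = 0; i <= points.length; i++) {
    const sample = i === 0 ? (points[0] ?? 0) - 1 : points[i - 1] + 0.5
    values.push(f(valueAt(a, sample), valueAt(b, sample)))
  }
  // Collapse adjacent segments with equal boolean values
  const outP: number[] = []
  const outV: boolean[] = [values[0]]
  for (let i = 1; i < values.length; i++) {
    if (values[i] !== outV[outV.length - 1]) {
      outP.push(points[i - 1])
      outV.push(values[i])
    }
  }
  return { points: outP, values: outV }
}
```

For example, zipping ">= 1" with "< 2" under `&&` yields a single true segment between 1 and 2, while zipping them under `||` collapses to a uniformly true table.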
## API Differences Between Rust and TypeScript
| | Rust | TypeScript |
|-|------|------------|
| **`^` / `~`** | Expanded at construction to `And(GTE, LT)` | First-class operator on `Anchor` |
| **`not()`** | Static, eagerly simplifies (De Morgan, double negation) | Instance method, just wraps |
| **`and()`/`or()`** | Binary static | Both binary instance and variadic static |
| **Normalization** | `reduce()` — shallow, one AST level | `normalize()` — deep canonical form via truth tables |
| **Satisfiability** | Not available | `satisfiable()` and `intersects(other)` |
| **ExtendedVersion helpers** | `with_flavor()`, `without_flavor()`, `map_upstream()`, `map_downstream()` | `incrementMajor()`, `incrementMinor()`, `greaterThan()`, `lessThan()`, `equals()`, etc. |
| **Monoid wrappers** | `AnyRange` (fold with `or`) and `AllRange` (fold with `and`) | Not present — use variadic static methods |
| **`VersionString`** | Wrapper caching parsed + string form | Not present |
| **Emver compat** | `From<emver::Version>` for `ExtendedVersion` | `ExtendedVersion.parseEmver()`, `VersionRange.parseEmver()` |
## Serde
All types serialize/deserialize as strings (requires `serde` feature, enabled in StartOS):
```json
{
"version": "1.2.3:0",
"targetVersion": ">=1.0.0:0 <2.0.0:0",
"sourceVersion": "^0.3.0:0"
}
```

agents/i18n-patterns.md (new file):

@@ -0,0 +1,100 @@
# i18n Patterns in `core/`
## Library & Setup
**Crate:** [`rust-i18n`](https://crates.io/crates/rust-i18n) v3.1.5 (`core/Cargo.toml`)
**Initialization** (`core/src/lib.rs:3`):
```rust
rust_i18n::i18n!("locales", fallback = ["en_US"]);
```
This macro scans `core/locales/` at compile time and embeds all translations as constants.
**Prelude re-export** (`core/src/prelude.rs:4`):
```rust
pub use rust_i18n::t;
```
Most modules import `t!` via the prelude.
## Translation File
**Location:** `core/locales/i18n.yaml`
**Format:** YAML v2 (~755 keys)
**Supported languages:** `en_US`, `de_DE`, `es_ES`, `fr_FR`, `pl_PL`
**Entry structure:**
```yaml
namespace.sub.key-name:
en_US: "English text with %{param}"
de_DE: "German text with %{param}"
# ...
```
## Using `t!()`
```rust
// Simple key
t!("error.unknown")
// With parameter interpolation (%{name} in YAML)
t!("bins.deprecated.renamed", old = old_name, new = new_name)
```
## Key Naming Conventions
Keys use **dot-separated hierarchical namespaces** with **kebab-case** for multi-word segments:
```
<module>.<submodule>.<descriptive-name>
```
Examples:
- `error.incorrect-password` — error kind label
- `bins.start-init.updating-firmware` — startup phase message
- `backup.bulk.complete-title` — backup notification title
- `help.arg.acme-contact` — CLI help text for an argument
- `context.diagnostic.starting-diagnostic-ui` — diagnostic context status
### Top-Level Namespaces
| Namespace | Purpose |
|-----------|---------|
| `error.*` | `ErrorKind` display strings (see `src/error.rs`) |
| `bins.*` | CLI binary messages (deprecated, start-init, startd, etc.) |
| `init.*` | Initialization phase labels |
| `setup.*` | First-run setup messages |
| `context.*` | Context startup messages (diagnostic, setup, CLI) |
| `service.*` | Service lifecycle messages |
| `backup.*` | Backup/restore operation messages |
| `registry.*` | Package registry messages |
| `net.*` | Network-related messages |
| `middleware.*` | Request middleware messages (auth, etc.) |
| `disk.*` | Disk operation messages |
| `lxc.*` | Container management messages |
| `system.*` | System monitoring/metrics messages |
| `notifications.*` | User-facing notification messages |
| `update.*` | OS update messages |
| `util.*` | Utility messages (TUI, RPC) |
| `ssh.*` | SSH operation messages |
| `shutdown.*` | Shutdown-related messages |
| `logs.*` | Log-related messages |
| `auth.*` | Authentication messages |
| `help.*` | CLI help text (`help.arg.<arg-name>`) |
| `about.*` | CLI command descriptions |
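A key-format check along these lines could catch convention violations early (illustrative only; nothing in `core/` enforces this):

```rust
/// Check that an i18n key is dot-separated with non-empty,
/// lowercase kebab-case segments (letters, digits, '-').
fn is_valid_key(key: &str) -> bool {
    !key.is_empty()
        && key.split('.').all(|seg| {
            !seg.is_empty()
                && !seg.starts_with('-')
                && !seg.ends_with('-')
                && seg
                    .chars()
                    .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '-')
        })
}
```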
## Locale Selection
`set_locale_from_env()` (`core/src/bins/mod.rs:15-36`):
1. Reads `LANG` environment variable
2. Strips `.UTF-8` suffix
3. Exact-matches against available locales, falls back to language-prefix match (e.g. `en_GB` matches `en_US`)
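That selection logic can be sketched as a pure function (a hypothetical simplification of `set_locale_from_env()`, not the actual implementation):

```rust
/// Pick the best available locale for a LANG value like "de_DE.UTF-8".
/// Returns None when neither an exact nor a language-prefix match exists.
fn match_locale<'a>(lang: &str, available: &[&'a str]) -> Option<&'a str> {
    // 1. Strip the encoding suffix (".UTF-8") if present.
    let lang = lang.split('.').next().unwrap_or(lang);
    // 2. Exact match first.
    if let Some(found) = available.iter().find(|l| **l == lang) {
        return Some(*found);
    }
    // 3. Fall back to matching the language prefix ("en_GB" -> "en_US").
    let prefix = lang.split('_').next().unwrap_or(lang);
    available
        .iter()
        .find(|l| l.split('_').next() == Some(prefix))
        .copied()
}
```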
## Adding New Keys
1. Add the key to `core/locales/i18n.yaml` with all 5 language translations
2. Use the `t!("your.key.name")` macro in Rust code
3. Follow existing namespace conventions — match the module path where the key is used
4. Use kebab-case for multi-word segments
5. Translations are validated at compile time

`agents/rpc-toolkit.md` (new file, 226 lines):
# rpc-toolkit
StartOS uses [rpc-toolkit](https://github.com/Start9Labs/rpc-toolkit) for its JSON-RPC API. This document covers the patterns used in this codebase.
## Overview
The API is JSON-RPC (not REST). All endpoints are RPC methods organized in a hierarchical command structure.
## Handler Functions
There are four types of handler functions, chosen based on the function's characteristics:
### `from_fn_async` - Async handlers
For standard async functions. Most handlers use this.
```rust
pub async fn my_handler(ctx: RpcContext, params: MyParams) -> Result<MyResponse, Error> {
    // Can use .await
    todo!()
}
from_fn_async(my_handler)
```
### `from_fn_async_local` - Non-thread-safe async handlers
For async functions that are not `Send` (cannot be safely moved between threads). Use when working with non-thread-safe types.
```rust
pub async fn cli_download(ctx: CliContext, params: Params) -> Result<(), Error> {
    // Non-Send async operations
    todo!()
}
from_fn_async_local(cli_download)
```
### `from_fn_blocking` - Sync blocking handlers
For synchronous functions that perform blocking I/O or long computations.
```rust
pub fn query_dns(ctx: RpcContext, params: DnsParams) -> Result<DnsResponse, Error> {
    // Blocking operations (file I/O, DNS lookup, etc.)
    todo!()
}
from_fn_blocking(query_dns)
```
### `from_fn` - Sync non-blocking handlers
For pure functions or quick synchronous operations with no I/O.
```rust
pub fn echo(ctx: RpcContext, params: EchoParams) -> Result<String, Error> {
Ok(params.message)
}
from_fn(echo)
```
## ParentHandler
Groups related RPC methods into a hierarchy:
```rust
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
pub fn my_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand("list", from_fn_async(list_handler).with_call_remote::<CliContext>())
.subcommand("create", from_fn_async(create_handler).with_call_remote::<CliContext>())
}
```
## Handler Extensions
Chain methods to configure handler behavior.
**Ordering rules:**
1. `with_about()` must come AFTER other CLI modifiers (`no_display()`, `with_custom_display_fn()`, etc.)
2. `with_call_remote()` must be the LAST adapter in the chain
| Method | Purpose |
|--------|---------|
| `.with_metadata("key", Value)` | Attach metadata for middleware |
| `.no_cli()` | RPC-only, not available via CLI |
| `.no_display()` | No CLI output |
| `.with_display_serializable()` | Default JSON/YAML output for CLI |
| `.with_custom_display_fn(\|_, res\| ...)` | Custom CLI output formatting |
| `.with_about("about.description")` | Add help text (i18n key) - **after CLI modifiers** |
| `.with_call_remote::<CliContext>()` | Enable CLI to call remotely - **must be last** |
### Correct ordering example:
```rust
from_fn_async(my_handler)
.with_metadata("sync_db", Value::Bool(true)) // metadata early
.no_display() // CLI modifier
.with_about("about.my-handler") // after CLI modifiers
.with_call_remote::<CliContext>() // always last
```
## Metadata by Middleware
Metadata tags are processed by different middleware. Group them logically:
### Auth Middleware (`middleware/auth/mod.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `authenticated` | `true` | Whether endpoint requires authentication. Set to `false` for public endpoints. |
### Session Auth Middleware (`middleware/auth/session.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `login` | `false` | Special handling for login endpoints (rate limiting, cookie setting) |
| `get_session` | `false` | Inject session ID into params as `__Auth_session` |
### Signature Auth Middleware (`middleware/auth/signature.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `get_signer` | `false` | Inject signer public key into params as `__Auth_signer` |
### Registry Auth (extends Signature Auth)
| Metadata | Default | Description |
|----------|---------|-------------|
| `admin` | `false` | Require admin privileges (signer must be in admin list) |
| `get_device_info` | `false` | Inject device info header for hardware filtering |
### Database Middleware (`middleware/db.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `sync_db` | `false` | Sync database after mutation, add `X-Patch-Sequence` header |
## Context Types
Different contexts for different execution environments:
- `RpcContext` - Web/RPC requests with full service access
- `CliContext` - CLI operations, calls remote RPC
- `InitContext` - During system initialization
- `DiagnosticContext` - Diagnostic/recovery mode
- `RegistryContext` - Registry daemon context
- `EffectContext` - Service effects context (container-to-host calls)
## Parameter Structs
Parameters use derive macros for JSON-RPC, CLI parsing, and TypeScript generation:
```rust
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")] // JSON-RPC uses camelCase
#[command(rename_all = "kebab-case")] // CLI uses kebab-case
#[ts(export)] // Generate TypeScript types
pub struct MyParams {
pub package_id: PackageId,
}
```
### Middleware Injection
Auth middleware can inject values into params using special field names:
```rust
#[derive(Deserialize, Serialize, Parser, TS)]
pub struct MyParams {
#[ts(skip)]
#[serde(rename = "__Auth_session")] // Injected by session auth
session: InternedString,
#[ts(skip)]
#[serde(rename = "__Auth_signer")] // Injected by signature auth
signer: AnyVerifyingKey,
#[ts(skip)]
#[serde(rename = "__Auth_userAgent")] // Injected during login
user_agent: Option<String>,
}
```
## Common Patterns
### Adding a New RPC Endpoint
1. Define params struct with `Deserialize, Serialize, Parser, TS`
2. Choose handler type based on sync/async and thread-safety
3. Write handler function taking `(Context, Params) -> Result<Response, Error>`
4. Add to parent handler with appropriate extensions (display modifiers before `with_about`)
5. TypeScript types auto-generated via `make ts-bindings`
### Public (Unauthenticated) Endpoint
```rust
from_fn_async(get_info)
.with_metadata("authenticated", Value::Bool(false))
.with_display_serializable()
.with_about("about.get-info")
.with_call_remote::<CliContext>() // last
```
### Mutating Endpoint with DB Sync
```rust
from_fn_async(update_config)
.with_metadata("sync_db", Value::Bool(true))
.no_display()
.with_about("about.update-config")
.with_call_remote::<CliContext>() // last
```
### Session-Aware Endpoint
```rust
from_fn_async(logout)
.with_metadata("get_session", Value::Bool(true))
.no_display()
.with_about("about.logout")
.with_call_remote::<CliContext>() // last
```
## File Locations
- Handler definitions: Throughout `core/src/` modules
- Main API tree: `core/src/lib.rs` (`main_api()`, `server()`, `package()`)
- Auth middleware: `core/src/middleware/auth/`
- DB middleware: `core/src/middleware/db.rs`
- Context types: `core/src/context/`

`agents/s9pk-structure.md` (new file, 122 lines):
# S9PK Package Format
S9PK is the package format for StartOS services. Version 2 uses a Merkle archive structure for efficient downloading and cryptographic verification.
## File Format
S9PK files begin with a 3-byte header: `0x3b 0x3b 0x02` (magic bytes + version 2).
The archive is cryptographically signed using Ed25519 with prehashed content (a SHA-512 digest of the BLAKE3 Merkle root hash).
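A header check along those lines might look like this (hypothetical sketch, not the actual parser in `core/src/s9pk/`):

```rust
/// Returns Some(format version) when the buffer starts with the
/// s9pk magic bytes 0x3b 0x3b, followed by a one-byte version.
fn s9pk_version(buf: &[u8]) -> Option<u8> {
    if buf.len() >= 3 && buf[0] == 0x3b && buf[1] == 0x3b {
        Some(buf[2])
    } else {
        None
    }
}
```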
## Archive Structure
```
/
├── manifest.json # Package metadata (required)
├── icon.<ext> # Package icon - any image/* format (required)
├── LICENSE.md # License text (required)
├── dependencies/ # Dependency metadata (optional)
│ └── <package-id>/
│ ├── metadata.json # DependencyMetadata
│ └── icon.<ext> # Dependency icon
├── javascript.squashfs # Package JavaScript code (required)
├── assets.squashfs # Static assets (optional, legacy: assets/ directory)
└── images/ # Container images by architecture
└── <arch>/ # e.g., x86_64, aarch64, riscv64
├── <image-id>.squashfs # Container filesystem
├── <image-id>.json # Image metadata
└── <image-id>.env # Environment variables
```
## Components
### manifest.json
The package manifest contains all metadata:
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Package identifier (e.g., `bitcoind`) |
| `title` | string | Display name |
| `version` | string | Extended version string |
| `satisfies` | string[] | Version ranges this version satisfies |
| `releaseNotes` | string/object | Release notes (localized) |
| `canMigrateTo` | string | Version range for forward migration |
| `canMigrateFrom` | string | Version range for backward migration |
| `license` | string | License type |
| `wrapperRepo` | string | StartOS wrapper repository URL |
| `upstreamRepo` | string | Upstream project URL |
| `supportSite` | string | Support site URL |
| `marketingSite` | string | Marketing site URL |
| `donationUrl` | string? | Optional donation URL |
| `docsUrl` | string? | Optional documentation URL |
| `description` | object | Short and long descriptions (localized) |
| `images` | object | Image configurations by image ID |
| `volumes` | string[] | Volume IDs for persistent data |
| `alerts` | object | User alerts for lifecycle events |
| `dependencies` | object | Package dependencies |
| `hardwareRequirements` | object | Hardware requirements (arch, RAM, devices) |
| `hardwareAcceleration` | boolean | Whether package uses hardware acceleration |
| `gitHash` | string? | Git commit hash |
| `osVersion` | string | Minimum StartOS version |
| `sdkVersion` | string? | SDK version used to build |
### javascript.squashfs
Contains the package JavaScript that implements the `ABI` interface from `@start9labs/start-sdk-base`. This code runs in the container runtime and manages the package lifecycle.
The squashfs is mounted at `/usr/lib/startos/package/` and the runtime loads `index.js`.
### images/
Container images organized by architecture:
- **`<image-id>.squashfs`** - Container root filesystem
- **`<image-id>.json`** - Image metadata (entrypoint, user, workdir, etc.)
- **`<image-id>.env`** - Environment variables for the container
Images are built from Docker/Podman and converted to squashfs. The `ImageConfig` in manifest specifies:
- `arch` - Supported architectures
- `emulateMissingAs` - Fallback architecture for emulation
- `nvidiaContainer` - Whether to enable NVIDIA container support
### assets.squashfs
Static assets accessible to the package, mounted read-only at `/media/startos/assets/` in the container.
### dependencies/
Metadata for dependencies displayed in the UI:
- `metadata.json` - currently contains only the dependency's title
- `icon.<ext>` - Icon for the dependency
## Merkle Archive
The S9PK uses a Merkle tree structure in which each file and directory has a BLAKE3 hash. This enables:
1. **Partial downloads** - Download and verify individual files
2. **Integrity verification** - Verify any subset of the archive
3. **Efficient updates** - Only download changed portions
4. **DOS protection** - Size limits enforced before downloading content
Files are sorted by priority for streaming (manifest first, then icon, license, dependencies, javascript, assets, images).
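The tree construction can be sketched as a pairwise fold. The standard library hasher below is only a stand-in for BLAKE3, and the real implementation in `core/src/s9pk/merkle_archive/` also tracks sizes and directory structure:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash one file's contents (stand-in for a BLAKE3 leaf hash).
fn leaf_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// Fold child hashes pairwise until a single root remains.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    if level.is_empty() {
        return 0;
    }
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                let mut h = DefaultHasher::new();
                pair.hash(&mut h);
                h.finish()
            })
            .collect();
    }
    level[0]
}
```

Changing any leaf changes the root, which is what lets a verifier check a partial download against the signed root.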
## Building S9PK
Use `start-cli s9pk pack` to build packages:
```bash
start-cli s9pk pack <manifest-path> -o <output.s9pk>
```
Images can be sourced from:
- Docker/Podman build (`--docker-build`)
- Existing Docker tag (`--docker-tag`)
- Pre-built squashfs files
## Related Code
- `core/src/s9pk/v2/mod.rs` - S9pk struct and serialization
- `core/src/s9pk/v2/manifest.rs` - Manifest types
- `core/src/s9pk/v2/pack.rs` - Packing logic
- `core/src/s9pk/merkle_archive/` - Merkle archive implementation

`build/README.md` (new, empty file)

Modified file (package list):

```diff
@@ -3,6 +3,7 @@ avahi-utils
 b3sum
 bash-completion
 beep
+binfmt-support
 bmon
 btrfs-progs
 ca-certificates
@@ -15,6 +16,7 @@ dnsutils
 dosfstools
 e2fsprogs
 ecryptfs-utils
+equivs
 exfatprogs
 flashrom
 fuse3
@@ -44,6 +46,7 @@ openssh-server
 podman
 psmisc
 qemu-guest-agent
+qemu-user-static
 rfkill
 rsync
 samba-common-bin
```

Modified file (build feature script):

```diff
@@ -9,6 +9,9 @@ FEATURES+=("${ARCH}")
 if [ "$ARCH" != "$PLATFORM" ]; then
 	FEATURES+=("${PLATFORM}")
 fi
+if [[ "$PLATFORM" =~ -nonfree$ ]]; then
+	FEATURES+=("nonfree")
+fi
 feature_file_checker='
 /^#/ { next }
```

New file (non-free firmware and NVIDIA package list, 10 lines):

```
+ firmware-amd-graphics
+ firmware-atheros
+ firmware-brcm80211
+ firmware-iwlwifi
+ firmware-libertas
+ firmware-misc-nonfree
+ firmware-realtek
+ nvidia-container-toolkit
# + nvidia-driver
# + nvidia-kernel-dkms
```

Modified file (image build script):

```diff
@@ -73,7 +73,7 @@ if [ "$NON_FREE" = 1 ]; then
 	if [ "$IB_SUITE" = "bullseye" ]; then
 		ARCHIVE_AREAS="$ARCHIVE_AREAS non-free"
 	else
-		ARCHIVE_AREAS="$ARCHIVE_AREAS non-free-firmware"
+		ARCHIVE_AREAS="$ARCHIVE_AREAS non-free non-free-firmware"
 	fi
 fi
@@ -154,9 +154,12 @@ prompt 0
 timeout 50
 EOF
-cp $SOURCE_DIR/splash.png config/bootloaders/syslinux_common/splash.png
-cp $SOURCE_DIR/splash.png config/bootloaders/isolinux/splash.png
-cp $SOURCE_DIR/splash.png config/bootloaders/grub-pc/splash.png
+# Extract splash.png from the deb package
+dpkg-deb --fsys-tarfile $DEB_PATH | tar --to-stdout -xf - ./usr/lib/startos/splash.png > /tmp/splash.png
+cp /tmp/splash.png config/bootloaders/syslinux_common/splash.png
+cp /tmp/splash.png config/bootloaders/isolinux/splash.png
+cp /tmp/splash.png config/bootloaders/grub-pc/splash.png
+rm /tmp/splash.png
 sed -i -e '2i set timeout=5' config/bootloaders/grub-pc/config.cfg
@@ -174,40 +177,123 @@ if [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
 	echo "deb https://apt.armbian.com/ ${IB_SUITE} main" > config/archives/armbian.list
 fi
-cat > config/archives/backports.pref <<- EOF
+if [ "$NON_FREE" = 1 ]; then
+	curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o config/archives/nvidia-container-toolkit.key
+	curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
+		| sed 's#deb https://#deb [signed-by=/etc/apt/trusted.gpg.d/nvidia-container-toolkit.key.gpg] https://#g' \
+		> config/archives/nvidia-container-toolkit.list
+fi
+cat > config/archives/backports.pref <<-EOF
 	Package: linux-image-*
 	Pin: release n=${IB_SUITE}-backports
 	Pin-Priority: 500
+
+	Package: linux-headers-*
+	Pin: release n=${IB_SUITE}-backports
+	Pin-Priority: 500
+
+	Package: *nvidia*
+	Pin: release n=${IB_SUITE}-backports
+	Pin-Priority: 500
 EOF
-# Dependencies
+# Hooks
+## Firmware
+if [ "$NON_FREE" = 1 ]; then
+	echo 'firmware-iwlwifi firmware-misc-nonfree firmware-brcm80211 firmware-realtek firmware-atheros firmware-libertas firmware-amd-graphics' > config/package-lists/nonfree.list.chroot
+fi
 cat > config/hooks/normal/9000-install-startos.hook.chroot << EOF
 #!/bin/bash
 set -e
+
+if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
+	# install a specific NVIDIA driver version
+
+	# ---------------- configuration ----------------
+	NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.119.02}"
+	BASE_URL="https://download.nvidia.com/XFree86/Linux-${QEMU_ARCH}"
+
+	echo "[nvidia-hook] Using NVIDIA driver: \${NVIDIA_DRIVER_VERSION}" >&2
+
+	# ---------------- kernel version ----------------
+	# Determine target kernel version from newest /boot/vmlinuz-* in the chroot.
+	KVER="\$(
+		ls -1t /boot/vmlinuz-* 2>/dev/null \
+			| head -n1 \
+			| sed 's|.*/vmlinuz-||'
+	)"
+	if [ -z "\${KVER}" ]; then
+		echo "[nvidia-hook] ERROR: no /boot/vmlinuz-* found; cannot determine kernel version" >&2
+		exit 1
+	fi
+	echo "[nvidia-hook] Target kernel version: \${KVER}" >&2
+
+	# Ensure kernel headers are present
+	TEMP_APT_DEPS=(build-essential)
+	if [ ! -e "/lib/modules/\${KVER}/build" ]; then
+		TEMP_APT_DEPS+=(linux-headers-\${KVER})
+	fi
+	echo "[nvidia-hook] Installing build dependencies" >&2
+	/usr/lib/startos/scripts/install-equivs <<-EOF
+		Package: nvidia-depends
+		Version: \${NVIDIA_DRIVER_VERSION}
+		Section: unknown
+		Priority: optional
+		Depends: \${dep_list="\$(IFS=', '; echo "\${TEMP_APT_DEPS[*]}")"}
+	EOF
+
+	# ---------------- download and run installer ----------------
+	RUN_NAME="NVIDIA-Linux-${QEMU_ARCH}-\${NVIDIA_DRIVER_VERSION}.run"
+	RUN_PATH="/root/\${RUN_NAME}"
+	RUN_URL="\${BASE_URL}/\${NVIDIA_DRIVER_VERSION}/\${RUN_NAME}"
+
+	echo "[nvidia-hook] Downloading \${RUN_URL}" >&2
+	wget -O "\${RUN_PATH}" "\${RUN_URL}"
+	chmod +x "\${RUN_PATH}"
+
+	echo "[nvidia-hook] Running NVIDIA installer for kernel \${KVER}" >&2
+	sh "\${RUN_PATH}" \
+		--silent \
+		--kernel-name="\${KVER}" \
+		--no-x-check \
+		--no-nouveau-check \
+		--no-runlevel-check
+
+	# Rebuild module metadata
+	echo "[nvidia-hook] Running depmod for \${KVER}" >&2
+	depmod -a "\${KVER}"
+
+	echo "[nvidia-hook] NVIDIA \${NVIDIA_DRIVER_VERSION} installation complete for kernel \${KVER}" >&2
+
+	echo "[nvidia-hook] Removing build dependencies..." >&2
+	apt-get purge -y nvidia-depends
+	apt-get autoremove -y
+	echo "[nvidia-hook] Removed build dependencies." >&2
+fi
+
 cp /etc/resolv.conf /etc/resolv.conf.bak
 if [ "${IB_SUITE}" = trixie ] && [ "${IB_TARGET_ARCH}" != riscv64 ]; then
 	echo 'deb https://deb.debian.org/debian/ bookworm main' > /etc/apt/sources.list.d/bookworm.list
 	apt-get update
 	apt-get install -y postgresql-15
 	rm /etc/apt/sources.list.d/bookworm.list
 	apt-get update
 	systemctl mask postgresql
 fi
 if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
 	ln -sf /usr/bin/pi-beep /usr/local/bin/beep
 	KERNEL_VERSION=${RPI_KERNEL_VERSION} sh /boot/config.sh > /boot/config.txt
-	mkinitramfs -c gzip -o initrd.img-${RPI_KERNEL_VERSION}-rpi-v8 ${RPI_KERNEL_VERSION}-rpi-v8
-	mkinitramfs -c gzip -o initrd.img-${RPI_KERNEL_VERSION}-rpi-2712 ${RPI_KERNEL_VERSION}-rpi-2712
+	mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-v8 ${RPI_KERNEL_VERSION}-rpi-v8
+	mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-2712 ${RPI_KERNEL_VERSION}-rpi-2712
 fi
 useradd --shell /bin/bash -G startos -m start9
@@ -218,11 +304,11 @@ usermod -aG systemd-journal start9
 echo "start9 ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee "/etc/sudoers.d/010_start9-nopasswd"
 if [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
 	/usr/lib/startos/scripts/enable-kiosk
 fi
 if ! [[ "${IB_OS_ENV}" =~ (^|-)dev($|-) ]]; then
 	passwd -l start9
 fi
 EOF
@@ -360,4 +446,4 @@ elif [ "${IMAGE_TYPE}" = img ]; then
 fi
 chown $IB_UID:$IB_UID $RESULTS_DIR/$IMAGE_BASENAME.*
```

Modified file (kernel command line — drops `quiet`):

```diff
@@ -1 +1 @@
-usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u console=serial0,115200 console=tty1 root=PARTUUID=cb15ae4d-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory quiet boot=startos
+usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u console=serial0,115200 console=tty1 root=PARTUUID=cb15ae4d-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory boot=startos
```

New file (GRUB theme, 51 lines):

```
desktop-image: "../splash.png"
title-color: "#ffffff"
title-font: "Unifont Regular 16"
title-text: "StartOS Boot Menu with GRUB"
message-font: "Unifont Regular 16"
terminal-font: "Unifont Regular 16"

#help bar at the bottom
+ label {
	top = 100%-50
	left = 0
	width = 100%
	height = 20
	text = "@KEYMAP_SHORT@"
	align = "center"
	color = "#ffffff"
	font = "Unifont Regular 16"
}

#boot menu
+ boot_menu {
	left = 10%
	width = 80%
	top = 52%
	height = 48%-80
	item_color = "#a8a8a8"
	item_font = "Unifont Regular 16"
	selected_item_color= "#ffffff"
	selected_item_font = "Unifont Regular 16"
	item_height = 16
	item_padding = 0
	item_spacing = 4
	icon_width = 0
	icon_heigh = 0
	item_icon_space = 0
}

#progress bar
+ progress_bar {
	id = "__timeout__"
	left = 15%
	top = 100%-80
	height = 16
	width = 70%
	font = "Unifont Regular 16"
	text_color = "#000000"
	fg_color = "#ffffff"
	bg_color = "#a8a8a8"
	border_color = "#ffffff"
	text = "@TIMEOUT_NOTIFICATION_LONG@"
}
```

Modified file (DB dump helper — adds a timeout):

```diff
@@ -4,7 +4,7 @@ parse_essential_db_info() {
 	DB_DUMP="/tmp/startos_db.json"
 	if command -v start-cli >/dev/null 2>&1; then
-		start-cli db dump > "$DB_DUMP" 2>/dev/null || return 1
+		timeout 30 start-cli db dump > "$DB_DUMP" 2>/dev/null || return 1
 	else
 		return 1
 	fi
```

Modified file (update script — non-recursive unmount):

```diff
@@ -111,6 +111,6 @@ if [ "$CHROOT_RES" -eq 0 ]; then
 	reboot
 fi
-umount -R /media/startos/next
+umount /media/startos/next
 umount /media/startos/upper
 rm -rf /media/startos/upper /media/startos/next
```

New file (equivs install helper, referenced by the NVIDIA hook as `/usr/lib/startos/scripts/install-equivs`; 20 lines):

```bash
#!/bin/bash

export DEBIAN_FRONTEND=noninteractive
export DEBCONF_NONINTERACTIVE_SEEN=true

TMP_DIR=$(mktemp -d)

(
	set -e
	cd $TMP_DIR
	cat > control.equivs
	equivs-build control.equivs
	apt-get install -y ./*.deb < /dev/null
)

rm -rf $TMP_DIR

echo Install complete. >&2

exit 0
```

Modified file (image pruning — adds a free-space margin):

```diff
@@ -29,10 +29,13 @@ if [ -z "$needed" ]; then
 	exit 1
 fi
+MARGIN=${MARGIN:-1073741824}
+target=$((needed + MARGIN))
 if [ -h /media/startos/config/current.rootfs ] && [ -e /media/startos/config/current.rootfs ]; then
 	echo 'Pruning...'
 	current="$(readlink -f /media/startos/config/current.rootfs)"
-	while [[ "$(df -B1 --output=avail --sync /media/startos/images | tail -n1)" -lt "$needed" ]]; do
+	while [[ "$(df -B1 --output=avail --sync /media/startos/images | tail -n1)" -lt "$target" ]]; do
 		to_prune="$(ls -t1 /media/startos/images/*.rootfs /media/startos/images/*.squashfs 2> /dev/null | grep -v "$current" | tail -n1)"
 		if [ -e "$to_prune" ]; then
 			echo " Pruning $to_prune"
```

(binary image changed; 9.6 KiB before and after)

Deleted file (Raspberry Pi image build script, 87 lines):

```bash
#!/bin/bash

set -e

function partition_for () {
	if [[ "$1" =~ [0-9]+$ ]]; then
		echo "$1p$2"
	else
		echo "$1$2"
	fi
}

VERSION=$(cat VERSION.txt)
ENVIRONMENT=$(cat ENVIRONMENT.txt)
GIT_HASH=$(cat GIT_HASH.txt | head -c 7)
DATE=$(date +%Y%m%d)
ROOT_PART_END=7217792

VERSION_FULL="$VERSION-$GIT_HASH"
if [ -n "$ENVIRONMENT" ]; then
	VERSION_FULL="$VERSION_FULL~$ENVIRONMENT"
fi

TARGET_NAME=startos-${VERSION_FULL}-${DATE}_raspberrypi.img
TARGET_SIZE=$[($ROOT_PART_END+1)*512]

rm -f $TARGET_NAME
truncate -s $TARGET_SIZE $TARGET_NAME

(
	echo o
	echo x
	echo i
	echo "0xcb15ae4d"
	echo r
	echo n
	echo p
	echo 1
	echo 2048
	echo 526335
	echo t
	echo c
	echo n
	echo p
	echo 2
	echo 526336
	echo $ROOT_PART_END
	echo a
	echo 1
	echo w
) | fdisk $TARGET_NAME

OUTPUT_DEVICE=$(sudo losetup --show -fP $TARGET_NAME)
sudo mkfs.ext4 `partition_for ${OUTPUT_DEVICE} 2`
sudo mkfs.vfat `partition_for ${OUTPUT_DEVICE} 1`
TMPDIR=$(mktemp -d)
sudo mount `partition_for ${OUTPUT_DEVICE} 2` $TMPDIR
sudo mkdir $TMPDIR/boot
sudo mount `partition_for ${OUTPUT_DEVICE} 1` $TMPDIR/boot
sudo unsquashfs -f -d $TMPDIR startos.raspberrypi.squashfs
REAL_GIT_HASH=$(cat $TMPDIR/usr/lib/startos/GIT_HASH.txt)
REAL_VERSION=$(cat $TMPDIR/usr/lib/startos/VERSION.txt)
REAL_ENVIRONMENT=$(cat $TMPDIR/usr/lib/startos/ENVIRONMENT.txt)
sudo sed -i 's| boot=startos| init=/usr/lib/startos/scripts/init_resize\.sh|' $TMPDIR/boot/cmdline.txt
sudo cp ./build/raspberrypi/fstab $TMPDIR/etc/
sudo cp ./build/raspberrypi/init_resize.sh $TMPDIR/usr/lib/startos/scripts/init_resize.sh
sudo umount $TMPDIR/boot
sudo umount $TMPDIR
sudo losetup -d $OUTPUT_DEVICE

if [ "$ALLOW_VERSION_MISMATCH" != 1 ]; then
	if [ "$(cat GIT_HASH.txt)" != "$REAL_GIT_HASH" ]; then
		>&2 echo "startos.raspberrypi.squashfs GIT_HASH.txt mismatch"
		>&2 echo "expected $REAL_GIT_HASH (dpkg) found $(cat GIT_HASH.txt) (repo)"
		exit 1
	fi
	if [ "$(cat VERSION.txt)" != "$REAL_VERSION" ]; then
		>&2 echo "startos.raspberrypi.squashfs VERSION.txt mismatch"
		exit 1
	fi
	if [ "$(cat ENVIRONMENT.txt)" != "$REAL_ENVIRONMENT" ]; then
		>&2 echo "startos.raspberrypi.squashfs ENVIRONMENT.txt mismatch"
		exit 1
	fi
fi
```

@@ -15,13 +15,12 @@ if [ "$SKIP_DL" != "1" ]; then
fi fi
if [ -n "$RUN_ID" ]; then if [ -n "$RUN_ID" ]; then
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree raspberrypi; do for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.squashfs -D $(pwd); do sleep 1; done while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.squashfs -D $(pwd); do sleep 1; done
done done
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.iso -D $(pwd); do sleep 1; done while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.iso -D $(pwd); do sleep 1; done
done done
while ! gh run download -R Start9Labs/start-os $RUN_ID -n raspberrypi.img -D $(pwd); do sleep 1; done
fi fi
if [ -n "$ST_RUN_ID" ]; then if [ -n "$ST_RUN_ID" ]; then
@@ -57,31 +56,23 @@ start-cli --registry=https://alpha-registry-x.start9.com registry os version add
if [ "$SKIP_UL" = "2" ]; then if [ "$SKIP_UL" = "2" ]; then
exit 2 exit 2
elif [ "$SKIP_UL" != "1" ]; then elif [ "$SKIP_UL" != "1" ]; then
for file in *.squashfs *.iso *.deb start-cli_*; do for file in *.deb start-cli_*; do
gh release upload -R Start9Labs/start-os v$VERSION $file gh release upload -R Start9Labs/start-os v$VERSION $file
done done
for file in *.img; do for file in *.iso *.squashfs; do
if ! [ -f $file.gz ]; then s3cmd put -P $file s3://startos-images/v$VERSION/$file
cat $file | pigz > $file.gz
fi
gh release upload -R Start9Labs/start-os v$VERSION $file.gz
done done
fi fi
if [ "$SKIP_INDEX" != "1" ]; then if [ "$SKIP_INDEX" != "1" ]; then
for arch in aarch64 aarch64-nonfree x86_64 x86_64-nonfree; do for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
for file in *_$arch.squashfs *_$arch.iso; do for file in *_$arch.squashfs *_$arch.iso; do
start-cli --registry=https://alpha-registry-x.start9.com registry os asset add --platform=$arch --version=$VERSION $file https://github.com/Start9Labs/start-os/releases/download/v$VERSION/$(echo -n "$file" | sed 's/~/./g') start-cli --registry=https://alpha-registry-x.start9.com registry os asset add --platform=$arch --version=$VERSION $file https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$file
done
done
for arch in raspberrypi; do
for file in *_$arch.squashfs; do
start-cli --registry=https://alpha-registry-x.start9.com registry os asset add --platform=$arch --version=$VERSION $file https://github.com/Start9Labs/start-os/releases/download/v$VERSION/$(echo -n "$file" | sed 's/~/./g')
done done
done done
fi fi
for file in *.iso *.img *.img.gz *.squashfs *.deb start-cli_*; do for file in *.iso *.squashfs *.deb start-cli_*; do
gpg -u 7CFFDA41CA66056A --detach-sign --armor -o "${file}.asc" "$file" gpg -u 7CFFDA41CA66056A --detach-sign --armor -o "${file}.asc" "$file"
done done
@@ -90,20 +81,30 @@ tar -czvf signatures.tar.gz *.asc
gh release upload -R Start9Labs/start-os v$VERSION signatures.tar.gz gh release upload -R Start9Labs/start-os v$VERSION signatures.tar.gz
cat << EOF
# ISO Downloads
- [x86_64/AMD64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64-slim (FOSS-only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64-slim (FOSS-Only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_riscv64.iso))
EOF
cat << 'EOF'
# StartOS Checksums
## SHA-256
```
EOF
-sha256sum *.iso *.img *img.gz *.squashfs
+sha256sum *.iso *.squashfs
cat << 'EOF'
```
## BLAKE-3
```
EOF
-b3sum *.iso *.img *.img.gz *.squashfs
+b3sum *.iso *.squashfs
cat << 'EOF'
```
@@ -138,5 +139,4 @@ EOF
b3sum start-cli_*
cat << 'EOF'
```
EOF


@@ -1,16 +1,21 @@
-# Container RPC SERVER Specification
+# Container RPC Server Specification
+The container runtime exposes a JSON-RPC server over a Unix socket at `/media/startos/rpc/service.sock`.
## Methods
### init
-initialize runtime (mount `/proc`, `/sys`, `/dev`, and `/run` to each image in `/media/images`)
-called after os has mounted js and images to the container
-#### args
-`[]`
+Initialize the runtime and system.
+#### params
+```ts
+{
+  id: string,
+  kind: "install" | "update" | "restore" | null,
+}
+```
#### response
@@ -18,11 +23,16 @@ called after os has mounted js and images to the container
### exit
-shutdown runtime
-#### args
-`[]`
+Shutdown runtime and optionally run exit hooks for a target version.
+#### params
+```ts
+{
+  id: string,
+  target: string | null, // ExtendedVersion or VersionRange
+}
+```
#### response
@@ -30,11 +40,11 @@ shutdown runtime
### start
-run main method if not already running
-#### args
-`[]`
+Run main method if not already running.
+#### params
+None
#### response
@@ -42,11 +52,11 @@ run main method if not already running
### stop
-stop main method by sending SIGTERM to child processes, and SIGKILL after timeout
-#### args
-`{ timeout: millis }`
+Stop main method by sending SIGTERM to child processes, and SIGKILL after timeout.
+#### params
+None
#### response
@@ -54,15 +64,16 @@ stop main method by sending SIGTERM to child processes, and SIGKILL after timeou
### execute
-run a specific package procedure
-#### args
+Run a specific package procedure.
+#### params
```ts
{
-  procedure: JsonPath,
-  input: any,
-  timeout: millis,
+  id: string, // event ID
+  procedure: string, // JSON path (e.g., "/backup/create", "/actions/{name}/run")
+  input: any,
+  timeout: number | null,
}
```
@@ -72,18 +83,64 @@ run a specific package procedure
### sandbox
-run a specific package procedure in sandbox mode
-#### args
+Run a specific package procedure in sandbox mode. Same interface as `execute`.
+UNIMPLEMENTED: this feature is planned but does not exist
+#### params
```ts
{
-  procedure: JsonPath,
-  input: any,
-  timeout: millis,
+  id: string,
+  procedure: string,
+  input: any,
+  timeout: number | null,
}
```
#### response
`any`
### callback
Handle a callback from an effect.
#### params
```ts
{
id: number,
args: any[],
}
```
#### response
`null` (no response sent)
### eval
Evaluate a script in the runtime context. Used for debugging.
#### params
```ts
{
script: string,
}
```
#### response
`any`
## Procedures
The `execute` and `sandbox` methods route to procedures based on the `procedure` path:
| Procedure | Description |
|-----------|-------------|
| `/backup/create` | Create a backup |
| `/actions/{name}/getInput` | Get input spec for an action |
| `/actions/{name}/run` | Run an action with input |
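As a rough illustration of the spec above, a client request for the `execute` method could be assembled as follows. This is a sketch only: `buildExecuteRequest` is a hypothetical helper, and a JSON-RPC 2.0 envelope is assumed rather than confirmed by this page.

```typescript
// Hypothetical helper: build a JSON-RPC request envelope for `execute`,
// using the param shape documented above. Not part of the StartOS codebase.
interface ExecuteParams {
  id: string // event ID
  procedure: string // JSON path, e.g. "/actions/{name}/run"
  input: any
  timeout: number | null
}

function buildExecuteRequest(rpcId: number, params: ExecuteParams): string {
  return JSON.stringify({ jsonrpc: '2.0', id: rpcId, method: 'execute', params })
}

const req = buildExecuteRequest(1, {
  id: 'evt-1',
  procedure: '/backup/create',
  input: null,
  timeout: null,
})
```

A real client would write such a payload to `/media/startos/rpc/service.sock` and match the response by its request `id`.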


@@ -38,7 +38,7 @@
},
"../sdk/dist": {
"name": "@start9labs/start-sdk",
-"version": "0.4.0-beta.47",
+"version": "0.4.0-beta.48",
"license": "MIT",
"dependencies": {
"@iarna/toml": "^3.0.0",


@@ -319,6 +319,7 @@ export function makeEffects(context: EffectContext): Effects {
}
if (context.callbacks?.onLeaveContext)
self.onLeaveContext(() => {
+self.constRetry = undefined
self.isInContext = false
self.onLeaveContext = () => {
console.warn(

core/Cargo.lock (generated): 915 changes

File diff suppressed because it is too large


@@ -15,7 +15,7 @@ license = "MIT"
name = "start-os"
readme = "README.md"
repository = "https://github.com/Start9Labs/start-os"
-version = "0.4.0-alpha.17" # VERSION_BUMP
+version = "0.4.0-alpha.19" # VERSION_BUMP
[lib]
name = "startos"
@@ -176,6 +176,7 @@ mio = "1"
new_mime_guess = "4"
nix = { version = "0.30.1", features = [
"fs",
+"hostname",
"mount",
"net",
"process",
@@ -213,6 +214,7 @@ reqwest = { version = "0.12.25", features = [
reqwest_cookie_store = "0.9.0"
rpassword = "7.2.0"
rust-argon2 = "3.0.0"
+rust-i18n = "3.1.5"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
semver = { version = "1.0.20", features = ["serde"] }
@@ -263,7 +265,7 @@ tower-service = "0.3.3"
tracing = "0.1.39"
tracing-error = "0.2.0"
tracing-journald = "0.3.0"
-tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
+tracing-subscriber = { version = "=0.3.19", features = ["env-filter"] }
ts-rs = "9.0.1"
typed-builder = "0.23.2"
url = { version = "2.4.1", features = ["serde"] }


@@ -26,7 +26,7 @@ PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
-if [ "$PROFILE" != "debug"]; then
+if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi

core/locales/i18n.yaml (new file): 5343 lines

File diff suppressed because it is too large


@@ -23,7 +23,7 @@ pub fn action_api<C: Context>() -> ParentHandler<C> {
"get-input",
from_fn_async(get_action_input)
.with_display_serializable()
-.with_about("Get action input spec")
+.with_about("about.get-action-input-spec")
.with_call_remote::<CliContext>(),
)
.subcommand(
@@ -36,14 +36,14 @@ pub fn action_api<C: Context>() -> ParentHandler<C> {
}
Ok(())
})
-.with_about("Run service action")
+.with_about("about.run-service-action")
.with_call_remote::<CliContext>(),
)
.subcommand(
"clear-task",
from_fn_async(clear_task)
.no_display()
-.with_about("Clear a service task")
+.with_about("about.clear-service-task")
.with_call_remote::<CliContext>(),
)
}
@@ -63,7 +63,9 @@ pub struct ActionInput {
#[derive(Deserialize, Serialize, TS, Parser)]
#[serde(rename_all = "camelCase")]
pub struct GetActionInputParams {
+#[arg(help = "help.arg.package-id")]
pub package_id: PackageId,
+#[arg(help = "help.arg.action-id")]
pub action_id: ActionId,
}
@@ -280,8 +282,11 @@ pub struct RunActionParams {
#[derive(Parser)]
struct CliRunActionParams {
+#[arg(help = "help.arg.package-id")]
pub package_id: PackageId,
+#[arg(help = "help.arg.event-id")]
pub event_id: Option<Guid>,
+#[arg(help = "help.arg.action-id")]
pub action_id: ActionId,
#[command(flatten)]
pub input: StdinDeserializable<Option<Value>>,
@@ -360,9 +365,11 @@ pub async fn run_action(
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ClearTaskParams {
+#[arg(help = "help.arg.package-id")]
pub package_id: PackageId,
+#[arg(help = "help.arg.replay-id")]
pub replay_id: ReplayId,
-#[arg(long)]
+#[arg(long, help = "help.arg.force-clear-task")]
#[serde(default)]
pub force: bool,
}


@@ -51,7 +51,10 @@ pub async fn write_shadow(password: &str) -> Result<(), Error> {
match line.split_once(":") {
Some((user, rest)) if user == "start9" || user == "kiosk" => {
let (_, rest) = rest.split_once(":").ok_or_else(|| {
-Error::new(eyre!("malformed /etc/shadow"), ErrorKind::ParseSysInfo)
+Error::new(
+eyre!("{}", t!("auth.malformed-etc-shadow")),
+ErrorKind::ParseSysInfo,
+)
})?;
shadow_file
.write_all(format!("{user}:{hash}:{rest}\n").as_bytes())
@@ -81,7 +84,7 @@ impl PasswordType {
PasswordType::String(x) => Ok(x),
PasswordType::EncryptedWire(x) => x.decrypt(current_secret).ok_or_else(|| {
Error::new(
-color_eyre::eyre::eyre!("Couldn't decode password"),
+color_eyre::eyre::eyre!("{}", t!("auth.couldnt-decode-password")),
crate::ErrorKind::Unknown,
)
}),
@@ -125,19 +128,19 @@ where
"login",
from_fn_async(cli_login::<AC>)
.no_display()
-.with_about("Log in a new auth session"),
+.with_about("about.login-new-auth-session"),
)
.subcommand(
"logout",
from_fn_async(logout::<AC>)
.with_metadata("get_session", Value::Bool(true))
.no_display()
-.with_about("Log out of current auth session")
+.with_about("about.logout-current-auth-session")
.with_call_remote::<CliContext>(),
)
.subcommand(
"session",
-session::<C, AC>().with_about("List or kill auth sessions"),
+session::<C, AC>().with_about("about.list-or-kill-auth-sessions"),
)
.subcommand(
"reset-password",
@@ -147,14 +150,14 @@ where
"reset-password",
from_fn_async(cli_reset_password)
.no_display()
-.with_about("Reset password"),
+.with_about("about.reset-password"),
)
.subcommand(
"get-pubkey",
from_fn_async(get_pubkey)
.with_metadata("authenticated", Value::Bool(false))
.no_display()
-.with_about("Get public key derived from server private key")
+.with_about("about.get-pubkey-from-server")
.with_call_remote::<CliContext>(),
)
}
@@ -208,12 +211,12 @@ pub fn check_password(hash: &str, password: &str) -> Result<(), Error> {
ensure_code!(
argon2::verify_encoded(&hash, password.as_bytes()).map_err(|_| {
Error::new(
-eyre!("Password Incorrect"),
+eyre!("{}", t!("auth.password-incorrect")),
crate::ErrorKind::IncorrectPassword,
)
})?,
crate::ErrorKind::IncorrectPassword,
-"Password Incorrect"
+t!("auth.password-incorrect")
);
Ok(())
}
@@ -327,14 +330,14 @@ where
.with_metadata("get_session", Value::Bool(true))
.with_display_serializable()
.with_custom_display_fn(|handle, result| display_sessions(handle.params, result))
-.with_about("Display all auth sessions")
+.with_about("about.display-all-auth-sessions")
.with_call_remote::<CliContext>(),
)
.subcommand(
"kill",
from_fn_async(kill::<AC>)
.no_display()
-.with_about("Terminate existing auth session(s)")
+.with_about("about.terminate-auth-sessions")
.with_call_remote::<CliContext>(),
)
}
@@ -418,6 +421,7 @@ impl AsLogoutSessionId for KillSessionId {
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct KillParams {
+#[arg(help = "help.arg.session-ids")]
ids: Vec<String>,
}
@@ -434,7 +438,9 @@ pub async fn kill<C: SessionAuthContext>(
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ResetPasswordParams {
+#[arg(help = "help.arg.old-password")]
old_password: Option<PasswordType>,
+#[arg(help = "help.arg.new-password")]
new_password: Option<PasswordType>,
}
@@ -447,13 +453,13 @@ async fn cli_reset_password(
..
}: HandlerArgs<CliContext>,
) -> Result<(), RpcError> {
-let old_password = rpassword::prompt_password("Current Password: ")?;
+let old_password = rpassword::prompt_password(&t!("auth.prompt-current-password"))?;
let new_password = {
-let new_password = rpassword::prompt_password("New Password: ")?;
+let new_password = rpassword::prompt_password(&t!("auth.prompt-new-password"))?;
-if new_password != rpassword::prompt_password("Confirm: ")? {
+if new_password != rpassword::prompt_password(&t!("auth.prompt-confirm"))? {
return Err(Error::new(
-eyre!("Passwords do not match"),
+eyre!("{}", t!("auth.passwords-do-not-match")),
crate::ErrorKind::IncorrectPassword,
)
.into());
@@ -486,7 +492,7 @@ pub async fn reset_password_impl(
.with_kind(crate::ErrorKind::IncorrectPassword)?
{
return Err(Error::new(
-eyre!("Incorrect Password"),
+eyre!("{}", t!("auth.password-incorrect")),
crate::ErrorKind::IncorrectPassword,
));
}


@@ -33,11 +33,13 @@ use crate::version::VersionT;
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct BackupParams {
+#[arg(help = "help.arg.backup-target-id")]
target_id: BackupTargetId,
-#[arg(long = "old-password")]
+#[arg(long = "old-password", help = "help.arg.old-backup-password")]
old_password: Option<crate::auth::PasswordType>,
-#[arg(long = "package-ids")]
+#[arg(long = "package-ids", help = "help.arg.package-ids-to-backup")]
package_ids: Option<Vec<PackageId>>,
+#[arg(help = "help.arg.backup-password")]
password: crate::auth::PasswordType,
}
@@ -69,8 +71,8 @@ impl BackupStatusGuard {
db,
None,
NotificationLevel::Success,
-"Backup Complete".to_owned(),
-"Your backup has completed".to_owned(),
+t!("backup.bulk.complete-title").to_string(),
+t!("backup.bulk.complete-message").to_string(),
BackupReport {
server: ServerBackupReport {
attempted: true,
@@ -88,9 +90,8 @@ impl BackupStatusGuard {
db,
None,
NotificationLevel::Warning,
-"Backup Complete".to_owned(),
-"Your backup has completed, but some package(s) failed to backup"
-.to_owned(),
+t!("backup.bulk.complete-title").to_string(),
+t!("backup.bulk.complete-with-failures").to_string(),
BackupReport {
server: ServerBackupReport {
attempted: true,
@@ -103,7 +104,7 @@ impl BackupStatusGuard {
.await
}
Err(e) => {
-tracing::error!("Backup Failed: {}", e);
+tracing::error!("{}", t!("backup.bulk.failed-error", error = e));
tracing::debug!("{:?}", e);
let err_string = e.to_string();
db.mutate(|db| {
@@ -111,8 +112,8 @@ impl BackupStatusGuard {
db,
None,
NotificationLevel::Error,
-"Backup Failed".to_owned(),
-"Your backup failed to complete.".to_owned(),
+t!("backup.bulk.failed-title").to_string(),
+t!("backup.bulk.failed-message").to_string(),
BackupReport {
server: ServerBackupReport {
attempted: true,
@@ -224,7 +225,7 @@ fn assure_backing_up<'a>(
.as_backup_progress_mut();
if backing_up.transpose_ref().is_some() {
return Err(Error::new(
-eyre!("Server is already backing up!"),
+eyre!("{}", t!("backup.bulk.already-backing-up")),
ErrorKind::InvalidRequest,
));
}
@@ -303,7 +304,7 @@ async fn perform_backup(
let mut backup_guard = Arc::try_unwrap(backup_guard).map_err(|_| {
Error::new(
-eyre!("leaked reference to BackupMountGuard"),
+eyre!("{}", t!("backup.bulk.leaked-reference")),
ErrorKind::Incoherent,
)
})?;


@@ -37,12 +37,12 @@ pub fn backup<C: Context>() -> ParentHandler<C> {
"create",
from_fn_async(backup_bulk::backup_all)
.no_display()
-.with_about("Create backup for all packages")
+.with_about("about.create-backup-all-packages")
.with_call_remote::<CliContext>(),
)
.subcommand(
"target",
-target::target::<C>().with_about("Commands related to a backup target"),
+target::target::<C>().with_about("about.commands-backup-target"),
)
}
@@ -51,7 +51,7 @@ pub fn package_backup<C: Context>() -> ParentHandler<C> {
"restore",
from_fn_async(restore::restore_packages_rpc)
.no_display()
-.with_about("Restore package(s) from backup")
+.with_about("about.restore-packages-from-backup")
.with_call_remote::<CliContext>(),
)
}


@@ -23,16 +23,19 @@ use crate::progress::ProgressUnits;
use crate::s9pk::S9pk;
use crate::service::service_map::DownloadInstallFuture;
use crate::setup::SetupExecuteProgress;
-use crate::system::sync_kiosk;
-use crate::util::serde::IoFormat;
+use crate::system::{save_language, sync_kiosk};
+use crate::util::serde::{IoFormat, Pem};
use crate::{PLATFORM, PackageId};
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct RestorePackageParams {
+#[arg(help = "help.arg.package-ids")]
pub ids: Vec<PackageId>,
+#[arg(help = "help.arg.backup-target-id")]
pub target_id: BackupTargetId,
+#[arg(help = "help.arg.backup-password")]
pub password: String,
}
@@ -63,7 +66,10 @@ pub async fn restore_packages_rpc(
match async { res.await?.await }.await {
Ok(_) => (),
Err(err) => {
-tracing::error!("Error restoring package {}: {}", id, err);
+tracing::error!(
+"{}",
+t!("backup.restore.package-error", id = id, error = err)
+);
tracing::debug!("{:?}", err);
}
}
@@ -75,10 +81,10 @@ pub async fn restore_packages_rpc(
}
#[instrument(skip_all)]
-pub async fn recover_full_embassy(
+pub async fn recover_full_server(
ctx: &SetupContext,
-disk_guid: Arc<String>,
-start_os_password: String,
+disk_guid: InternedString,
+password: String,
recovery_source: TmpMountGuard,
server_id: &str,
recovery_password: &str,
@@ -102,7 +108,7 @@ pub async fn recover_full_embassy(
)?;
os_backup.account.password = argon2::hash_encoded(
-start_os_password.as_bytes(),
+password.as_bytes(),
&rand::random::<[u8; 16]>()[..],
&argon2::Config::rfc9106_low_mem(),
)
@@ -111,16 +117,32 @@ pub async fn recover_full_embassy(
let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
sync_kiosk(kiosk).await?;
+let language = ctx.language.peek(|a| a.clone());
+let keyboard = ctx.keyboard.peek(|a| a.clone());
+if let Some(language) = &language {
+save_language(&**language).await?;
+}
+if let Some(keyboard) = &keyboard {
+keyboard.save().await?;
+}
let db = ctx.db().await?;
-db.put(&ROOT, &Database::init(&os_backup.account, kiosk)?)
-.await?;
+db.put(
+&ROOT,
+&Database::init(&os_backup.account, kiosk, language, keyboard)?,
+)
+.await?;
drop(db);
-let init_result = init(&ctx.webserver, &ctx.config, init_phases).await?;
+let config = ctx.config.peek(|c| c.clone());
+let init_result = init(&ctx.webserver, &config, init_phases).await?;
let rpc_ctx = RpcContext::init(
&ctx.webserver,
-&ctx.config,
+&config,
disk_guid.clone(),
Some(init_result),
rpc_ctx_phases,
@@ -145,7 +167,10 @@ pub async fn recover_full_embassy(
match async { res.await?.await }.await {
Ok(_) => (),
Err(err) => {
-tracing::error!("Error restoring package {}: {}", id, err);
+tracing::error!(
+"{}",
+t!("backup.restore.package-error", id = id, error = err)
+);
tracing::debug!("{:?}", err);
}
}
@@ -155,7 +180,14 @@ pub async fn recover_full_embassy(
.await;
restore_phase.lock().await.complete();
-Ok(((&os_backup.account).try_into()?, rpc_ctx))
+Ok((
+SetupResult {
+hostname: os_backup.account.hostname,
+root_ca: Pem(os_backup.account.root_ca_cert),
+needs_restart: ctx.install_rootfs.peek(|a| a.is_some()),
+},
+rpc_ctx,
+))
}
#[instrument(skip(ctx, backup_guard))]


@@ -52,21 +52,21 @@ pub fn cifs<C: Context>() -> ParentHandler<C> {
"add",
from_fn_async(add)
.no_display()
-.with_about("Add a new backup target")
+.with_about("about.add-new-backup-target")
.with_call_remote::<CliContext>(),
)
.subcommand(
"update",
from_fn_async(update)
.no_display()
-.with_about("Update an existing backup target")
+.with_about("about.update-existing-backup-target")
.with_call_remote::<CliContext>(),
)
.subcommand(
"remove",
from_fn_async(remove)
.no_display()
-.with_about("Remove an existing backup target")
+.with_about("about.remove-existing-backup-target")
.with_call_remote::<CliContext>(),
)
}
@@ -75,9 +75,13 @@ pub fn cifs<C: Context>() -> ParentHandler<C> {
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct AddParams {
+#[arg(help = "help.arg.cifs-hostname")]
pub hostname: String,
+#[arg(help = "help.arg.cifs-path")]
pub path: PathBuf,
+#[arg(help = "help.arg.cifs-username")]
pub username: String,
+#[arg(help = "help.arg.cifs-password")]
pub password: Option<String>,
}
@@ -130,10 +134,15 @@ pub async fn add(
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct UpdateParams {
+#[arg(help = "help.arg.backup-target-id")]
pub id: BackupTargetId,
+#[arg(help = "help.arg.cifs-hostname")]
pub hostname: String,
+#[arg(help = "help.arg.cifs-path")]
pub path: PathBuf,
+#[arg(help = "help.arg.cifs-username")]
pub username: String,
+#[arg(help = "help.arg.cifs-password")]
pub password: Option<String>,
}
@@ -151,7 +160,7 @@ pub async fn update(
id
} else {
return Err(Error::new(
-eyre!("Backup Target ID {} Not Found", id),
+eyre!("{}", t!("backup.target.cifs.target-not-found", id = id)),
ErrorKind::NotFound,
));
};
@@ -171,7 +180,13 @@ pub async fn update(
.as_idx_mut(&id)
.ok_or_else(|| {
Error::new(
-eyre!("Backup Target ID {} Not Found", BackupTargetId::Cifs { id }),
+eyre!(
+"{}",
+t!(
+"backup.target.cifs.target-not-found",
+id = BackupTargetId::Cifs { id }
+)
+),
ErrorKind::NotFound,
)
})?
@@ -195,6 +210,7 @@ pub async fn update(
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct RemoveParams {
+#[arg(help = "help.arg.backup-target-id")]
pub id: BackupTargetId,
}
@@ -203,7 +219,7 @@ pub async fn remove(ctx: RpcContext, RemoveParams { id }: RemoveParams) -> Resul
id
} else {
return Err(Error::new(
-eyre!("Backup Target ID {} Not Found", id),
+eyre!("{}", t!("backup.target.cifs.target-not-found", id = id)),
ErrorKind::NotFound,
));
};
@@ -220,7 +236,7 @@ pub fn load(db: &DatabaseModel, id: u32) -> Result<Cifs, Error> {
.as_idx(&id)
.ok_or_else(|| {
Error::new(
-eyre!("Backup Target ID {} Not Found", id),
+eyre!("{}", t!("backup.target.cifs.target-not-found-id", id = id)),
ErrorKind::NotFound,
)
})?


@@ -143,13 +143,13 @@ pub fn target<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand(
"cifs",
-cifs::cifs::<C>().with_about("Add, remove, or update a backup target"),
+cifs::cifs::<C>().with_about("about.add-remove-update-backup-target"),
)
.subcommand(
"list",
from_fn_async(list)
.with_display_serializable()
-.with_about("List existing backup targets")
+.with_about("about.list-existing-backup-targets")
.with_call_remote::<CliContext>(),
)
.subcommand(
@@ -159,20 +159,20 @@ pub fn target<C: Context>() -> ParentHandler<C> {
.with_custom_display_fn::<CliContext, _>(|params, info| {
display_backup_info(params.params, info)
})
-.with_about("Display package backup information")
+.with_about("about.display-package-backup-information")
.with_call_remote::<CliContext>(),
)
.subcommand(
"mount",
from_fn_async(mount)
-.with_about("Mount backup target")
+.with_about("about.mount-backup-target")
.with_call_remote::<CliContext>(),
)
.subcommand(
"umount",
from_fn_async(umount)
.no_display()
-.with_about("Unmount backup target")
+.with_about("about.unmount-backup-target")
.with_call_remote::<CliContext>(),
)
}
@@ -268,8 +268,11 @@ fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) -> Re
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct InfoParams {
+#[arg(help = "help.arg.backup-target-id")]
target_id: BackupTargetId,
+#[arg(help = "help.arg.server-id")]
server_id: String,
+#[arg(help = "help.arg.backup-password")]
password: String,
}
@@ -305,11 +308,13 @@ lazy_static::lazy_static! {
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct MountParams {
+#[arg(help = "help.arg.backup-target-id")]
target_id: BackupTargetId,
-#[arg(long)]
+#[arg(long, help = "help.arg.server-id")]
server_id: Option<String>,
+#[arg(help = "help.arg.backup-password")]
password: String, // TODO: rpassword
-#[arg(long)]
+#[arg(long, help = "help.arg.allow-partial-backup")]
allow_partial: bool,
}
@@ -385,6 +390,7 @@ pub async fn mount(
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct UmountParams {
+#[arg(help = "help.arg.backup-target-id")]
target_id: Option<BackupTargetId>,
}


@@ -17,6 +17,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
|cfg: ContainerClientConfig| Ok(ContainerCliContext::init(cfg)),
crate::service::effects::handler(),
)
+.mutate_command(super::translate_cli)
.run(args)
{
match e.data {


@@ -1,9 +1,11 @@
+use rust_i18n::t;
pub fn renamed(old: &str, new: &str) -> ! {
-eprintln!("{old} has been renamed to {new}");
+eprintln!("{}", t!("bins.deprecated.renamed", old = old, new = new));
std::process::exit(1)
}
pub fn removed(name: &str) -> ! {
-eprintln!("{name} has been removed");
+eprintln!("{}", t!("bins.deprecated.removed", name = name));
std::process::exit(1)
}


@@ -2,6 +2,8 @@ use std::collections::{BTreeMap, VecDeque};
use std::ffi::OsString;
use std::path::Path;
+use rust_i18n::t;
pub mod container_cli;
pub mod deprecated;
pub mod registry;
@@ -10,75 +12,182 @@ pub mod start_init;
pub mod startd;
pub mod tunnel;
pub fn set_locale_from_env() {
let lang = std::env::var("LANG").ok();
let lang = lang
.as_deref()
.map_or("C", |l| l.strip_suffix(".UTF-8").unwrap_or(l));
set_locale(lang)
}
pub fn set_locale(lang: &str) {
let mut best = None;
let prefix = lang.split_inclusive("_").next().unwrap();
for l in rust_i18n::available_locales!() {
if l == lang {
best = Some(l);
break;
}
if best.is_none() && l.starts_with(prefix) {
best = Some(l);
}
}
rust_i18n::set_locale(best.unwrap_or(lang));
}
pub fn translate_cli(mut cmd: clap::Command) -> clap::Command {
fn translate(s: impl std::fmt::Display) -> String {
t!(s.to_string()).into_owned()
}
if let Some(s) = cmd.get_about() {
let s = translate(s);
cmd = cmd.about(s);
}
if let Some(s) = cmd.get_long_about() {
let s = translate(s);
cmd = cmd.long_about(s);
}
if let Some(s) = cmd.get_before_help() {
let s = translate(s);
cmd = cmd.before_help(s);
}
if let Some(s) = cmd.get_before_long_help() {
let s = translate(s);
cmd = cmd.before_long_help(s);
}
if let Some(s) = cmd.get_after_help() {
let s = translate(s);
cmd = cmd.after_help(s);
}
if let Some(s) = cmd.get_after_long_help() {
let s = translate(s);
cmd = cmd.after_long_help(s);
}
let arg_ids = cmd
.get_arguments()
.map(|a| a.get_id().clone())
.collect::<Vec<_>>();
for id in arg_ids {
cmd = cmd.mut_arg(id, |arg| {
let arg = if let Some(s) = arg.get_help() {
let s = translate(s);
arg.help(s)
} else {
arg
};
if let Some(s) = arg.get_long_help() {
let s = translate(s);
arg.long_help(s)
} else {
arg
}
});
}
for cmd in cmd.get_subcommands_mut() {
*cmd = translate_cli(cmd.clone());
}
cmd
}
#[derive(Default)] #[derive(Default)]
pub struct MultiExecutable(BTreeMap<&'static str, fn(VecDeque<OsString>)>); pub struct MultiExecutable {
default: Option<&'static str>,
bins: BTreeMap<&'static str, fn(VecDeque<OsString>)>,
}
impl MultiExecutable { impl MultiExecutable {
pub fn enable_startd(&mut self) -> &mut Self { pub fn enable_startd(&mut self) -> &mut Self {
self.0.insert("startd", startd::main); self.bins.insert("startd", startd::main);
self.0 self.bins
.insert("embassyd", |_| deprecated::renamed("embassyd", "startd")); .insert("embassyd", |_| deprecated::renamed("embassyd", "startd"));
self.0 self.bins
.insert("embassy-init", |_| deprecated::removed("embassy-init")); .insert("embassy-init", |_| deprecated::removed("embassy-init"));
self self
} }
pub fn enable_start_cli(&mut self) -> &mut Self { pub fn enable_start_cli(&mut self) -> &mut Self {
self.0.insert("start-cli", start_cli::main); self.bins.insert("start-cli", start_cli::main);
self.0.insert("embassy-cli", |_| { self.bins.insert("embassy-cli", |_| {
deprecated::renamed("embassy-cli", "start-cli") deprecated::renamed("embassy-cli", "start-cli")
}); });
self.0 self.bins
.insert("embassy-sdk", |_| deprecated::removed("embassy-sdk")); .insert("embassy-sdk", |_| deprecated::removed("embassy-sdk"));
self self
} }
pub fn enable_start_container(&mut self) -> &mut Self { pub fn enable_start_container(&mut self) -> &mut Self {
self.0.insert("start-container", container_cli::main); self.bins.insert("start-container", container_cli::main);
self self
} }
pub fn enable_start_registryd(&mut self) -> &mut Self { pub fn enable_start_registryd(&mut self) -> &mut Self {
self.0.insert("start-registryd", registry::main); self.bins.insert("start-registryd", registry::main);
self self
} }
pub fn enable_start_registry(&mut self) -> &mut Self { pub fn enable_start_registry(&mut self) -> &mut Self {
self.0.insert("start-registry", registry::cli); self.bins.insert("start-registry", registry::cli);
self self
} }
pub fn enable_start_tunneld(&mut self) -> &mut Self { pub fn enable_start_tunneld(&mut self) -> &mut Self {
self.0.insert("start-tunneld", tunnel::main); self.bins.insert("start-tunneld", tunnel::main);
self self
} }
pub fn enable_start_tunnel(&mut self) -> &mut Self { pub fn enable_start_tunnel(&mut self) -> &mut Self {
self.0.insert("start-tunnel", tunnel::cli); self.bins.insert("start-tunnel", tunnel::cli);
self
}
pub fn set_default(&mut self, name: &str) -> &mut Self {
if let Some((name, _)) = self.bins.get_key_value(name) {
self.default = Some(*name);
} else {
panic!("{}", t!("bins.mod.does-not-exist", name = name));
}
self self
} }
fn select_executable(&self, name: &str) -> Option<fn(VecDeque<OsString>)> { fn select_executable(&self, name: &str) -> Option<fn(VecDeque<OsString>)> {
self.0.get(&name).copied() self.bins.get(&name).copied()
} }
pub fn execute(&self) { pub fn execute(&self) {
set_locale_from_env();
let mut popped = Vec::with_capacity(2);
let mut args = std::env::args_os().collect::<VecDeque<_>>(); let mut args = std::env::args_os().collect::<VecDeque<_>>();
for _ in 0..2 { for _ in 0..2 {
if let Some(s) = args.pop_front() { if let Some(s) = args.pop_front() {
if let Some(name) = Path::new(&*s).file_name().and_then(|s| s.to_str()) { if let Some(name) = Path::new(&*s).file_name().and_then(|s| s.to_str()) {
if name == "--contents" { if name == "--contents" {
for name in self.0.keys() { for name in self.bins.keys() {
println!("{name}"); println!("{name}");
} }
return;
} }
if let Some(x) = self.select_executable(&name) { if let Some(x) = self.select_executable(&name) {
args.push_front(s); args.push_front(s);
return x(args); return x(args);
} }
} }
popped.push(s);
} }
} }
if let Some(default) = self.default {
while let Some(arg) = popped.pop() {
args.push_front(arg);
}
return self.bins[default](args);
}
let args = std::env::args().collect::<VecDeque<_>>(); let args = std::env::args().collect::<VecDeque<_>>();
eprintln!( eprintln!(
"unknown executable: {}", "{}",
args.get(1) t!(
.or_else(|| args.get(0)) "bins.mod.unknown-executable",
.map(|s| s.as_str()) name = args
.unwrap_or("N/A") .get(1)
.or_else(|| args.get(0))
.map(|s| s.as_str())
.unwrap_or("N/A")
)
); );
std::process::exit(1); std::process::exit(1);
} }
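The fallback rule in `set_locale` above (exact match on the requested tag wins; otherwise the first available locale sharing the language prefix, e.g. `"en_"` from `"en_US"`; otherwise the requested tag is passed through unchanged) can be sketched as a stand-alone function. `pick_locale` is a hypothetical name for illustration, not the crate's API:

```rust
// Stdlib-only sketch of the locale-fallback selection used by `set_locale`.
fn pick_locale<'a>(available: &[&'a str], requested: &'a str) -> &'a str {
    // split_inclusive keeps the separator: "en_US" -> first segment "en_".
    let prefix = requested.split_inclusive('_').next().unwrap_or(requested);
    let mut best = None;
    for &l in available {
        if l == requested {
            return l; // exact match wins immediately
        }
        if best.is_none() && l.starts_with(prefix) {
            best = Some(l); // remember the first same-language variant
        }
    }
    // No match: hand the requested tag through unchanged, as the real code does.
    best.unwrap_or(requested)
}

fn main() {
    assert_eq!(pick_locale(&["en", "en_GB", "es"], "en_US"), "en_GB");
    assert_eq!(pick_locale(&["en", "en_GB", "es"], "es"), "es");
    assert_eq!(pick_locale(&["en", "es"], "de_DE"), "de_DE");
}
```

Note that because the prefix keeps the trailing `_`, a request like `en_US` matches region variants such as `en_GB` but not a bare `en` locale file.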

View File

@@ -3,6 +3,7 @@ use std::ffi::OsString;
 use clap::Parser;
 use futures::FutureExt;
 use rpc_toolkit::CliApp;
+use rust_i18n::t;
 use tokio::signal::unix::signal;
 use tracing::instrument;
@@ -77,7 +78,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
         let rt = tokio::runtime::Builder::new_multi_thread()
             .enable_all()
             .build()
-            .expect("failed to initialize runtime");
+            .expect(&t!("bins.registry.failed-to-initialize-runtime"));
         rt.block_on(inner_main(&config))
     };
@@ -99,6 +100,7 @@ pub fn cli(args: impl IntoIterator<Item = OsString>) {
         |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
         crate::registry::registry_api(),
     )
+    .mutate_command(super::translate_cli)
     .run(args)
     {
         match e.data {

View File

@@ -19,6 +19,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
         |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
         crate::main_api(),
     )
+    .mutate_command(super::translate_cli)
     .run(args)
     {
         match e.data {

View File

@@ -1,11 +1,9 @@
-use std::sync::Arc;
-
 use tokio::process::Command;
 use tracing::instrument;
 
 use crate::context::config::ServerConfig;
 use crate::context::rpc::InitRpcContextPhases;
-use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
+use crate::context::{DiagnosticContext, InitContext, RpcContext, SetupContext};
 use crate::disk::REPAIR_DISK_PATH;
 use crate::disk::fsck::RepairStrategy;
 use crate::disk::main::DEFAULT_PASSWORD;
@@ -27,7 +25,13 @@ async fn setup_or_init(
     if let Some(firmware) = check_for_firmware_update()
         .await
         .map_err(|e| {
-            tracing::warn!("Error checking for firmware update: {e}");
+            tracing::warn!(
+                "{}",
+                t!(
+                    "bins.start-init.error-checking-firmware",
+                    error = e.to_string()
+                )
+            );
             tracing::debug!("{e:?}");
         })
         .ok()
@@ -35,14 +39,21 @@ async fn setup_or_init(
     {
         let init_ctx = InitContext::init(config).await?;
         let handle = &init_ctx.progress;
-        let mut update_phase = handle.add_phase("Updating Firmware".into(), Some(10));
-        let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
+        let mut update_phase =
+            handle.add_phase(t!("bins.start-init.updating-firmware").into(), Some(10));
+        let mut reboot_phase = handle.add_phase(t!("bins.start-init.rebooting").into(), Some(1));
 
         server.serve_ui_for(init_ctx);
 
         update_phase.start();
         if let Err(e) = update_firmware(firmware).await {
-            tracing::warn!("Error performing firmware update: {e}");
+            tracing::warn!(
+                "{}",
+                t!(
+                    "bins.start-init.error-firmware-update",
+                    error = e.to_string()
+                )
+            );
             tracing::debug!("{e:?}");
         } else {
             update_phase.complete();
@@ -79,40 +90,11 @@ async fn setup_or_init(
         .invoke(crate::ErrorKind::OpenSsl)
         .await?;
 
-    if tokio::fs::metadata("/run/live/medium").await.is_ok() {
-        Command::new("sed")
-            .arg("-i")
-            .arg("s/PasswordAuthentication no/PasswordAuthentication yes/g")
-            .arg("/etc/ssh/sshd_config")
-            .invoke(crate::ErrorKind::Filesystem)
-            .await?;
-        Command::new("systemctl")
-            .arg("reload")
-            .arg("ssh")
-            .invoke(crate::ErrorKind::OpenSsh)
-            .await?;
-        let ctx = InstallContext::init().await?;
-        server.serve_ui_for(ctx.clone());
-        ctx.shutdown
-            .subscribe()
-            .recv()
-            .await
-            .expect("context dropped");
-        return Ok(Err(Shutdown {
-            disk_guid: None,
-            restart: true,
-        }));
-    }
-
     if tokio::fs::metadata("/media/startos/config/disk.guid")
         .await
         .is_err()
     {
-        let ctx = SetupContext::init(server, config)?;
+        let ctx = SetupContext::init(server, config.clone())?;
         server.serve_ui_for(ctx.clone());
@@ -127,7 +109,13 @@ async fn setup_or_init(
             .invoke(ErrorKind::NotFound)
             .await
         {
-            tracing::error!("Failed to kill kiosk: {}", e);
+            tracing::error!(
+                "{}",
+                t!(
+                    "bins.start-init.failed-to-kill-kiosk",
+                    error = e.to_string()
+                )
+            );
             tracing::debug!("{:?}", e);
         }
@@ -136,7 +124,7 @@ async fn setup_or_init(
             Some(Err(e)) => return Err(e.clone_output()),
             None => {
                 return Err(Error::new(
-                    eyre!("Setup mode exited before setup completed"),
+                    eyre!("{}", t!("bins.start-init.setup-mode-exited")),
                     ErrorKind::Unknown,
                 ));
             }
@@ -146,7 +134,8 @@ async fn setup_or_init(
     let handle = init_ctx.progress.clone();
     let err_channel = init_ctx.error.clone();
-    let mut disk_phase = handle.add_phase("Opening data drive".into(), Some(10));
+    let mut disk_phase =
+        handle.add_phase(t!("bins.start-init.opening-data-drive").into(), Some(10));
     let init_phases = InitPhases::new(&handle);
     let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
@@ -156,9 +145,9 @@ async fn setup_or_init(
     disk_phase.start();
     let guid_string = tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
         .await?;
-    let disk_guid = Arc::new(String::from(guid_string.trim()));
+    let disk_guid = InternedString::intern(guid_string.trim());
     let requires_reboot = crate::disk::main::import(
-        &**disk_guid,
+        &*disk_guid,
         DATA_DIR,
         if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
             RepairStrategy::Aggressive
@@ -178,11 +167,12 @@ async fn setup_or_init(
             .with_ctx(|_| (crate::ErrorKind::Filesystem, REPAIR_DISK_PATH))?;
     }
     disk_phase.complete();
-    tracing::info!("Loaded Disk");
+    tracing::info!("{}", t!("bins.start-init.loaded-disk"));
 
     if requires_reboot.0 {
-        tracing::info!("Rebooting...");
-        let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
+        tracing::info!("{}", t!("bins.start-init.rebooting"));
+        let mut reboot_phase =
+            handle.add_phase(t!("bins.start-init.rebooting").into(), Some(1));
         reboot_phase.start();
         return Ok(Err(Shutdown {
             disk_guid: Some(disk_guid),
@@ -236,11 +226,10 @@ pub async fn main(
         .await
         .is_ok()
     {
-        Some(Arc::new(
+        Some(InternedString::intern(
             tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
                 .await?
-                .trim()
-                .to_owned(),
+                .trim(),
         ))
     } else {
         None
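Several hunks above replace `Arc<String>` with `imbl_value::InternedString` for the disk GUID. The idea behind interning can be illustrated with a toy stdlib-only interner (this is an assumption-level sketch, not the real `InternedString` implementation): equal strings share one allocation, so handles are cheap to clone and compare.

```rust
use std::collections::HashSet;
use std::rc::Rc;

// Toy interner: hands out one shared Rc<str> per distinct string.
struct Interner(HashSet<Rc<str>>);

impl Interner {
    fn new() -> Self {
        Interner(HashSet::new())
    }
    fn intern(&mut self, s: &str) -> Rc<str> {
        if let Some(existing) = self.0.get(s) {
            return existing.clone(); // same allocation reused
        }
        let rc: Rc<str> = Rc::from(s);
        self.0.insert(rc.clone());
        rc
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("disk-guid");
    let b = interner.intern("disk-guid");
    assert!(Rc::ptr_eq(&a, &b)); // both handles point at one allocation
    assert_eq!(&*a, "disk-guid");
}
```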

View File

@@ -1,11 +1,11 @@
 use std::cmp::max;
 use std::ffi::OsString;
-use std::sync::Arc;
 use std::time::Duration;
 
 use clap::Parser;
 use color_eyre::eyre::eyre;
 use futures::{FutureExt, TryFutureExt};
+use rust_i18n::t;
 use tokio::signal::unix::signal;
 use tracing::instrument;
@@ -15,11 +15,11 @@ use crate::context::{DiagnosticContext, InitContext, RpcContext};
 use crate::net::gateway::{BindTcp, SelfContainedNetworkInterfaceListener, UpgradableListener};
 use crate::net::static_server::refresher;
 use crate::net::web_server::{Acceptor, WebServer};
+use crate::prelude::*;
 use crate::shutdown::Shutdown;
 use crate::system::launch_metrics_task;
 use crate::util::io::append_file;
 use crate::util::logger::LOGGER;
-use crate::{Error, ErrorKind, ResultExt};
#[instrument(skip_all)] #[instrument(skip_all)]
async fn inner_main( async fn inner_main(
@@ -53,11 +53,10 @@ async fn inner_main(
     let ctx = RpcContext::init(
         &server.acceptor_setter(),
         config,
-        Arc::new(
+        InternedString::intern(
             tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
                 .await?
-                .trim()
-                .to_owned(),
+                .trim(),
         ),
         None,
         rpc_ctx_phases,
@@ -114,11 +113,11 @@ async fn inner_main(
     metrics_task
         .map_err(|e| {
             Error::new(
-                eyre!("{}", e).wrap_err("Metrics daemon panicked!"),
+                eyre!("{}", e).wrap_err(t!("bins.startd.metrics-daemon-panicked").to_string()),
                 ErrorKind::Unknown,
             )
         })
-        .map_ok(|_| tracing::debug!("Metrics daemon Shutdown"))
+        .map_ok(|_| tracing::debug!("{}", t!("bins.startd.metrics-daemon-shutdown")))
         .await?;
 
     let shutdown = shutdown_recv
@@ -146,7 +145,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
         .worker_threads(max(1, num_cpus::get()))
         .enable_all()
         .build()
-        .expect("failed to initialize runtime");
+        .expect(&t!("bins.startd.failed-to-initialize-runtime"));
     let res = rt.block_on(async {
         let mut server = WebServer::new(
             Acceptor::bind_upgradable(SelfContainedNetworkInterfaceListener::bind(BindTcp, 80)),
@@ -167,11 +166,10 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
             .await
             .is_ok()
         {
-            Some(Arc::new(
+            Some(InternedString::intern(
                 tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
                     .await?
-                    .trim()
-                    .to_owned(),
+                    .trim(),
             ))
         } else {
             None

View File

@@ -6,6 +6,7 @@ use std::time::Duration;
 use clap::Parser;
 use futures::FutureExt;
 use rpc_toolkit::CliApp;
+use rust_i18n::t;
 use tokio::signal::unix::signal;
 use tracing::instrument;
 use visit_rs::Visit;
@@ -70,7 +71,7 @@ async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
                     true
                 }
                 Err(e) => {
-                    tracing::error!("error adding ssl listener: {e}");
+                    tracing::error!("{}", t!("bins.tunnel.error-adding-ssl-listener", error = e.to_string()));
                     tracing::debug!("{e:?}");
                     false
@@ -92,7 +93,7 @@ async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
             }
             .await
             {
-                tracing::error!("error updating webserver bind: {e}");
+                tracing::error!("{}", t!("bins.tunnel.error-updating-webserver-bind", error = e.to_string()));
                 tracing::debug!("{e:?}");
                 tokio::time::sleep(Duration::from_secs(5)).await;
             }
@@ -157,7 +158,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
         let rt = tokio::runtime::Builder::new_multi_thread()
             .enable_all()
             .build()
-            .expect("failed to initialize runtime");
+            .expect(&t!("bins.tunnel.failed-to-initialize-runtime"));
         rt.block_on(inner_main(&config))
     };
@@ -179,6 +180,7 @@ pub fn cli(args: impl IntoIterator<Item = OsString>) {
         |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
         crate::tunnel::api::tunnel_api(),
     )
+    .mutate_command(super::translate_cli)
     .run(args)
     {
         match e.data {

View File

@@ -7,6 +7,7 @@ use std::sync::Arc;
 use cookie::{Cookie, Expiration, SameSite};
 use cookie_store::CookieStore;
 use http::HeaderMap;
+use imbl::OrdMap;
 use imbl_value::InternedString;
 use josekit::jwk::Jwk;
 use once_cell::sync::OnceCell;
@@ -22,7 +23,7 @@ use tracing::instrument;
 use super::setup::CURRENT_SECRET;
 use crate::context::config::{ClientConfig, local_config_path};
-use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
+use crate::context::{DiagnosticContext, InitContext, RpcContext, SetupContext};
 use crate::developer::{OS_DEVELOPER_KEY_PATH, default_developer_key_path};
 use crate::middleware::auth::local::LocalAuthContext;
 use crate::prelude::*;
@@ -37,6 +38,8 @@ pub struct CliContextSeed {
     pub registry_url: Option<Url>,
     pub registry_hostname: Vec<InternedString>,
     pub registry_listen: Option<SocketAddr>,
+    pub s9pk_s3base: Option<Url>,
+    pub s9pk_s3bucket: Option<InternedString>,
     pub tunnel_addr: Option<SocketAddr>,
     pub tunnel_listen: Option<SocketAddr>,
     pub client: Client,
@@ -128,6 +131,8 @@ impl CliContext {
                 .transpose()?,
             registry_hostname: config.registry_hostname.unwrap_or_default(),
             registry_listen: config.registry_listen,
+            s9pk_s3base: config.s9pk_s3base,
+            s9pk_s3bucket: config.s9pk_s3bucket,
             tunnel_addr: config.tunnel,
             tunnel_listen: config.tunnel_listen,
             client: {
@@ -159,21 +164,23 @@ impl CliContext {
             if !path.exists() {
                 continue;
             }
-            let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
-                &std::fs::read_to_string(path)?,
-            )
-            .with_kind(crate::ErrorKind::Pem)?;
-            let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
-                Error::new(
-                    eyre!("pkcs8 key is of incorrect length"),
-                    ErrorKind::OpenSsl,
-                )
-            })?;
-            return Ok(secret.into())
+            let pair =
+                <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
+                    &std::fs::read_to_string(path)?,
+                )
+                .with_kind(crate::ErrorKind::Pem)?;
+            let secret =
+                ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
+                    Error::new(
+                        eyre!("{}", t!("context.cli.pkcs8-key-incorrect-length")),
+                        ErrorKind::OpenSsl,
+                    )
+                })?;
+            return Ok(secret.into());
         }
         Err(Error::new(
-            eyre!("Developer Key does not exist! Please run `start-cli init-key` before running this command."),
-            crate::ErrorKind::Uninitialized
+            eyre!("{}", t!("context.cli.developer-key-does-not-exist")),
+            crate::ErrorKind::Uninitialized,
         ))
     })
 }
@@ -188,14 +195,18 @@ impl CliContext {
"http" => "ws", "http" => "ws",
_ => { _ => {
return Err(Error::new( return Err(Error::new(
eyre!("Cannot parse scheme from base URL"), eyre!("{}", t!("context.cli.cannot-parse-scheme-from-base-url")),
crate::ErrorKind::ParseUrl, crate::ErrorKind::ParseUrl,
) )
.into()); .into());
} }
}; };
url.set_scheme(ws_scheme) url.set_scheme(ws_scheme).map_err(|_| {
.map_err(|_| Error::new(eyre!("Cannot set URL scheme"), crate::ErrorKind::ParseUrl))?; Error::new(
eyre!("{}", t!("context.cli.cannot-set-url-scheme")),
crate::ErrorKind::ParseUrl,
)
})?;
url.path_segments_mut() url.path_segments_mut()
.map_err(|_| eyre!("Url cannot be base")) .map_err(|_| eyre!("Url cannot be base"))
.with_kind(crate::ErrorKind::ParseUrl)? .with_kind(crate::ErrorKind::ParseUrl)?
@@ -238,10 +249,16 @@ impl CliContext {
     where
         Self: CallRemote<RemoteContext>,
     {
-        <Self as CallRemote<RemoteContext, Empty>>::call_remote(&self, method, params, Empty {})
-            .await
-            .map_err(Error::from)
-            .with_ctx(|e| (e.kind, method))
+        <Self as CallRemote<RemoteContext, Empty>>::call_remote(
+            &self,
+            method,
+            OrdMap::new(),
+            params,
+            Empty {},
+        )
+        .await
+        .map_err(Error::from)
+        .with_ctx(|e| (e.kind, method))
     }
     pub async fn call_remote_with<RemoteContext, T>(
         &self,
@@ -252,10 +269,16 @@ impl CliContext {
     where
         Self: CallRemote<RemoteContext, T>,
     {
-        <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra)
-            .await
-            .map_err(Error::from)
-            .with_ctx(|e| (e.kind, method))
+        <Self as CallRemote<RemoteContext, T>>::call_remote(
+            &self,
+            method,
+            OrdMap::new(),
+            params,
+            extra,
+        )
+        .await
+        .map_err(Error::from)
+        .with_ctx(|e| (e.kind, method))
     }
 }
 impl AsRef<Jwk> for CliContext {
@@ -292,7 +315,13 @@ impl AsRef<Client> for CliContext {
 }
 
 impl CallRemote<RpcContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         if let Ok(local) = read_file_to_string(RpcContext::LOCAL_AUTH_COOKIE_PATH).await {
             self.cookie_store
                 .lock()
@@ -319,7 +348,13 @@ impl CallRemote<RpcContext> for CliContext {
     }
 }
 impl CallRemote<DiagnosticContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
@@ -332,7 +367,13 @@ impl CallRemote<DiagnosticContext> for CliContext {
     }
 }
 impl CallRemote<InitContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
@@ -345,20 +386,13 @@ impl CallRemote<InitContext> for CliContext {
     }
 }
 impl CallRemote<SetupContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        crate::middleware::auth::signature::call_remote(
-            self,
-            self.rpc_url.clone(),
-            HeaderMap::new(),
-            self.rpc_url.host_str(),
-            method,
-            params,
-        )
-        .await
-    }
-}
-impl CallRemote<InstallContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
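The recurring signature change in this file threads an ordered map of extra metadata (`OrdMap<&'static str, Value>`) through every `call_remote` implementation, with callers that have nothing to add passing an empty map. A rough stdlib-only sketch of that shape — `BTreeMap` stands in for `imbl::OrdMap`, and the `CallRemote`/`Echo` types here are simplified illustrations, not the `rpc_toolkit` API:

```rust
use std::collections::BTreeMap;

// Simplified call trait: method name, extra metadata map, and params.
trait CallRemote {
    fn call_remote(
        &self,
        method: &str,
        extra: BTreeMap<&'static str, String>,
        params: String,
    ) -> String;
}

struct Echo;

impl CallRemote for Echo {
    fn call_remote(
        &self,
        method: &str,
        extra: BTreeMap<&'static str, String>,
        params: String,
    ) -> String {
        // A real impl would merge `extra` into the request; we just report it.
        format!("{method}({params}) extra={}", extra.len())
    }
}

fn main() {
    // Callers with no metadata pass an empty map, like `OrdMap::new()` above.
    let out = Echo.call_remote("server.time", BTreeMap::new(), "{}".into());
    assert_eq!(out, "server.time({}) extra=0");
}
```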

View File

@@ -58,27 +58,31 @@ pub trait ContextConfig: DeserializeOwned + Default {
 #[command(rename_all = "kebab-case")]
 #[command(version = crate::version::Current::default().semver().to_string())]
 pub struct ClientConfig {
-    #[arg(short = 'c', long)]
+    #[arg(short = 'c', long, help = "help.arg.config-file-path")]
     pub config: Option<PathBuf>,
-    #[arg(short = 'H', long)]
+    #[arg(short = 'H', long, help = "help.arg.host-url")]
     pub host: Option<Url>,
-    #[arg(short = 'r', long)]
+    #[arg(short = 'r', long, help = "help.arg.registry-url")]
     pub registry: Option<Url>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.registry-hostname")]
     pub registry_hostname: Option<Vec<InternedString>>,
     #[arg(skip)]
     pub registry_listen: Option<SocketAddr>,
-    #[arg(short = 't', long)]
+    #[arg(long, help = "help.s9pk-s3base")]
+    pub s9pk_s3base: Option<Url>,
+    #[arg(long, help = "help.s9pk-s3bucket")]
+    pub s9pk_s3bucket: Option<InternedString>,
+    #[arg(short = 't', long, help = "help.arg.tunnel-address")]
     pub tunnel: Option<SocketAddr>,
     #[arg(skip)]
     pub tunnel_listen: Option<SocketAddr>,
-    #[arg(short = 'p', long)]
+    #[arg(short = 'p', long, help = "help.arg.proxy-url")]
     pub proxy: Option<Url>,
     #[arg(skip)]
     pub socks_listen: Option<SocketAddr>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.cookie-path")]
     pub cookie_path: Option<PathBuf>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.developer-key-path")]
     pub developer_key_path: Option<PathBuf>,
 }
impl ContextConfig for ClientConfig { impl ContextConfig for ClientConfig {
@@ -89,8 +93,13 @@ impl ContextConfig for ClientConfig {
         self.host = self.host.take().or(other.host);
         self.registry = self.registry.take().or(other.registry);
         self.registry_hostname = self.registry_hostname.take().or(other.registry_hostname);
+        self.registry_listen = self.registry_listen.take().or(other.registry_listen);
+        self.s9pk_s3base = self.s9pk_s3base.take().or(other.s9pk_s3base);
+        self.s9pk_s3bucket = self.s9pk_s3bucket.take().or(other.s9pk_s3bucket);
         self.tunnel = self.tunnel.take().or(other.tunnel);
+        self.tunnel_listen = self.tunnel_listen.take().or(other.tunnel_listen);
         self.proxy = self.proxy.take().or(other.proxy);
+        self.socks_listen = self.socks_listen.take().or(other.socks_listen);
         self.cookie_path = self.cookie_path.take().or(other.cookie_path);
         self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
     }
@@ -109,21 +118,19 @@ impl ClientConfig {
 #[serde(rename_all = "kebab-case")]
 #[command(rename_all = "kebab-case")]
 pub struct ServerConfig {
-    #[arg(short, long)]
+    #[arg(short, long, help = "help.arg.config-file-path")]
     pub config: Option<PathBuf>,
-    #[arg(long)]
-    pub ethernet_interface: Option<String>,
     #[arg(skip)]
     pub os_partitions: Option<OsPartitionInfo>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.socks-listen-address")]
     pub socks_listen: Option<SocketAddr>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.revision-cache-size")]
     pub revision_cache_size: Option<usize>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.disable-encryption")]
     pub disable_encryption: Option<bool>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.multi-arch-s9pks")]
     pub multi_arch_s9pks: Option<bool>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.developer-key-path")]
     pub developer_key_path: Option<PathBuf>,
 }
impl ContextConfig for ServerConfig { impl ContextConfig for ServerConfig {
@@ -131,7 +138,6 @@ impl ContextConfig for ServerConfig {
         self.config.take()
     }
     fn merge_with(&mut self, other: Self) {
-        self.ethernet_interface = self.ethernet_interface.take().or(other.ethernet_interface);
         self.os_partitions = self.os_partitions.take().or(other.os_partitions);
         self.socks_listen = self.socks_listen.take().or(other.socks_listen);
         self.revision_cache_size = self
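Both `merge_with` implementations above rely on the same `take().or(...)` idiom: every field already set on `self` (typically from CLI flags) wins, and `other` (typically the config file) only fills the gaps. A minimal sketch with a hypothetical two-field `Cfg`, not the real config types:

```rust
// Toy config illustrating the take().or() merge used by `merge_with`.
#[derive(Default, Debug, PartialEq)]
struct Cfg {
    host: Option<String>,
    port: Option<u16>,
}

impl Cfg {
    fn merge_with(&mut self, other: Cfg) {
        // `take()` moves the current value out so `or` can consume both Options.
        self.host = self.host.take().or(other.host);
        self.port = self.port.take().or(other.port);
    }
}

fn main() {
    let mut cli = Cfg { host: Some("a".into()), port: None };
    cli.merge_with(Cfg { host: Some("b".into()), port: Some(80) });
    assert_eq!(cli.host.as_deref(), Some("a")); // CLI value kept
    assert_eq!(cli.port, Some(80)); // gap filled from the file
}
```

This also explains why the hunk above adds merge lines for the new `s9pk_s3base`/`s9pk_s3bucket` fields: any field omitted from `merge_with` would silently ignore its config-file value.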

View File

@@ -6,15 +6,15 @@ use rpc_toolkit::yajrc::RpcError;
 use tokio::sync::broadcast::Sender;
 use tracing::instrument;
-use crate::Error;
 use crate::context::config::ServerConfig;
+use crate::prelude::*;
 use crate::rpc_continuations::RpcContinuations;
 use crate::shutdown::Shutdown;
 pub struct DiagnosticContextSeed {
     pub shutdown: Sender<Shutdown>,
     pub error: Arc<RpcError>,
-    pub disk_guid: Option<Arc<String>>,
+    pub disk_guid: Option<InternedString>,
     pub rpc_continuations: RpcContinuations,
 }
@@ -24,10 +24,13 @@ impl DiagnosticContext {
     #[instrument(skip_all)]
     pub fn init(
         _config: &ServerConfig,
-        disk_guid: Option<Arc<String>>,
+        disk_guid: Option<InternedString>,
         error: Error,
     ) -> Result<Self, Error> {
-        tracing::error!("Error: {}: Starting diagnostic UI", error);
+        tracing::error!(
+            "{}",
+            t!("context.diagnostic.starting-diagnostic-ui", error = error)
+        );
         tracing::debug!("{:?}", error);
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
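Throughout this diff, hard-coded log strings are replaced with `t!(...)` key lookups in the style of rust-i18n. The underlying idea — resolve a dotted key to a locale template and substitute named arguments — can be sketched with a plain map (this illustrates the pattern only; the real `t!` macro compiles translations in from locale files):

```rust
use std::collections::HashMap;

/// Toy translation table, keyed the same way as the diff's `t!` keys.
fn table() -> HashMap<&'static str, &'static str> {
    HashMap::from([
        (
            "context.diagnostic.starting-diagnostic-ui",
            "Error: {error}: Starting diagnostic UI",
        ),
        ("context.rpc.opened-patchdb", "Opened PatchDB"),
    ])
}

/// Look up `key` and substitute `{name}` placeholders from `args`.
/// Unknown keys fall back to the key itself, a common i18n behavior.
fn translate(key: &str, args: &[(&str, &str)]) -> String {
    let mut msg = table().get(key).copied().unwrap_or(key).to_string();
    for &(name, value) in args {
        msg = msg.replace(&format!("{{{name}}}"), value);
    }
    msg
}
```

Keeping stable dotted keys in the code and templates in locale files lets translators change messages without touching the Rust sources.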


@@ -1,44 +0,0 @@
-use std::ops::Deref;
-use std::sync::Arc;
-use rpc_toolkit::Context;
-use tokio::sync::broadcast::Sender;
-use tracing::instrument;
-use crate::Error;
-use crate::net::utils::find_eth_iface;
-use crate::rpc_continuations::RpcContinuations;
-pub struct InstallContextSeed {
-    pub ethernet_interface: String,
-    pub shutdown: Sender<()>,
-    pub rpc_continuations: RpcContinuations,
-}
-#[derive(Clone)]
-pub struct InstallContext(Arc<InstallContextSeed>);
-impl InstallContext {
-    #[instrument(skip_all)]
-    pub async fn init() -> Result<Self, Error> {
-        let (shutdown, _) = tokio::sync::broadcast::channel(1);
-        Ok(Self(Arc::new(InstallContextSeed {
-            ethernet_interface: find_eth_iface().await?,
-            shutdown,
-            rpc_continuations: RpcContinuations::new(),
-        })))
-    }
-}
-impl AsRef<RpcContinuations> for InstallContext {
-    fn as_ref(&self) -> &RpcContinuations {
-        &self.rpc_continuations
-    }
-}
-impl Context for InstallContext {}
-impl Deref for InstallContext {
-    type Target = InstallContextSeed;
-    fn deref(&self) -> &Self::Target {
-        &*self.0
-    }
-}


@@ -2,13 +2,11 @@ pub mod cli;
 pub mod config;
 pub mod diagnostic;
 pub mod init;
-pub mod install;
 pub mod rpc;
 pub mod setup;
 pub use cli::CliContext;
 pub use diagnostic::DiagnosticContext;
 pub use init::InitContext;
-pub use install::InstallContext;
 pub use rpc::RpcContext;
 pub use setup::SetupContext;


@@ -15,6 +15,7 @@ use josekit::jwk::Jwk;
 use reqwest::{Client, Proxy};
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{CallRemote, Context, Empty};
+use tokio::process::Command;
 use tokio::sync::{RwLock, broadcast, oneshot, watch};
 use tokio::time::Instant;
 use tracing::instrument;
@@ -26,6 +27,10 @@ use crate::context::config::ServerConfig;
 use crate::db::model::Database;
 use crate::db::model::package::TaskSeverity;
 use crate::disk::OsPartitionInfo;
+use crate::disk::mount::filesystem::bind::Bind;
+use crate::disk::mount::filesystem::block_dev::BlockDev;
+use crate::disk::mount::filesystem::{FileSystem, ReadOnly};
+use crate::disk::mount::guard::MountGuard;
 use crate::init::{InitResult, check_time_is_synchronized};
 use crate::install::PKG_ARCHIVE_DIR;
 use crate::lxc::LxcManager;
@@ -41,19 +46,21 @@ use crate::rpc_continuations::{Guid, OpenAuthedContinuations, RpcContinuations};
 use crate::service::ServiceMap;
 use crate::service::action::update_tasks;
 use crate::service::effects::callbacks::ServiceCallbacks;
+use crate::service::effects::subcontainer::NVIDIA_OVERLAY_PATH;
 use crate::shutdown::Shutdown;
+use crate::util::Invoke;
 use crate::util::future::NonDetachingJoinHandle;
-use crate::util::io::delete_file;
+use crate::util::io::{TmpDir, delete_file};
 use crate::util::lshw::LshwDevice;
 use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
-use crate::{ActionId, DATA_DIR, PackageId};
+use crate::{ActionId, DATA_DIR, PLATFORM, PackageId};
 pub struct RpcContextSeed {
     is_closed: AtomicBool,
     pub os_partitions: OsPartitionInfo,
     pub wifi_interface: Option<String>,
     pub ethernet_interface: String,
-    pub disk_guid: Arc<String>,
+    pub disk_guid: InternedString,
     pub ephemeral_sessions: SyncMutex<Sessions>,
     pub db: TypedPatchDb<Database>,
     pub sync_db: watch::Sender<u64>,
@@ -77,7 +84,7 @@ pub struct RpcContextSeed {
 }
 impl Drop for RpcContextSeed {
     fn drop(&mut self) {
-        tracing::info!("RpcContext is dropped");
+        tracing::info!("{}", t!("context.rpc.rpc-context-dropped"));
     }
 }
@@ -127,7 +134,7 @@ impl RpcContext {
     pub async fn init(
         webserver: &WebServerAcceptorSetter<UpgradableListener>,
         config: &ServerConfig,
-        disk_guid: Arc<String>,
+        disk_guid: InternedString,
         init_result: Option<InitResult>,
         InitRpcContextPhases {
             mut load_db,
@@ -148,7 +155,7 @@ impl RpcContext {
         let peek = db.peek().await;
         let account = AccountInfo::load(&peek)?;
         load_db.complete();
-        tracing::info!("Opened PatchDB");
+        tracing::info!("{}", t!("context.rpc.opened-patchdb"));
         init_net_ctrl.start();
         let (net_controller, os_net_service) = if let Some(InitResult {
@@ -165,7 +172,125 @@ impl RpcContext {
             (net_ctrl, os_net_service)
         };
         init_net_ctrl.complete();
-        tracing::info!("Initialized Net Controller");
+        tracing::info!("{}", t!("context.rpc.initialized-net-controller"));
+        if PLATFORM.ends_with("-nonfree") {
+            if let Err(e) = Command::new("nvidia-smi")
+                .invoke(ErrorKind::ParseSysInfo)
+                .await
+            {
+                tracing::warn!("{}", t!("context.rpc.nvidia-smi-error", error = e));
+                tracing::info!("{}", t!("context.rpc.nvidia-warning-can-be-ignored"));
+            } else {
+                async {
+                    let version: InternedString = String::from_utf8(
+                        Command::new("modinfo")
+                            .arg("-F")
+                            .arg("version")
+                            .arg("nvidia")
+                            .invoke(ErrorKind::ParseSysInfo)
+                            .await?,
+                    )?
+                    .trim()
+                    .into();
+                    let nvidia_dir =
+                        Path::new("/media/startos/data/package-data/nvidia").join(&*version);
+                    // Generate single squashfs with both debian and generic overlays
+                    let sqfs = nvidia_dir.join("container-overlay.squashfs");
+                    if tokio::fs::metadata(&sqfs).await.is_err() {
+                        let tmp = TmpDir::new().await?;
+                        // Generate debian overlay (libs in /usr/lib/aarch64-linux-gnu/)
+                        let debian_dir = tmp.join("debian");
+                        tokio::fs::create_dir_all(&debian_dir).await?;
+                        // Create /etc/debian_version to trigger debian path detection
+                        tokio::fs::create_dir_all(debian_dir.join("etc")).await?;
+                        tokio::fs::write(debian_dir.join("etc/debian_version"), "").await?;
+                        let procfs = MountGuard::mount(
+                            &Bind::new("/proc"),
+                            debian_dir.join("proc"),
+                            ReadOnly,
+                        )
+                        .await?;
+                        Command::new("nvidia-container-cli")
+                            .arg("configure")
+                            .arg("--no-devbind")
+                            .arg("--no-cgroups")
+                            .arg("--utility")
+                            .arg("--compute")
+                            .arg("--graphics")
+                            .arg("--video")
+                            .arg(&debian_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        procfs.unmount(true).await?;
+                        // Run ldconfig to create proper symlinks for all NVIDIA libraries
+                        Command::new("ldconfig")
+                            .arg("-r")
+                            .arg(&debian_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        // Remove /etc/debian_version - it was only needed for nvidia-container-cli detection
+                        tokio::fs::remove_file(debian_dir.join("etc/debian_version")).await?;
+                        // Generate generic overlay (libs in /usr/lib64/)
+                        let generic_dir = tmp.join("generic");
+                        tokio::fs::create_dir_all(&generic_dir).await?;
+                        // No /etc/debian_version - will use generic /usr/lib64 paths
+                        let procfs = MountGuard::mount(
+                            &Bind::new("/proc"),
+                            generic_dir.join("proc"),
+                            ReadOnly,
+                        )
+                        .await?;
+                        Command::new("nvidia-container-cli")
+                            .arg("configure")
+                            .arg("--no-devbind")
+                            .arg("--no-cgroups")
+                            .arg("--utility")
+                            .arg("--compute")
+                            .arg("--graphics")
+                            .arg("--video")
+                            .arg(&generic_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        procfs.unmount(true).await?;
+                        // Run ldconfig to create proper symlinks for all NVIDIA libraries
+                        Command::new("ldconfig")
+                            .arg("-r")
+                            .arg(&generic_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        // Create squashfs with UID/GID mapping (avoids chown on readonly mounts)
+                        if let Some(p) = sqfs.parent() {
+                            tokio::fs::create_dir_all(p)
+                                .await
+                                .with_ctx(|_| (ErrorKind::Filesystem, format!("mkdir -p {p:?}")))?;
+                        }
+                        Command::new("mksquashfs")
+                            .arg(&*tmp)
+                            .arg(&sqfs)
+                            .arg("-force-uid")
+                            .arg("100000")
+                            .arg("-force-gid")
+                            .arg("100000")
+                            .invoke(ErrorKind::Filesystem)
+                            .await?;
+                        // tmp.unmount_and_delete().await?;
+                    }
+                    BlockDev::new(&sqfs)
+                        .mount(NVIDIA_OVERLAY_PATH, ReadOnly)
+                        .await?;
+                    Ok::<_, Error>(())
+                }
+                .await
+                .log_err();
+            }
+        }
         let services = ServiceMap::default();
         let metrics_cache = Watch::<Option<crate::system::Metrics>>::new(None);
@@ -210,16 +335,12 @@ impl RpcContext {
             is_closed: AtomicBool::new(false),
             os_partitions: config.os_partitions.clone().ok_or_else(|| {
                 Error::new(
-                    eyre!("OS Partition Information Missing"),
+                    eyre!("{}", t!("context.rpc.os-partition-info-missing")),
                     ErrorKind::Filesystem,
                 )
             })?,
             wifi_interface: wifi_interface.clone(),
-            ethernet_interface: if let Some(eth) = config.ethernet_interface.clone() {
-                eth
-            } else {
-                find_eth_iface().await?
-            },
+            ethernet_interface: find_eth_iface().await?,
             disk_guid,
             ephemeral_sessions: SyncMutex::new(Sessions::new()),
             sync_db: watch::Sender::new(db.sequence().await),
@@ -244,9 +365,9 @@ impl RpcContext {
             current_secret: Arc::new(
                 Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).map_err(|e| {
                     tracing::debug!("{:?}", e);
-                    tracing::error!("Couldn't generate ec key");
+                    tracing::error!("{}", t!("context.rpc.couldnt-generate-ec-key"));
                     Error::new(
-                        color_eyre::eyre::eyre!("Couldn't generate ec key"),
+                        color_eyre::eyre::eyre!("{}", t!("context.rpc.couldnt-generate-ec-key")),
                         crate::ErrorKind::Unknown,
                     )
                 })?,
@@ -261,10 +382,10 @@ impl RpcContext {
         let res = Self(seed.clone());
         res.cleanup_and_initialize(cleanup_init).await?;
-        tracing::info!("Cleaned up transient states");
+        tracing::info!("{}", t!("context.rpc.cleaned-up-transient-states"));
         crate::version::post_init(&res, run_migrations).await?;
-        tracing::info!("Completed migrations");
+        tracing::info!("{}", t!("context.rpc.completed-migrations"));
         Ok(res)
     }
@@ -273,7 +394,7 @@ impl RpcContext {
         self.crons.mutate(|c| std::mem::take(c));
         self.services.shutdown_all().await?;
         self.is_closed.store(true, Ordering::SeqCst);
-        tracing::info!("RpcContext is shutdown");
+        tracing::info!("{}", t!("context.rpc.rpc-context-shutdown"));
         Ok(())
     }
@@ -342,7 +463,10 @@ impl RpcContext {
             .await
             .result
         {
-            tracing::error!("Error in session cleanup cron: {e}");
+            tracing::error!(
+                "{}",
+                t!("context.rpc.error-in-session-cleanup-cron", error = e)
+            );
             tracing::debug!("{e:?}");
         }
     }
@@ -455,24 +579,33 @@ impl RpcContext {
     pub async fn call_remote<RemoteContext>(
         &self,
         method: &str,
+        metadata: OrdMap<&'static str, Value>,
         params: Value,
     ) -> Result<Value, RpcError>
     where
         Self: CallRemote<RemoteContext>,
     {
-        <Self as CallRemote<RemoteContext, Empty>>::call_remote(&self, method, params, Empty {})
-            .await
+        <Self as CallRemote<RemoteContext, Empty>>::call_remote(
+            &self,
+            method,
+            metadata,
+            params,
+            Empty {},
+        )
+        .await
     }
     pub async fn call_remote_with<RemoteContext, T>(
         &self,
         method: &str,
+        metadata: OrdMap<&'static str, Value>,
         params: Value,
         extra: T,
     ) -> Result<Value, RpcError>
     where
         Self: CallRemote<RemoteContext, T>,
    {
-        <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
+        <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, metadata, params, extra)
+            .await
     }
 }
 impl AsRef<Client> for RpcContext {


@@ -6,6 +6,7 @@ use std::time::Duration;
 use futures::{Future, StreamExt};
 use imbl_value::InternedString;
 use josekit::jwk::Jwk;
+use openssl::x509::X509;
 use patch_db::PatchDb;
 use rpc_toolkit::Context;
 use serde::{Deserialize, Serialize};
@@ -15,10 +16,9 @@ use tracing::instrument;
 use ts_rs::TS;
 use crate::MAIN_DATA;
-use crate::account::AccountInfo;
 use crate::context::RpcContext;
 use crate::context::config::ServerConfig;
-use crate::disk::OsPartitionInfo;
+use crate::disk::mount::guard::{MountGuard, TmpMountGuard};
 use crate::hostname::Hostname;
 use crate::net::gateway::UpgradableListener;
 use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
@@ -27,12 +27,15 @@ use crate::progress::FullProgressTracker;
 use crate::rpc_continuations::{Guid, RpcContinuation, RpcContinuations};
 use crate::setup::SetupProgress;
 use crate::shutdown::Shutdown;
+use crate::system::KeyboardOptions;
 use crate::util::future::NonDetachingJoinHandle;
+use crate::util::serde::Pem;
+use crate::util::sync::SyncMutex;
 lazy_static::lazy_static! {
     pub static ref CURRENT_SECRET: Jwk = Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).unwrap_or_else(|e| {
         tracing::debug!("{:?}", e);
-        tracing::error!("Couldn't generate ec key");
+        tracing::error!("{}", t!("context.setup.couldnt-generate-ec-key"));
         panic!("Couldn't generate ec key")
     });
 }
@@ -41,40 +44,25 @@ lazy_static::lazy_static! {
 #[serde(rename_all = "camelCase")]
 #[ts(export)]
 pub struct SetupResult {
-    pub tor_addresses: Vec<String>,
     #[ts(type = "string")]
     pub hostname: Hostname,
-    #[ts(type = "string")]
-    pub lan_address: InternedString,
-    pub root_ca: String,
-}
-impl TryFrom<&AccountInfo> for SetupResult {
-    type Error = Error;
-    fn try_from(value: &AccountInfo) -> Result<Self, Self::Error> {
-        Ok(Self {
-            tor_addresses: value
-                .tor_keys
-                .iter()
-                .map(|tor_key| format!("https://{}", tor_key.onion_address()))
-                .collect(),
-            hostname: value.hostname.clone(),
-            lan_address: value.hostname.lan_address(),
-            root_ca: String::from_utf8(value.root_ca_cert.to_pem()?)?,
-        })
-    }
+    pub root_ca: Pem<X509>,
+    pub needs_restart: bool,
 }
 pub struct SetupContextSeed {
     pub webserver: WebServerAcceptorSetter<UpgradableListener>,
-    pub config: ServerConfig,
-    pub os_partitions: OsPartitionInfo,
+    pub config: SyncMutex<ServerConfig>,
     pub disable_encryption: bool,
     pub progress: FullProgressTracker,
     pub task: OnceCell<NonDetachingJoinHandle<()>>,
     pub result: OnceCell<Result<(SetupResult, RpcContext), Error>>,
-    pub disk_guid: OnceCell<Arc<String>>,
+    pub disk_guid: OnceCell<InternedString>,
     pub shutdown: Sender<Option<Shutdown>>,
     pub rpc_continuations: RpcContinuations,
+    pub install_rootfs: SyncMutex<Option<(TmpMountGuard, MountGuard)>>,
+    pub keyboard: SyncMutex<Option<KeyboardOptions>>,
+    pub language: SyncMutex<Option<InternedString>>,
 }
 #[derive(Clone)]
@@ -83,27 +71,24 @@ impl SetupContext {
     #[instrument(skip_all)]
     pub fn init(
         webserver: &WebServer<UpgradableListener>,
-        config: &ServerConfig,
+        config: ServerConfig,
     ) -> Result<Self, Error> {
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
         let mut progress = FullProgressTracker::new();
         progress.enable_logging(true);
         Ok(Self(Arc::new(SetupContextSeed {
             webserver: webserver.acceptor_setter(),
-            config: config.clone(),
-            os_partitions: config.os_partitions.clone().ok_or_else(|| {
-                Error::new(
-                    eyre!("missing required configuration: `os-partitions`"),
-                    ErrorKind::NotFound,
-                )
-            })?,
             disable_encryption: config.disable_encryption.unwrap_or(false),
+            config: SyncMutex::new(config),
             progress,
             task: OnceCell::new(),
             result: OnceCell::new(),
             disk_guid: OnceCell::new(),
             shutdown,
             rpc_continuations: RpcContinuations::new(),
+            install_rootfs: SyncMutex::new(None),
+            language: SyncMutex::new(None),
+            keyboard: SyncMutex::new(None),
         })))
     }
     #[instrument(skip_all)]
@@ -129,11 +114,14 @@ impl SetupContext {
             .get_or_init(|| async {
                 match f().await {
                     Ok(res) => {
-                        tracing::info!("Setup complete!");
+                        tracing::info!("{}", t!("context.setup.setup-complete"));
                         Ok(res)
                     }
                     Err(e) => {
-                        tracing::error!("Setup failed: {e}");
+                        tracing::error!(
+                            "{}",
+                            t!("context.setup.setup-failed", error = e)
+                        );
                         tracing::debug!("{e:?}");
                         Err(e)
                     }
@@ -146,10 +134,13 @@ impl SetupContext {
             )
             .map_err(|_| {
                 if self.result.initialized() {
-                    Error::new(eyre!("Setup already complete"), ErrorKind::InvalidRequest)
+                    Error::new(
+                        eyre!("{}", t!("context.setup.setup-already-complete")),
+                        ErrorKind::InvalidRequest,
+                    )
                 } else {
                     Error::new(
-                        eyre!("Setup already in progress"),
+                        eyre!("{}", t!("context.setup.setup-already-in-progress")),
                         ErrorKind::InvalidRequest,
                     )
                 }
@@ -199,7 +190,7 @@ impl SetupContext {
                 }
                 .await
                 {
-                    tracing::error!("Error in setup progress websocket: {e}");
+                    tracing::error!("{}", t!("context.setup.error-in-setup-progress-websocket", error = e));
                     tracing::debug!("{e:?}");
                 }
             },
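The `result: OnceCell<...>` field combined with `get_or_init` above guarantees the setup routine runs at most once, with later attempts rejected. A simplified synchronous sketch using `std::sync::OnceLock` (illustrative only; the real code is async and also distinguishes "in progress" from "complete"):

```rust
use std::sync::OnceLock;

struct SetupState {
    result: OnceLock<Result<String, String>>,
}

impl SetupState {
    fn new() -> Self {
        Self { result: OnceLock::new() }
    }

    /// Run `f` only if no result has been stored yet; otherwise report
    /// that setup already completed. (The check-then-init here is not
    /// atomic; it is fine for a single-threaded sketch.)
    fn run_once<F>(&self, f: F) -> Result<&Result<String, String>, &'static str>
    where
        F: FnOnce() -> Result<String, String>,
    {
        if self.result.get().is_some() {
            return Err("Setup already complete");
        }
        Ok(self.result.get_or_init(f))
    }
}
```

Storing the `Result` itself (rather than just the success value) lets later callers observe a failed setup instead of silently re-running it.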


@@ -11,6 +11,7 @@ use crate::{Error, PackageId};
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ControlParams {
+    #[arg(help = "help.arg.package-id")]
     pub id: PackageId,
 }


@@ -54,7 +54,7 @@ pub fn db<C: Context>() -> ParentHandler<C> {
             "dump",
             from_fn_async(cli_dump)
                 .with_display_serializable()
-                .with_about("Filter/query db to display tables and records"),
+                .with_about("about.filter-query-db"),
         )
         .subcommand("dump", from_fn_async(dump).no_cli())
         .subcommand(
@@ -65,13 +65,13 @@ pub fn db<C: Context>() -> ParentHandler<C> {
         )
         .subcommand(
             "put",
-            put::<C>().with_about("Command for adding UI record to db"),
+            put::<C>().with_about("about.command-add-ui-record-db"),
         )
         .subcommand(
             "apply",
             from_fn_async(cli_apply)
                 .no_display()
-                .with_about("Update a db record"),
+                .with_about("about.update-db-record"),
         )
         .subcommand("apply", from_fn_async(apply).no_cli())
 }
@@ -87,9 +87,14 @@ pub enum RevisionsRes {
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct CliDumpParams {
-    #[arg(long = "include-private", short = 'p')]
+    #[arg(
+        long = "include-private",
+        short = 'p',
+        help = "help.arg.include-private-data"
+    )]
     #[serde(default)]
     include_private: bool,
+    #[arg(help = "help.arg.db-path")]
     path: Option<PathBuf>,
 }
@@ -258,9 +263,11 @@ pub async fn subscribe(
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct CliApplyParams {
-    #[arg(long)]
+    #[arg(long, help = "help.arg.allow-model-mismatch")]
     allow_model_mismatch: bool,
+    #[arg(help = "help.arg.db-apply-expr")]
     expr: String,
+    #[arg(help = "help.arg.db-path")]
     path: Option<PathBuf>,
 }
@@ -327,6 +334,7 @@ async fn cli_apply(
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ApplyParams {
+    #[arg(help = "help.arg.db-apply-expr")]
     expr: String,
 }
@@ -358,7 +366,7 @@ pub fn put<C: Context>() -> ParentHandler<C> {
             "ui",
             from_fn_async(ui)
                 .with_display_serializable()
-                .with_about("Add path and value to db")
+                .with_about("about.add-path-value-db")
                 .with_call_remote::<CliContext>(),
         )
 }
@@ -366,8 +374,10 @@ pub fn put<C: Context>() -> ParentHandler<C> {
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct UiParams {
+    #[arg(help = "help.arg.json-pointer")]
     #[ts(type = "string")]
     pointer: JsonPointer,
+    #[arg(help = "help.arg.json-value")]
     #[ts(type = "any")]
     value: Value,
 }


@@ -14,6 +14,7 @@ use crate::notifications::Notifications;
 use crate::prelude::*;
 use crate::sign::AnyVerifyingKey;
 use crate::ssh::SshKeys;
+use crate::system::KeyboardOptions;
 use crate::util::serde::Pem;
 pub mod package;
@@ -28,9 +29,14 @@ pub struct Database {
     pub private: Private,
 }
 impl Database {
-    pub fn init(account: &AccountInfo, kiosk: Option<bool>) -> Result<Self, Error> {
+    pub fn init(
+        account: &AccountInfo,
+        kiosk: Option<bool>,
+        language: Option<InternedString>,
+        keyboard: Option<KeyboardOptions>,
+    ) -> Result<Self, Error> {
         Ok(Self {
-            public: Public::init(account, kiosk)?,
+            public: Public::init(account, kiosk, language, keyboard)?,
             private: Private {
                 key_store: KeyStore::new(account)?,
                 password: account.password.clone(),


@@ -14,7 +14,7 @@ use crate::net::host::Hosts;
 use crate::net::service_interface::ServiceInterface;
 use crate::prelude::*;
 use crate::progress::FullProgress;
-use crate::s9pk::manifest::Manifest;
+use crate::s9pk::manifest::{LocaleString, Manifest};
 use crate::status::StatusInfo;
 use crate::util::DataUrl;
 use crate::util::serde::{Pem, is_partial_of};
@@ -417,8 +417,7 @@ impl Map for CurrentDependencies {
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
 pub struct CurrentDependencyInfo {
-    #[ts(type = "string | null")]
-    pub title: Option<InternedString>,
+    pub title: Option<LocaleString>,
     pub icon: Option<DataUrl<'static>>,
     #[serde(flatten)]
     pub kind: CurrentDependencyKind,


@@ -25,7 +25,7 @@ use crate::net::utils::ipv6_is_local;
 use crate::net::vhost::AlpnInfo;
 use crate::prelude::*;
 use crate::progress::FullProgress;
-use crate::system::SmtpValue;
+use crate::system::{KeyboardOptions, SmtpValue};
 use crate::util::cpupower::Governor;
 use crate::util::lshw::LshwDevice;
 use crate::util::serde::MaybeUtf8String;
@@ -45,7 +45,12 @@ pub struct Public {
     pub ui: Value,
 }
 impl Public {
-    pub fn init(account: &AccountInfo, kiosk: Option<bool>) -> Result<Self, Error> {
+    pub fn init(
+        account: &AccountInfo,
+        kiosk: Option<bool>,
+        language: Option<InternedString>,
+        keyboard: Option<KeyboardOptions>,
+    ) -> Result<Self, Error> {
         Ok(Self {
             server_info: ServerInfo {
                 arch: get_arch(),
@@ -139,6 +144,8 @@ impl Public {
                 ram: 0,
                 devices: Vec::new(),
                 kiosk,
+                language,
+                keyboard,
             },
             package_data: AllPackageData::default(),
             ui: serde_json::from_str(*DB_UI_SEED_CELL.get().unwrap_or(&"null"))
@@ -195,6 +202,8 @@ pub struct ServerInfo {
     pub ram: u64,
     pub devices: Vec<LshwDevice>,
     pub kiosk: Option<bool>,
+    pub language: Option<InternedString>,
+    pub keyboard: Option<KeyboardOptions>,
 }
 #[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]


@@ -416,6 +416,51 @@ impl<T: Map> Model<T> {
     }
 }
+impl<T: Map> Model<T>
+where
+    T::Key: FromStr,
+    Error: From<<T::Key as FromStr>::Err>,
+{
+    /// Retains only the elements specified by the predicate.
+    /// The predicate can mutate the values and returns whether to keep each entry.
+    pub fn retain<F>(&mut self, mut f: F) -> Result<(), Error>
+    where
+        F: FnMut(&T::Key, &mut Model<T::Value>) -> Result<bool, Error>,
+    {
+        let mut to_remove = Vec::new();
+        match &mut self.value {
+            Value::Object(o) => {
+                for (k, v) in o.iter_mut() {
+                    let key = T::Key::from_str(&**k)?;
+                    if !f(&key, patch_db::ModelExt::value_as_mut(v))? {
+                        to_remove.push(k.clone());
+                    }
+                }
+            }
+            v => {
+                use serde::de::Error;
+                return Err(patch_db::value::Error {
+                    source: patch_db::value::ErrorSource::custom(format!(
+                        "expected object found {v}"
+                    )),
+                    kind: patch_db::value::ErrorKind::Deserialization,
+                }
+                .into());
+            }
+        }
+        // Remove entries that didn't pass the filter
+        if let Value::Object(o) = &mut self.value {
+            for k in to_remove {
+                o.remove(&k);
+            }
+        }
+        Ok(())
+    }
+}
 #[repr(transparent)]
 #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
 pub struct JsonKey<T>(pub T);
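The new `Model::retain` collects keys to drop during a fallible, value-mutating pass, then removes them afterwards to avoid mutating the map while iterating. The same control flow can be shown over a plain `BTreeMap` (an illustrative helper, not the codebase's API):

```rust
use std::collections::BTreeMap;

/// Keep only entries for which `f` returns Ok(true); `f` may mutate values.
/// The first Err aborts before any removal, matching the diff's behavior.
fn retain_fallible<K: Ord + Clone, V, E>(
    map: &mut BTreeMap<K, V>,
    mut f: impl FnMut(&K, &mut V) -> Result<bool, E>,
) -> Result<(), E> {
    let mut to_remove = Vec::new();
    for (k, v) in map.iter_mut() {
        if !f(k, v)? {
            to_remove.push(k.clone());
        }
    }
    // Removal happens in a second pass, once iteration is finished.
    for k in to_remove {
        map.remove(&k);
    }
    Ok(())
}
```

Deferring removals to a second pass sidesteps the borrow conflict between `iter_mut` and `remove`, at the cost of cloning the doomed keys.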


@@ -1,11 +1,11 @@
 use std::collections::BTreeMap;
 use std::path::Path;
-use imbl_value::InternedString;
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
 use crate::prelude::*;
+use crate::s9pk::manifest::LocaleString;
 use crate::util::PathOrUrl;
 use crate::{Error, PackageId};
@@ -28,7 +28,7 @@ impl Map for Dependencies {
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
 pub struct DepInfo {
-    pub description: Option<String>,
+    pub description: Option<LocaleString>,
     pub optional: bool,
     #[serde(flatten)]
     pub metadata: Option<MetadataSrc>,
@@ -73,7 +73,7 @@ pub enum MetadataSrc {
 #[serde(rename_all = "camelCase")]
 #[ts(export)]
 pub struct Metadata {
-    pub title: InternedString,
+    pub title: LocaleString,
     pub icon: PathOrUrl,
 }
@@ -82,5 +82,5 @@ pub struct Metadata {
 #[model = "Model<Self>"]
 pub struct DependencyMetadata {
     #[ts(type = "string")]
-    pub title: InternedString,
+    pub title: LocaleString,
 }


@@ -17,45 +17,46 @@ pub fn diagnostic<C: Context>() -> ParentHandler<C> {
         .subcommand(
             "error",
             from_fn(error)
-                .with_about("Display diagnostic error")
+                .with_about("about.display-diagnostic-error")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
             "logs",
-            crate::system::logs::<DiagnosticContext>().with_about("Display OS logs"),
+            crate::system::logs::<DiagnosticContext>().with_about("about.display-os-logs"),
         )
         .subcommand(
             "logs",
             from_fn_async(crate::logs::cli_logs::<DiagnosticContext, Empty>)
                 .no_display()
-                .with_about("Display OS logs"),
+                .with_about("about.display-os-logs"),
         )
         .subcommand(
             "kernel-logs",
-            crate::system::kernel_logs::<DiagnosticContext>().with_about("Display kernel logs"),
+            crate::system::kernel_logs::<DiagnosticContext>()
+                .with_about("about.display-kernel-logs"),
         )
         .subcommand(
             "kernel-logs",
             from_fn_async(crate::logs::cli_logs::<DiagnosticContext, Empty>)
                 .no_display()
-                .with_about("Display kernal logs"),
+                .with_about("about.display-kernel-logs"),
         )
         .subcommand(
             "restart",
             from_fn(restart)
                 .no_display()
-                .with_about("Restart the server")
+                .with_about("about.restart-server")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
             "disk",
-            disk::<C>().with_about("Command to remove disk from filesystem"),
+            disk::<C>().with_about("about.command-remove-disk-filesystem"),
         )
         .subcommand(
             "rebuild",
             from_fn_async(rebuild)
                 .no_display()
-                .with_about("Teardown and rebuild service containers")
+                .with_about("about.teardown-rebuild-containers")
                 .with_call_remote::<CliContext>(),
         )
 }
@@ -89,16 +90,16 @@ pub fn disk<C: Context>() -> ParentHandler<C> {
                 from_fn_async(forget_disk::<RpcContext>).no_display(),
             )
             .no_display()
-            .with_about("Remove disk from filesystem"),
+            .with_about("about.remove-disk-filesystem"),
         )
         .subcommand("repair", from_fn_async(|_: C| repair()).no_cli())
         .subcommand(
             "repair",
             CallRemoteHandler::<CliContext, _, _>::new(
-                from_fn_async(|_: RpcContext| repair())
-                    .no_display()
-                    .with_about("Repair disk in the event of corruption"),
-            ),
+                from_fn_async(|_: RpcContext| repair()).no_display(),
+            )
+            .no_display()
+            .with_about("about.repair-disk-corruption"),
         )
 }

View File

@@ -4,6 +4,7 @@ use std::path::Path;
 use color_eyre::eyre::eyre;
 use futures::FutureExt;
 use futures::future::BoxFuture;
+use rust_i18n::t;
 use tokio::process::Command;
 use tracing::instrument;
@@ -62,33 +63,39 @@ async fn e2fsck_runner(
     let e2fsck_stderr = String::from_utf8(e2fsck_out.stderr)?;
     let code = e2fsck_out.status.code().ok_or_else(|| {
         Error::new(
-            eyre!("e2fsck: process terminated by signal"),
+            eyre!("{}", t!("disk.fsck.process-terminated-by-signal")),
             crate::ErrorKind::DiskManagement,
         )
     })?;
     if code & 4 != 0 {
         tracing::error!(
-            "some filesystem errors NOT corrected on {}:\n{}",
-            logicalname.as_ref().display(),
-            e2fsck_stderr,
+            "{}",
+            t!(
+                "disk.fsck.errors-not-corrected",
+                device = logicalname.as_ref().display(),
+                stderr = e2fsck_stderr
+            ),
         );
     } else if code & 1 != 0 {
         tracing::warn!(
-            "filesystem errors corrected on {}:\n{}",
-            logicalname.as_ref().display(),
-            e2fsck_stderr,
+            "{}",
+            t!(
+                "disk.fsck.errors-corrected",
+                device = logicalname.as_ref().display(),
+                stderr = e2fsck_stderr
+            ),
         );
     }
     if code < 8 {
         if code & 2 != 0 {
-            tracing::warn!("reboot required");
+            tracing::warn!("{}", t!("disk.fsck.reboot-required"));
             Ok(RequiresReboot(true))
         } else {
             Ok(RequiresReboot(false))
         }
     } else {
         Err(Error::new(
-            eyre!("e2fsck: {}", e2fsck_stderr),
+            eyre!("{}", t!("disk.fsck.e2fsck-error", stderr = e2fsck_stderr)),
             crate::ErrorKind::DiskManagement,
         ))
     }
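The hunk above branches on e2fsck's documented exit-status bitmask (1 = errors corrected, 2 = reboot required, 4 = errors left uncorrected, >= 8 = operational failure). A standalone sketch of that classification — a simplified illustration, not the StartOS function, which additionally logs and still returns `Ok` for uncorrected errors below 8:

```rust
// Classify an e2fsck exit code per the bitmask in e2fsck(8).
#[derive(Debug, PartialEq)]
enum FsckOutcome {
    Clean,
    Corrected { reboot_required: bool },
    Uncorrected, // bit 4: errors remain (logged above, but not fatal if code < 8)
    Failed,      // 8 and above: operational/usage error
}

fn interpret_e2fsck_code(code: i32) -> FsckOutcome {
    if code >= 8 {
        FsckOutcome::Failed
    } else if code & 4 != 0 {
        FsckOutcome::Uncorrected
    } else if code & 1 != 0 {
        FsckOutcome::Corrected {
            reboot_required: code & 2 != 0,
        }
    } else {
        FsckOutcome::Clean
    }
}

fn main() {
    assert_eq!(interpret_e2fsck_code(0), FsckOutcome::Clean);
    assert_eq!(
        interpret_e2fsck_code(3),
        FsckOutcome::Corrected { reboot_required: true }
    );
    assert_eq!(interpret_e2fsck_code(4), FsckOutcome::Uncorrected);
    assert_eq!(interpret_e2fsck_code(9), FsckOutcome::Failed);
}
```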

View File

@@ -2,6 +2,8 @@ use std::collections::BTreeMap;
 use std::path::{Path, PathBuf};
 use color_eyre::eyre::eyre;
+use imbl_value::InternedString;
+use rust_i18n::t;
 use tokio::process::Command;
 use tracing::instrument;
@@ -20,10 +22,10 @@ pub const MAIN_FS_SIZE: FsSize = FsSize::Gigabytes(8);
 #[instrument(skip_all)]
 pub async fn create<I, P>(
     disks: &I,
-    pvscan: &BTreeMap<PathBuf, Option<String>>,
+    pvscan: &BTreeMap<PathBuf, Option<InternedString>>,
     datadir: impl AsRef<Path>,
     password: Option<&str>,
-) -> Result<String, Error>
+) -> Result<InternedString, Error>
 where
     for<'a> &'a I: IntoIterator<Item = &'a P>,
     P: AsRef<Path>,
@@ -37,9 +39,9 @@ where
 #[instrument(skip_all)]
 pub async fn create_pool<I, P>(
     disks: &I,
-    pvscan: &BTreeMap<PathBuf, Option<String>>,
+    pvscan: &BTreeMap<PathBuf, Option<InternedString>>,
     encrypted: bool,
-) -> Result<String, Error>
+) -> Result<InternedString, Error>
 where
     for<'a> &'a I: IntoIterator<Item = &'a P>,
     P: AsRef<Path>,
@@ -79,7 +81,7 @@ where
         cmd.arg(disk.as_ref());
     }
     cmd.invoke(crate::ErrorKind::DiskManagement).await?;
-    Ok(guid)
+    Ok(guid.into())
 }
 #[derive(Debug, Clone, Copy)]
@@ -224,7 +226,7 @@ pub async fn import<P: AsRef<Path>>(
         .is_none()
     {
         return Err(Error::new(
-            eyre!("StartOS disk not found."),
+            eyre!("{}", t!("disk.main.disk-not-found")),
             crate::ErrorKind::DiskNotAvailable,
         ));
     }
@@ -234,7 +236,7 @@ pub async fn import<P: AsRef<Path>>(
         .any(|id| id == guid)
     {
         return Err(Error::new(
-            eyre!("A StartOS disk was found, but it is not the correct disk for this device."),
+            eyre!("{}", t!("disk.main.incorrect-disk")),
             crate::ErrorKind::IncorrectDisk,
         ));
     }

View File

@@ -25,6 +25,8 @@ pub struct OsPartitionInfo {
     pub bios: Option<PathBuf>,
     pub boot: PathBuf,
     pub root: PathBuf,
+    #[serde(skip)] // internal use only
+    pub data: Option<PathBuf>,
 }
 impl OsPartitionInfo {
     pub fn contains(&self, logicalname: impl AsRef<Path>) -> bool {
@@ -49,7 +51,7 @@ pub fn disk<C: Context>() -> ParentHandler<C> {
             from_fn_async(list)
                 .with_display_serializable()
                 .with_custom_display_fn(|handle, result| display_disk_info(handle.params, result))
-                .with_about("List disk info")
+                .with_about("about.list-disk-info")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand("repair", from_fn_async(|_: C| repair()).no_cli())
@@ -58,7 +60,7 @@ pub fn disk<C: Context>() -> ParentHandler<C> {
             CallRemoteHandler::<CliContext, _, _>::new(
                 from_fn_async(|_: RpcContext| repair())
                     .no_display()
-                    .with_about("Repair disk in the event of corruption"),
+                    .with_about("about.repair-disk-corruption"),
             ),
         )
 }

View File

@@ -29,25 +29,31 @@ impl Default for FileType {
 pub struct Bind<Src: AsRef<Path>> {
     src: Src,
     filetype: FileType,
+    recursive: bool,
 }
 impl<Src: AsRef<Path>> Bind<Src> {
     pub fn new(src: Src) -> Self {
         Self {
             src,
             filetype: FileType::Directory,
+            recursive: false,
         }
     }
     pub fn with_type(mut self, filetype: FileType) -> Self {
         self.filetype = filetype;
         self
     }
+    pub fn recursive(mut self, recursive: bool) -> Self {
+        self.recursive = recursive;
+        self
+    }
 }
 impl<Src: AsRef<Path> + Send + Sync> FileSystem for Bind<Src> {
     async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
         Ok(Some(&self.src))
     }
     fn extra_args(&self) -> impl IntoIterator<Item = impl AsRef<std::ffi::OsStr>> {
-        ["--bind"]
+        [if self.recursive { "--rbind" } else { "--bind" }]
     }
     async fn pre_mount(&self, mountpoint: &Path, mount_type: MountType) -> Result<(), Error> {
         let from_meta = tokio::fs::metadata(&self.src).await.ok();

View File
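The `Bind` change above switches the flag passed to mount(8) based on the new `recursive` field: `--rbind` replicates submounts of the source tree, while `--bind` mounts only the top-level filesystem. A tiny sketch of that selection (the helper name here is illustrative, not a StartOS API):

```rust
// Choose the mount(8) bind flag: --rbind also carries over nested
// mounts under the source, --bind does not.
fn bind_flag(recursive: bool) -> &'static str {
    if recursive { "--rbind" } else { "--bind" }
}

fn main() {
    assert_eq!(bind_flag(false), "--bind");
    assert_eq!(bind_flag(true), "--rbind");
}
```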

@@ -4,6 +4,7 @@ use std::path::Path;
 use digest::generic_array::GenericArray;
 use digest::{Digest, OutputSizeUser};
+use itertools::Itertools;
 use sha2::Sha256;
 use crate::disk::mount::filesystem::{FileSystem, MountType, ReadWrite};
@@ -12,12 +13,13 @@ use crate::prelude::*;
 use crate::util::io::TmpDir;
 pub struct OverlayFs<P0: AsRef<Path>, P1: AsRef<Path>, P2: AsRef<Path>> {
-    lower: P0,
+    lower: Vec<P0>,
     upper: P1,
     work: P2,
 }
 impl<P0: AsRef<Path>, P1: AsRef<Path>, P2: AsRef<Path>> OverlayFs<P0, P1, P2> {
-    pub fn new(lower: P0, upper: P1, work: P2) -> Self {
+    /// layers are top to bottom
+    pub fn new(lower: Vec<P0>, upper: P1, work: P2) -> Self {
         Self { lower, upper, work }
     }
 }
@@ -32,8 +34,10 @@ impl<P0: AsRef<Path> + Send + Sync, P1: AsRef<Path> + Send + Sync, P2: AsRef<Pat
     }
     fn mount_options(&self) -> impl IntoIterator<Item = impl Display> {
         [
-            Box::new(lazy_format!("lowerdir={}", self.lower.as_ref().display()))
-                as Box<dyn Display>,
+            Box::new(lazy_format!(
+                "lowerdir={}",
+                self.lower.iter().map(|p| p.as_ref().display()).join(":")
+            )) as Box<dyn Display>,
             Box::new(lazy_format!("upperdir={}", self.upper.as_ref().display())),
             Box::new(lazy_format!("workdir={}", self.work.as_ref().display())),
         ]
@@ -51,18 +55,21 @@ impl<P0: AsRef<Path> + Send + Sync, P1: AsRef<Path> + Send + Sync, P2: AsRef<Pat
         tokio::fs::create_dir_all(self.work.as_ref()).await?;
         let mut sha = Sha256::new();
         sha.update("OverlayFs");
-        sha.update(
-            tokio::fs::canonicalize(self.lower.as_ref())
-                .await
-                .with_ctx(|_| {
-                    (
-                        crate::ErrorKind::Filesystem,
-                        self.lower.as_ref().display().to_string(),
-                    )
-                })?
-                .as_os_str()
-                .as_bytes(),
-        );
+        for lower in &self.lower {
+            sha.update(
+                tokio::fs::canonicalize(lower.as_ref())
+                    .await
+                    .with_ctx(|_| {
+                        (
+                            crate::ErrorKind::Filesystem,
+                            lower.as_ref().display().to_string(),
+                        )
+                    })?
+                    .as_os_str()
+                    .as_bytes(),
+            );
+            sha.update(b"\0");
+        }
         sha.update(
             tokio::fs::canonicalize(self.upper.as_ref())
                 .await
@@ -75,6 +82,7 @@ impl<P0: AsRef<Path> + Send + Sync, P1: AsRef<Path> + Send + Sync, P2: AsRef<Pat
                 .as_os_str()
                 .as_bytes(),
         );
+        sha.update(b"\0");
         sha.update(
             tokio::fs::canonicalize(self.work.as_ref())
                 .await
@@ -87,6 +95,7 @@ impl<P0: AsRef<Path> + Send + Sync, P1: AsRef<Path> + Send + Sync, P2: AsRef<Pat
                 .as_os_str()
                 .as_bytes(),
         );
+        sha.update(b"\0");
         Ok(sha.finalize())
     }
 }
@@ -98,11 +107,20 @@ pub struct OverlayGuard<G: GenericMountGuard> {
     inner_guard: MountGuard,
 }
 impl<G: GenericMountGuard> OverlayGuard<G> {
-    pub async fn mount(lower: G, mountpoint: impl AsRef<Path>) -> Result<Self, Error> {
+    pub async fn mount_layers<P: AsRef<Path>>(
+        pre: &[P],
+        guard: G,
+        post: &[P],
+        mountpoint: impl AsRef<Path>,
+    ) -> Result<Self, Error> {
         let upper = TmpDir::new().await?;
         let inner_guard = MountGuard::mount(
             &OverlayFs::new(
-                lower.path(),
+                std::iter::empty()
+                    .chain(pre.into_iter().map(|p| p.as_ref()))
+                    .chain([guard.path()])
+                    .chain(post.into_iter().map(|p| p.as_ref()))
+                    .collect(),
                 upper.as_ref().join("upper"),
                 upper.as_ref().join("work"),
             ),
@@ -111,11 +129,14 @@ impl<G: GenericMountGuard> OverlayGuard<G> {
         )
         .await?;
         Ok(Self {
-            lower: Some(lower),
+            lower: Some(guard),
             upper: Some(upper),
             inner_guard,
         })
     }
+    pub async fn mount(lower: G, mountpoint: impl AsRef<Path>) -> Result<Self, Error> {
+        Self::mount_layers::<&Path>(&[], lower, &[], mountpoint).await
+    }
     pub async fn unmount(mut self, delete_mountpoint: bool) -> Result<(), Error> {
         self.inner_guard.take().unmount(delete_mountpoint).await?;
         if let Some(lower) = self.lower.take() {

View File
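The `OverlayFs` change above turns `lower` into a `Vec` and joins the layers with `:` to build the `lowerdir=` mount option; overlayfs lists layers top to bottom, so the first entry shadows the ones after it. A minimal sketch of that string assembly (hypothetical standalone helper, not the StartOS code, which uses `lazy_format!` and itertools):

```rust
use std::path::Path;

// Build the overlayfs lowerdir= option: colon-separated layers,
// ordered top (highest priority) to bottom.
fn lowerdir_option(layers: &[&Path]) -> String {
    format!(
        "lowerdir={}",
        layers
            .iter()
            .map(|p| p.display().to_string())
            .collect::<Vec<_>>()
            .join(":")
    )
}

fn main() {
    let opt = lowerdir_option(&[Path::new("/tmp/pre"), Path::new("/tmp/base")]);
    // /tmp/pre shadows /tmp/base
    assert_eq!(opt, "lowerdir=/tmp/pre:/tmp/base");
    println!("{opt}");
}
```

This ordering is what lets `mount_layers` above splice `pre` layers in front of the guard's own path and `post` layers behind it.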

@@ -3,6 +3,7 @@ use std::path::Path;
 use tracing::instrument;
 use crate::Error;
+use crate::prelude::*;
 use crate::util::Invoke;
 pub async fn is_mountpoint(path: impl AsRef<Path>) -> Result<bool, Error> {
@@ -22,9 +23,12 @@ pub async fn bind<P0: AsRef<Path>, P1: AsRef<Path>>(
     read_only: bool,
 ) -> Result<(), Error> {
     tracing::info!(
-        "Binding {} to {}",
-        src.as_ref().display(),
-        dst.as_ref().display()
+        "{}",
+        t!(
+            "disk.mount.binding",
+            src = src.as_ref().display(),
+            dst = dst.as_ref().display()
+        )
     );
     if is_mountpoint(&dst).await? {
         unmount(dst.as_ref(), true).await?;
@@ -56,3 +60,42 @@ pub async fn unmount<P: AsRef<Path>>(mountpoint: P, lazy: bool) -> Result<(), Er
         .await?;
     Ok(())
 }
+
+/// Unmounts all mountpoints under (and including) the given path, in reverse
+/// depth order so that nested mounts are unmounted before their parents.
+#[instrument(skip_all)]
+pub async fn unmount_all_under<P: AsRef<Path>>(path: P, lazy: bool) -> Result<(), Error> {
+    let path = path.as_ref();
+    let canonical_path = tokio::fs::canonicalize(path)
+        .await
+        .with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("canonicalize {path:?}")))?;
+    let mounts_content = tokio::fs::read_to_string("/proc/mounts")
+        .await
+        .with_ctx(|_| (ErrorKind::Filesystem, "read /proc/mounts"))?;
+
+    // Collect all mountpoints under our path
+    let mut mountpoints: Vec<&str> = mounts_content
+        .lines()
+        .filter_map(|line| {
+            let mountpoint = line.split_whitespace().nth(1)?;
+            // Check if this mountpoint is under our target path
+            let mp_path = Path::new(mountpoint);
+            if mp_path.starts_with(&canonical_path) {
+                Some(mountpoint)
+            } else {
+                None
+            }
+        })
+        .collect();
+
+    // Sort by path length descending so we unmount deepest first
+    mountpoints.sort_by(|a, b| b.len().cmp(&a.len()));
+
+    for mountpoint in mountpoints {
+        tracing::debug!("Unmounting nested mountpoint: {}", mountpoint);
+        unmount(mountpoint, lazy).await?;
+    }
+    Ok(())
+}
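The new `unmount_all_under` above filters `/proc/mounts` to entries under the target path and sorts them longest-path-first so nested mounts are unmounted before their parents. The pure filter-and-sort part can be sketched on its own (hypothetical helper, without the async I/O and canonicalization of the real function):

```rust
use std::path::Path;

// Given /proc/mounts contents, return mountpoints under `root`,
// deepest first. Field 2 of each line is the mountpoint.
fn mountpoints_under<'a>(mounts: &'a str, root: &Path) -> Vec<&'a str> {
    let mut found: Vec<&str> = mounts
        .lines()
        .filter_map(|line| line.split_whitespace().nth(1))
        .filter(|mp| Path::new(mp).starts_with(root))
        .collect();
    // Longer path string implies deeper nesting under the same root.
    found.sort_by(|a, b| b.len().cmp(&a.len()));
    found
}

fn main() {
    let mounts = "\
dev1 /media/a ext4 rw 0 0
dev2 /media/a/nested ext4 rw 0 0
dev3 /other ext4 rw 0 0
";
    let order = mountpoints_under(mounts, Path::new("/media/a"));
    assert_eq!(order, vec!["/media/a/nested", "/media/a"]);
}
```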

View File

@@ -20,9 +20,9 @@ use super::mount::guard::TmpMountGuard;
 use crate::disk::OsPartitionInfo;
 use crate::disk::mount::guard::GenericMountGuard;
 use crate::hostname::Hostname;
+use crate::prelude::*;
 use crate::util::Invoke;
 use crate::util::serde::IoFormat;
-use crate::{Error, ResultExt as _};
 #[derive(Clone, Copy, Debug, Deserialize, Serialize)]
 #[serde(rename_all = "camelCase")]
@@ -40,7 +40,7 @@ pub struct DiskInfo {
     pub model: Option<String>,
     pub partitions: Vec<PartitionInfo>,
     pub capacity: u64,
-    pub guid: Option<String>,
+    pub guid: Option<InternedString>,
 }
 #[derive(Clone, Debug, Deserialize, Serialize)]
@@ -51,7 +51,7 @@ pub struct PartitionInfo {
     pub capacity: u64,
     pub used: Option<u64>,
     pub start_os: BTreeMap<String, StartOsRecoveryInfo>,
-    pub guid: Option<String>,
+    pub guid: Option<InternedString>,
 }
 #[derive(Clone, Debug, Default, Deserialize, Serialize)]
@@ -95,7 +95,7 @@ pub async fn get_vendor<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error
     Path::new(SYS_BLOCK_PATH)
         .join(path.as_ref().strip_prefix("/dev").map_err(|_| {
             Error::new(
-                eyre!("not a canonical block device"),
+                eyre!("{}", t!("disk.util.not-canonical-block-device")),
                 crate::ErrorKind::BlockDevice,
             )
         })?)
@@ -118,7 +118,7 @@ pub async fn get_model<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error>
     Path::new(SYS_BLOCK_PATH)
         .join(path.as_ref().strip_prefix("/dev").map_err(|_| {
             Error::new(
-                eyre!("not a canonical block device"),
+                eyre!("{}", t!("disk.util.not-canonical-block-device")),
                 crate::ErrorKind::BlockDevice,
             )
         })?)
@@ -215,7 +215,7 @@ pub async fn get_percentage<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
 }
 #[instrument(skip_all)]
-pub async fn pvscan() -> Result<BTreeMap<PathBuf, Option<String>>, Error> {
+pub async fn pvscan() -> Result<BTreeMap<PathBuf, Option<InternedString>>, Error> {
     let pvscan_out = Command::new("pvscan")
         .invoke(crate::ErrorKind::DiskManagement)
         .await?;
@@ -259,6 +259,31 @@ pub async fn recovery_info(
     Ok(res)
 }
+
+/// Returns the canonical path of the source device for a given mount point,
+/// or None if the mount point doesn't exist or isn't mounted.
+#[instrument(skip_all)]
+pub async fn get_mount_source(mountpoint: impl AsRef<Path>) -> Result<Option<PathBuf>, Error> {
+    let mounts_content = tokio::fs::read_to_string("/proc/mounts")
+        .await
+        .with_ctx(|_| (crate::ErrorKind::Filesystem, "/proc/mounts"))?;
+    let mountpoint = mountpoint.as_ref();
+    for line in mounts_content.lines() {
+        let mut parts = line.split_whitespace();
+        let source = parts.next();
+        let mount = parts.next();
+        if let (Some(source), Some(mount)) = (source, mount) {
+            if Path::new(mount) == mountpoint {
+                // Try to canonicalize the source path
+                if let Ok(canonical) = tokio::fs::canonicalize(source).await {
+                    return Ok(Some(canonical));
+                }
+            }
+        }
+    }
+    Ok(None)
+}
 #[instrument(skip_all)]
 pub async fn list(os: &OsPartitionInfo) -> Result<Vec<DiskInfo>, Error> {
     struct DiskIndex {
@@ -374,23 +399,53 @@ async fn disk_info(disk: PathBuf) -> DiskInfo {
         .await
         .map_err(|e| {
             tracing::warn!(
-                "Could not get partition table of {}: {}",
-                disk.display(),
-                e.source
+                "{}",
+                t!(
+                    "disk.util.could-not-get-partition-table",
+                    disk = disk.display(),
+                    error = e.source
+                )
             )
         })
         .unwrap_or_default();
     let vendor = get_vendor(&disk)
         .await
-        .map_err(|e| tracing::warn!("Could not get vendor of {}: {}", disk.display(), e.source))
+        .map_err(|e| {
+            tracing::warn!(
+                "{}",
+                t!(
+                    "disk.util.could-not-get-vendor",
+                    disk = disk.display(),
+                    error = e.source
+                )
+            )
+        })
        .unwrap_or_default();
     let model = get_model(&disk)
         .await
-        .map_err(|e| tracing::warn!("Could not get model of {}: {}", disk.display(), e.source))
+        .map_err(|e| {
+            tracing::warn!(
+                "{}",
+                t!(
+                    "disk.util.could-not-get-model",
+                    disk = disk.display(),
+                    error = e.source
+                )
+            )
+        })
         .unwrap_or_default();
     let capacity = get_capacity(&disk)
         .await
-        .map_err(|e| tracing::warn!("Could not get capacity of {}: {}", disk.display(), e.source))
+        .map_err(|e| {
+            tracing::warn!(
+                "{}",
+                t!(
+                    "disk.util.could-not-get-capacity",
+                    disk = disk.display(),
+                    error = e.source
+                )
+            )
+        })
         .unwrap_or_default();
     DiskInfo {
         logicalname: disk,
@@ -407,21 +462,49 @@ async fn part_info(part: PathBuf) -> PartitionInfo {
     let mut start_os = BTreeMap::new();
     let label = get_label(&part)
         .await
-        .map_err(|e| tracing::warn!("Could not get label of {}: {}", part.display(), e.source))
+        .map_err(|e| {
+            tracing::warn!(
+                "{}",
+                t!(
+                    "disk.util.could-not-get-label",
+                    part = part.display(),
+                    error = e.source
+                )
+            )
+        })
         .unwrap_or_default();
     let capacity = get_capacity(&part)
         .await
-        .map_err(|e| tracing::warn!("Could not get capacity of {}: {}", part.display(), e.source))
+        .map_err(|e| {
+            tracing::warn!(
+                "{}",
+                t!(
+                    "disk.util.could-not-get-capacity-part",
+                    part = part.display(),
+                    error = e.source
+                )
+            )
+        })
         .unwrap_or_default();
     let mut used = None;
     match TmpMountGuard::mount(&BlockDev::new(&part), ReadOnly).await {
-        Err(e) => tracing::warn!("Could not collect usage information: {}", e.source),
+        Err(e) => tracing::warn!(
+            "{}",
+            t!("disk.util.could-not-collect-usage-info", error = e.source)
+        ),
         Ok(mount_guard) => {
             used = get_used(mount_guard.path())
                 .await
                 .map_err(|e| {
-                    tracing::warn!("Could not get usage of {}: {}", part.display(), e.source)
+                    tracing::warn!(
+                        "{}",
+                        t!(
+                            "disk.util.could-not-get-usage",
+                            part = part.display(),
+                            error = e.source
+                        )
+                    )
                 })
                 .ok();
             match recovery_info(mount_guard.path()).await {
@@ -429,11 +512,21 @@ async fn part_info(part: PathBuf) -> PartitionInfo {
                     start_os = a;
                 }
                 Err(e) => {
-                    tracing::error!("Error fetching unencrypted backup metadata: {}", e);
+                    tracing::error!(
+                        "{}",
+                        t!("disk.util.error-fetching-backup-metadata", error = e)
+                    );
                 }
             }
             if let Err(e) = mount_guard.unmount().await {
-                tracing::error!("Error unmounting partition {}: {}", part.display(), e);
+                tracing::error!(
+                    "{}",
+                    t!(
+                        "disk.util.error-unmounting-partition",
+                        part = part.display(),
+                        error = e
+                    )
+                );
             }
         }
     }
@@ -448,7 +541,7 @@ async fn part_info(part: PathBuf) -> PartitionInfo {
     }
 }
-fn parse_pvscan_output(pvscan_output: &str) -> BTreeMap<PathBuf, Option<String>> {
+fn parse_pvscan_output(pvscan_output: &str) -> BTreeMap<PathBuf, Option<InternedString>> {
     fn parse_line(line: &str) -> IResult<&str, (&str, Option<&str>)> {
         let pv_parse = preceded(
             tag(" PV "),
@@ -471,10 +564,10 @@ fn parse_pvscan_output(pvscan_output: &str) -> BTreeMap<PathBuf, Option<String>>
     for entry in entries {
         match parse_line(entry) {
             Ok((_, (pv, vg))) => {
-                ret.insert(PathBuf::from(pv), vg.map(|s| s.to_owned()));
+                ret.insert(PathBuf::from(pv), vg.map(InternedString::intern));
             }
             Err(_) => {
-                tracing::warn!("Failed to parse pvscan output line: {}", entry);
+                tracing::warn!("{}", t!("disk.util.failed-to-parse-pvscan", line = entry));
             }
         }
     }
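The new `get_mount_source` above scans `/proc/mounts`, where the first whitespace-separated field of each line is the source device and the second is the mountpoint. The pure lookup logic can be sketched separately (hypothetical helper; the real function additionally canonicalizes the matched source with async I/O):

```rust
use std::path::Path;

// Find the source device for a mountpoint in /proc/mounts-formatted text.
fn mount_source<'a>(mounts: &'a str, mountpoint: &Path) -> Option<&'a str> {
    mounts.lines().find_map(|line| {
        let mut parts = line.split_whitespace();
        let source = parts.next()?; // field 1: source device
        let mount = parts.next()?; // field 2: mountpoint
        (Path::new(mount) == mountpoint).then_some(source)
    })
}

fn main() {
    let mounts = "/dev/sda1 /boot vfat rw 0 0\n/dev/mapper/vg0-main /media/data ext4 rw 0 0\n";
    assert_eq!(
        mount_source(mounts, Path::new("/media/data")),
        Some("/dev/mapper/vg0-main")
    );
    assert_eq!(mount_source(mounts, Path::new("/missing")), None);
}
```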

View File

@@ -4,17 +4,19 @@ use axum::http::StatusCode;
 use axum::http::uri::InvalidUri;
 use color_eyre::eyre::eyre;
 use num_enum::TryFromPrimitive;
-use patch_db::Revision;
+use patch_db::Value;
 use rpc_toolkit::reqwest;
 use rpc_toolkit::yajrc::{
     INVALID_PARAMS_ERROR, INVALID_REQUEST_ERROR, METHOD_NOT_FOUND_ERROR, PARSE_ERROR, RpcError,
 };
+use rust_i18n::t;
 use serde::{Deserialize, Serialize};
 use tokio::task::JoinHandle;
 use tokio_rustls::rustls;
 use ts_rs::TS;
 use crate::InvalidId;
+use crate::prelude::to_value;
 #[derive(Debug, Clone, Copy, PartialEq, Eq, TryFromPrimitive)]
 #[repr(i32)]
@@ -97,95 +99,98 @@ pub enum ErrorKind {
     InstallFailed = 76,
     UpdateFailed = 77,
     Smtp = 78,
+    SetSysInfo = 79,
 }
 impl ErrorKind {
-    pub fn as_str(&self) -> &'static str {
+    pub fn as_str(&self) -> String {
         use ErrorKind::*;
         match self {
-            Unknown => "Unknown Error",
+            Unknown => t!("error.unknown"),
-            Filesystem => "Filesystem I/O Error",
+            Filesystem => t!("error.filesystem"),
-            Docker => "Docker Error",
+            Docker => t!("error.docker"),
-            ConfigSpecViolation => "Config Spec Violation",
+            ConfigSpecViolation => t!("error.config-spec-violation"),
-            ConfigRulesViolation => "Config Rules Violation",
+            ConfigRulesViolation => t!("error.config-rules-violation"),
-            NotFound => "Not Found",
+            NotFound => t!("error.not-found"),
-            IncorrectPassword => "Incorrect Password",
+            IncorrectPassword => t!("error.incorrect-password"),
-            VersionIncompatible => "Version Incompatible",
+            VersionIncompatible => t!("error.version-incompatible"),
-            Network => "Network Error",
+            Network => t!("error.network"),
-            Registry => "Registry Error",
+            Registry => t!("error.registry"),
-            Serialization => "Serialization Error",
+            Serialization => t!("error.serialization"),
-            Deserialization => "Deserialization Error",
+            Deserialization => t!("error.deserialization"),
-            Utf8 => "UTF-8 Parse Error",
+            Utf8 => t!("error.utf8"),
-            ParseVersion => "Version Parsing Error",
+            ParseVersion => t!("error.parse-version"),
-            IncorrectDisk => "Incorrect Disk",
+            IncorrectDisk => t!("error.incorrect-disk"),
-            // Nginx => "Nginx Error",
+            // Nginx => t!("error.nginx"),
-            Dependency => "Dependency Error",
+            Dependency => t!("error.dependency"),
-            ParseS9pk => "S9PK Parsing Error",
+            ParseS9pk => t!("error.parse-s9pk"),
-            ParseUrl => "URL Parsing Error",
+            ParseUrl => t!("error.parse-url"),
-            DiskNotAvailable => "Disk Not Available",
+            DiskNotAvailable => t!("error.disk-not-available"),
-            BlockDevice => "Block Device Error",
+            BlockDevice => t!("error.block-device"),
-            InvalidOnionAddress => "Invalid Onion Address",
+            InvalidOnionAddress => t!("error.invalid-onion-address"),
-            Pack => "Pack Error",
+            Pack => t!("error.pack"),
-            ValidateS9pk => "S9PK Validation Error",
+            ValidateS9pk => t!("error.validate-s9pk"),
-            DiskCorrupted => "Disk Corrupted", // Remove
+            DiskCorrupted => t!("error.disk-corrupted"), // Remove
-            Tor => "Tor Daemon Error",
+            Tor => t!("error.tor"),
-            ConfigGen => "Config Generation Error",
+            ConfigGen => t!("error.config-gen"),
-            ParseNumber => "Number Parsing Error",
+            ParseNumber => t!("error.parse-number"),
-            Database => "Database Error",
+            Database => t!("error.database"),
-            InvalidId => "Invalid ID",
+            InvalidId => t!("error.invalid-id"),
-            InvalidSignature => "Invalid Signature",
+            InvalidSignature => t!("error.invalid-signature"),
-            Backup => "Backup Error",
+            Backup => t!("error.backup"),
-            Restore => "Restore Error",
+            Restore => t!("error.restore"),
-            Authorization => "Unauthorized",
+            Authorization => t!("error.authorization"),
-            AutoConfigure => "Auto-Configure Error",
+            AutoConfigure => t!("error.auto-configure"),
-            Action => "Action Failed",
+            Action => t!("error.action"),
-            RateLimited => "Rate Limited",
+            RateLimited => t!("error.rate-limited"),
-            InvalidRequest => "Invalid Request",
+            InvalidRequest => t!("error.invalid-request"),
-            MigrationFailed => "Migration Failed",
+            MigrationFailed => t!("error.migration-failed"),
-            Uninitialized => "Uninitialized",
+            Uninitialized => t!("error.uninitialized"),
-            ParseNetAddress => "Net Address Parsing Error",
+            ParseNetAddress => t!("error.parse-net-address"),
-            ParseSshKey => "SSH Key Parsing Error",
+            ParseSshKey => t!("error.parse-ssh-key"),
-            SoundError => "Sound Interface Error",
+            SoundError => t!("error.sound-error"),
-            ParseTimestamp => "Timestamp Parsing Error",
+            ParseTimestamp => t!("error.parse-timestamp"),
-            ParseSysInfo => "System Info Parsing Error",
+            ParseSysInfo => t!("error.parse-sys-info"),
-            Wifi => "WiFi Internal Error",
+            Wifi => t!("error.wifi"),
-            Journald => "Journald Error",
+            Journald => t!("error.journald"),
-            DiskManagement => "Disk Management Error",
+            DiskManagement => t!("error.disk-management"),
-            OpenSsl => "OpenSSL Internal Error",
+            OpenSsl => t!("error.openssl"),
-            PasswordHashGeneration => "Password Hash Generation Error",
+            PasswordHashGeneration => t!("error.password-hash-generation"),
-            DiagnosticMode => "Server is in Diagnostic Mode",
+            DiagnosticMode => t!("error.diagnostic-mode"),
-            ParseDbField => "Database Field Parse Error",
+            ParseDbField => t!("error.parse-db-field"),
-            Duplicate => "Duplication Error",
+            Duplicate => t!("error.duplicate"),
-            MultipleErrors => "Multiple Errors",
+            MultipleErrors => t!("error.multiple-errors"),
-            Incoherent => "Incoherent",
+            Incoherent => t!("error.incoherent"),
-            InvalidBackupTargetId => "Invalid Backup Target ID",
+            InvalidBackupTargetId => t!("error.invalid-backup-target-id"),
-            ProductKeyMismatch => "Incompatible Product Keys",
+            ProductKeyMismatch => t!("error.product-key-mismatch"),
-            LanPortConflict => "Incompatible LAN Port Configuration",
+            LanPortConflict => t!("error.lan-port-conflict"),
-            Javascript => "Javascript Engine Error",
+            Javascript => t!("error.javascript"),
-            Pem => "PEM Encoding Error",
+            Pem => t!("error.pem"),
-            TLSInit => "TLS Backend Initialization Error",
+            TLSInit => t!("error.tls-init"),
-            Ascii => "ASCII Parse Error",
+            Ascii => t!("error.ascii"),
-            MissingHeader => "Missing Header",
+            MissingHeader => t!("error.missing-header"),
-            Grub => "Grub Error",
+            Grub => t!("error.grub"),
-            Systemd => "Systemd Error",
+            Systemd => t!("error.systemd"),
-            OpenSsh => "OpenSSH Error",
+            OpenSsh => t!("error.openssh"),
-            Zram => "Zram Error",
+            Zram => t!("error.zram"),
-            Lshw => "LSHW Error",
+            Lshw => t!("error.lshw"),
-            CpuSettings => "CPU Settings Error",
+            CpuSettings => t!("error.cpu-settings"),
-            Firmware => "Firmware Error",
+            Firmware => t!("error.firmware"),
-            Timeout => "Timeout Error",
+            Timeout => t!("error.timeout"),
-            Lxc => "LXC Error",
+            Lxc => t!("error.lxc"),
-            Cancelled => "Cancelled",
+            Cancelled => t!("error.cancelled"),
-            Git => "Git Error",
+            Git => t!("error.git"),
-            DBus => "DBus Error",
+            DBus => t!("error.dbus"),
-            InstallFailed => "Install Failed",
+            InstallFailed => t!("error.install-failed"),
-            UpdateFailed => "Update Failed",
+            UpdateFailed => t!("error.update-failed"),
-            Smtp => "SMTP Error",
+            Smtp => t!("error.smtp"),
+            SetSysInfo => t!("error.set-sys-info"),
         }
+        .to_string()
     }
 }
 impl Display for ErrorKind {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}", self.as_str())
+        write!(f, "{}", &self.as_str())
     }
 }
@@ -193,13 +198,13 @@ pub struct Error {
     pub source: color_eyre::eyre::Error,
     pub debug: Option<color_eyre::eyre::Error>,
     pub kind: ErrorKind,
-    pub revision: Option<Revision>,
+    pub info: Value,
     pub task: Option<JoinHandle<()>>,
 }
 impl Display for Error {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}: {:#}", self.kind.as_str(), self.source)
+        write!(f, "{}: {:#}", &self.kind.as_str(), self.source)
     }
 }
 impl Debug for Error {
@@ -207,7 +212,7 @@ impl Debug for Error {
         write!(
             f,
             "{}: {:?}",
-            self.kind.as_str(),
+            &self.kind.as_str(),
             self.debug.as_ref().unwrap_or(&self.source)
         )
     }
@@ -224,7 +229,7 @@ impl Error {
             source: source.into(),
             debug,
             kind,
-            revision: None,
+            info: Value::Null,
             task: None,
         }
     }
@@ -233,7 +238,7 @@ impl Error {
             source: eyre!("{}", self.source),
             debug: self.debug.as_ref().map(|e| eyre!("{e}")),
             kind: self.kind,
-            revision: self.revision.clone(),
+            info: self.info.clone(),
             task: None,
         }
     }
@@ -241,6 +246,10 @@ impl Error {
         self.task = Some(task);
         self
     }
+    pub fn with_info(mut self, info: Value) -> Self {
+        self.info = info;
+        self
+    }
     pub async fn wait(mut self) -> Self {
if let Some(task) = &mut self.task { if let Some(task) = &mut self.task {
task.await.log_err(); task.await.log_err();
@@ -419,6 +428,8 @@ impl From<patch_db::value::Error> for Error {
pub struct ErrorData { pub struct ErrorData {
pub details: String, pub details: String,
pub debug: String, pub debug: String,
#[serde(default)]
pub info: Value,
} }
impl Display for ErrorData { impl Display for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
@@ -436,6 +447,7 @@ impl From<Error> for ErrorData {
Self { Self {
details: value.to_string(), details: value.to_string(),
debug: format!("{:?}", value), debug: format!("{:?}", value),
info: value.info,
} }
} }
} }
@@ -466,47 +478,40 @@ impl From<&RpcError> for ErrorData {
.or_else(|| d.as_str().map(|s| s.to_owned())) .or_else(|| d.as_str().map(|s| s.to_owned()))
}) })
.unwrap_or_else(|| value.message.clone().into_owned()), .unwrap_or_else(|| value.message.clone().into_owned()),
info: to_value(
&value
.data
.as_ref()
.and_then(|d| d.as_object().and_then(|d| d.get("info"))),
)
.unwrap_or_default(),
} }
} }
} }
impl From<Error> for RpcError { impl From<Error> for RpcError {
fn from(e: Error) -> Self { fn from(e: Error) -> Self {
let mut data_object = serde_json::Map::with_capacity(3); let kind = e.kind;
data_object.insert("details".to_owned(), format!("{}", e.source).into()); let data = ErrorData::from(e);
data_object.insert("debug".to_owned(), format!("{:?}", e.source).into()); RpcError {
data_object.insert( code: kind as i32,
"revision".to_owned(), message: kind.as_str().into(),
match serde_json::to_value(&e.revision) { data: Some(match serde_json::to_value(&data) {
Ok(a) => a, Ok(a) => a,
Err(e) => { Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e); tracing::warn!("Error serializing ErrorData object: {}", e);
serde_json::Value::Null serde_json::Value::Null
} }
}, }),
);
RpcError {
code: e.kind as i32,
message: e.kind.as_str().into(),
data: Some(
match serde_json::to_value(&ErrorData {
details: format!("{}", e.source),
debug: format!("{:?}", e.source),
}) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e);
serde_json::Value::Null
}
},
),
} }
} }
} }
impl From<RpcError> for Error { impl From<RpcError> for Error {
fn from(e: RpcError) -> Self { fn from(e: RpcError) -> Self {
let data = ErrorData::from(&e);
let info = data.info.clone();
Error::new( Error::new(
ErrorData::from(&e), data,
if let Ok(kind) = e.code.try_into() { if let Ok(kind) = e.code.try_into() {
kind kind
} else if e.code == METHOD_NOT_FOUND_ERROR.code { } else if e.code == METHOD_NOT_FOUND_ERROR.code {
@@ -520,6 +525,7 @@ impl From<RpcError> for Error {
ErrorKind::Unknown ErrorKind::Unknown
}, },
) )
.with_info(info)
} }
} }
@@ -602,7 +608,7 @@ where
kind, kind,
source, source,
debug, debug,
revision: None, info: Value::Null,
task: None, task: None,
} }
}) })
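The error-module hunks above replace the old `revision: Option<Revision>` field with a general-purpose `info: Value` that defaults to `Value::Null` and is attached via a builder method. A minimal stdlib-only sketch of that pattern (assumption: `Info` is a hypothetical stand-in for `serde_json::Value`, and this `Error` is not the real StartOS type):

```rust
// Hypothetical stand-in for serde_json::Value, to keep the sketch stdlib-only.
#[derive(Debug, Clone, PartialEq)]
enum Info {
    Null,
    Text(String),
}

#[derive(Debug)]
struct Error {
    kind: &'static str,
    info: Info,
}

impl Error {
    fn new(kind: &'static str) -> Self {
        // Mirrors the diff: newly constructed errors start with `info: Value::Null`.
        Error { kind, info: Info::Null }
    }

    // Builder-style attachment, as in the `with_info` method added above.
    fn with_info(mut self, info: Info) -> Self {
        self.info = info;
        self
    }
}

fn main() {
    let e = Error::new("NotFound").with_info(Info::Text("extra context".into()));
    assert_eq!(e.info, Info::Text("extra context".into()));
    println!("{}: {:?}", e.kind, e.info);
}
```

The same shape explains the `From<RpcError>` change: the builder lets the conversion construct the error first and re-attach the `info` extracted from the RPC payload afterward.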


@@ -4,7 +4,6 @@ use std::sync::Arc;
use std::time::{Duration, SystemTime}; use std::time::{Duration, SystemTime};
use axum::extract::ws; use axum::extract::ws;
use const_format::formatcp;
use futures::{StreamExt, TryStreamExt}; use futures::{StreamExt, TryStreamExt};
use itertools::Itertools; use itertools::Itertools;
use rpc_toolkit::{Context, Empty, HandlerArgs, HandlerExt, ParentHandler, from_fn_async}; use rpc_toolkit::{Context, Empty, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
@@ -82,26 +81,28 @@ impl InitPhases {
pub fn new(handle: &FullProgressTracker) -> Self { pub fn new(handle: &FullProgressTracker) -> Self {
Self { Self {
preinit: if Path::new("/media/startos/config/preinit.sh").exists() { preinit: if Path::new("/media/startos/config/preinit.sh").exists() {
Some(handle.add_phase("Running preinit.sh".into(), Some(5))) Some(handle.add_phase(t!("init.running-preinit").into(), Some(5)))
} else { } else {
None None
}, },
local_auth: handle.add_phase("Enabling local authentication".into(), Some(1)), local_auth: handle.add_phase(t!("init.enabling-local-auth").into(), Some(1)),
load_database: handle.add_phase("Loading database".into(), Some(5)), load_database: handle.add_phase(t!("init.loading-database").into(), Some(5)),
load_ssh_keys: handle.add_phase("Loading SSH Keys".into(), Some(1)), load_ssh_keys: handle.add_phase(t!("init.loading-ssh-keys").into(), Some(1)),
start_net: handle.add_phase("Starting network controller".into(), Some(1)), start_net: handle.add_phase(t!("init.starting-network-controller").into(), Some(1)),
mount_logs: handle.add_phase("Switching logs to write to data drive".into(), Some(1)), mount_logs: handle.add_phase(t!("init.switching-logs-to-data-drive").into(), Some(1)),
load_ca_cert: handle.add_phase("Loading CA certificate".into(), Some(1)), load_ca_cert: handle.add_phase(t!("init.loading-ca-certificate").into(), Some(1)),
load_wifi: handle.add_phase("Loading WiFi configuration".into(), Some(1)), load_wifi: handle.add_phase(t!("init.loading-wifi-configuration").into(), Some(1)),
init_tmp: handle.add_phase("Initializing temporary files".into(), Some(1)), init_tmp: handle.add_phase(t!("init.initializing-temporary-files").into(), Some(1)),
set_governor: handle.add_phase("Setting CPU performance profile".into(), Some(1)), set_governor: handle
sync_clock: handle.add_phase("Synchronizing system clock".into(), Some(10)), .add_phase(t!("init.setting-cpu-performance-profile").into(), Some(1)),
enable_zram: handle.add_phase("Enabling ZRAM".into(), Some(1)), sync_clock: handle.add_phase(t!("init.synchronizing-system-clock").into(), Some(10)),
update_server_info: handle.add_phase("Updating server info".into(), Some(1)), enable_zram: handle.add_phase(t!("init.enabling-zram").into(), Some(1)),
launch_service_network: handle.add_phase("Launching service intranet".into(), Some(1)), update_server_info: handle.add_phase(t!("init.updating-server-info").into(), Some(1)),
validate_db: handle.add_phase("Validating database".into(), Some(1)), launch_service_network: handle
.add_phase(t!("init.launching-service-intranet").into(), Some(1)),
validate_db: handle.add_phase(t!("init.validating-database").into(), Some(1)),
postinit: if Path::new("/media/startos/config/postinit.sh").exists() { postinit: if Path::new("/media/startos/config/postinit.sh").exists() {
Some(handle.add_phase("Running postinit.sh".into(), Some(5))) Some(handle.add_phase(t!("init.running-postinit").into(), Some(5)))
} else { } else {
None None
}, },
@@ -128,7 +129,14 @@ pub async fn run_script<P: AsRef<Path>>(path: P, mut progress: PhaseProgressTrac
} }
.await .await
{ {
tracing::error!("Error Running {}: {}", script.display(), e); tracing::error!(
"{}",
t!(
"init.error-running-script",
script = script.display(),
error = e
)
);
tracing::debug!("{:?}", e); tracing::debug!("{:?}", e);
} }
progress.complete(); progress.complete();
@@ -231,6 +239,7 @@ pub async fn init(
.arg("-R") .arg("-R")
.arg("+C") .arg("+C")
.arg("/var/log/journal") .arg("/var/log/journal")
.env("LANG", "C.UTF-8")
.invoke(ErrorKind::Filesystem) .invoke(ErrorKind::Filesystem)
.await .await
{ {
@@ -315,14 +324,17 @@ pub async fn init(
{ {
Some(governor) Some(governor)
} else { } else {
tracing::warn!("CPU Governor \"{governor}\" Not Available"); tracing::warn!(
"{}",
t!("init.cpu-governor-not-available", governor = governor)
);
None None
} }
} else { } else {
cpupower::get_preferred_governor().await? cpupower::get_preferred_governor().await?
}; };
if let Some(governor) = governor { if let Some(governor) = governor {
tracing::info!("Setting CPU Governor to \"{governor}\""); tracing::info!("{}", t!("init.setting-cpu-governor", governor = governor));
cpupower::set_governor(governor).await?; cpupower::set_governor(governor).await?;
} }
set_governor.complete(); set_governor.complete();
@@ -350,14 +362,14 @@ pub async fn init(
} }
} }
if !ntp_synced { if !ntp_synced {
tracing::warn!("Timed out waiting for system time to synchronize"); tracing::warn!("{}", t!("init.clock-sync-timeout"));
} }
sync_clock.complete(); sync_clock.complete();
enable_zram.start(); enable_zram.start();
if server_info.as_zram().de()? { if server_info.as_zram().de()? {
crate::system::enable_zram().await?; crate::system::enable_zram().await?;
tracing::info!("Enabled ZRAM"); tracing::info!("{}", t!("init.enabled-zram"));
} }
enable_zram.complete(); enable_zram.complete();
@@ -405,7 +417,7 @@ pub async fn init(
run_script("/media/startos/config/postinit.sh", progress).await; run_script("/media/startos/config/postinit.sh", progress).await;
} }
tracing::info!("System initialized."); tracing::info!("{}", t!("init.system-initialized"));
Ok(InitResult { Ok(InitResult {
net_ctrl, net_ctrl,
@@ -417,30 +429,30 @@ pub fn init_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new() ParentHandler::new()
.subcommand( .subcommand(
"logs", "logs",
crate::system::logs::<InitContext>().with_about("Disply OS logs"), crate::system::logs::<InitContext>().with_about("about.display-os-logs"),
) )
.subcommand( .subcommand(
"logs", "logs",
from_fn_async(crate::logs::cli_logs::<InitContext, Empty>) from_fn_async(crate::logs::cli_logs::<InitContext, Empty>)
.no_display() .no_display()
.with_about("Display OS logs"), .with_about("about.display-os-logs"),
) )
.subcommand( .subcommand(
"kernel-logs", "kernel-logs",
crate::system::kernel_logs::<InitContext>().with_about("Display kernel logs"), crate::system::kernel_logs::<InitContext>().with_about("about.display-kernel-logs"),
) )
.subcommand( .subcommand(
"kernel-logs", "kernel-logs",
from_fn_async(crate::logs::cli_logs::<InitContext, Empty>) from_fn_async(crate::logs::cli_logs::<InitContext, Empty>)
.no_display() .no_display()
.with_about("Display kernel logs"), .with_about("about.display-kernel-logs"),
) )
.subcommand("subscribe", from_fn_async(init_progress).no_cli()) .subcommand("subscribe", from_fn_async(init_progress).no_cli())
.subcommand( .subcommand(
"subscribe", "subscribe",
from_fn_async(cli_init_progress) from_fn_async(cli_init_progress)
.no_display() .no_display()
.with_about("Get initialization progress"), .with_about("about.get-initialization-progress"),
) )
} }
@@ -496,7 +508,7 @@ pub async fn init_progress(ctx: InitContext) -> Result<InitProgressRes, Error> {
); );
if let Err(e) = ws.close_result(res.map(|_| "complete")).await { if let Err(e) = ws.close_result(res.map(|_| "complete")).await {
tracing::error!("error closing init progress websocket: {e}"); tracing::error!("{}", t!("init.error-closing-websocket", error = e));
tracing::debug!("{e:?}"); tracing::debug!("{e:?}");
} }
}, },
@@ -527,7 +539,7 @@ pub async fn cli_init_progress(
.await?, .await?,
)?; )?;
let mut ws = ctx.ws_continuation(res.guid).await?; let mut ws = ctx.ws_continuation(res.guid).await?;
let mut bar = PhasedProgressBar::new("Initializing..."); let mut bar = PhasedProgressBar::new(&t!("init.initializing"));
while let Some(msg) = ws.try_next().await.with_kind(ErrorKind::Network)? { while let Some(msg) = ws.try_next().await.with_kind(ErrorKind::Network)? {
if let tokio_tungstenite::tungstenite::Message::Text(msg) = msg { if let tokio_tungstenite::tungstenite::Message::Text(msg) = msg {
bar.update(&serde_json::from_str(&msg).with_kind(ErrorKind::Deserialization)?); bar.update(&serde_json::from_str(&msg).with_kind(ErrorKind::Deserialization)?);
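The `t!("init.…")` calls introduced throughout this file resolve dotted keys such as `init.loading-database` against per-locale tables, falling back to `en_US` as declared by `rust_i18n::i18n!("locales", fallback = ["en_US"])`. A stdlib-only sketch of that lookup behavior (assumption: the real resolution is done by the `rust-i18n` crate; `lookup` and the `HashMap` tables here are illustrative stand-ins):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for rust-i18n's locale tables: locale -> key -> string.
fn lookup<'a>(
    tables: &'a HashMap<&str, HashMap<&str, &'a str>>,
    locale: &str,
    key: &str,
) -> Option<&'a str> {
    tables
        .get(locale)
        .and_then(|t| t.get(key))
        // Fall back to en_US, mirroring `fallback = ["en_US"]`.
        .or_else(|| tables.get("en_US").and_then(|t| t.get(key)))
        .copied()
}

fn main() {
    let mut en = HashMap::new();
    en.insert("init.loading-database", "Loading database");
    let mut tables = HashMap::new();
    tables.insert("en_US", en);
    // An unconfigured locale falls back to the en_US string.
    assert_eq!(
        lookup(&tables, "de_DE", "init.loading-database"),
        Some("Loading database")
    );
}
```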


@@ -131,9 +131,13 @@ pub async fn install(
let package: GetPackageResponse = from_value( let package: GetPackageResponse = from_value(
ctx.call_remote_with::<RegistryContext, _>( ctx.call_remote_with::<RegistryContext, _>(
"package.get", "package.get",
[("get_device_info", Value::Bool(true))]
.into_iter()
.collect(),
json!({ json!({
"id": id, "id": id,
"targetVersion": VersionRange::exactly(version.deref().clone()), "targetVersion": VersionRange::exactly(version.deref().clone()),
"otherVersions": "none",
}), }),
RegistryUrlParams { RegistryUrlParams {
registry: registry.clone(), registry: registry.clone(),
@@ -142,16 +146,16 @@ pub async fn install(
.await?, .await?,
)?; )?;
let asset = &package let (_, asset) = package
.best .best
.get(&version) .get(&version)
.and_then(|i| i.s9pks.first())
.ok_or_else(|| { .ok_or_else(|| {
Error::new( Error::new(
eyre!("{id}@{version} not found on {registry}"), eyre!("{id}@{version} not found on {registry}"),
ErrorKind::NotFound, ErrorKind::NotFound,
) )
})? })?;
.s9pk;
asset.validate(SIG_CONTEXT, asset.all_signers())?; asset.validate(SIG_CONTEXT, asset.all_signers())?;
@@ -283,6 +287,7 @@ pub async fn sideload(
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct CancelInstallParams { pub struct CancelInstallParams {
#[arg(help = "help.arg.package-id")]
pub id: PackageId, pub id: PackageId,
} }
@@ -299,7 +304,9 @@ pub fn cancel_install(
#[derive(Deserialize, Serialize, Parser)] #[derive(Deserialize, Serialize, Parser)]
pub struct QueryPackageParams { pub struct QueryPackageParams {
#[arg(help = "help.arg.package-id")]
id: PackageId, id: PackageId,
#[arg(help = "help.arg.version-range")]
version: Option<VersionRange>, version: Option<VersionRange>,
} }
@@ -357,6 +364,7 @@ impl FromArgMatches for CliInstallParams {
#[derive(Deserialize, Serialize, Parser, TS)] #[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)] #[ts(export)]
pub struct InstalledVersionParams { pub struct InstalledVersionParams {
#[arg(help = "help.arg.package-id")]
id: PackageId, id: PackageId,
} }
@@ -477,7 +485,7 @@ pub async fn cli_install(
let mut packages: GetPackageResponse = from_value( let mut packages: GetPackageResponse = from_value(
ctx.call_remote::<RegistryContext>( ctx.call_remote::<RegistryContext>(
"package.get", "package.get",
json!({ "id": &id, "targetVersion": version, "sourceVersion": source_version }), json!({ "id": &id, "targetVersion": version, "sourceVersion": source_version, "otherVersions": "none" }),
) )
.await?, .await?,
)?; )?;
@@ -516,11 +524,12 @@ pub async fn cli_install(
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct UninstallParams { pub struct UninstallParams {
#[arg(help = "help.arg.package-id")]
id: PackageId, id: PackageId,
#[arg(long, help = "Do not delete the service data")] #[arg(long, help = "help.arg.soft-uninstall")]
#[serde(default)] #[serde(default)]
soft: bool, soft: bool,
#[arg(long, help = "Ignore errors in service uninit script")] #[arg(long, help = "help.arg.force-uninstall")]
#[serde(default)] #[serde(default)]
force: bool, force: bool,
} }


@@ -1,5 +1,7 @@
use const_format::formatcp; use const_format::formatcp;
rust_i18n::i18n!("locales", fallback = ["en_US"]);
pub const DATA_DIR: &str = "/media/startos/data"; pub const DATA_DIR: &str = "/media/startos/data";
pub const MAIN_DATA: &str = formatcp!("{DATA_DIR}/main"); pub const MAIN_DATA: &str = formatcp!("{DATA_DIR}/main");
pub const PACKAGE_DATA: &str = formatcp!("{DATA_DIR}/package-data"); pub const PACKAGE_DATA: &str = formatcp!("{DATA_DIR}/package-data");
@@ -8,7 +10,7 @@ pub use std::env::consts::ARCH;
lazy_static::lazy_static! { lazy_static::lazy_static! {
pub static ref PLATFORM: String = { pub static ref PLATFORM: String = {
if let Ok(platform) = std::fs::read_to_string("/usr/lib/startos/PLATFORM.txt") { if let Ok(platform) = std::fs::read_to_string("/usr/lib/startos/PLATFORM.txt") {
platform platform.trim().to_string()
} else { } else {
ARCH.to_string() ARCH.to_string()
} }
@@ -18,6 +20,17 @@ lazy_static::lazy_static! {
}; };
} }
/// Map a platform string to its architecture
pub fn platform_to_arch(platform: &str) -> &str {
if let Some(arch) = platform.strip_suffix("-nonfree") {
return arch;
}
match platform {
"raspberrypi" | "rockchip64" => "aarch64",
_ => platform,
}
}
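The new `platform_to_arch` helper can be exercised standalone: the `-nonfree` suffix check runs before the `match`, so `x86_64-nonfree` maps to `x86_64`, while ARM board platforms map to `aarch64`. The sketch below also shows why the `trim()` fix to `PLATFORM` matters, assuming `PLATFORM.txt` ends with a trailing newline as text files typically do:

```rust
/// Map a platform string to its architecture (as added in the diff above).
fn platform_to_arch(platform: &str) -> &str {
    if let Some(arch) = platform.strip_suffix("-nonfree") {
        return arch;
    }
    match platform {
        "raspberrypi" | "rockchip64" => "aarch64",
        _ => platform,
    }
}

fn main() {
    assert_eq!(platform_to_arch("x86_64-nonfree"), "x86_64");
    assert_eq!(platform_to_arch("raspberrypi"), "aarch64");
    assert_eq!(platform_to_arch("x86_64"), "x86_64");
    // Without trim(), a trailing newline from PLATFORM.txt would defeat
    // comparisons like `&*PLATFORM != "raspberrypi"`.
    assert_ne!("raspberrypi\n", "raspberrypi");
    assert_eq!("raspberrypi\n".trim(), "raspberrypi");
}
```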
mod cap { mod cap {
#![allow(non_upper_case_globals)] #![allow(non_upper_case_globals)]
@@ -97,6 +110,7 @@ use crate::util::serde::{HandlerExtSerde, WithIoFormat, display_serializable};
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
#[ts(export)] #[ts(export)]
pub struct EchoParams { pub struct EchoParams {
#[arg(help = "help.arg.echo-message")]
message: String, message: String,
} }
@@ -122,80 +136,63 @@ pub fn main_api<C: Context>() -> ParentHandler<C> {
let mut api = ParentHandler::new() let mut api = ParentHandler::new()
.subcommand( .subcommand(
"git-info", "git-info",
from_fn(|_: C| version::git_info()).with_about("Display the githash of StartOS CLI"), from_fn(|_: C| version::git_info()).with_about("about.display-githash"),
) )
.subcommand( .subcommand(
"echo", "echo",
from_fn(echo::<RpcContext>) from_fn(echo::<RpcContext>)
.with_metadata("authenticated", Value::Bool(false)) .with_metadata("authenticated", Value::Bool(false))
.with_about("Echo a message") .with_about("about.echo-message")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"state", "state",
from_fn(|_: RpcContext| Ok::<_, Error>(ApiState::Running)) from_fn(|_: RpcContext| Ok::<_, Error>(ApiState::Running))
.with_metadata("authenticated", Value::Bool(false)) .with_metadata("authenticated", Value::Bool(false))
.with_about("Display the API that is currently serving") .with_about("about.display-current-api")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"state", "state",
from_fn(|_: InitContext| Ok::<_, Error>(ApiState::Initializing)) from_fn(|_: InitContext| Ok::<_, Error>(ApiState::Initializing))
.with_metadata("authenticated", Value::Bool(false)) .with_metadata("authenticated", Value::Bool(false))
.with_about("Display the API that is currently serving") .with_about("about.display-current-api")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"state", "state",
from_fn(|_: DiagnosticContext| Ok::<_, Error>(ApiState::Error)) from_fn(|_: DiagnosticContext| Ok::<_, Error>(ApiState::Error))
.with_metadata("authenticated", Value::Bool(false)) .with_metadata("authenticated", Value::Bool(false))
.with_about("Display the API that is currently serving") .with_about("about.display-current-api")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand("server", server::<C>().with_about("about.commands-server"))
"server",
server::<C>()
.with_about("Commands related to the server i.e. restart, update, and shutdown"),
)
.subcommand( .subcommand(
"package", "package",
package::<C>().with_about("Commands related to packages"), package::<C>().with_about("about.commands-packages"),
) )
.subcommand( .subcommand(
"net", "net",
net::net_api::<C>().with_about("Network commands related to tor and dhcp"), net::net_api::<C>().with_about("about.network-commands"),
) )
.subcommand( .subcommand(
"auth", "auth",
auth::auth::<C, RpcContext>() auth::auth::<C, RpcContext>().with_about("about.commands-authentication"),
.with_about("Commands related to Authentication i.e. login, logout"),
)
.subcommand(
"db",
db::db::<C>().with_about("Commands to interact with the db i.e. dump, put, apply"),
)
.subcommand(
"ssh",
ssh::ssh::<C>()
.with_about("Commands for interacting with ssh keys i.e. add, delete, list"),
) )
.subcommand("db", db::db::<C>().with_about("about.commands-db"))
.subcommand("ssh", ssh::ssh::<C>().with_about("about.commands-ssh-keys"))
.subcommand( .subcommand(
"wifi", "wifi",
net::wifi::wifi::<C>() net::wifi::wifi::<C>().with_about("about.commands-wifi"),
.with_about("Commands related to wifi networks i.e. add, connect, delete"),
)
.subcommand(
"disk",
disk::disk::<C>().with_about("Commands for listing disk info and repairing"),
) )
.subcommand("disk", disk::disk::<C>().with_about("about.commands-disk"))
.subcommand( .subcommand(
"notification", "notification",
notifications::notification::<C>().with_about("Create, delete, or list notifications"), notifications::notification::<C>().with_about("about.commands-notifications"),
) )
.subcommand( .subcommand(
"backup", "backup",
backup::backup::<C>() backup::backup::<C>().with_about("about.commands-backup"),
.with_about("Commands related to backup creation and backup targets"),
) )
.subcommand( .subcommand(
"registry", "registry",
@@ -206,7 +203,7 @@ pub fn main_api<C: Context>() -> ParentHandler<C> {
) )
.subcommand( .subcommand(
"registry", "registry",
registry::registry_api::<CliContext>().with_about("Commands related to the registry"), registry::registry_api::<CliContext>().with_about("about.commands-registry"),
) )
.subcommand( .subcommand(
"tunnel", "tunnel",
@@ -215,41 +212,46 @@ pub fn main_api<C: Context>() -> ParentHandler<C> {
) )
.subcommand( .subcommand(
"tunnel", "tunnel",
tunnel::api::tunnel_api::<CliContext>().with_about("Commands related to StartTunnel"), tunnel::api::tunnel_api::<CliContext>().with_about("about.commands-tunnel"),
)
.subcommand(
"s9pk",
s9pk::rpc::s9pk().with_about("Commands for interacting with s9pk files"),
) )
.subcommand("s9pk", s9pk::rpc::s9pk().with_about("about.commands-s9pk"))
.subcommand( .subcommand(
"util", "util",
util::rpc::util::<C>().with_about("Command for calculating the blake3 hash of a file"), util::rpc::util::<C>().with_about("about.command-blake3-hash"),
) )
.subcommand( .subcommand(
"init-key", "init-key",
from_fn_async(developer::init) from_fn_async(developer::init)
.no_display() .no_display()
.with_about("Create developer key if it doesn't exist"), .with_about("about.create-developer-key"),
) )
.subcommand( .subcommand(
"pubkey", "pubkey",
from_fn_blocking(developer::pubkey) from_fn_blocking(developer::pubkey).with_about("about.get-developer-pubkey"),
.with_about("Get public key for developer private key"),
) )
.subcommand( .subcommand(
"diagnostic", "diagnostic",
diagnostic::diagnostic::<C>() diagnostic::diagnostic::<C>().with_about("about.commands-diagnostic"),
.with_about("Commands to display logs, restart the server, etc"),
) )
.subcommand("init", init::init_api::<C>())
.subcommand("setup", setup::setup::<C>())
.subcommand( .subcommand(
"install", "init",
os_install::install::<C>() init::init_api::<C>().with_about("about.commands-init"),
.with_about("Commands to list disk info, install StartOS, and reboot"), )
.subcommand(
"setup",
setup::setup::<C>().with_about("about.commands-setup"),
); );
if &*PLATFORM != "raspberrypi" { if &*PLATFORM != "raspberrypi" {
api = api.subcommand("kiosk", kiosk::<C>()); api = api.subcommand("kiosk", kiosk::<C>().with_about("about.commands-kiosk"));
}
#[cfg(target_os = "linux")]
{
api = api.subcommand(
"flash-os",
from_fn_async(os_install::cli_install_os)
.no_display()
.with_about("about.flash-startos"),
);
} }
api api
} }
@@ -263,29 +265,32 @@ pub fn server<C: Context>() -> ParentHandler<C> {
.with_custom_display_fn(|handle, result| { .with_custom_display_fn(|handle, result| {
system::display_time(handle.params, result) system::display_time(handle.params, result)
}) })
.with_about("Display current time and server uptime") .with_about("about.display-time-uptime")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"experimental", "experimental",
system::experimental::<C>() system::experimental::<C>().with_about("about.commands-experimental"),
.with_about("Commands related to configuring experimental options such as zram and cpu governor"),
) )
.subcommand( .subcommand(
"logs", "logs",
system::logs::<RpcContext>().with_about("Display OS logs"), system::logs::<RpcContext>().with_about("about.display-os-logs"),
) )
.subcommand( .subcommand(
"logs", "logs",
from_fn_async(logs::cli_logs::<RpcContext, Empty>).no_display().with_about("Display OS logs"), from_fn_async(logs::cli_logs::<RpcContext, Empty>)
.no_display()
.with_about("about.display-os-logs"),
) )
.subcommand( .subcommand(
"kernel-logs", "kernel-logs",
system::kernel_logs::<RpcContext>().with_about("Display Kernel logs"), system::kernel_logs::<RpcContext>().with_about("about.display-kernel-logs"),
) )
.subcommand( .subcommand(
"kernel-logs", "kernel-logs",
from_fn_async(logs::cli_logs::<RpcContext, Empty>).no_display().with_about("Display Kernel logs"), from_fn_async(logs::cli_logs::<RpcContext, Empty>)
.no_display()
.with_about("about.display-kernel-logs"),
) )
.subcommand( .subcommand(
"metrics", "metrics",
@@ -293,35 +298,31 @@ pub fn server<C: Context>() -> ParentHandler<C> {
.root_handler( .root_handler(
from_fn_async(system::metrics) from_fn_async(system::metrics)
.with_display_serializable() .with_display_serializable()
.with_about("Display information about the server i.e. temperature, RAM, CPU, and disk usage") .with_about("about.display-server-metrics")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
)
.subcommand(
"follow",
from_fn_async(system::metrics_follow)
.no_cli()
) )
.subcommand("follow", from_fn_async(system::metrics_follow).no_cli()),
) )
.subcommand( .subcommand(
"shutdown", "shutdown",
from_fn_async(shutdown::shutdown) from_fn_async(shutdown::shutdown)
.no_display() .no_display()
.with_about("Shutdown the server") .with_about("about.shutdown-server")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"restart", "restart",
from_fn_async(shutdown::restart) from_fn_async(shutdown::restart)
.no_display() .no_display()
.with_about("Restart the server") .with_about("about.restart-server")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"rebuild", "rebuild",
from_fn_async(shutdown::rebuild) from_fn_async(shutdown::rebuild)
.no_display() .no_display()
.with_about("Teardown and rebuild service containers") .with_about("about.teardown-rebuild-containers")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"update", "update",
@@ -331,7 +332,9 @@ pub fn server<C: Context>() -> ParentHandler<C> {
) )
.subcommand( .subcommand(
"update", "update",
from_fn_async(update::cli_update_system).no_display().with_about("Check a given registry for StartOS updates and update if available"), from_fn_async(update::cli_update_system)
.no_display()
.with_about("about.check-update-startos"),
) )
.subcommand( .subcommand(
"update-firmware", "update-firmware",
@@ -346,37 +349,55 @@ pub fn server<C: Context>() -> ParentHandler<C> {
.with_custom_display_fn(|_handle, result| { .with_custom_display_fn(|_handle, result| {
Ok(firmware::display_firmware_update_result(result)) Ok(firmware::display_firmware_update_result(result))
}) })
.with_about("Update the mainboard's firmware to the latest firmware available in this version of StartOS if available. Note: This command does not reach out to the Internet") .with_about("about.update-firmware")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"set-smtp", "set-smtp",
from_fn_async(system::set_system_smtp) from_fn_async(system::set_system_smtp)
.no_display() .no_display()
.with_about("Set system smtp server and credentials") .with_about("about.set-smtp")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"test-smtp", "test-smtp",
from_fn_async(system::test_smtp) from_fn_async(system::test_smtp)
.no_display() .no_display()
.with_about("Send test email using provided smtp server and credentials") .with_about("about.test-smtp")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"clear-smtp", "clear-smtp",
from_fn_async(system::clear_system_smtp) from_fn_async(system::clear_system_smtp)
.no_display() .no_display()
.with_about("Remove system smtp server and credentials") .with_about("about.clear-smtp")
.with_call_remote::<CliContext>() .with_call_remote::<CliContext>(),
).subcommand("host", net::host::server_host_api::<C>().with_about("Commands for modifying the host for the system ui")) )
.subcommand(
"host",
net::host::server_host_api::<C>().with_about("about.commands-host-system-ui"),
)
.subcommand(
"set-keyboard",
from_fn_async(system::set_keyboard)
.no_display()
.with_about("about.set-keyboard")
.with_call_remote::<CliContext>(),
)
.subcommand(
"set-language",
from_fn_async(system::set_language)
.no_display()
.with_about("about.set-language")
.with_call_remote::<CliContext>(),
)
} }
 pub fn package<C: Context>() -> ParentHandler<C> {
     ParentHandler::new()
         .subcommand(
             "action",
-            action::action_api::<C>().with_about("Commands to get action input or run an action"),
+            action::action_api::<C>().with_about("about.commands-action"),
         )
         .subcommand(
             "install",
@@ -394,13 +415,13 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             "install",
             from_fn_async_local(install::cli_install)
                 .no_display()
-                .with_about("Install a package from a marketplace or via sideloading"),
+                .with_about("about.install-package"),
         )
         .subcommand(
             "cancel-install",
             from_fn(install::cancel_install)
                 .no_display()
-                .with_about("Cancel an install of a package")
+                .with_about("about.cancel-install-package")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -408,21 +429,21 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             from_fn_async(install::uninstall)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Remove a package")
+                .with_about("about.remove-package")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
             "list",
             from_fn_async(install::list)
                 .with_display_serializable()
-                .with_about("List installed packages")
+                .with_about("about.list-installed-packages")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
             "installed-version",
             from_fn_async(install::installed_version)
                 .with_display_serializable()
-                .with_about("Display installed version for a PackageId")
+                .with_about("about.display-installed-version")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -430,7 +451,7 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             from_fn_async(control::start)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Start a service")
+                .with_about("about.start-service")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -438,7 +459,7 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             from_fn_async(control::stop)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Stop a service")
+                .with_about("about.stop-service")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -446,7 +467,7 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             from_fn_async(control::restart)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Restart a service")
+                .with_about("about.restart-service")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -454,7 +475,7 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             from_fn_async(service::rebuild)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Rebuild service container")
+                .with_about("about.rebuild-service-container")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -494,35 +515,37 @@ pub fn package<C: Context>() -> ParentHandler<C> {
                     table.print_tty(false)?;
                     Ok(())
                 })
-                .with_about("List information related to the lxc containers i.e. CPU, Memory, Disk")
+                .with_about("about.list-lxc-container-info")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand("logs", logs::package_logs())
         .subcommand(
             "logs",
-            logs::package_logs().with_about("Display package logs"),
+            logs::package_logs().with_about("about.display-package-logs"),
         )
         .subcommand(
             "logs",
             from_fn_async(logs::cli_logs::<RpcContext, logs::PackageIdParams>)
                 .no_display()
-                .with_about("Display package logs"),
+                .with_about("about.display-package-logs"),
         )
         .subcommand(
             "backup",
-            backup::package_backup::<C>()
-                .with_about("Commands for restoring package(s) from backup"),
+            backup::package_backup::<C>().with_about("about.commands-restore-backup"),
         )
         .subcommand(
             "attach",
             from_fn_async(service::attach)
                 .with_metadata("get_session", Value::Bool(true))
-                .with_about("Execute commands within a service container")
+                .with_about("about.execute-commands-container")
                 .no_cli(),
         )
-        .subcommand("attach", from_fn_async(service::cli_attach).no_display())
+        .subcommand(
+            "attach",
+            from_fn_async_local(service::cli_attach).no_display(),
+        )
         .subcommand(
             "host",
-            net::host::host_api::<C>().with_about("Manage network hosts for a package"),
+            net::host::host_api::<C>().with_about("about.manage-network-hosts-package"),
         )
 }

View File

@@ -6,7 +6,6 @@ use std::str::FromStr;
 use std::time::{Duration, UNIX_EPOCH};
 use axum::extract::ws;
-use crate::util::net::WebSocket;
 use chrono::{DateTime, Utc};
 use clap::builder::ValueParserFactory;
 use clap::{Args, FromArgMatches, Parser};
@@ -31,6 +30,7 @@ use crate::context::{CliContext, RpcContext};
 use crate::error::ResultExt;
 use crate::prelude::*;
 use crate::rpc_continuations::{Guid, RpcContinuation, RpcContinuations};
+use crate::util::net::WebSocket;
 use crate::util::serde::Reversible;
 use crate::util::{FromStrParser, Invoke};
@@ -232,6 +232,7 @@ pub const SYSTEM_UNIT: &str = "startd";
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct PackageIdParams {
+    #[arg(help = "help.arg.package-id")]
     id: PackageId,
 }
@@ -327,14 +328,24 @@ pub struct LogsParams<Extra: FromArgMatches + Args = Empty> {
     #[command(flatten)]
     #[serde(flatten)]
     extra: Extra,
-    #[arg(short = 'l', long = "limit")]
+    #[arg(short = 'l', long = "limit", help = "help.arg.log-limit")]
     limit: Option<usize>,
-    #[arg(short = 'c', long = "cursor", conflicts_with = "follow")]
+    #[arg(
+        short = 'c',
+        long = "cursor",
+        conflicts_with = "follow",
+        help = "help.arg.log-cursor"
+    )]
     cursor: Option<String>,
-    #[arg(short = 'b', long = "boot")]
+    #[arg(short = 'b', long = "boot", help = "help.arg.log-boot")]
     #[serde(default)]
     boot: Option<BootIdentifier>,
-    #[arg(short = 'B', long = "before", conflicts_with = "follow")]
+    #[arg(
+        short = 'B',
+        long = "before",
+        conflicts_with = "follow",
+        help = "help.arg.log-before"
+    )]
     #[serde(default)]
     before: bool,
 }
@@ -346,7 +357,7 @@ pub struct CliLogsParams<Extra: FromArgMatches + Args = Empty> {
     #[command(flatten)]
     #[serde(flatten)]
     rpc_params: LogsParams<Extra>,
-    #[arg(short = 'f', long = "follow")]
+    #[arg(short = 'f', long = "follow", help = "help.arg.log-follow")]
     #[serde(default)]
     follow: bool,
 }
@@ -552,10 +563,12 @@ pub async fn journalctl(
         follow_cmd.arg("--lines=0");
     }
     let mut child = follow_cmd.stdout(Stdio::piped()).spawn()?;
-    let out =
-        BufReader::new(child.stdout.take().ok_or_else(|| {
-            Error::new(eyre!("No stdout available"), crate::ErrorKind::Journald)
-        })?);
+    let out = BufReader::new(child.stdout.take().ok_or_else(|| {
+        Error::new(
+            eyre!("{}", t!("logs.no-stdout-available")),
+            crate::ErrorKind::Journald,
+        )
+    })?);
     let journalctl_entries = LinesStream::new(out.lines());
@@ -700,7 +713,10 @@ pub async fn follow_logs<Context: AsRef<RpcContinuations>>(
     RpcContinuation::ws(
         move |socket| async move {
             if let Err(e) = ws_handler(first_entry, stream, socket).await {
-                tracing::error!("Error in log stream: {}", e);
+                tracing::error!(
+                    "{}",
+                    t!("logs.error-in-log-stream", error = e.to_string())
+                );
             }
         },
         Duration::from_secs(30),
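The recurring change in these hunks moves every user-facing string from a hard-coded English literal into a `t!("some.key", name = value)` lookup. As a rough, hand-rolled sketch of what that macro does (std only; the `%{name}` placeholder syntax matches rust-i18n, but the real crate also handles locale selection and embeds the translation tables at compile time):

```rust
use std::collections::HashMap;

// Illustrative stand-in for rust-i18n's `t!`: look the key up in a locale
// table and substitute named `%{name}` placeholders. Not the crate's actual
// implementation, just the observable behavior.
fn translate(table: &HashMap<&str, &str>, key: &str, args: &[(&str, &str)]) -> String {
    // Fall back to the key itself when no translation exists.
    let mut out = table.get(key).copied().unwrap_or(key).to_string();
    for (name, value) in args {
        out = out.replace(&format!("%{{{name}}}"), value);
    }
    out
}
```

With a table entry like `"logs.error-in-log-stream" => "Error in log stream: %{error}"`, the call sites above reduce to a key plus named arguments, which is what makes the log and error strings translatable.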

View File

@@ -5,11 +5,13 @@ use std::sync::{Arc, Weak};
 use std::time::Duration;
 use clap::builder::ValueParserFactory;
-use futures::StreamExt;
+use futures::future::BoxFuture;
+use futures::{FutureExt, StreamExt};
 use imbl_value::InternedString;
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{RpcRequest, RpcResponse};
 use serde::{Deserialize, Serialize};
+use tokio::fs::ReadDir;
 use tokio::io::{AsyncBufReadExt, BufReader};
 use tokio::process::Command;
 use tokio::sync::Mutex;
@@ -27,7 +29,7 @@ use crate::disk::mount::util::unmount;
 use crate::prelude::*;
 use crate::rpc_continuations::{Guid, RpcContinuation};
 use crate::service::ServiceStats;
-use crate::util::io::open_file;
+use crate::util::io::{open_file, write_file_owned_atomic};
 use crate::util::rpc_client::UnixRpcClient;
 use crate::util::{FromStrParser, Invoke, new_guid};
 use crate::{InvalidId, PackageId};
@@ -37,6 +39,7 @@ const RPC_DIR: &str = "media/startos/rpc"; // must not be absolute path
 pub const CONTAINER_RPC_SERVER_SOCKET: &str = "service.sock"; // must not be absolute path
 pub const HOST_RPC_SERVER_SOCKET: &str = "host.sock"; // must not be absolute path
 const CONTAINER_DHCP_TIMEOUT: Duration = Duration::from_secs(30);
+const HARDWARE_ACCELERATION_PATHS: &[&str] = &["/dev/dri", "/dev/nvidia*", "/dev/kfd"];
 #[derive(
     Clone, Debug, Serialize, Deserialize, Default, PartialEq, Eq, PartialOrd, Ord, Hash, TS,
@@ -138,7 +141,7 @@ impl LxcManager {
             > 0
         {
             return Err(Error::new(
-                eyre!("rootfs is not empty, refusing to delete"),
+                eyre!("{}", t!("lxc.mod.rootfs-not-empty")),
                 ErrorKind::InvalidRequest,
             ));
         }
@@ -174,12 +177,8 @@ impl LxcContainer {
         let machine_id = hex::encode(rand::random::<[u8; 16]>());
         let container_dir = Path::new(LXC_CONTAINER_DIR).join(&*guid);
         tokio::fs::create_dir_all(&container_dir).await?;
-        tokio::fs::write(
-            container_dir.join("config"),
-            format!(include_str!("./config.template"), guid = &*guid),
-        )
-        .await?;
-        // TODO: append config
+        let config_str = format!(include_str!("./config.template"), guid = &*guid);
+        tokio::fs::write(container_dir.join("config"), config_str).await?;
         let rootfs_dir = container_dir.join("rootfs");
         let rootfs = OverlayGuard::mount(
             TmpMountGuard::mount(
@@ -197,8 +196,25 @@ impl LxcContainer {
             &rootfs_dir,
         )
         .await?;
-        tokio::fs::write(rootfs_dir.join("etc/machine-id"), format!("{machine_id}\n")).await?;
-        tokio::fs::write(rootfs_dir.join("etc/hostname"), format!("{guid}\n")).await?;
+        Command::new("chown")
+            .arg("100000:100000")
+            .arg(&rootfs_dir)
+            .invoke(ErrorKind::Filesystem)
+            .await?;
+        write_file_owned_atomic(
+            rootfs_dir.join("etc/machine-id"),
+            format!("{machine_id}\n"),
+            100000,
+            100000,
+        )
+        .await?;
+        write_file_owned_atomic(
+            rootfs_dir.join("etc/hostname"),
+            format!("{guid}\n"),
+            100000,
+            100000,
+        )
+        .await?;
         Command::new("sed")
             .arg("-i")
             .arg(format!("s/LXC_NAME/{guid}/g"))
@@ -233,6 +249,7 @@ impl LxcContainer {
             .arg("-R")
             .arg("+C")
             .arg(&log_mount_point)
+            .env("LANG", "C.UTF-8")
             .invoke(ErrorKind::Filesystem)
             .await
         {
@@ -248,9 +265,13 @@ impl LxcContainer {
             .arg("-d")
             .arg("--name")
             .arg(&*guid)
+            .arg("-o")
+            .arg(format!("/run/startos/LXC_{guid}.log"))
+            .arg("-l")
+            .arg("DEBUG")
             .invoke(ErrorKind::Lxc)
             .await?;
-        Ok(Self {
+        let res = Self {
             manager: Arc::downgrade(manager),
             rootfs,
             guid: Arc::new(ContainerId::try_from(&*guid)?),
@@ -258,7 +279,84 @@ impl LxcContainer {
             config,
             exited: false,
             log_mount,
-        })
+        };
+        if res.config.hardware_acceleration {
+            res.handle_devices(
+                tokio::fs::read_dir("/dev")
+                    .await
+                    .with_ctx(|_| (ErrorKind::Filesystem, "readdir /dev"))?,
+                HARDWARE_ACCELERATION_PATHS,
+            )
+            .await?;
+        }
+        Ok(res)
+    }
+    #[cfg(not(target_os = "linux"))]
+    async fn handle_devices(&self, _: ReadDir, _: &[&str]) -> Result<(), Error> {
+        Ok(())
+    }
+    #[cfg(target_os = "linux")]
+    fn handle_devices<'a>(
+        &'a self,
+        mut dir: ReadDir,
+        matches: &'a [&'a str],
+    ) -> BoxFuture<'a, Result<(), Error>> {
+        use std::os::linux::fs::MetadataExt;
+        use std::os::unix::fs::FileTypeExt;
+        async move {
+            while let Some(ent) = dir.next_entry().await? {
+                let path = ent.path();
+                if let Some(matches) = if matches.is_empty() {
+                    Some(Vec::new())
+                } else {
+                    let mut new_matches = Vec::new();
+                    for mut m in matches.iter().copied() {
+                        let could_match = if let Some(prefix) = m.strip_suffix("*") {
+                            m = prefix;
+                            path.to_string_lossy().starts_with(m)
+                        } else {
+                            path.starts_with(m)
+                        } || Path::new(m).starts_with(&path);
+                        if could_match {
+                            new_matches.push(m);
+                        }
+                    }
+                    if new_matches.is_empty() {
+                        None
+                    } else {
+                        Some(new_matches)
+                    }
+                } {
+                    let meta = ent.metadata().await?;
+                    let ty = meta.file_type();
+                    if ty.is_dir() {
+                        self.handle_devices(
+                            tokio::fs::read_dir(&path).await.with_ctx(|_| {
+                                (ErrorKind::Filesystem, format!("readdir {path:?}"))
+                            })?,
+                            &matches,
+                        )
+                        .await?;
+                    } else {
+                        let ty = if ty.is_char_device() {
+                            'c'
+                        } else if ty.is_block_device() {
+                            'b'
+                        } else {
+                            continue;
+                        };
+                        let rdev = meta.st_rdev();
+                        let major = ((rdev >> 8) & 0xfff) as u32;
+                        let minor = ((rdev & 0xff) | ((rdev >> 12) & 0xfff00)) as u32;
+                        self.mknod(&path, ty, major, minor).await?;
+                    }
+                }
+            }
+            Ok(())
+        }
+        .boxed()
     }
     pub fn rootfs_dir(&self) -> &Path {
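The `handle_devices` hunk above unpacks `st_rdev` into a device's major and minor numbers before recreating the node inside the container with `mknod`. The bit layout it assumes can be checked in isolation; the inverse encoder below (named `dev_makedev` by analogy with the libc helper) is illustrative only, and note the patch's `0xfff00` minor mask is narrower than glibc's full 32-bit minor, which still covers common device numbers like `/dev/dri/*`:

```rust
// Decode a Linux st_rdev into (major, minor) with the same bit layout the
// diff uses: major in bits 8..20, minor split across bits 0..8 and 12..20.
fn dev_major(rdev: u64) -> u32 {
    ((rdev >> 8) & 0xfff) as u32
}

fn dev_minor(rdev: u64) -> u32 {
    ((rdev & 0xff) | ((rdev >> 12) & 0xfff00)) as u32
}

// Illustrative inverse: pack (major, minor) back into the same layout.
fn dev_makedev(major: u64, minor: u64) -> u64 {
    ((major & 0xfff) << 8) | (minor & 0xff) | ((minor & 0xfff00) << 12)
}
```

For example, `/dev/null` is device 1:3, which packs to `0x0103` and round-trips through the decode above.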
@@ -284,7 +382,7 @@ impl LxcContainer {
         }
         if start.elapsed() > CONTAINER_DHCP_TIMEOUT {
             return Err(Error::new(
-                eyre!("Timed out waiting for container to acquire DHCP lease"),
+                eyre!("{}", t!("lxc.mod.dhcp-timeout")),
                 ErrorKind::Timeout,
             ));
         }
@@ -310,9 +408,12 @@ impl LxcContainer {
         if !output.status.success() {
             return Err(Error::new(
                 eyre!(
-                    "Command failed with exit code: {:?} \n Message: {:?}",
-                    output.status.code(),
-                    String::from_utf8(output.stderr)
+                    "{}",
+                    t!(
+                        "lxc.mod.command-failed",
+                        code = format!("{:?}", output.status.code()),
+                        message = format!("{:?}", String::from_utf8(output.stderr))
+                    )
                 ),
                 ErrorKind::Docker,
             ));
@@ -329,7 +430,7 @@ impl LxcContainer {
         .await?;
         self.rpc_bind.take().unmount().await?;
         if let Some(log_mount) = self.log_mount.take() {
-            log_mount.unmount(true).await?;
+            log_mount.unmount(false).await?;
         }
         self.rootfs.take().unmount(true).await?;
         let rootfs_path = self.rootfs_dir();
@@ -340,7 +441,7 @@ impl LxcContainer {
             > 0
         {
             return Err(Error::new(
-                eyre!("rootfs is not empty, refusing to delete"),
+                eyre!("{}", t!("lxc.mod.rootfs-not-empty")),
                 ErrorKind::InvalidRequest,
             ));
         }
@@ -351,7 +452,10 @@ impl LxcContainer {
             .invoke(ErrorKind::Lxc)
             .await?;
-        self.exited = true;
+        #[allow(unused_assignments)]
+        {
+            self.exited = true;
+        }
         Ok(())
     }
@@ -361,23 +465,125 @@ impl LxcContainer {
         let sock_path = self.rpc_dir().join(CONTAINER_RPC_SERVER_SOCKET);
         while tokio::fs::metadata(&sock_path).await.is_err() {
             if timeout.map_or(false, |t| started.elapsed() > t) {
+                tracing::error!(
+                    "{:?}",
+                    Command::new("lxc-attach")
+                        .arg(&**self.guid)
+                        .arg("--")
+                        .arg("systemctl")
+                        .arg("status")
+                        .arg("container-runtime")
+                        .invoke(ErrorKind::Unknown)
+                        .await
+                );
                 return Err(Error::new(
-                    eyre!("timed out waiting for socket"),
+                    eyre!("{}", t!("lxc.mod.socket-timeout")),
                     ErrorKind::Timeout,
                 ));
             }
             tokio::time::sleep(Duration::from_millis(100)).await;
         }
-        tracing::info!("Connected to socket in {:?}", started.elapsed());
+        tracing::info!(
+            "{}",
+            t!(
+                "lxc.mod.connected-to-socket",
+                elapsed = format!("{:?}", started.elapsed())
+            )
+        );
         Ok(UnixRpcClient::new(sock_path))
     }
+    pub async fn mknod(&self, path: &Path, ty: char, major: u32, minor: u32) -> Result<(), Error> {
+        if let Ok(dev_rel) = path.strip_prefix("/dev") {
+            let parent = dev_rel.parent();
+            let media_dev = self.rootfs_dir().join("media/startos/dev");
+            let target_path = media_dev.join(dev_rel);
+            if tokio::fs::metadata(&target_path).await.is_ok() {
+                return Ok(());
+            }
+            if let Some(parent) = parent {
+                let p = media_dev.join(parent);
+                tokio::fs::create_dir_all(&p)
+                    .await
+                    .with_ctx(|_| (ErrorKind::Filesystem, format!("mkdir -p {p:?}")))?;
+                for p in parent.ancestors() {
+                    Command::new("chown")
+                        .arg("100000:100000")
+                        .arg(media_dev.join(p))
+                        .invoke(ErrorKind::Filesystem)
+                        .await?;
+                }
+            }
+            Command::new("mknod")
+                .arg(&target_path)
+                .arg(&*InternedString::from_display(&ty))
+                .arg(&*InternedString::from_display(&major))
+                .arg(&*InternedString::from_display(&minor))
+                .invoke(ErrorKind::Filesystem)
+                .await?;
+            Command::new("chown")
+                .arg("100000:100000")
+                .arg(&target_path)
+                .invoke(ErrorKind::Filesystem)
+                .await?;
+            if let Some(parent) = parent {
+                Command::new("lxc-attach")
+                    .arg(&**self.guid)
+                    .arg("--")
+                    .arg("mkdir")
+                    .arg("-p")
+                    .arg(Path::new("/dev").join(parent))
+                    .invoke(ErrorKind::Lxc)
+                    .await?;
+            }
+            Command::new("lxc-attach")
+                .arg(&**self.guid)
+                .arg("--")
+                .arg("touch")
+                .arg(&path)
+                .invoke(ErrorKind::Lxc)
+                .await?;
+            Command::new("lxc-attach")
+                .arg(&**self.guid)
+                .arg("--")
+                .arg("mount")
+                .arg("--bind")
+                .arg(Path::new("/media/startos/dev").join(dev_rel))
+                .arg(&path)
+                .invoke(ErrorKind::Lxc)
+                .await?;
+        } else {
+            let target_path = self
+                .rootfs_dir()
+                .join(path.strip_prefix("/").unwrap_or(&path));
+            if tokio::fs::metadata(&target_path).await.is_ok() {
+                return Ok(());
+            }
+            Command::new("mknod")
+                .arg(&target_path)
+                .arg(&*InternedString::from_display(&ty))
+                .arg(&*InternedString::from_display(&major))
+                .arg(&*InternedString::from_display(&minor))
+                .invoke(ErrorKind::Filesystem)
+                .await?;
+            Command::new("chown")
+                .arg("100000:100000")
+                .arg(&target_path)
+                .invoke(ErrorKind::Filesystem)
+                .await?;
+        }
+        Ok(())
+    }
 }
 impl Drop for LxcContainer {
     fn drop(&mut self) {
         if !self.exited {
             tracing::warn!(
-                "Container {} was ungracefully dropped. Cleaning up dangling containers...",
-                &**self.guid
+                "{}",
+                t!(
+                    "lxc.mod.container-ungracefully-dropped",
+                    container = &**self.guid
+                )
             );
             let rootfs = self.rootfs.take();
             let guid = std::mem::take(&mut self.guid);
@@ -396,16 +602,25 @@ impl Drop for LxcContainer {
                 }
                 .await
                 {
-                    tracing::error!("Error reading logs from crashed container: {e}");
+                    tracing::error!(
+                        "{}",
+                        t!("lxc.mod.error-reading-crashed-logs", error = e.to_string())
+                    );
                     tracing::debug!("{e:?}")
                 }
                 rootfs.unmount(true).await.log_err();
                 drop(guid);
                 if let Err(e) = manager.gc().await {
-                    tracing::error!("Error cleaning up dangling LXC containers: {e}");
+                    tracing::error!(
+                        "{}",
+                        t!(
+                            "lxc.mod.error-cleaning-up-containers",
+                            error = e.to_string()
+                        )
+                    );
                     tracing::debug!("{e:?}")
                 } else {
-                    tracing::info!("Successfully cleaned up dangling LXC containers");
+                    tracing::info!("{}", t!("lxc.mod.cleaned-up-containers"));
                 }
             });
         }
@@ -414,7 +629,10 @@ impl Drop for LxcContainer {
 }
 #[derive(Default, Serialize)]
-pub struct LxcConfig {}
+pub struct LxcConfig {
+    pub hardware_acceleration: bool,
+}
 pub async fn connect(ctx: &RpcContext, container: &LxcContainer) -> Result<Guid, Error> {
     use axum::extract::ws::Message;

View File

@@ -15,5 +15,8 @@ fn main() {
     }) {
         PREFER_DOCKER.set(true).ok();
     }
-    MultiExecutable::default().enable_start_cli().execute()
+    MultiExecutable::default()
+        .enable_start_cli()
+        .set_default("start-cli")
+        .execute()
 }
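The `set_default("start-cli")` calls added here and in the next hunk presumably tell the multiplexed binary which applet to run when the invoked name matches none of the enabled ones. A busybox-style dispatch sketch of that idea (the function name and signature below are hypothetical, not the actual `MultiExecutable` API from startos):

```rust
// Hypothetical sketch of busybox-style applet selection: dispatch on the
// binary name the program was invoked as (argv[0]), falling back to a
// configured default when the name is not an enabled applet.
fn select_applet<'a>(argv0: &str, enabled: &'a [&'a str], default: &'a str) -> &'a str {
    // Strip the directory and any extension from argv[0].
    let stem = std::path::Path::new(argv0)
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("");
    enabled.iter().copied().find(|&a| a == stem).unwrap_or(default)
}
```

Under this reading, a symlink named `startd` would launch the daemon, while any unrecognized invocation of the same binary falls back to the CLI.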

View File

@@ -3,5 +3,6 @@ use startos::bins::MultiExecutable;
 fn main() {
     MultiExecutable::default()
         .enable_start_container()
+        .set_default("start-container")
         .execute()
 }

View File

@@ -11,11 +11,6 @@ fn main() {
             "$CARGO_MANIFEST_DIR/../web/dist/static/setup-wizard"
         ))
         .ok();
-    startos::net::static_server::INSTALL_WIZARD_CELL
-        .set(include_dir::include_dir!(
-            "$CARGO_MANIFEST_DIR/../web/dist/static/install-wizard"
-        ))
-        .ok();
     #[cfg(not(feature = "beta"))]
     startos::db::model::public::DB_UI_SEED_CELL
         .set(include_str!(concat!(

View File

@@ -40,7 +40,10 @@ impl LocalAuthContext for RpcContext {
 }
 fn unauthorized() -> Error {
-    Error::new(eyre!("UNAUTHORIZED"), crate::ErrorKind::Authorization)
+    Error::new(
+        eyre!("{}", t!("middleware.auth.unauthorized")),
+        crate::ErrorKind::Authorization,
+    )
 }
 async fn check_from_header<C: LocalAuthContext>(header: Option<&HeaderValue>) -> Result<(), Error> {

View File

@@ -146,7 +146,7 @@ impl HashSessionToken {
             }
         }
         Err(Error::new(
-            eyre!("UNAUTHORIZED"),
+            eyre!("{}", t!("middleware.auth.unauthorized")),
             crate::ErrorKind::Authorization,
         ))
     }
@@ -221,7 +221,7 @@ impl ValidSessionToken {
             }
         }
         Err(Error::new(
-            eyre!("UNAUTHORIZED"),
+            eyre!("{}", t!("middleware.auth.unauthorized")),
             crate::ErrorKind::Authorization,
         ))
     }
@@ -244,7 +244,10 @@ impl ValidSessionToken {
         C::access_sessions(db)
             .as_idx_mut(session_hash)
             .ok_or_else(|| {
-                Error::new(eyre!("UNAUTHORIZED"), crate::ErrorKind::Authorization)
+                Error::new(
+                    eyre!("{}", t!("middleware.auth.unauthorized")),
+                    crate::ErrorKind::Authorization,
+                )
             })?
             .mutate(|s| {
                 s.last_active = Utc::now();
@@ -305,7 +308,7 @@ impl<C: SessionAuthContext> Middleware<C> for SessionAuth {
         self.rate_limiter.mutate(|(count, time)| {
             if time.elapsed() < Duration::from_secs(20) && *count >= 3 {
                 Err(Error::new(
-                    eyre!("Please limit login attempts to 3 per 20 seconds."),
+                    eyre!("{}", t!("middleware.auth.rate-limited-login")),
                     crate::ErrorKind::RateLimited,
                 ))
             } else {

View File

@@ -90,7 +90,7 @@ impl SignatureAuthContext for RpcContext {
         }
         Err(Error::new(
-            eyre!("Key is not authorized"),
+            eyre!("{}", t!("middleware.auth.key-not-authorized")),
             ErrorKind::IncorrectPassword,
         ))
     }
@@ -141,7 +141,7 @@ impl SignatureAuth {
         let mut cache = self.nonce_cache.lock().await;
         if cache.values().any(|n| *n == nonce) {
             return Err(Error::new(
-                eyre!("replay attack detected"),
+                eyre!("{}", t!("middleware.auth.replay-attack-detected")),
                 ErrorKind::Authorization,
             ));
         }
@@ -226,7 +226,7 @@ impl<C: SignatureAuthContext> Middleware<C> for SignatureAuth {
         context.sig_context().await.into_iter().fold(
             Err(Error::new(
-                eyre!("no valid signature context available to verify"),
+                eyre!("{}", t!("middleware.auth.no-valid-sig-context")),
                 ErrorKind::Authorization,
             )),
             |acc, x| {
@@ -249,7 +249,7 @@ impl<C: SignatureAuthContext> Middleware<C> for SignatureAuth {
             .unwrap_or_else(|e| e.duration().as_secs() as i64 * -1);
         if (now - commitment.timestamp).abs() > 30 {
             return Err(Error::new(
-                eyre!("timestamp not within 30s of now"),
+                eyre!("{}", t!("middleware.auth.timestamp-not-within-30s")),
                 ErrorKind::InvalidSignature,
             ));
         }
@@ -347,6 +347,10 @@ pub async fn call_remote<Ctx: SigningContext + AsRef<Client>>(
                 .with_kind(ErrorKind::Deserialization)?
                 .result
         }
-        _ => Err(Error::new(eyre!("unknown content type"), ErrorKind::Network).into()),
+        _ => Err(Error::new(
+            eyre!("{}", t!("middleware.auth.unknown-content-type")),
+            ErrorKind::Network,
+        )
+        .into()),
     }
 }

View File

@@ -2,6 +2,7 @@ use axum::response::Response;
 use http::HeaderValue;
 use http::header::InvalidHeaderValue;
 use rpc_toolkit::{Middleware, RpcRequest, RpcResponse};
+use rust_i18n::t;
 use serde::Deserialize;
 use crate::context::RpcContext;
@@ -46,7 +47,13 @@ impl Middleware<RpcContext> for SyncDb {
         }
         .await
         {
-            tracing::error!("error writing X-Patch-Sequence header: {e}");
+            tracing::error!(
+                "{}",
+                t!(
+                    "middleware.db.error-writing-patch-sequence-header",
+                    error = e
+                )
+            );
             tracing::debug!("{e:?}");
         }
     }

View File

@@ -395,7 +395,7 @@ pub fn acme_api<C: Context>() -> ParentHandler<C> {
             from_fn_async(init)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Setup ACME certificate acquisition")
+                .with_about("about.setup-acme-certificate-acquisition")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -403,7 +403,7 @@ pub fn acme_api<C: Context>() -> ParentHandler<C> {
             from_fn_async(remove)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Remove ACME certificate acquisition configuration")
+                .with_about("about.remove-acme-certificate-acquisition-configuration")
                 .with_call_remote::<CliContext>(),
         )
 }
@@ -463,9 +463,9 @@ impl ValueParserFactory for AcmeProvider {
 #[derive(Deserialize, Serialize, Parser)]
 pub struct InitAcmeParams {
-    #[arg(long)]
+    #[arg(long, help = "help.arg.acme-provider")]
     pub provider: AcmeProvider,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.acme-contact")]
     pub contact: Vec<String>,
 }
@@ -488,7 +488,7 @@ pub async fn init(
 #[derive(Deserialize, Serialize, Parser)]
 pub struct RemoveAcmeParams {
-    #[arg(long)]
+    #[arg(long, help = "help.arg.acme-provider")]
     pub provider: AcmeProvider,
 }

View File

@@ -54,13 +54,13 @@ pub fn dns_api<C: Context>() -> ParentHandler<C> {
                     Ok(())
                 })
-                .with_about("Test the DNS configuration for a domain"),
+                .with_about("about.test-dns-configuration-for-domain"),
         )
         .subcommand(
             "set-static",
             from_fn_async(set_static_dns)
                 .no_display()
-                .with_about("Set static DNS servers")
+                .with_about("about.set-static-dns-servers")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -88,13 +88,14 @@ pub fn dns_api<C: Context>() -> ParentHandler<C> {
                     Ok(())
                 })
-                .with_about("Dump address resolution table")
+                .with_about("about.dump-address-resolution-table")
                 .with_call_remote::<CliContext>(),
         )
 }
 #[derive(Deserialize, Serialize, Parser)]
 pub struct QueryDnsParams {
+    #[arg(help = "help.arg.fqdn")]
     pub fqdn: InternedString,
 }
@@ -134,6 +135,7 @@ pub fn query_dns<C: Context>(
 #[derive(Deserialize, Serialize, Parser)]
 pub struct SetStaticDnsParams {
+    #[arg(help = "help.arg.dns-servers")]
     pub servers: Option<Vec<String>>,
 }
@@ -292,7 +294,7 @@ impl Resolver {
         .await
         .map_err(|_| {
             Error::new(
-                eyre!("timed out waiting to update dns catalog"),
+                eyre!("{}", t!("net.dns.timeout-updating-catalog")),
                 ErrorKind::Timeout,
             )
         })?;
@@ -348,7 +350,13 @@ impl Resolver {
         }) {
             return Some(res);
         } else {
-            tracing::warn!("Could not determine source interface of {src}");
+            tracing::warn!(
+                "{}",
+                t!(
+                    "net.dns.could-not-determine-source-interface",
+                    src = src.to_string()
+                )
+            );
         }
     }
     if STARTOS.zone_of(name) || EMBASSY.zone_of(name) {
@@ -473,7 +481,10 @@ impl RequestHandler for Resolver {
             Ok(Some(a)) => return a,
             Ok(None) => (),
             Err(e) => {
-                tracing::error!("Error resolving internal DNS: {e}");
+                tracing::error!(
+                    "{}",
+                    t!("net.dns.error-resolving-internal", error = e.to_string())
+                );
                 tracing::debug!("{e:?}");
                 let mut header = Header::response_from_request(request.header());
                 header.set_recursion_available(true);
@@ -557,7 +568,7 @@ impl DnsController {
             })
         } else {
             Err(Error::new(
-                eyre!("DNS Server Thread has exited"),
+                eyre!("{}", t!("net.dns.server-thread-exited")),
                 crate::ErrorKind::Network,
             ))
         }
@@ -577,7 +588,7 @@ impl DnsController {
             })
         } else {
             Err(Error::new(
-                eyre!("DNS Server Thread has exited"),
+                eyre!("{}", t!("net.dns.server-thread-exited")),
                 crate::ErrorKind::Network,
            ))
         }
@@ -598,7 +609,7 @@ impl DnsController {
             })
         } else {
             Err(Error::new(
-                eyre!("DNS Server Thread has exited"),
+                eyre!("{}", t!("net.dns.server-thread-exited")),
                 crate::ErrorKind::Network,
             ))
         }
@@ -624,7 +635,7 @@ impl DnsController {
             })
         } else {
             Err(Error::new(
-                eyre!("DNS Server Thread has exited"),
+                eyre!("{}", t!("net.dns.server-thread-exited")),
                 crate::ErrorKind::Network,
             ))
         }

View File

@@ -34,7 +34,7 @@ impl AvailablePorts {
     pub fn alloc(&mut self) -> Result<u16, Error> {
         self.0.request_id().ok_or_else(|| {
             Error::new(
-                eyre!("No more dynamic ports available!"),
+                eyre!("{}", t!("net.forward.no-dynamic-ports-available")),
                 ErrorKind::Network,
             )
         })
@@ -240,7 +240,13 @@ impl PortForwardController {
         }
         .await
         {
-            tracing::error!("error initializing PortForwardController: {e:#}");
+            tracing::error!(
+                "{}",
+                t!(
+                    "net.forward.error-initializing-controller",
+                    error = format!("{e:#}")
+                )
+            );
             tracing::debug!("{e:?}");
             tokio::time::sleep(Duration::from_secs(5)).await;
         }
@@ -400,7 +406,7 @@ impl InterfaceForwardEntry {
     ) -> Result<Arc<()>, Error> {
         if external != self.external {
             return Err(Error::new(
-                eyre!("Mismatched external port in InterfaceForwardEntry"),
+                eyre!("{}", t!("net.forward.mismatched-external-port")),
                 ErrorKind::InvalidRequest,
             ));
         }
@@ -477,7 +483,7 @@ impl InterfaceForwardState {
 fn err_has_exited<T>(_: T) -> Error {
     Error::new(
-        eyre!("PortForwardController thread has exited"),
+        eyre!("{}", t!("net.forward.controller-thread-exited")),
         ErrorKind::Unknown,
     )
 }


@@ -95,7 +95,7 @@ pub fn gateway_api<C: Context>() -> ParentHandler<C> {
                 Ok(())
             })
-            .with_about("Show gateways StartOS can listen on")
+            .with_about("about.show-gateways-startos-can-listen-on")
             .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -103,7 +103,7 @@ pub fn gateway_api<C: Context>() -> ParentHandler<C> {
             from_fn_async(set_public)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Indicate whether this gateway has inbound access from the WAN")
+                .with_about("about.indicate-gateway-inbound-access-from-wan")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -111,10 +111,7 @@ pub fn gateway_api<C: Context>() -> ParentHandler<C> {
             from_fn_async(unset_public)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about(concat!(
-                    "Allow this gateway to infer whether it has",
-                    " inbound access from the WAN based on its IPv4 address"
-                ))
+                .with_about("about.allow-gateway-infer-inbound-access-from-wan")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -122,7 +119,7 @@ pub fn gateway_api<C: Context>() -> ParentHandler<C> {
             from_fn_async(forget_iface)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Forget a disconnected gateway")
+                .with_about("about.forget-disconnected-gateway")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -130,7 +127,7 @@ pub fn gateway_api<C: Context>() -> ParentHandler<C> {
             from_fn_async(set_name)
                 .with_metadata("sync_db", Value::Bool(true))
                 .no_display()
-                .with_about("Rename a gateway")
+                .with_about("about.rename-gateway")
                 .with_call_remote::<CliContext>(),
         )
 }
@@ -143,7 +140,9 @@ async fn list_interfaces(
 #[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
 struct NetworkInterfaceSetPublicParams {
+    #[arg(help = "help.arg.gateway-id")]
     gateway: GatewayId,
+    #[arg(help = "help.arg.is-public")]
     public: Option<bool>,
 }
@@ -159,6 +158,7 @@ async fn set_public(
 #[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
 struct UnsetPublicParams {
+    #[arg(help = "help.arg.gateway-id")]
     gateway: GatewayId,
 }
@@ -174,6 +174,7 @@ async fn unset_public(
 #[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
 struct ForgetGatewayParams {
+    #[arg(help = "help.arg.gateway-id")]
     gateway: GatewayId,
 }
@@ -186,7 +187,9 @@ async fn forget_iface(
 #[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
 struct RenameGatewayParams {
+    #[arg(help = "help.arg.gateway-id")]
     id: GatewayId,
+    #[arg(help = "help.arg.gateway-name")]
     name: InternedString,
 }
@@ -464,7 +467,8 @@ async fn watcher(
     ensure_code!(
         !devices.is_empty(),
         ErrorKind::Network,
-        "NetworkManager returned no devices. Trying again..."
+        "{}",
+        t!("net.gateway.no-devices-returned")
     );
     let mut ifaces = BTreeSet::new();
     let mut jobs = Vec::new();
@@ -731,7 +735,8 @@ async fn watch_ip(
         Ok(a) => a,
         Err(e) => {
             tracing::error!(
-                "Failed to determine WAN IP for {iface}: {e}"
+                "{}",
+                t!("net.gateway.failed-to-determine-wan-ip", iface = iface.to_string(), error = e.to_string())
             );
             tracing::debug!("{e:?}");
             None
@@ -1021,7 +1026,13 @@ impl NetworkInterfaceController {
                     info
                 }
                 Err(e) => {
-                    tracing::error!("Error loading network interface info: {e}");
+                    tracing::error!(
+                        "{}",
+                        t!(
+                            "net.gateway.error-loading-interface-info",
+                            error = e.to_string()
+                        )
+                    );
                     tracing::debug!("{e:?}");
                     OrdMap::new()
                 }
@@ -1050,7 +1061,10 @@ impl NetworkInterfaceController {
             }
             .await
             {
-                tracing::error!("Error syncing ip info to db: {e}");
+                tracing::error!(
+                    "{}",
+                    t!("net.gateway.error-syncing-ip-info", error = e.to_string())
+                );
                 tracing::debug!("{e:?}");
             }
@@ -1060,7 +1074,10 @@ impl NetworkInterfaceController {
             }
             .await;
             if let Err(e) = res {
-                tracing::error!("Error syncing ip info to db: {e}");
+                tracing::error!(
+                    "{}",
+                    t!("net.gateway.error-syncing-ip-info", error = e.to_string())
+                );
                 tracing::debug!("{e:?}");
             }
         })
@@ -1121,7 +1138,7 @@ impl NetworkInterfaceController {
             .map_or(false, |i| i.ip_info.is_some())
         {
             err = Some(Error::new(
-                eyre!("Cannot forget currently connected interface"),
+                eyre!("{}", t!("net.gateway.cannot-forget-connected-interface")),
                 ErrorKind::InvalidRequest,
             ));
             return false;
@@ -1167,7 +1184,7 @@ impl NetworkInterfaceController {
         if &*ac == "/" {
             return Err(Error::new(
-                eyre!("Cannot delete device without active connection"),
+                eyre!("{}", t!("net.gateway.cannot-delete-without-connection")),
                 ErrorKind::InvalidRequest,
             ));
         }


@@ -120,7 +120,7 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
                 .with_metadata("sync_db", Value::Bool(true))
                 .with_inherited(|_, a| a)
                 .no_display()
-                .with_about("Add a public domain to this host")
+                .with_about("about.add-public-domain-to-host")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -129,7 +129,7 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
                 .with_metadata("sync_db", Value::Bool(true))
                 .with_inherited(|_, a| a)
                 .no_display()
-                .with_about("Remove a public domain from this host")
+                .with_about("about.remove-public-domain-from-host")
                 .with_call_remote::<CliContext>(),
         )
         .with_inherited(|_, a| a),
@@ -143,7 +143,7 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
                 .with_metadata("sync_db", Value::Bool(true))
                 .with_inherited(|_, a| a)
                 .no_display()
-                .with_about("Add a private domain to this host")
+                .with_about("about.add-private-domain-to-host")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -152,7 +152,7 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
                 .with_metadata("sync_db", Value::Bool(true))
                 .with_inherited(|_, a| a)
                 .no_display()
-                .with_about("Remove a private domain from this host")
+                .with_about("about.remove-private-domain-from-host")
                 .with_call_remote::<CliContext>(),
         )
         .with_inherited(|_, a| a),
@@ -168,7 +168,7 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
                 .with_metadata("sync_db", Value::Bool(true))
                 .with_inherited(|_, a| a)
                 .no_display()
-                .with_about("Add an address to this host")
+                .with_about("about.add-address-to-host")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -177,7 +177,7 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
                 .with_metadata("sync_db", Value::Bool(true))
                 .with_inherited(|_, a| a)
                 .no_display()
-                .with_about("Remove an address from this host")
+                .with_about("about.remove-address-from-host")
                 .with_call_remote::<CliContext>(),
         )
         .with_inherited(Kind::inheritance),
@@ -230,16 +230,18 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
                 Ok(())
             })
-            .with_about("List addresses for this host")
+            .with_about("about.list-addresses-for-host")
             .with_call_remote::<CliContext>(),
     )
 }

 #[derive(Deserialize, Serialize, Parser)]
 pub struct AddPublicDomainParams {
+    #[arg(help = "help.arg.fqdn")]
     pub fqdn: InternedString,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.acme-provider")]
     pub acme: Option<AcmeProvider>,
+    #[arg(help = "help.arg.gateway-id")]
     pub gateway: GatewayId,
 }
@@ -284,6 +286,7 @@ pub async fn add_public_domain<Kind: HostApiKind>(
 #[derive(Deserialize, Serialize, Parser)]
 pub struct RemoveDomainParams {
+    #[arg(help = "help.arg.fqdn")]
     pub fqdn: InternedString,
 }
@@ -307,6 +310,7 @@ pub async fn remove_public_domain<Kind: HostApiKind>(
 #[derive(Deserialize, Serialize, Parser)]
 pub struct AddPrivateDomainParams {
+    #[arg(help = "help.arg.fqdn")]
     pub fqdn: InternedString,
 }
@@ -349,6 +353,7 @@ pub async fn remove_private_domain<Kind: HostApiKind>(
 #[derive(Deserialize, Serialize, Parser)]
 pub struct OnionParams {
+    #[arg(help = "help.arg.onion-address")]
     pub onion: String,
 }


@@ -209,7 +209,7 @@ pub fn binding<C: Context, Kind: HostApiKind>()
                 Ok(())
             })
-            .with_about("List bindinges for this host")
+            .with_about("about.list-bindings-for-host")
             .with_call_remote::<CliContext>(),
         )
         .subcommand(
@@ -218,7 +218,7 @@ pub fn binding<C: Context, Kind: HostApiKind>()
                 .with_metadata("sync_db", Value::Bool(true))
                 .with_inherited(Kind::inheritance)
                 .no_display()
-                .with_about("Set whether this gateway should be enabled for this binding")
+                .with_about("about.set-gateway-enabled-for-binding")
                 .with_call_remote::<CliContext>(),
         )
 }
@@ -237,9 +237,11 @@ pub async fn list_bindings<Kind: HostApiKind>(
 #[serde(rename_all = "camelCase")]
 #[ts(export)]
 pub struct BindingGatewaySetEnabledParams {
+    #[arg(help = "help.arg.internal-port")]
     internal_port: u16,
+    #[arg(help = "help.arg.gateway-id")]
     gateway: GatewayId,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.binding-enabled")]
     enabled: Option<bool>,
 }


@@ -166,11 +166,13 @@ impl Model<Host> {
 #[derive(Deserialize, Serialize, Parser)]
 pub struct RequiresPackageId {
+    #[arg(help = "help.arg.package-id")]
     package: PackageId,
 }

 #[derive(Deserialize, Serialize, Parser)]
 pub struct RequiresHostId {
+    #[arg(help = "help.arg.host-id")]
     host: HostId,
 }
@@ -243,7 +245,7 @@ pub fn host_api<C: Context>() -> ParentHandler<C, RequiresPackageId> {
                 }
                 Ok(())
             })
-            .with_about("List host IDs available for this service"),
+            .with_about("about.list-host-ids-for-service"),
     )
     .subcommand(
         "address",


@@ -23,32 +23,29 @@ pub mod wifi;
 pub fn net_api<C: Context>() -> ParentHandler<C> {
     ParentHandler::new()
-        .subcommand(
-            "tor",
-            tor::tor_api::<C>().with_about("Tor commands such as list-services, logs, and reset"),
-        )
+        .subcommand("tor", tor::tor_api::<C>().with_about("about.tor-commands"))
         .subcommand(
             "acme",
-            acme::acme_api::<C>().with_about("Setup automatic clearnet certificate acquisition"),
+            acme::acme_api::<C>().with_about("about.setup-acme-certificate"),
         )
         .subcommand(
             "dns",
-            dns::dns_api::<C>().with_about("Manage and query DNS"),
+            dns::dns_api::<C>().with_about("about.manage-query-dns"),
         )
         .subcommand(
             "forward",
-            forward::forward_api::<C>().with_about("Manage port forwards"),
+            forward::forward_api::<C>().with_about("about.manage-port-forwards"),
         )
         .subcommand(
             "gateway",
-            gateway::gateway_api::<C>().with_about("View and edit gateway configurations"),
+            gateway::gateway_api::<C>().with_about("about.view-edit-gateway-configs"),
         )
         .subcommand(
             "tunnel",
-            tunnel::tunnel_api::<C>().with_about("Manage tunnels"),
+            tunnel::tunnel_api::<C>().with_about("about.manage-tunnels"),
         )
         .subcommand(
             "vhost",
-            vhost::vhost_api::<C>().with_about("Manage ssl virtual host proxy"),
+            vhost::vhost_api::<C>().with_about("about.manage-ssl-vhost-proxy"),
         )
 }


@@ -170,7 +170,7 @@ impl FullchainCertData {
         ]
         .into_iter()
         .min()
-        .ok_or_else(|| Error::new(eyre!("unreachable"), ErrorKind::Unknown))
+        .ok_or_else(|| Error::new(eyre!("{}", t!("net.ssl.unreachable")), ErrorKind::Unknown))
     }
 }


@@ -30,7 +30,7 @@ use tokio::io::{AsyncRead, AsyncReadExt, AsyncSeekExt, BufReader};
 use tokio_util::io::ReaderStream;
 use url::Url;

-use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
+use crate::context::{DiagnosticContext, InitContext, RpcContext, SetupContext};
 use crate::hostname::Hostname;
 use crate::middleware::auth::Auth;
 use crate::middleware::auth::session::ValidSessionToken;
@@ -178,20 +178,6 @@ impl UiContext for SetupContext {
     }
 }

-pub static INSTALL_WIZARD_CELL: OnceLock<Dir<'static>> = OnceLock::new();
-impl UiContext for InstallContext {
-    fn ui_dir() -> &'static Dir<'static> {
-        INSTALL_WIZARD_CELL.get().unwrap_or(&EMPTY_DIR)
-    }
-    fn api() -> ParentHandler<Self> {
-        main_api()
-    }
-    fn middleware(server: Server<Self>) -> HttpServer<Self> {
-        server.middleware(Cors::new())
-    }
-}
-
 pub fn rpc_router<C: Context + Clone + AsRef<RpcContinuations>>(
     ctx: C,
     server: HttpServer<C>,


@@ -151,103 +151,109 @@ where
         cx: &mut std::task::Context<'_>,
     ) -> Poll<Result<(Self::Metadata, AcceptStream), Error>> {
         self.in_progress.mutate(|in_progress| {
-            loop {
-                if !in_progress.is_empty() {
-                    if let Poll::Ready(Some((handler, res))) = in_progress.poll_next_unpin(cx) {
-                        if let Some(res) = res.transpose() {
-                            self.tls_handler = handler;
-                            return Poll::Ready(res);
-                        }
-                        continue;
-                    }
-                }
-                let (metadata, stream) = ready!(self.accept.poll_accept(cx)?);
-                let mut tls_handler = self.tls_handler.clone();
-                let mut fut = async move {
-                    let res = async {
-                        let mut acceptor = LazyConfigAcceptor::new(
-                            Acceptor::default(),
-                            BackTrackingIO::new(stream),
-                        );
-                        let mut mid: tokio_rustls::StartHandshake<BackTrackingIO<AcceptStream>> =
-                            match (&mut acceptor).await {
-                                Ok(a) => a,
-                                Err(e) => {
-                                    let mut stream =
-                                        acceptor.take_io().or_not_found("acceptor io")?;
-                                    let (_, buf) = stream.rewind();
-                                    if std::str::from_utf8(buf)
-                                        .ok()
-                                        .and_then(|buf| {
-                                            buf.lines()
-                                                .map(|l| l.trim())
-                                                .filter(|l| !l.is_empty())
-                                                .next()
-                                        })
-                                        .map_or(false, |buf| {
-                                            regex::Regex::new("[A-Z]+ (.+) HTTP/1")
-                                                .unwrap()
-                                                .is_match(buf)
-                                        })
-                                    {
-                                        handle_http_on_https(stream).await.log_err();
-                                        return Ok(None);
-                                    } else {
-                                        return Err(e).with_kind(ErrorKind::Network);
-                                    }
-                                }
-                            };
-                        let hello = mid.client_hello();
-                        if let Some(cfg) = tls_handler.get_config(&hello, &metadata).await {
-                            let buffered = mid.io.stop_buffering();
-                            mid.io
-                                .write_all(&buffered)
-                                .await
-                                .with_kind(ErrorKind::Network)?;
-                            return Ok(match mid.into_stream(Arc::new(cfg)).await {
-                                Ok(stream) => {
-                                    let s = stream.get_ref().1;
-                                    Some((
-                                        TlsMetadata {
-                                            inner: metadata,
-                                            tls_info: TlsHandshakeInfo {
-                                                sni: s.server_name().map(InternedString::intern),
-                                                alpn: s
-                                                    .alpn_protocol()
-                                                    .map(|a| MaybeUtf8String(a.to_vec())),
-                                            },
-                                        },
-                                        Box::pin(stream) as AcceptStream,
-                                    ))
-                                }
-                                Err(e) => {
-                                    tracing::trace!("Error completing TLS handshake: {e}");
-                                    tracing::trace!("{e:?}");
-                                    None
-                                }
-                            });
-                        }
-                        Ok(None)
-                    }
-                    .await;
-                    (tls_handler, res)
-                }
-                .boxed();
-                match fut.poll_unpin(cx) {
-                    Poll::Pending => {
-                        in_progress.push(fut);
-                        return Poll::Pending;
-                    }
-                    Poll::Ready((handler, res)) => {
-                        if let Some(res) = res.transpose() {
-                            self.tls_handler = handler;
-                            return Poll::Ready(res);
-                        }
-                    }
-                };
-            }
+            // First, check if any in-progress handshakes have completed
+            if !in_progress.is_empty() {
+                if let Poll::Ready(Some((handler, res))) = in_progress.poll_next_unpin(cx) {
+                    if let Some(res) = res.transpose() {
+                        self.tls_handler = handler;
+                        return Poll::Ready(res);
+                    }
+                    // Connection was rejected (preprocess returned None).
+                    // Yield to the runtime to avoid busy-looping, but wake
+                    // immediately to continue processing.
+                    cx.waker().wake_by_ref();
+                    return Poll::Pending;
+                }
+            }
+            // Try to accept a new connection
+            let (metadata, stream) = ready!(self.accept.poll_accept(cx)?);
+            let mut tls_handler = self.tls_handler.clone();
+            let mut fut = async move {
+                let res = async {
+                    let mut acceptor =
+                        LazyConfigAcceptor::new(Acceptor::default(), BackTrackingIO::new(stream));
+                    let mut mid: tokio_rustls::StartHandshake<BackTrackingIO<AcceptStream>> =
+                        match (&mut acceptor).await {
+                            Ok(a) => a,
+                            Err(e) => {
+                                let mut stream = acceptor.take_io().or_not_found("acceptor io")?;
+                                let (_, buf) = stream.rewind();
+                                if std::str::from_utf8(buf)
+                                    .ok()
+                                    .and_then(|buf| {
+                                        buf.lines()
+                                            .map(|l| l.trim())
+                                            .filter(|l| !l.is_empty())
+                                            .next()
+                                    })
+                                    .map_or(false, |buf| {
+                                        regex::Regex::new("[A-Z]+ (.+) HTTP/1")
+                                            .unwrap()
+                                            .is_match(buf)
+                                    })
+                                {
+                                    handle_http_on_https(stream).await.log_err();
+                                    return Ok(None);
+                                } else {
+                                    return Err(e).with_kind(ErrorKind::Network);
+                                }
+                            }
+                        };
+                    let hello = mid.client_hello();
+                    if let Some(cfg) = tls_handler.get_config(&hello, &metadata).await {
+                        let buffered = mid.io.stop_buffering();
+                        mid.io
+                            .write_all(&buffered)
+                            .await
+                            .with_kind(ErrorKind::Network)?;
+                        return Ok(match mid.into_stream(Arc::new(cfg)).await {
+                            Ok(stream) => {
+                                let s = stream.get_ref().1;
+                                Some((
+                                    TlsMetadata {
+                                        inner: metadata,
+                                        tls_info: TlsHandshakeInfo {
+                                            sni: s.server_name().map(InternedString::intern),
+                                            alpn: s
+                                                .alpn_protocol()
+                                                .map(|a| MaybeUtf8String(a.to_vec())),
+                                        },
+                                    },
+                                    Box::pin(stream) as AcceptStream,
+                                ))
+                            }
+                            Err(e) => {
+                                tracing::trace!("Error completing TLS handshake: {e}");
+                                tracing::trace!("{e:?}");
+                                None
+                            }
+                        });
+                    }
+                    Ok(None)
+                }
+                .await;
+                (tls_handler, res)
+            }
+            .boxed();
+            match fut.poll_unpin(cx) {
+                Poll::Pending => {
+                    in_progress.push(fut);
+                    Poll::Pending
+                }
+                Poll::Ready((handler, res)) => {
+                    if let Some(res) = res.transpose() {
+                        self.tls_handler = handler;
+                        return Poll::Ready(res);
+                    }
+                    // Connection was rejected (preprocess returned None).
+                    // Yield to the runtime to avoid busy-looping, but wake
+                    // immediately to continue processing.
+                    cx.waker().wake_by_ref();
+                    Poll::Pending
+                }
+            }
         })
     }

Some files were not shown because too many files have changed in this diff.