Compare commits


33 Commits

Author SHA1 Message Date
Matt Hill
5ba68a3124 translations 2026-02-16 00:34:41 -07:00
Matt Hill
bb68c3b91c update binding for API types, add ARCHITECTURE 2026-02-16 00:05:24 -07:00
Aiden McClelland
3518eccc87 feat: add port_forwards field to Host for tracking gateway forwarding rules 2026-02-14 16:40:21 -07:00
Matt Hill
2f19188dae looking good 2026-02-14 16:37:04 -07:00
Aiden McClelland
3a63f3b840 feat: add mdns hostname metadata variant and fix vhost routing
- Add HostnameMetadata::Mdns variant to distinguish mDNS from private domains
- Mark mDNS addresses as private (public: false) since mDNS is local-only
- Fall back to null SNI entry when hostname not found in vhost mapping
- Simplify public detection in ProxyTarget filter
- Pass hostname to update_addresses for mDNS domain name generation
2026-02-14 15:34:48 -07:00
Matt Hill
098d9275f4 new service interface page 2026-02-14 12:24:16 -07:00
Matt Hill
d5c74bc22e re-arrange (#3123) 2026-02-14 08:15:50 -07:00
Aiden McClelland
49d4da03ca feat: refactor NetService to watch DB and reconcile network state
- NetService sync task now uses PatchDB DbWatch instead of being called
  directly after DB mutations
- Read gateways from DB instead of network interface context when
  updating host addresses
- gateway sync updates all host addresses in the DB
- Add Watch<u64> channel for callers to wait on sync completion
- Fix ts-rs codegen bug with #[ts(skip)] on flattened Plugin field
- Update SDK getServiceInterface.ts for new HostnameInfo shape
- Remove unnecessary HTTPS redirect in static_server.rs
- Fix tunnel/api.rs to filter for WAN IPv4 address
2026-02-13 16:21:57 -07:00
Aiden McClelland
3765465618 chore: update ts bindings for preferred port design 2026-02-13 14:23:48 -07:00
Aiden McClelland
61f820d09e Merge branch 'feat/preferred-port-design' of github.com:Start9Labs/start-os into feat/preferred-port-design 2026-02-13 13:39:25 -07:00
Aiden McClelland
db7f3341ac wip refactor 2026-02-12 14:51:33 -07:00
Matt Hill
4decf9335c fix license display in marketplace 2026-02-12 13:07:19 -07:00
Matt Hill
339e5f799a build ts types and fix i18n 2026-02-12 11:32:29 -07:00
Aiden McClelland
89d3e0cf35 Merge branch 'feat/preferred-port-design' of github.com:Start9Labs/start-os into feat/preferred-port-design 2026-02-12 10:51:32 -07:00
Aiden McClelland
638ed27599 feat: replace SourceFilter with IpNet, add policy routing, remove MASQUERADE 2026-02-12 10:51:26 -07:00
Matt Hill
da75b8498e Merge branch 'next/major' of github.com:Start9Labs/start-os into feat/preferred-port-design 2026-02-12 08:28:36 -07:00
Matt Hill
8ef4ecf5ac outbound gateway support (#3120)
* Multiple (#3111)

* fix alerts i18n, fix status display, better, remove usb media, hide shutdown for install complete

* trigger change detection for localize pipe and round out implementing localize pipe for consistency even though not needed

* Fix PackageInfoShort to handle LocaleString on releaseNotes (#3112)

* Fix PackageInfoShort to handle LocaleString on releaseNotes

* fix: filter by target_version in get_matching_models and pass otherVersions from install

* chore: add exver documentation for ai agents

* frontend plus some be types

---------

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2026-02-12 08:27:09 -07:00
Matt Hill
0260c1532d fix ssh, undeprecate wifi (#3121) 2026-02-12 08:10:01 -07:00
Aiden McClelland
2a54625f43 feat: replace InterfaceFilter with ForwardRequirements, add WildcardListener, complete alpha.20 bump
- Replace DynInterfaceFilter with ForwardRequirements for per-IP forward
  precision with source-subnet iptables filtering for private forwards
- Add WildcardListener (binds [::]:port) to replace the per-gateway
  NetworkInterfaceListener/SelfContainedNetworkInterfaceListener/
  UpgradableListener infrastructure
- Update forward-port script with src_subnet and excluded_src env vars
- Remove unused filter types and listener infrastructure from gateway.rs
- Add availablePorts migration (IdPool -> BTreeMap<u16, bool>) to alpha.20
- Complete version bump to 0.4.0-alpha.20 in SDK and web
2026-02-11 18:10:27 -07:00
Aiden McClelland
4e638fb58e feat: implement preferred port allocation and per-address enable/disable
- Add AvailablePorts::try_alloc() with SSL tracking (BTreeMap<u16, bool>)
- Add DerivedAddressInfo on BindInfo with private_disabled/public_enabled/possible sets
- Add Bindings wrapper with Map impl for patchdb indexed access
- Flatten HostAddress from single-variant enum to struct
- Replace set-gateway-enabled RPC with set-address-enabled
- Remove hostname_info from Host; computed addresses now in BindInfo.addresses.possible
- Compute possible addresses inline in NetServiceData::update()
- Update DB migration, SDK types, frontend, and container-runtime
2026-02-10 17:38:51 -07:00
Aiden McClelland
73274ef6e0 docs: update TODO.md with DerivedAddressInfo design, remove completed tor task 2026-02-10 14:45:50 -07:00
Aiden McClelland
e1915bf497 chore: format RPCSpec.md markdown table 2026-02-10 13:38:40 -07:00
Aiden McClelland
8204074bdf chore: flatten HostnameInfo from enum to struct
HostnameInfo only had one variant (Ip) after removing Tor. Flatten it
into a plain struct with fields gateway, public, hostname. Remove all
kind === 'ip' type guards and narrowing across SDK, frontend, and
container runtime. Update DB migration to strip the kind field.
2026-02-10 13:38:12 -07:00
Aiden McClelland
2ee403e7de chore: remove tor from startos core
Tor is being moved from a built-in OS feature to a service. This removes
the Arti-based Tor client, onion address management, hidden service
creation, and all related code from the core backend, frontend, and SDK.

- Delete core/src/net/tor/ module (~2060 lines)
- Remove OnionAddress, TorSecretKey, TorController from all consumers
- Remove HostnameInfo::Onion and HostAddress::Onion variants
- Remove onion CRUD RPC endpoints and tor subcommand
- Remove tor key handling from account and backup/restore
- Remove ~12 tor-related Cargo dependencies (arti-client, torut, etc.)
- Remove tor UI components, API methods, mock data, and routes
- Remove OnionHostname and tor patterns/regexes from SDK
- Add v0_4_0_alpha_20 database migration to strip onion data
- Bump version to 0.4.0-alpha.20
2026-02-10 13:28:24 -07:00
Aiden McClelland
1974dfd66f docs: move address enable/disable to overflow menu, add SSL indicator, defer UI placement decisions 2026-02-09 13:29:49 -07:00
Aiden McClelland
2e03a95e47 docs: overhaul interfaces page design with view/manage split and per-address controls 2026-02-09 13:10:57 -07:00
Aiden McClelland
b6262c8e13 Fix PackageInfoShort to handle LocaleString on releaseNotes (#3112)
* Fix PackageInfoShort to handle LocaleString on releaseNotes

* fix: filter by target_version in get_matching_models and pass otherVersions from install

* chore: add exver documentation for ai agents
2026-02-09 19:42:03 +00:00
Matt Hill
ba740a9ee2 Multiple (#3111)
* fix alerts i18n, fix status display, better, remove usb media, hide shutdown for install complete

* trigger change detection for localize pipe and round out implementing localize pipe for consistency even though not needed
2026-02-09 12:41:29 -07:00
Aiden McClelland
8f809dab21 docs: add user-controlled public/private and port forward mapping to design 2026-02-08 11:17:43 -07:00
Aiden McClelland
c0b2cbe1c8 docs: update preferred external port design in TODO 2026-02-06 09:30:35 -07:00
Aiden McClelland
f2142f0bb3 add documentation for ai agents (#3115)
* add documentation for ai agents

* docs: consolidate CLAUDE.md and CONTRIBUTING.md, add style guidelines

- Refactor CLAUDE.md to reference CONTRIBUTING.md for build/test/format info
- Expand CONTRIBUTING.md with comprehensive build targets, env vars, and testing
- Add code style guidelines section with conventional commits
- Standardize SDK prettier config to use single quotes (matching web)
- Add project-level Claude Code settings to disable co-author attribution

* style(sdk): apply prettier with single quotes

Run prettier across sdk/base and sdk/package to apply the
standardized quote style (single quotes matching web).

* docs: add USER.md for per-developer TODO filtering

- Add agents/USER.md to .gitignore (contains user identifier)
- Document session startup flow in CLAUDE.md:
  - Create USER.md if missing, prompting for identifier
  - Filter TODOs by @username tags
  - Offer relevant TODOs on session start

* docs: add i18n documentation task to agent TODOs

* docs: document i18n ID patterns in core/

Add agents/i18n-patterns.md covering rust-i18n setup, translation file
format, t!() macro usage, key naming conventions, and locale selection.
Remove completed TODO item and add reference in CLAUDE.md.

* chore: clarify that all builds work on any OS with Docker
2026-02-06 00:10:16 +01:00
gStart9
86ca23c093 Remove redundant https:// strings in start-tunnel installation output (#3114) 2026-02-05 23:22:31 +01:00
Dominion5254
463b6ca4ef propagate Error info (#3116) 2026-02-05 23:21:28 +01:00
569 changed files with 11995 additions and 16129 deletions

1
.claude/settings.json Normal file
View File

@@ -0,0 +1 @@
{}

4
.gitignore vendored
View File

@@ -19,4 +19,6 @@ secrets.db
/compiled.tar
/compiled-*.tar
/build/lib/firmware
tmp
web/.i18n-checked
docs/USER.md

101
ARCHITECTURE.md Normal file
View File

@@ -0,0 +1,101 @@
# Architecture
StartOS is an open-source Linux distribution for running personal servers. It manages discovery, installation, network configuration, backups, and health monitoring of self-hosted services.
## Tech Stack
- Backend: Rust (async/Tokio, Axum web framework)
- Frontend: Angular 20 + TypeScript + TaigaUI
- Container runtime: Node.js/TypeScript with LXC
- Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
- API: JSON-RPC via rpc-toolkit (see `core/rpc-toolkit.md`)
- Auth: Password + session cookie, public/private key signatures, local authcookie (see `core/src/middleware/auth/`)
## Project Structure
```bash
/
├── assets/ # Screenshots for README
├── build/ # Auxiliary files and scripts for deployed images
├── container-runtime/ # Node.js program managing package containers
├── core/ # Rust backend: API, daemon (startd), CLI (start-cli)
├── debian/ # Debian package maintainer scripts
├── image-recipe/ # Scripts for building StartOS images
├── patch-db/ # (submodule) Diff-based data store for frontend sync
├── sdk/ # TypeScript SDK for building StartOS packages
└── web/ # Web UIs (Angular)
```
## Components
- **`core/`** — Rust backend daemon. Produces a single binary `startbox` that is symlinked as `startd` (main daemon), `start-cli` (CLI), `start-container` (runs inside LXC containers), `registrybox` (package registry), and `tunnelbox` (VPN/tunnel). Handles all backend logic: RPC API, service lifecycle, networking (DNS, ACME, WiFi, Tor, WireGuard), backups, and database state management. See [core/ARCHITECTURE.md](core/ARCHITECTURE.md).
- **`web/`** — Angular 20 + TypeScript workspace using Taiga UI. Contains three applications (admin UI, setup wizard, VPN management) and two shared libraries (common components/services, marketplace). Communicates with the backend exclusively via JSON-RPC. See [web/ARCHITECTURE.md](web/ARCHITECTURE.md).
- **`container-runtime/`** — Node.js runtime that runs inside each service's LXC container. Loads the service's JavaScript from its S9PK package and manages subcontainers. Communicates with the host daemon via JSON-RPC over Unix socket. See [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md).
- **`sdk/`** — TypeScript SDK for packaging services for StartOS (`@start9labs/start-sdk`). Split into `base/` (core types, ABI definitions, effects interface, consumed by web as `@start9labs/start-sdk-base`) and `package/` (full SDK for service developers, consumed by container-runtime as `@start9labs/start-sdk`).
- **`patch-db/`** — Git submodule providing diff-based state synchronization. Uses CBOR encoding. Backend mutations produce diffs that are pushed to the frontend via WebSocket, enabling reactive UI updates without polling. See [patch-db repo](https://github.com/Start9Labs/patch-db).
## Build Pipeline
Components have a strict dependency chain. Changes flow in one direction:
```
Rust (core/)
→ cargo test exports ts-rs types to core/bindings/
→ rsync copies to sdk/base/lib/osBindings/
→ SDK build produces baseDist/ and dist/
→ web/ consumes baseDist/ (via @start9labs/start-sdk-base)
→ container-runtime/ consumes dist/ (via @start9labs/start-sdk)
```
Key make targets along this chain:
| Step | Command | What it does |
|---|---|---|
| 1 | `cargo check -p start-os` | Verify Rust compiles |
| 2 | `make ts-bindings` | Export ts-rs types → rsync to SDK |
| 3 | `cd sdk && make baseDist dist` | Build SDK packages |
| 4 | `cd web && npm run check` | Type-check Angular projects |
| 5 | `cd container-runtime && npm run check` | Type-check runtime |
**Important**: Editing `sdk/base/lib/osBindings/*.ts` alone is NOT sufficient — you must rebuild the SDK bundle (step 3) before web/container-runtime can see the changes.
## Cross-Layer Verification
When making changes across multiple layers (Rust, SDK, web, container-runtime), verify in this order:
1. **Rust**: `cargo check -p start-os` — verifies core compiles
2. **TS bindings**: `make ts-bindings` — regenerates TypeScript types from Rust `#[ts(export)]` structs
- Runs `./core/build/build-ts.sh` to export ts-rs types to `core/bindings/`
- Syncs `core/bindings/` → `sdk/base/lib/osBindings/` via rsync
- If you manually edit files in `sdk/base/lib/osBindings/`, you must still rebuild the SDK (step 3)
3. **SDK bundle**: `cd sdk && make baseDist dist` — compiles SDK source into packages
- `baseDist/` is consumed by `/web` (via `@start9labs/start-sdk-base`)
- `dist/` is consumed by `/container-runtime` (via `@start9labs/start-sdk`)
- Web and container-runtime reference the **built** SDK, not source files
4. **Web type check**: `cd web && npm run check` — type-checks all Angular projects
5. **Container runtime type check**: `cd container-runtime && npm run check` — type-checks the runtime
## Data Flow: Backend to Frontend
StartOS uses Patch-DB for reactive state synchronization:
1. The backend mutates state via `db.mutate()`, producing CBOR diffs
2. Diffs are pushed to the frontend over a persistent WebSocket connection
3. The frontend applies diffs to its local state copy and notifies observers
4. Components watch specific database paths via `PatchDB.watch$()`, receiving updates reactively
This means the UI is always eventually consistent with the backend — after any mutating API call, the frontend waits for the corresponding PatchDB diff before resolving, so the UI reflects the result immediately.
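For orientation, a minimal sketch of step 1, assuming a `ctx` handle to the database; `db.mutate()` is named above, but this particular accessor chain and field are hypothetical:
```rust
// Illustrative only: db.mutate() is real (see step 1 above), but the
// accessor chain and the hostname field are hypothetical examples.
ctx.db
    .mutate(|db| {
        db.as_public_mut()
            .as_server_info_mut()
            .as_hostname_mut()
            .ser(&"my-server".to_string())
    })
    .await?;
// patch-db derives a CBOR diff for exactly this change and pushes it over
// the WebSocket (steps 2-3); any frontend component watching this path via
// PatchDB.watch$() is then notified (step 4).
```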
## Further Reading
- [core/ARCHITECTURE.md](core/ARCHITECTURE.md) — Rust backend architecture
- [web/ARCHITECTURE.md](web/ARCHITECTURE.md) — Angular frontend architecture
- [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md) — Container runtime details
- [core/rpc-toolkit.md](core/rpc-toolkit.md) — JSON-RPC handler patterns
- [core/s9pk-structure.md](core/s9pk-structure.md) — S9PK package format
- [docs/exver.md](docs/exver.md) — Extended versioning format
- [docs/VERSION_BUMP.md](docs/VERSION_BUMP.md) — Version bumping guide

55
CLAUDE.md Normal file
View File

@@ -0,0 +1,55 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Architecture
See [ARCHITECTURE.md](ARCHITECTURE.md) for the full system architecture, component map, build pipeline, and cross-layer verification order.
Each major component has its own `CLAUDE.md` with detailed guidance: `core/`, `web/`, `container-runtime/`, `sdk/`.
## Build & Development
See [CONTRIBUTING.md](CONTRIBUTING.md) for:
- Environment setup and requirements
- Build commands and make targets
- Testing and formatting commands
- Environment variables
**Quick reference:**
```bash
. ./devmode.sh # Enable dev mode
make update-startbox REMOTE=start9@<ip> # Fastest iteration (binary + UI)
make test-core # Run Rust tests
```
## Operating Rules
- Always verify cross-layer changes using the order described in [ARCHITECTURE.md](ARCHITECTURE.md#cross-layer-verification)
- Check component-level CLAUDE.md files for component-specific conventions
- Follow existing patterns before inventing new ones
## Supplementary Documentation
The `docs/` directory contains cross-cutting documentation for AI assistants:
- `TODO.md` - Pending tasks for AI agents (check this first, remove items when completed)
- `USER.md` - Current user identifier (gitignored, see below)
- `exver.md` - Extended versioning format (used across core, sdk, and web)
- `VERSION_BUMP.md` - Guide for bumping the StartOS version across the codebase
Component-specific docs live alongside their code (e.g., `core/rpc-toolkit.md`, `core/i18n-patterns.md`).
### Session Startup
On startup:
1. **Check for `docs/USER.md`** - If it doesn't exist, prompt the user for their name/identifier and create it. This file is gitignored since it varies per developer.
2. **Check `docs/TODO.md` for relevant tasks** - Show TODOs that either:
- Have no `@username` tag (relevant to everyone)
- Are tagged with the current user's identifier
Skip TODOs tagged with a different user.
3. **Ask "What would you like to do today?"** - Offer options for each relevant TODO item, plus "Something else" for other requests.

View File

@@ -1,133 +1,240 @@
# Contributing to StartOS
This guide is for contributing to the StartOS. If you are interested in packaging a service for StartOS, visit the [service packaging guide](https://docs.start9.com/latest/packaging-guide/). If you are interested in promoting, providing technical support, creating tutorials, or helping in other ways, please visit the [Start9 website](https://start9.com/contribute).
This guide is for contributing to StartOS. If you are interested in packaging a service for StartOS, visit the [service packaging guide](https://github.com/Start9Labs/ai-service-packaging). If you are interested in promoting, providing technical support, creating tutorials, or helping in other ways, please visit the [Start9 website](https://start9.com/contribute).
## Collaboration
- [Matrix](https://matrix.to/#/#community-dev:matrix.start9labs.com)
- [Telegram](https://t.me/start9_labs/47471)
- [Matrix](https://matrix.to/#/#dev-startos:matrix.start9labs.com)
## Project Structure
```bash
/
├── assets/
├── container-runtime/
├── core/
├── build/
├── debian/
├── web/
├── image-recipe/
├── patch-db
└── sdk/
```
#### assets
screenshots for the StartOS README
#### container-runtime
A NodeJS program that dynamically loads maintainer scripts and communicates with the OS to manage packages
#### core
An API, daemon (startd), and CLI (start-cli) that together provide the core functionality of StartOS.
#### build
Auxiliary files and scripts to include in deployed StartOS images
#### debian
Maintainer scripts for the StartOS Debian package
#### web
Web UIs served under various conditions and used to interact with StartOS APIs.
#### image-recipe
Scripts for building StartOS images
#### patch-db (submodule)
A diff based data store used to synchronize data between the web interfaces and server.
#### sdk
A typescript sdk for building start-os packages
For project structure and system architecture, see [ARCHITECTURE.md](ARCHITECTURE.md).
## Environment Setup
#### Clone the StartOS repository
### Installing Dependencies (Debian/Ubuntu)
> Debian/Ubuntu is the only officially supported build environment.
> MacOS has limited build capabilities and Windows requires [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install).
```sh
git clone https://github.com/Start9Labs/start-os.git --recurse-submodules
sudo apt update
sudo apt install -y ca-certificates curl gpg build-essential
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg-architecture -q DEB_HOST_ARCH) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update
sudo apt install -y sed grep gawk jq gzip brotli containerd.io docker-ce docker-ce-cli docker-compose-plugin qemu-user-static binfmt-support squashfs-tools git debspawn rsync b3sum
sudo mkdir -p /etc/debspawn/
echo "AllowUnsafePermissions=true" | sudo tee /etc/debspawn/global.toml
sudo usermod -aG docker $USER
sudo su $USER
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --use
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh # proceed with default installation
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
source ~/.bashrc
nvm install 24
nvm use 24
nvm alias default 24 # this prevents your machine from reverting back to another version
```
### Cloning the Repository
```sh
git clone --recursive https://github.com/Start9Labs/start-os.git --branch next/major
cd start-os
```
#### Continue to your project of interest for additional instructions:
### Development Mode
- [`core`](core/README.md)
- [`web-interfaces`](web-interfaces/README.md)
- [`build`](build/README.md)
- [`patch-db`](https://github.com/Start9Labs/patch-db)
For faster iteration during development:
```sh
. ./devmode.sh
```
This sets `ENVIRONMENT=dev` and `GIT_BRANCH_AS_HASH=1` to prevent rebuilds on every commit.
## Building
This project uses [GNU Make](https://www.gnu.org/software/make/) to build its components. To build any specific component, simply run `make <TARGET>` replacing `<TARGET>` with the name of the target you'd like to build
All builds can be performed on any operating system that can run Docker.
This project uses [GNU Make](https://www.gnu.org/software/make/) to build its components.
### Requirements
- [GNU Make](https://www.gnu.org/software/make/)
- [Docker](https://docs.docker.com/get-docker/)
- [Docker](https://docs.docker.com/get-docker/) or [Podman](https://podman.io/)
- [NodeJS v20.16.0](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
- [sed](https://www.gnu.org/software/sed/)
- [grep](https://www.gnu.org/software/grep/)
- [awk](https://www.gnu.org/software/gawk/)
- [Rust](https://rustup.rs/) (nightly for formatting)
- [sed](https://www.gnu.org/software/sed/), [grep](https://www.gnu.org/software/grep/), [awk](https://www.gnu.org/software/gawk/)
- [jq](https://jqlang.github.io/jq/)
- [gzip](https://www.gnu.org/software/gzip/)
- [brotli](https://github.com/google/brotli)
- [gzip](https://www.gnu.org/software/gzip/), [brotli](https://github.com/google/brotli)
### Environment variables
### Environment Variables
- `PLATFORM`: which platform you would like to build for. Must be one of `x86_64`, `x86_64-nonfree`, `aarch64`, `aarch64-nonfree`, `raspberrypi`
- NOTE: `nonfree` images are for including `nonfree` firmware packages in the built ISO
- `ENVIRONMENT`: a hyphen separated set of feature flags to enable
- `dev`: enables password ssh (INSECURE!) and does not compress frontends
- `unstable`: enables assertions that will cause errors on unexpected inconsistencies that are undesirable in production use either for performance or reliability reasons
- `docker`: use `docker` instead of `podman`
- `GIT_BRANCH_AS_HASH`: set to `1` to use the current git branch name as the git hash so that the project does not need to be rebuilt on each commit
| Variable | Description |
| -------------------- | --------------------------------------------------------------------------------------------------- |
| `PLATFORM` | Target platform: `x86_64`, `x86_64-nonfree`, `aarch64`, `aarch64-nonfree`, `riscv64`, `raspberrypi` |
| `ENVIRONMENT` | Hyphen-separated feature flags (see below) |
| `PROFILE` | Build profile: `release` (default) or `dev` |
| `GIT_BRANCH_AS_HASH` | Set to `1` to use git branch name as version hash (avoids rebuilds) |
### Useful Make Targets
**ENVIRONMENT flags:**
- `iso`: Create a full `.iso` image
- Only possible from Debian
- Not available for `PLATFORM=raspberrypi`
- Additional Requirements:
- [debspawn](https://github.com/lkhq/debspawn)
- `img`: Create a full `.img` image
- Only possible from Debian
- Only available for `PLATFORM=raspberrypi`
- Additional Requirements:
- [debspawn](https://github.com/lkhq/debspawn)
- `format`: Run automatic code formatting for the project
- Additional Requirements:
- [rust](https://rustup.rs/)
- `test`: Run automated tests for the project
- Additional Requirements:
- [rust](https://rustup.rs/)
- `update`: Deploy the current working project to a device over ssh as if through an over-the-air update
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- `reflash`: Deploy the current working project to a device over ssh as if using a live `iso` image to reflash it
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- `update-overlay`: Deploy the current working project to a device over ssh to the in-memory overlay without restarting it
- WARNING: changes will be reverted after the device is rebooted
- WARNING: changes to `init` will not take effect as the device is already initialized
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- `wormhole`: Deploy the `startbox` to a device using [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)
- When the build is complete, it will emit a command to paste into the shell of the device to upgrade it
- Additional Requirements:
- [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)
- `clean`: Delete all compiled artifacts
- `dev` - Enables password SSH before setup, skips frontend compression
- `unstable` - Enables assertions and debugging with performance penalty
- `console` - Enables tokio-console for async debugging
**Platform notes:**
- `-nonfree` variants include proprietary firmware and drivers
- `raspberrypi` includes non-free components by necessity
- Platform is remembered between builds if not specified
### Make Targets
#### Building
| Target | Description |
| ------------- | ---------------------------------------------- |
| `iso` | Create full `.iso` image (not for raspberrypi) |
| `img` | Create full `.img` image (raspberrypi only) |
| `deb` | Build Debian package |
| `all` | Build all Rust binaries |
| `uis` | Build all web UIs |
| `ui` | Build main UI only |
| `ts-bindings` | Generate TypeScript bindings from Rust types |
#### Deploying to Device
For devices on the same network:
| Target | Description |
| ------------------------------------ | ----------------------------------------------- |
| `update-startbox REMOTE=start9@<ip>` | Deploy binary + UI only (fastest) |
| `update-deb REMOTE=start9@<ip>` | Deploy full Debian package |
| `update REMOTE=start9@<ip>` | OTA-style update |
| `reflash REMOTE=start9@<ip>` | Reflash as if using live ISO |
| `update-overlay REMOTE=start9@<ip>` | Deploy to in-memory overlay (reverts on reboot) |
For devices on different networks (uses [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)):
| Target | Description |
| ------------------- | -------------------- |
| `wormhole` | Send startbox binary |
| `wormhole-deb` | Send Debian package |
| `wormhole-squashfs` | Send squashfs image |
### Creating a VM
Install virt-manager:
```sh
sudo apt update
sudo apt install -y virt-manager
sudo usermod -aG libvirt $USER
sudo su $USER
virt-manager
```
Follow the screenshot walkthrough in [`assets/create-vm/`](assets/create-vm/) to create a new virtual machine. Key steps:
1. Create a new virtual machine
2. Browse for the ISO — create a storage pool pointing to your `results/` directory
3. Select "Generic or unknown OS"
4. Set memory and CPUs
5. Create a disk and name the VM
Build an ISO first:
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make iso
```
#### Other
| Target | Description |
| ------------------------ | ------------------------------------------- |
| `format` | Run code formatting (Rust nightly required) |
| `test` | Run all automated tests |
| `test-core` | Run Rust tests |
| `test-sdk` | Run SDK tests |
| `test-container-runtime` | Run container runtime tests |
| `clean` | Delete all compiled artifacts |
## Testing
```bash
make test # All tests
make test-core # Rust tests (via ./core/run-tests.sh)
make test-sdk # SDK tests
make test-container-runtime # Container runtime tests
# Run specific Rust test
cd core && cargo test <test_name> --features=test
```
## Code Formatting
```bash
# Rust (requires nightly)
make format
# TypeScript/HTML/SCSS (web)
cd web && npm run format
```
## Code Style Guidelines
### Formatting
Run the formatters before committing. Configuration is handled by `rustfmt.toml` (Rust) and prettier configs (TypeScript).
### Documentation & Comments
**Rust:**
- Add doc comments (`///`) to public APIs, structs, and non-obvious functions
- Use `//` comments sparingly for complex logic that isn't self-evident
- Prefer self-documenting code (clear naming, small functions) over comments
**TypeScript:**
- Document exported functions and complex types with JSDoc
- Keep comments focused on "why" rather than "what"
**General:**
- Don't add comments that just restate the code
- Update or remove comments when code changes
- TODOs should include context: `// TODO(username): reason`
### Commit Messages
Use [Conventional Commits](https://www.conventionalcommits.org/):
```
<type>(<scope>): <description>
[optional body]
[optional footer]
```
**Types:**
- `feat` - New feature
- `fix` - Bug fix
- `docs` - Documentation only
- `style` - Formatting, no code change
- `refactor` - Code change that neither fixes a bug nor adds a feature
- `test` - Adding or updating tests
- `chore` - Build process, dependencies, etc.
**Examples:**
```
feat(web): add dark mode toggle
fix(core): resolve race condition in service startup
docs: update CONTRIBUTING.md with style guidelines
refactor(sdk): simplify package validation logic
```

View File

@@ -1,134 +0,0 @@
# Setting up your development environment on Debian/Ubuntu
A step-by-step guide
> This is the only officially supported build environment.
> MacOS has limited build capabilities and Windows requires [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install)
## Installing dependencies
Run the following commands one at a time
```sh
sudo apt update
sudo apt install -y ca-certificates curl gpg build-essential
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg-architecture -q DEB_HOST_ARCH) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update
sudo apt install -y sed grep gawk jq gzip brotli containerd.io docker-ce docker-ce-cli docker-compose-plugin qemu-user-static binfmt-support squashfs-tools git debspawn rsync b3sum
sudo mkdir -p /etc/debspawn/
echo "AllowUnsafePermissions=true" | sudo tee /etc/debspawn/global.toml
sudo usermod -aG docker $USER
sudo su $USER
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --use
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh # proceed with default installation
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
source ~/.bashrc
nvm install 24
nvm use 24
nvm alias default 24 # this prevents your machine from reverting back to another version
```
## Cloning the repository
```sh
git clone --recursive https://github.com/Start9Labs/start-os.git --branch next/major
cd start-os
```
## Building an ISO
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make iso
```
This will build an ISO for your current architecture. If you are building to run on an architecture other than the one you are currently on, replace `$(uname -m)` with the correct platform for the device (one of `aarch64`, `aarch64-nonfree`, `x86_64`, `x86_64-nonfree`, `raspberrypi`)
## Creating a VM
### Install virt-manager
```sh
sudo apt update
sudo apt install -y virt-manager
sudo usermod -aG libvirt $USER
sudo su $USER
```
### Launch virt-manager
```sh
virt-manager
```
### Create new virtual machine
![Select "Create a new virtual machine"](assets/create-vm/step-1.png)
![Click "Forward"](assets/create-vm/step-2.png)
![Click "Browse"](assets/create-vm/step-3.png)
![Click "+"](assets/create-vm/step-4.png)
#### make sure to set "Target Path" to the path to your results directory in start-os
![Create storage pool](assets/create-vm/step-5.png)
![Select storage pool](assets/create-vm/step-6.png)
![Select ISO](assets/create-vm/step-7.png)
![Select "Generic or unknown OS" and click "Forward"](assets/create-vm/step-8.png)
![Set Memory and CPUs](assets/create-vm/step-9.png)
![Create disk](assets/create-vm/step-10.png)
![Name VM](assets/create-vm/step-11.png)
![Create network](assets/create-vm/step-12.png)
## Updating a VM
The fastest way to update a VM to your latest code depends on what you changed:
### UI or startd:
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make update-startbox REMOTE=start9@<VM IP>
```
### Container runtime or debian dependencies:
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make update-deb REMOTE=start9@<VM IP>
```
### Image recipe:
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make update-squashfs REMOTE=start9@<VM IP>
```
---
If the device you are building for is not available via ssh, it is also possible to use `magic-wormhole` to send the relevant files.
### Prerequisites:
```sh
sudo apt update
sudo apt install -y magic-wormhole
```
As before, the fastest way to update a VM to your latest code depends on what you changed. Each of the following commands will return a command to paste into the shell of the device you would like to upgrade.
### UI or startd:
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make wormhole
```
### Container runtime or debian dependencies:
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make wormhole-deb
```
### Image recipe:
```sh
PLATFORM=$(uname -m) ENVIRONMENT=dev make wormhole-squashfs
```

View File

@@ -236,9 +236,9 @@ update-startbox: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
update-deb: results/$(BASENAME).deb # better than update, but only available from debian
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(call mkdir,/media/startos/next/tmp/startos-deb)
$(call cp,results/$(BASENAME).deb,/media/startos/next/tmp/startos-deb/$(BASENAME).deb)
$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y --reinstall /tmp/startos-deb/$(BASENAME).deb"')
$(call mkdir,/media/startos/next/var/tmp/startos-deb)
$(call cp,results/$(BASENAME).deb,/media/startos/next/var/tmp/startos-deb/$(BASENAME).deb)
$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y --reinstall /var/tmp/startos-deb/$(BASENAME).deb"')
update-squashfs: results/$(BASENAME).squashfs
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi

View File

@@ -7,76 +7,64 @@
<a href="https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml">
<img src="https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml/badge.svg">
</a>
<a href="https://heyapollo.com/product/startos">
<a href="https://heyapollo.com/product/startos">
<img alt="Static Badge" src="https://img.shields.io/badge/apollo-review%20%E2%AD%90%E2%AD%90%E2%AD%90%E2%AD%90%E2%AD%90%20-slateblue">
</a>
<a href="https://twitter.com/start9labs">
<img alt="X (formerly Twitter) Follow" src="https://img.shields.io/twitter/follow/start9labs">
</a>
<a href="https://matrix.to/#/#community:matrix.start9labs.com">
<img alt="Static Badge" src="https://img.shields.io/badge/community-matrix-yellow?logo=matrix">
</a>
<a href="https://t.me/start9_labs">
<img alt="Static Badge" src="https://img.shields.io/badge/community-telegram-blue?logo=telegram">
</a>
<a href="https://docs.start9.com">
<img alt="Static Badge" src="https://img.shields.io/badge/docs-orange?label=%F0%9F%91%A4%20support">
</a>
<a href="https://matrix.to/#/#community-dev:matrix.start9labs.com">
<a href="https://matrix.to/#/#dev-startos:matrix.start9labs.com">
<img alt="Static Badge" src="https://img.shields.io/badge/developer-matrix-darkcyan?logo=matrix">
</a>
<a href="https://start9.com">
<img alt="Website" src="https://img.shields.io/website?up_message=online&down_message=offline&url=https%3A%2F%2Fstart9.com&logo=website&label=%F0%9F%8C%90%20website">
</a>
</div>
<br />
<div align="center">
<h3>
Welcome to the era of Sovereign Computing
</h3>
<p>
StartOS is an open source Linux distribution optimized for running a personal server. It facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services.
</p>
</div>
<br />
<p align="center">
<img src="assets/StartOS.png" alt="StartOS" width="85%">
</p>
<br />
## Running StartOS
> [!WARNING]
> StartOS is in beta. It lacks features. It doesn't always work perfectly. Start9 servers are not plug and play. Using them properly requires some effort and patience. Please do not use StartOS or purchase a server if you are unable or unwilling to follow instructions and learn new concepts.
## What is StartOS?
### 💰 Buy a Start9 server
This is the most convenient option. Simply [buy a server](https://store.start9.com) from Start9 and plug it in.
StartOS is an open-source Linux distribution for running a personal server. It handles discovery, installation, network configuration, data backup, dependency management, and health monitoring of self-hosted services.
### 👷 Build your own server
This option is easier than you might imagine, and there are 4 reasons why you might prefer it:
1. You already have hardware
1. You want to save on shipping costs
1. You prefer not to divulge your physical address
1. You just like building things
**Tech stack:** Rust backend (Tokio/Axum), Angular frontend, Node.js container runtime with LXC, and a custom diff-based database ([Patch-DB](https://github.com/Start9Labs/patch-db)) for reactive state synchronization.
To pursue this option, follow one of our [DIY guides](https://start9.com/latest/diy).
Services run in isolated LXC containers, packaged as [S9PKs](https://github.com/Start9Labs/start-os/blob/master/core/s9pk-structure.md) — a signed, merkle-archived format that supports partial downloads and cryptographic verification.
## ❤️ Contributing
There are multiple ways to contribute: work directly on StartOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://start9.com/contribute/).
## What can you do with it?
To report security issues, please email our security team - security@start9.com.
StartOS lets you self-host services that would otherwise depend on third-party cloud providers — giving you full ownership of your data and infrastructure.
## 🌎 Marketplace
There are dozens of services available for StartOS, and new ones are being added all the time. Check out the full list of available services [here](https://marketplace.start9.com/marketplace). To read more about the Marketplace ecosystem, check out this [blog post](https://blog.start9.com/start9-marketplace-strategy/)
Browse available services on the [Start9 Marketplace](https://marketplace.start9.com/), including:
## 🖥️ User Interface Screenshots
- **Bitcoin & Lightning** — Run a full Bitcoin node, Lightning node, BTCPay Server, and other payment infrastructure
- **Communication** — Self-host Matrix, SimpleX, or other messaging platforms
- **Cloud Storage** — Run Nextcloud, Vaultwarden, and other productivity tools
<p align="center">
<img src="assets/registry.png" alt="StartOS Marketplace" width="49%">
<img src="assets/community.png" alt="StartOS Community Registry" width="49%">
<img src="assets/c-lightning.png" alt="StartOS NextCloud Service" width="49%">
<img src="assets/btcpay.png" alt="StartOS BTCPay Service" width="49%">
<img src="assets/nextcloud.png" alt="StartOS System Settings" width="49%">
<img src="assets/system.png" alt="StartOS System Settings" width="49%">
<img src="assets/welcome.png" alt="StartOS System Settings" width="49%">
<img src="assets/logs.png" alt="StartOS System Settings" width="49%">
</p>
Services are added by the community. If a service you want isn't available, you can [package it yourself](https://github.com/Start9Labs/ai-service-packaging/).
## Getting StartOS
### Buy a Start9 server
The easiest path. [Buy a server](https://store.start9.com) from Start9 and plug it in.
### Build your own
Install StartOS on your own hardware. Follow one of the [DIY guides](https://start9.com/latest/diy). Reasons to go this route:
1. You already have compatible hardware
2. You want to save on shipping costs
3. You prefer not to share your physical address
4. You enjoy building things
### Build from source
See [CONTRIBUTING.md](CONTRIBUTING.md) for environment setup, build instructions, and development workflow.
## Contributing
There are multiple ways to contribute: work directly on StartOS, package a service for the marketplace, or help with documentation and guides. See [CONTRIBUTING.md](CONTRIBUTING.md) or visit [start9.com/contribute](https://start9.com/contribute/).
To report security issues, email [security@start9.com](mailto:security@start9.com).

215
TODO.md Normal file
View File

@@ -0,0 +1,215 @@
# AI Agent TODOs
Pending tasks for AI agents. Remove items when completed.
## Unreviewed CLAUDE.md Sections
- [ ] Architecture - Web (`/web`) - @MattDHill
## Features
- [ ] Support preferred external ports besides 443 - @dr-bonez
**Problem**: Currently, port 443 is the only preferred external port that is actually honored. When a
service requests `preferred_external_port: 8443` (or any non-443 value) for SSL, the system ignores
the preference and assigns a dynamic-range port (49152-65535). The `preferred_external_port` is only
used as a label for Tor mappings and as a trigger for the port-443 special case in `update()`.
**Goal**: Honor `preferred_external_port` for both SSL and non-SSL binds when the requested port is
available, with proper conflict resolution and fallback to dynamic-range allocation.
### Design
**Key distinction**: There are two separate concepts for SSL port usage:
1. **Port ownership** (`assigned_ssl_port`) — A port exclusively owned by a binding, allocated from
`AvailablePorts`. Used for server hostnames (`.local`, mDNS, etc.) and iptables forwards.
2. **Domain SSL port** — The port used for domain-based vhost entries. A binding does NOT need to own
a port to have a domain vhost on it. The VHostController already supports multiple hostnames on the
same port via SNI. Any binding can create a domain vhost entry on any SSL port that the
VHostController has a listener for, regardless of who "owns" that port.
For example: the OS owns port 443 as its `assigned_ssl_port`. A service with
`preferred_external_port: 443` won't get 443 as its `assigned_ssl_port` (it's taken), but it CAN
still have domain vhost entries on port 443 — SNI routes by hostname.
#### 1. Preferred Port Allocation for Ownership ✅ DONE
`AvailablePorts::try_alloc(port) -> Option<u16>` added to `forward.rs`. `BindInfo::new()` and
`BindInfo::update()` attempt the preferred port first, falling back to dynamic-range allocation.
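A minimal sketch of how such an allocator could behave, assuming the pool is the `BTreeMap<u16, bool>` (port → SSL flag) from the alpha.20 migration noted above; the real `try_alloc` in `core/src/net/forward.rs` may differ in signature and semantics:
```rust
use std::collections::BTreeMap;

/// Hypothetical stand-in for the real AvailablePorts in forward.rs.
pub struct AvailablePorts {
    /// Allocated ports, mapped to whether the binding uses SSL (assumed).
    allocated: BTreeMap<u16, bool>,
}

impl AvailablePorts {
    /// Claim `port` if free. `None` tells the caller (BindInfo::new() /
    /// BindInfo::update()) to fall back to dynamic-range allocation.
    pub fn try_alloc(&mut self, port: u16, ssl: bool) -> Option<u16> {
        if self.allocated.contains_key(&port) {
            return None;
        }
        self.allocated.insert(port, ssl);
        Some(port)
    }
}
```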
#### 2. Per-Address Enable/Disable ✅ DONE
Gateway-level `private_disabled`/`public_enabled` on `NetInfo` replaced with per-address
`DerivedAddressInfo` on `BindInfo`. `hostname_info` removed from `Host` — computed addresses now
live in `BindInfo.addresses.possible`.
**`DerivedAddressInfo` struct** (on `BindInfo`):
```rust
pub struct DerivedAddressInfo {
pub private_disabled: BTreeSet<HostnameInfo>,
pub public_enabled: BTreeSet<HostnameInfo>,
pub possible: BTreeSet<HostnameInfo>, // COMPUTED by update()
}
```
`DerivedAddressInfo::enabled()` returns `possible` filtered by the two sets. `HostnameInfo` derives
`Ord` for `BTreeSet` usage. `AddressFilter` (implementing `InterfaceFilter`) derives enabled
gateway set from `DerivedAddressInfo` for vhost/forward filtering.
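A sketch of the `enabled()` filter just described, building on the `DerivedAddressInfo` struct above and the flattened `HostnameInfo` (fields `gateway`, `public`, `hostname`); the exact filtering semantics here are an assumption:
```rust
use std::collections::BTreeSet;

impl DerivedAddressInfo {
    /// `possible`, minus privately-disabled entries, with public entries
    /// included only when explicitly enabled (assumed semantics).
    /// Assumes HostnameInfo: Clone + Ord (Ord is derived per above).
    pub fn enabled(&self) -> BTreeSet<HostnameInfo> {
        self.possible
            .iter()
            .filter(|h| {
                if h.public {
                    self.public_enabled.contains(*h)
                } else {
                    !self.private_disabled.contains(*h)
                }
            })
            .cloned()
            .collect()
    }
}
```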
**RPC endpoint**: `set-gateway-enabled` replaced with `set-address-enabled` (on both
`server.host.binding` and `package.host.binding`).
**How disabling works per address type** (enforcement deferred to Section 3):
- **WAN/LAN IP:port**: Will be enforced via **source-IP gating** in the vhost layer (Section 3).
- **Hostname-based addresses** (`.local`, domains): Disabled by **not creating the vhost/SNI
entry** for that hostname.
#### 3. Eliminate the Port 5443 Hack: Source-IP-Based WAN Blocking (`vhost.rs`, `net_controller.rs`)
**Current problem**: The `if ssl.preferred_external_port == 443` branch (line 341 of
`net_controller.rs`) creates a bespoke dual-vhost setup: port 5443 for private-only access and port
443 for public (or public+private). This exists because both public and private traffic arrive on the
same port 443 listener, and the current `InterfaceFilter`/`PublicFilter` model distinguishes
public/private by which _network interface_ the connection arrived on — which doesn't work when both
traffic types share a listener.
**Solution**: Determine public vs private based on **source IP** at the vhost level. Traffic arriving
from the gateway IP should be treated as public (the gateway may MASQUERADE/NAT internet traffic, so
anything from the gateway is potentially public). Traffic from LAN IPs is private.
This applies to **all** vhost targets, not just port 443 (a sketch of the check follows this list):
- **Add a `public` field to `ProxyTarget`** (or an enum: `Public`, `Private`, `Both`) indicating
what traffic this target accepts, derived from the binding's user-controlled `public` field.
- **Modify `VHostTarget::filter()`** (`vhost.rs:342`): Instead of (or in addition to) checking the
network interface via `GatewayInfo`, check the source IP of the TCP connection against known gateway
IPs. If the source IP matches a gateway or IP outside the subnet, the connection is public;
otherwise it's private. Use this to gate against the target's `public` field.
- **Eliminate the 5443 port entirely**: A single vhost entry on port 443 (or any shared SSL port) can
serve both public and private traffic, with per-target source-IP gating determining which backend
handles which connections.
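A minimal sketch of that source-IP check, reading gateway IPs and LAN subnets from the DB's gateway info; `ipnet::IpNet` matches the SourceFilter commit above, but this is illustrative, not the actual `VHostTarget::filter()` code:
```rust
use std::net::IpAddr;

use ipnet::IpNet;

/// Classify a TCP connection as public or private by its source IP.
fn connection_is_public(src: IpAddr, gateway_ips: &[IpAddr], lan_subnets: &[IpNet]) -> bool {
    // The gateway may MASQUERADE/NAT internet traffic, so anything
    // arriving from a gateway IP is potentially public.
    if gateway_ips.contains(&src) {
        return true;
    }
    // A source outside every known LAN subnet is also public.
    !lan_subnets.iter().any(|net| net.contains(&src))
}
```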
#### 4. Port Forward Mapping in Patch-DB
When a binding is marked `public = true`, StartOS must record the required port forwards in patch-db
so the frontend can display them to the user. The user then configures these on their router manually.
For each public binding, store:
- The external port the router should forward (the actual vhost port used for domains, or the
`assigned_port` / `assigned_ssl_port` for non-domain access)
- The protocol (TCP/UDP)
- The StartOS LAN IP as the forward target
- Which service/binding this forward is for (for display purposes)
This mapping should be in the public database model so the frontend can read and display it.
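A hypothetical shape for one such record, covering the fields listed above; the real model in `core/src/db/model/public.rs` (and the `port_forwards` field added to `Host` in the commits above) may differ:
```rust
use std::net::IpAddr;

#[derive(Clone, serde::Deserialize, serde::Serialize)]
#[serde(rename_all = "camelCase")]
pub struct PortForwardEntry {
    pub external_port: u16,      // port the user forwards on their router
    pub protocol: Protocol,      // TCP or UDP
    pub target: IpAddr,          // the StartOS LAN IP
    pub package: Option<String>, // owning service, for display (None = OS)
}

#[derive(Clone, serde::Deserialize, serde::Serialize)]
#[serde(rename_all = "lowercase")]
pub enum Protocol {
    Tcp,
    Udp,
}
```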
#### 5. Simplify `update()` Domain Vhost Logic (`net_controller.rs`)
With source-IP gating in the vhost controller (a port-selection sketch follows this list):
- **Remove the `== 443` special case** and the 5443 secondary vhost.
- For **server hostnames** (`.local`, mDNS, embassy, startos, localhost): use `assigned_ssl_port`
(the port the binding owns).
- For **domain-based vhost entries**: attempt to use `preferred_external_port` as the vhost port.
This succeeds if the port is either unused or already has an SSL listener (SNI handles sharing).
It fails only if the port is already in use by a non-SSL binding, or is a restricted port. On
failure, fall back to `assigned_ssl_port`.
- The binding's `public` field determines the `ProxyTarget`'s public/private gating.
- Hostname info must exactly match the actual vhost port used: for server hostnames, report
`ssl_port: assigned_ssl_port`. For domains, report `ssl_port: preferred_external_port` if it was
successfully used for the domain vhost, otherwise report `ssl_port: assigned_ssl_port`.
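A sketch of that port-selection fallback; `VHostState` and its helper methods are invented for illustration:
```rust
/// Pick the vhost port for a domain entry, per the rules above.
fn domain_vhost_port(
    preferred: Option<u16>,
    assigned_ssl_port: u16,
    vhosts: &VHostState, // hypothetical view of current listeners
) -> u16 {
    match preferred {
        // Sharing an existing SSL listener is fine: SNI routes multiple
        // hostnames on one port. (Restricted-port checks are folded into
        // port_is_free here.)
        Some(p) if vhosts.port_is_free(p) || vhosts.port_is_ssl(p) => p,
        // Taken by a non-SSL binding, or restricted: fall back to the
        // port this binding owns.
        _ => assigned_ssl_port,
    }
}
```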
#### 6. Reachability Test Endpoint
New RPC endpoint that tests whether an address is actually reachable, with diagnostic info on
failure.
**RPC endpoint** (`binding.rs` or new file):
- **`test-address`** — Test reachability of a specific address.
```ts
interface BindingTestAddressParams {
internalPort: number;
address: HostnameInfo;
}
```
The backend simply performs the raw checks and returns the results. The **frontend** owns all
interpretation — it already knows the address type, expected IP, expected port, etc. from the
`HostnameInfo` data, so it can compare against the backend results and construct fix messaging.
```ts
interface TestAddressResult {
dns: string[] | null; // resolved IPs, null if not a domain address or lookup failed
portOpen: boolean | null; // TCP connect result, null if not applicable
}
```
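A minimal sketch of how the backend could perform these two raw checks, assuming Tokio (which core already uses); the endpoint itself does not exist yet per this TODO:
```rust
use std::net::SocketAddr;

use tokio::net::{lookup_host, TcpStream};

pub struct TestAddressResult {
    pub dns: Option<Vec<String>>, // resolved IPs; None if not a domain or lookup failed
    pub port_open: Option<bool>,  // TCP connect result; None if not applicable
}

/// `domain` is Some for domain-based addresses, None otherwise.
pub async fn test_address(domain: Option<&str>, addr: SocketAddr) -> TestAddressResult {
    let dns = match domain {
        Some(d) => lookup_host((d, addr.port()))
            .await
            .ok()
            .map(|ips| ips.map(|sa| sa.ip().to_string()).collect()),
        None => None,
    };
    TestAddressResult {
        dns,
        port_open: Some(TcpStream::connect(addr).await.is_ok()),
    }
}
```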
This yields two RPC methods:
- `server.host.binding.test-address`
- `package.host.binding.test-address`
The frontend already has the full `HostnameInfo` context (expected IP, domain, port, gateway,
public/private). It compares the backend's raw results against the expected state and constructs
localized fix instructions. For example:
- `dns` returned but doesn't contain the expected WAN IP → "Update DNS A record for {domain}
to {wanIp}"
- `dns` is `null` for a domain address → "DNS lookup failed for {domain}"
- `portOpen` is `false` → "Configure port forward on your router: external {port} TCP →
{lanIp}:{port}"
### Key Files
| File | Role |
| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `core/src/net/forward.rs` | `AvailablePorts` — port pool allocation, `try_alloc()` for preferred ports |
| `core/src/net/host/binding.rs` | `Bindings` (Map wrapper for patchdb), `BindInfo`/`NetInfo`/`DerivedAddressInfo`/`AddressFilter` — per-address enable/disable, `set-address-enabled` RPC |
| `core/src/net/net_controller.rs:259` | `NetServiceData::update()` — computes `DerivedAddressInfo.possible`, vhost/forward/DNS reconciliation, 5443 hack removal |
| `core/src/net/vhost.rs` | `VHostController` / `ProxyTarget` — source-IP gating for public/private |
| `core/src/net/gateway.rs` | `InterfaceFilter` trait and filter types (`AddressFilter`, `PublicFilter`, etc.) |
| `core/src/net/service_interface.rs` | `HostnameInfo` — derives `Ord` for `BTreeSet` usage |
| `core/src/net/host/address.rs` | `HostAddress` (flattened struct), domain CRUD endpoints |
| `sdk/base/lib/interfaces/Host.ts` | SDK `MultiHost.bindPort()` — no changes needed |
| `core/src/db/model/public.rs` | Public DB model — port forward mapping |
- [ ] Extract TS-exported types into a lightweight sub-crate for fast binding generation
**Problem**: `make ts-bindings` compiles the entire `start-os` crate (with all dependencies: tokio,
axum, openssl, etc.) just to run test functions that serialize type definitions to `.ts` files.
Even in debug mode, this takes minutes. The generated output is pure type info — no runtime code
is needed.
**Goal**: Generate TS bindings in seconds by isolating exported types in a small crate with minimal
dependencies.
**Approach**: Create a `core/bindings-types/` sub-crate containing (or re-exporting) all 168
`#[ts(export)]` types. This crate depends only on `serde`, `ts-rs`, `exver`, and other type-only
crates — not on tokio, axum, openssl, etc. Then `build-ts.sh` runs `cargo test -p bindings-types`
instead of `cargo test -p start-os`.
**Challenge**: The exported types are scattered across `core/src/` and reference each other and
other crate types. Extracting them requires either moving the type definitions into the sub-crate
(and importing them back into `start-os`) or restructuring to share a common types crate.
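For context, this is what a ts-rs export looks like today: the derive generates a `#[test]` that writes the `.ts` file when `cargo test` runs, which is why binding generation currently drags in the whole dependency graph (`ExampleBinding` is hypothetical):
```rust
use ts_rs::TS;

// Hypothetical type standing in for any of the 168 exported types.
#[derive(TS)]
#[ts(export)]
pub struct ExampleBinding {
    pub id: String,
    pub port: u16,
}
// ts-rs emits a #[test] named after the type; in a thin bindings-types
// crate, `cargo test -p bindings-types` would regenerate ExampleBinding.ts
// in seconds without compiling tokio, axum, or openssl.
```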
- [ ] Use auto-generated RPC types in the frontend instead of manual duplicates
**Problem**: The web frontend manually defines ~755 lines of API request/response types in
`web/projects/ui/src/app/services/api/api.types.ts` that can drift from the actual Rust types.
**Current state**: The Rust backend already has `#[ts(export)]` on RPC param types (e.g.
`AddTunnelParams`, `SetWifiEnabledParams`, `LoginParams`), and they are generated into
`core/bindings/`. However, commit `71b83245b` ("Chore/unexport api ts #2585", April 2024)
deliberately stopped building them into the SDK and had the frontend maintain its own types.
**Goal**: Reverse that decision — pipe the generated RPC types through the SDK into the frontend
so `api.types.ts` can import them instead of duplicating them. This eliminates drift between
backend and frontend API contracts.
- [ ] Auto-configure port forwards via UPnP/NAT-PMP/PCP - @dr-bonez
**Blocked by**: "Support preferred external ports besides 443" (must be implemented and tested
end-to-end first).
**Goal**: When a binding is marked public, automatically configure port forwards on the user's router
using UPnP, NAT-PMP, or PCP, instead of requiring manual router configuration. Fall back to
displaying manual instructions (the port forward mapping from patch-db) when auto-configuration is
unavailable or fails.

Binary files not shown (9 deleted): 2.1 MiB, 396 KiB, 402 KiB, 591 KiB, 1.6 MiB, 319 KiB, 521 KiB, 331 KiB, 402 KiB

View File

@@ -111,6 +111,6 @@ if [ "$CHROOT_RES" -eq 0 ]; then
reboot
fi
umount -R /media/startos/next
umount /media/startos/next
umount /media/startos/upper
rm -rf /media/startos/upper /media/startos/next

View File

@@ -5,7 +5,7 @@ if [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$dprefix" ] || [ -z "$sport" ] || [ -
exit 1
fi
NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport" | sha256sum | head -c 15)"
NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport ${src_subnet:-any}" | sha256sum | head -c 15)"
for kind in INPUT FORWARD ACCEPT; do
if ! iptables -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
@@ -13,7 +13,7 @@ for kind in INPUT FORWARD ACCEPT; do
iptables -A $kind -j "${NAME}_${kind}"
fi
done
for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
for kind in PREROUTING OUTPUT; do
if ! iptables -t nat -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
iptables -t nat -N "${NAME}_${kind}" 2> /dev/null
iptables -t nat -A $kind -j "${NAME}_${kind}"
@@ -26,7 +26,7 @@ trap 'err=1' ERR
for kind in INPUT FORWARD ACCEPT; do
iptables -F "${NAME}_${kind}" 2> /dev/null
done
for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
for kind in PREROUTING OUTPUT; do
iptables -t nat -F "${NAME}_${kind}" 2> /dev/null
done
if [ "$UNDO" = 1 ]; then
@@ -36,20 +36,21 @@ if [ "$UNDO" = 1 ]; then
fi
# DNAT: rewrite destination for incoming packets (external traffic)
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
# When src_subnet is set, only forward traffic from that subnet (private forwards)
if [ -n "$src_subnet" ]; then
iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
else
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
fi
# DNAT: rewrite destination for locally-originated packets (hairpin from host itself)
iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
# MASQUERADE: rewrite source for all forwarded traffic to the destination
# This ensures responses are routed back through the host regardless of source IP
iptables -t nat -A ${NAME}_POSTROUTING -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
iptables -t nat -A ${NAME}_POSTROUTING -d "$dip" -p udp --dport "$dport" -j MASQUERADE
# Allow new connections to be forwarded to the destination
iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
exit $err
exit $err
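
For reference, a hedged sketch of how this script appears to be invoked — parameters come in via environment variables (names taken from the diff above; the script path is hypothetical):

```bash
# Install a forward from 192.168.1.10:8080 to 10.0.0.5:80, restricted to one subnet
sip=192.168.1.10 sport=8080 \
dip=10.0.0.5 dprefix=32 dport=80 \
src_subnet=192.168.1.0/24 \
  ./forward.sh

# Remove the same rule set
UNDO=1 sip=192.168.1.10 sport=8080 \
dip=10.0.0.5 dprefix=32 dport=80 \
src_subnet=192.168.1.0/24 \
  ./forward.sh
```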

View File

@@ -0,0 +1,32 @@
# Container Runtime — Node.js Service Manager
Node.js runtime that manages service containers via JSON-RPC. See `RPCSpec.md` in this directory for the full RPC protocol.
## Architecture
```
LXC Container (uniform base for all services)
└── systemd
└── container-runtime.service
└── Loads /usr/lib/startos/package/index.js (from s9pk javascript.squashfs)
└── Package JS launches subcontainers (from images in s9pk)
```
The container runtime communicates with the host via JSON-RPC over Unix socket. Package JavaScript must export functions conforming to the `ABI` type defined in `sdk/base/lib/types.ts`.
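For orientation, here is a hedged sketch of a raw call to this socket from Node, assuming newline-delimited JSON-RPC 2.0 framing (the framing is an assumption; `RPCSpec.md` is authoritative):
```ts
import * as net from "net"

const sock = net.createConnection({ path: "/media/startos/rpc/service.sock" })
sock.on("connect", () => {
  // Call the `start` method documented in RPCSpec.md
  sock.write(JSON.stringify({ jsonrpc: "2.0", id: 1, method: "start" }) + "\n")
})
sock.on("data", (buf) => {
  console.log("response:", buf.toString())
  sock.end()
})
```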
## `/media/startos/` Directory (mounted by host into container)
| Path | Description |
| -------------------- | ----------------------------------------------------- |
| `volumes/<name>/` | Package data volumes (id-mapped, persistent) |
| `assets/` | Read-only assets from s9pk `assets.squashfs` |
| `images/<name>/` | Container images (squashfs, used for subcontainers) |
| `images/<name>.env` | Environment variables for image |
| `images/<name>.json` | Image metadata |
| `backup/` | Backup mount point (mounted during backup operations) |
| `rpc/service.sock` | RPC socket (container runtime listens here) |
| `rpc/host.sock` | Host RPC socket (for effects callbacks to host) |
## S9PK Structure
See `../core/s9pk-structure.md` for the S9PK package format.

View File

@@ -1,16 +1,21 @@
# Container RPC SERVER Specification
# Container RPC Server Specification
The container runtime exposes a JSON-RPC server over a Unix socket at `/media/startos/rpc/service.sock`.
## Methods
### init
initialize runtime (mount `/proc`, `/sys`, `/dev`, and `/run` to each image in `/media/images`)
Initialize the runtime and system.
Called after the OS has mounted the JS bundle and images into the container.
#### params
#### args
`[]`
```ts
{
id: string,
kind: "install" | "update" | "restore" | null,
}
```
#### response
@@ -18,11 +23,16 @@ called after os has mounted js and images to the container
### exit
shutdown runtime
Shutdown runtime and optionally run exit hooks for a target version.
#### args
#### params
`[]`
```ts
{
id: string,
target: string | null, // ExtendedVersion or VersionRange
}
```
#### response
@@ -30,11 +40,11 @@ shutdown runtime
### start
run main method if not already running
Run main method if not already running.
#### args
#### params
`[]`
None
#### response
@@ -42,11 +52,11 @@ run main method if not already running
### stop
stop main method by sending SIGTERM to child processes, and SIGKILL after timeout
Stop main method by sending SIGTERM to child processes, and SIGKILL after timeout.
#### args
#### params
`{ timeout: millis }`
None
#### response
@@ -54,15 +64,16 @@ stop main method by sending SIGTERM to child processes, and SIGKILL after timeou
### execute
run a specific package procedure
Run a specific package procedure.
#### args
#### params
```ts
{
procedure: JsonPath,
input: any,
timeout: millis,
id: string, // event ID
procedure: string, // JSON path (e.g., "/backup/create", "/actions/{name}/run")
input: any,
timeout: number | null,
}
```
@@ -72,18 +83,64 @@ run a specific package procedure
### sandbox
run a specific package procedure in sandbox mode
Run a specific package procedure in sandbox mode. Same interface as `execute`.
#### args
UNIMPLEMENTED: this feature is planned but does not exist
#### params
```ts
{
procedure: JsonPath,
input: any,
timeout: millis,
id: string,
procedure: string,
input: any,
timeout: number | null,
}
```
#### response
`any`
### callback
Handle a callback from an effect.
#### params
```ts
{
id: number,
args: any[],
}
```
#### response
`null` (no response sent)
### eval
Evaluate a script in the runtime context. Used for debugging.
#### params
```ts
{
script: string,
}
```
#### response
`any`
## Procedures
The `execute` and `sandbox` methods route to procedures based on the `procedure` path:
| Procedure | Description |
| -------------------------- | ---------------------------- |
| `/backup/create` | Create a backup |
| `/actions/{name}/getInput` | Get input spec for an action |
| `/actions/{name}/run` | Run an action with input |
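For example, a hedged `execute` params object for running a hypothetical `config` action (shape taken from the spec above; values are illustrative):
```ts
{
  id: "evt-42",                     // event ID
  procedure: "/actions/config/run", // JSON path to the procedure
  input: { enable: true },          // arbitrary input passed to the procedure
  timeout: 30000,                   // milliseconds, or null for no timeout
}
```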

View File

@@ -82,18 +82,15 @@ export class DockerProcedureContainer extends Drop {
}),
)
} else if (volumeMount.type === "certificate") {
const hostInfo = await effects.getHostInfo({
hostId: volumeMount["interface-id"],
})
const hostnames = [
`${packageId}.embassy`,
...new Set(
Object.values(
(
await effects.getHostInfo({
hostId: volumeMount["interface-id"],
})
)?.hostnameInfo || {},
)
.flatMap((h) => h)
.flatMap((h) => (h.kind === "onion" ? [h.hostname.value] : [])),
Object.values(hostInfo?.bindings || {})
.flatMap((b) => b.addresses.available)
.map((h) => h.host),
).values(),
]
const certChain = await effects.getSslCertificate({

View File

@@ -1244,12 +1244,8 @@ async function updateConfig(
? ""
: catchFn(
() =>
(specValue.target === "lan-address"
? filled.addressInfo!.filter({ kind: "mdns" }) ||
filled.addressInfo!.onion
: filled.addressInfo!.onion ||
filled.addressInfo!.filter({ kind: "mdns" })
).hostnames[0].hostname.value,
filled.addressInfo!.filter({ kind: "mdns" })!.hostnames[0]
.host,
) || ""
mutConfigValue[key] = url
}

69
core/ARCHITECTURE.md Normal file
View File

@@ -0,0 +1,69 @@
# Core Architecture
The Rust backend daemon for StartOS.
## Binaries
The crate produces a single binary `startbox` that is symlinked under different names for different behavior:
- `startbox` / `startd` — Main daemon
- `start-cli` — CLI interface
- `start-container` — Runs inside LXC containers; communicates with host and manages subcontainers
- `registrybox` — Registry daemon
- `tunnelbox` — VPN/tunnel daemon
## Crate Structure
- `startos` — Core library that supports building `startbox`
- `helpers` — Utility functions used across both `startos` and `js-engine`
- `models` — Types shared across `startos`, `js-engine`, and `helpers`
## Key Modules
- `src/context/` — Context types (RpcContext, CliContext, InitContext, DiagnosticContext)
- `src/service/` — Service lifecycle management with actor pattern (`service_actor.rs`)
- `src/db/model/` — Patch-DB models (`public.rs` synced to frontend, `private.rs` backend-only)
- `src/net/` — Networking (DNS, ACME, WiFi, Tor via Arti, WireGuard)
- `src/s9pk/` — S9PK package format (merkle archive)
- `src/registry/` — Package registry management
## RPC Pattern
The API is JSON-RPC (not REST). All endpoints are RPC methods organized in a hierarchical command structure using [rpc-toolkit](https://github.com/Start9Labs/rpc-toolkit). Handlers are registered in a tree of `ParentHandler` nodes, with four handler types: `from_fn_async` (standard), `from_fn_async_local` (non-Send), `from_fn` (sync), and `from_fn_blocking` (blocking). Metadata like `.with_about()` drives middleware and documentation.
See [rpc-toolkit.md](rpc-toolkit.md) for full handler patterns and configuration.
## Patch-DB Patterns
Patch-DB provides diff-based state synchronization. Changes to `db/model/public.rs` automatically sync to the frontend.
**Key patterns:**
- `db.peek().await` — Get a read-only snapshot of the database state
- `db.mutate(|db| { ... }).await` — Apply mutations atomically, returns `MutateResult`
- `#[derive(HasModel)]` — Derive macro for types stored in the database, generates typed accessors
**Generated accessor types** (from `HasModel` derive):
- `as_field()` — Immutable reference: `&Model<T>`
- `as_field_mut()` — Mutable reference: `&mut Model<T>`
- `into_field()` — Owned value: `Model<T>`
**`Model<T>` APIs** (from `db/prelude.rs`):
- `.de()` — Deserialize to `T`
- `.ser(&value)` — Serialize from `T`
- `.mutate(|v| ...)` — Deserialize, mutate, reserialize
- For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`
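A minimal sketch tying these together (the hostname accessors mirror real usage elsewhere in the codebase; treat the exact paths as illustrative):
```rust
// Read a value from a snapshot, then write it back inside an atomic mutation.
let hostname: String = db
    .peek()
    .await
    .as_public()
    .as_server_info()
    .as_hostname()
    .de()?;

db.mutate(|db| {
    db.as_public_mut()
        .as_server_info_mut()
        .as_hostname_mut()
        .ser(&hostname)
})
.await?;
```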
## i18n
See [i18n-patterns.md](i18n-patterns.md) for internationalization key conventions and the `t!()` macro.
## Rust Utilities & Patterns
See [core-rust-patterns.md](core-rust-patterns.md) for common utilities (Invoke trait, Guard pattern, mount guards, Apply trait, etc.).
## Related Documentation
- [rpc-toolkit.md](rpc-toolkit.md) — JSON-RPC handler patterns
- [i18n-patterns.md](i18n-patterns.md) — Internationalization conventions
- [core-rust-patterns.md](core-rust-patterns.md) — Common Rust utilities
- [s9pk-structure.md](s9pk-structure.md) — S9PK package format

25
core/CLAUDE.md Normal file
View File

@@ -0,0 +1,25 @@
# Core — Rust Backend
The Rust backend daemon for StartOS.
## Architecture
See [ARCHITECTURE.md](ARCHITECTURE.md) for binaries, modules, Patch-DB patterns, and related documentation.
See [CONTRIBUTING.md](CONTRIBUTING.md) for how to add RPC endpoints, TS-exported types, and i18n keys.
## Quick Reference
```bash
cargo check -p start-os # Type check
make test-core # Run tests
make ts-bindings # Regenerate TS types after changing #[ts(export)] structs
cd sdk && make baseDist dist # Rebuild SDK after ts-bindings
```
## Operating Rules
- Always run `cargo check -p start-os` after modifying Rust code
- When adding RPC endpoints, follow the patterns in [rpc-toolkit.md](rpc-toolkit.md)
- When modifying `#[ts(export)]` types, regenerate bindings and rebuild the SDK (see [ARCHITECTURE.md](../ARCHITECTURE.md#build-pipeline))
- When adding i18n keys, add all 5 locales in `core/locales/i18n.yaml` (see [i18n-patterns.md](i18n-patterns.md))

49
core/CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,49 @@
# Contributing to Core
For general environment setup, cloning, and build system, see the root [CONTRIBUTING.md](../CONTRIBUTING.md).
## Prerequisites
- [Rust](https://rustup.rs) (nightly for formatting)
- [rust-analyzer](https://rust-analyzer.github.io/) recommended
- [Docker](https://docs.docker.com/get-docker/) (for cross-compilation via `rust-zig-builder` container)
## Common Commands
```bash
cargo check -p start-os # Type check
cargo test --features=test # Run tests (or: make test-core)
make format # Format with nightly rustfmt
cd core && cargo test <test_name> --features=test # Run a specific test
```
## Adding a New RPC Endpoint
1. Define a params struct with `#[derive(Deserialize, Serialize)]`
2. Choose a handler type (`from_fn_async` for most cases)
3. Write the handler function: `async fn my_handler(ctx: RpcContext, params: MyParams) -> Result<MyResponse, Error>`
4. Register it in the appropriate `ParentHandler` tree
5. If params/response should be available in TypeScript, add `#[derive(TS)]` and `#[ts(export)]`
See [rpc-toolkit.md](rpc-toolkit.md) for full handler patterns and all four handler types.
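Putting the steps together, a hedged sketch (handler and struct names are illustrative):
```rust
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};

// Step 1: params struct
#[derive(Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct EchoParams {
    pub message: String,
}

// Steps 2–3: async handler
pub async fn echo(_ctx: RpcContext, params: EchoParams) -> Result<String, Error> {
    Ok(params.message)
}

// Step 4: register in the ParentHandler tree
pub fn my_api<C: Context>() -> ParentHandler<C> {
    ParentHandler::new()
        .subcommand("echo", from_fn_async(echo).with_call_remote::<CliContext>())
}
```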
## Adding TS-Exported Types
When a Rust type needs to be available in TypeScript (for the web frontend or SDK):
1. Add `ts_rs::TS` to the derive list and `#[ts(export)]` to the struct/enum
2. Use `#[serde(rename_all = "camelCase")]` for JS-friendly field names
3. For types that don't implement TS (like `DateTime<Utc>`, `exver::Version`), use `#[ts(type = "string")]` overrides
4. For `u64` fields that should be JS `number` (not `bigint`), use `#[ts(type = "number")]`
5. Run `make ts-bindings` to regenerate — files appear in `core/bindings/` then sync to `sdk/base/lib/osBindings/`
6. Rebuild the SDK: `cd sdk && make baseDist dist`
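A hedged sketch of steps 1–4 (`TransferStats` is a hypothetical type):
```rust
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use ts_rs::TS;

#[derive(Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")] // step 2: camelCase field names in JS
#[ts(export)]                      // step 1: emit a TypeScript binding
pub struct TransferStats {
    #[ts(type = "string")]         // step 3: DateTime<Utc> has no TS impl
    pub started_at: DateTime<Utc>,
    #[ts(type = "number")]         // step 4: u64 as JS number, not bigint
    pub bytes_transferred: u64,
}
```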
## Adding i18n Keys
1. Add the key to `core/locales/i18n.yaml` with all 5 language translations
2. Use the `t!("your.key.name")` macro in Rust code
3. Follow existing namespace conventions — match the module path where the key is used
4. Use kebab-case for multi-word segments
5. Translations are validated at compile time
See [i18n-patterns.md](i18n-patterns.md) for full conventions.

3387
core/Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -15,7 +15,7 @@ license = "MIT"
name = "start-os"
readme = "README.md"
repository = "https://github.com/Start9Labs/start-os"
version = "0.4.0-alpha.19" # VERSION_BUMP
version = "0.4.0-alpha.20" # VERSION_BUMP
[lib]
name = "startos"
@@ -42,17 +42,6 @@ name = "tunnelbox"
path = "src/main/tunnelbox.rs"
[features]
arti = [
"arti-client",
"safelog",
"tor-cell",
"tor-hscrypto",
"tor-hsservice",
"tor-keymgr",
"tor-llcrypto",
"tor-proto",
"tor-rtcompat",
]
beta = []
console = ["console-subscriber", "tokio/tracing"]
default = []
@@ -62,16 +51,6 @@ unstable = ["backtrace-on-stack-overflow"]
[dependencies]
aes = { version = "0.7.5", features = ["ctr"] }
arti-client = { version = "0.33", features = [
"compression",
"ephemeral-keystore",
"experimental-api",
"onion-service-client",
"onion-service-service",
"rustls",
"static",
"tokio",
], default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
async-acme = { version = "0.6.0", git = "https://github.com/dr-bonez/async-acme.git", features = [
"use_rustls",
"use_tokio",
@@ -100,7 +79,6 @@ console-subscriber = { version = "0.5.0", optional = true }
const_format = "0.2.34"
cookie = "0.18.0"
cookie_store = "0.22.0"
curve25519-dalek = "4.1.3"
der = { version = "0.7.9", features = ["derive", "pem"] }
digest = "0.10.7"
divrem = "1.0.0"
@@ -216,7 +194,6 @@ rpassword = "7.2.0"
rust-argon2 = "3.0.0"
rust-i18n = "3.1.5"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
semver = { version = "1.0.20", features = ["serde"] }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_cbor = { package = "ciborium", version = "0.2.1" }
@@ -244,23 +221,6 @@ tokio-stream = { version = "0.1.14", features = ["io-util", "net", "sync"] }
tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
tokio-tungstenite = { version = "0.26.2", features = ["native-tls", "url"] }
tokio-util = { version = "0.7.9", features = ["io"] }
tor-cell = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hscrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hsservice = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-keymgr = { version = "0.33", features = [
"ephemeral-keystore",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-llcrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-proto = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-rtcompat = { version = "0.33", features = [
"rustls",
"tokio",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
torut = "0.2.1"
tower-service = "0.3.3"
tracing = "0.1.39"
tracing-error = "0.2.0"

View File

@@ -22,9 +22,7 @@ several different names for different behavior:
- `start-sdk`: This is a CLI tool that aids in building and packaging services
you wish to deploy to StartOS
## Questions
## Documentation
If you have questions about how various pieces of the backend system work. Open
an issue and tag the following people
- dr-bonez
- [ARCHITECTURE.md](ARCHITECTURE.md) — Backend architecture, modules, and patterns
- [CONTRIBUTING.md](CONTRIBUTING.md) — How to contribute to core

View File

@@ -7,11 +7,11 @@ source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
PROFILE=${PROFILE:-debug}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug"]; then
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
@@ -38,7 +38,7 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
fi
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_'
rust-zig-builder cargo test --manifest-path=./core/Cargo.toml --lib $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_'
if [ "$(ls -nd "core/bindings" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/bindings && chown -R $UID:$UID /usr/local/cargo"
fi

249
core/core-rust-patterns.md Normal file
View File

@@ -0,0 +1,249 @@
# Utilities & Patterns
This document covers common utilities and patterns used throughout the StartOS codebase.
## Util Module (`core/src/util/`)
The `util` module contains reusable utilities. Key submodules:
| Module | Purpose |
|--------|---------|
| `actor/` | Actor pattern implementation for concurrent state management |
| `collections/` | Custom collection types |
| `crypto.rs` | Cryptographic utilities (encryption, hashing) |
| `future.rs` | Future/async utilities |
| `io.rs` | File I/O helpers (create_file, canonicalize, etc.) |
| `iter.rs` | Iterator extensions |
| `net.rs` | Network utilities |
| `rpc.rs` | RPC helpers |
| `rpc_client.rs` | RPC client utilities |
| `serde.rs` | Serialization helpers (Base64, display/fromstr, etc.) |
| `sync.rs` | Synchronization primitives (SyncMutex, etc.) |
## Command Invocation (`Invoke` trait)
The `Invoke` trait provides a clean way to run external commands with error handling:
```rust
use crate::util::Invoke;
// Simple invocation
tokio::process::Command::new("ls")
.arg("-la")
.invoke(ErrorKind::Filesystem)
.await?;
// With timeout
tokio::process::Command::new("slow-command")
.timeout(Some(Duration::from_secs(30)))
.invoke(ErrorKind::Timeout)
.await?;
// With input
let mut input = Cursor::new(b"input data");
tokio::process::Command::new("cat")
.input(Some(&mut input))
.invoke(ErrorKind::Filesystem)
.await?;
// Piped commands
tokio::process::Command::new("cat")
.arg("file.txt")
.pipe(&mut tokio::process::Command::new("grep").arg("pattern"))
.invoke(ErrorKind::Filesystem)
.await?;
```
## Guard Pattern
Guards ensure cleanup happens when they go out of scope.
### `GeneralGuard` / `GeneralBoxedGuard`
For arbitrary cleanup actions:
```rust
use crate::util::GeneralGuard;
let guard = GeneralGuard::new(|| {
println!("Cleanup runs on drop");
});
// Do work...
// Explicit drop with action
guard.drop();
// Or skip the action
// guard.drop_without_action();
```
### `FileLock`
File-based locking with automatic unlock:
```rust
use crate::util::FileLock;
let lock = FileLock::new("/path/to/lockfile", true).await?; // blocking=true
// Lock held until dropped or explicitly unlocked
lock.unlock().await?;
```
## Mount Guard Pattern (`core/src/disk/mount/guard.rs`)
RAII guards for filesystem mounts. Ensures filesystems are unmounted when guards are dropped.
### `MountGuard`
Basic mount guard:
```rust
use crate::disk::mount::guard::MountGuard;
use crate::disk::mount::filesystem::{MountType, ReadOnly};
let guard = MountGuard::mount(&filesystem, "/mnt/target", ReadOnly).await?;
// Use the mounted filesystem at guard.path()
do_something(guard.path()).await?;
// Explicit unmount (or auto-unmounts on drop)
guard.unmount(false).await?; // false = don't delete mountpoint
```
### `TmpMountGuard`
Reference-counted temporary mount (mounts to `/media/startos/tmp/`):
```rust
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::filesystem::ReadOnly;
// Multiple clones share the same mount
let guard1 = TmpMountGuard::mount(&filesystem, ReadOnly).await?;
let guard2 = guard1.clone();
// Mount stays alive while any guard exists
// Auto-unmounts when last guard is dropped
```
### `GenericMountGuard` trait
All mount guards implement this trait:
```rust
pub trait GenericMountGuard: std::fmt::Debug + Send + Sync + 'static {
fn path(&self) -> &Path;
fn unmount(self) -> impl Future<Output = Result<(), Error>> + Send;
}
```
### `SubPath`
Wraps a mount guard to point to a subdirectory:
```rust
use crate::disk::mount::guard::SubPath;
let mount = TmpMountGuard::mount(&filesystem, ReadOnly).await?;
let subdir = SubPath::new(mount, "data/subdir");
// subdir.path() returns the full path including subdirectory
```
## FileSystem Implementations (`core/src/disk/mount/filesystem/`)
Various filesystem types that can be mounted:
| Type | Description |
|------|-------------|
| `bind.rs` | Bind mounts |
| `block_dev.rs` | Block device mounts |
| `cifs.rs` | CIFS/SMB network shares |
| `ecryptfs.rs` | Encrypted filesystem |
| `efivarfs.rs` | EFI variables |
| `httpdirfs.rs` | HTTP directory as filesystem |
| `idmapped.rs` | ID-mapped mounts |
| `label.rs` | Mount by label |
| `loop_dev.rs` | Loop device mounts |
| `overlayfs.rs` | Overlay filesystem |
## Other Useful Utilities
### `Apply` / `ApplyRef` traits
Fluent method chaining:
```rust
use crate::util::Apply;
let result = some_value
.apply(|v| transform(v))
.apply(|v| another_transform(v));
```
### `Container<T>`
Async-safe optional container:
```rust
use crate::util::Container;
let container = Container::new(None);
container.set(value).await;
let taken = container.take().await;
```
### `HashWriter<H, W>`
Write data while computing hash:
```rust
use crate::util::HashWriter;
use sha2::Sha256;
let writer = HashWriter::new(Sha256::new(), file);
// Write data...
let (hasher, file) = writer.finish();
let hash = hasher.finalize();
```
### `Never` type
Uninhabited type for impossible cases:
```rust
use crate::util::Never;
fn impossible() -> Never {
    // This function can never return normally; `loop {}` diverges
    loop {}
}
let never: Never = impossible();
never.absurd::<String>() // Can convert to any type
```
### `MaybeOwned<'a, T>`
Either borrowed or owned data:
```rust
use crate::util::MaybeOwned;
fn accept_either(data: MaybeOwned<'_, String>) {
// Use &*data to access the value
}
accept_either(MaybeOwned::from(&existing_string));
accept_either(MaybeOwned::from(owned_string));
```
### `new_guid()`
Generate a random GUID:
```rust
use crate::util::new_guid;
let guid = new_guid(); // Returns InternedString
```

100
core/i18n-patterns.md Normal file
View File

@@ -0,0 +1,100 @@
# i18n Patterns in `core/`
## Library & Setup
**Crate:** [`rust-i18n`](https://crates.io/crates/rust-i18n) v3.1.5 (`core/Cargo.toml`)
**Initialization** (`core/src/lib.rs:3`):
```rust
rust_i18n::i18n!("locales", fallback = ["en_US"]);
```
This macro scans `core/locales/` at compile time and embeds all translations as constants.
**Prelude re-export** (`core/src/prelude.rs:4`):
```rust
pub use rust_i18n::t;
```
Most modules import `t!` via the prelude.
## Translation File
**Location:** `core/locales/i18n.yaml`
**Format:** YAML v2 (~755 keys)
**Supported languages:** `en_US`, `de_DE`, `es_ES`, `fr_FR`, `pl_PL`
**Entry structure:**
```yaml
namespace.sub.key-name:
en_US: "English text with %{param}"
de_DE: "German text with %{param}"
# ...
```
## Using `t!()`
```rust
// Simple key
t!("error.unknown")
// With parameter interpolation (%{name} in YAML)
t!("bins.deprecated.renamed", old = old_name, new = new_name)
```
## Key Naming Conventions
Keys use **dot-separated hierarchical namespaces** with **kebab-case** for multi-word segments:
```
<module>.<submodule>.<descriptive-name>
```
Examples:
- `error.incorrect-password` — error kind label
- `bins.start-init.updating-firmware` — startup phase message
- `backup.bulk.complete-title` — backup notification title
- `help.arg.acme-contact` — CLI help text for an argument
- `context.diagnostic.starting-diagnostic-ui` — diagnostic context status
### Top-Level Namespaces
| Namespace | Purpose |
|-----------|---------|
| `error.*` | `ErrorKind` display strings (see `src/error.rs`) |
| `bins.*` | CLI binary messages (deprecated, start-init, startd, etc.) |
| `init.*` | Initialization phase labels |
| `setup.*` | First-run setup messages |
| `context.*` | Context startup messages (diagnostic, setup, CLI) |
| `service.*` | Service lifecycle messages |
| `backup.*` | Backup/restore operation messages |
| `registry.*` | Package registry messages |
| `net.*` | Network-related messages |
| `middleware.*` | Request middleware messages (auth, etc.) |
| `disk.*` | Disk operation messages |
| `lxc.*` | Container management messages |
| `system.*` | System monitoring/metrics messages |
| `notifications.*` | User-facing notification messages |
| `update.*` | OS update messages |
| `util.*` | Utility messages (TUI, RPC) |
| `ssh.*` | SSH operation messages |
| `shutdown.*` | Shutdown-related messages |
| `logs.*` | Log-related messages |
| `auth.*` | Authentication messages |
| `help.*` | CLI help text (`help.arg.<arg-name>`) |
| `about.*` | CLI command descriptions |
## Locale Selection
`set_locale_from_env()` (`core/src/bins/mod.rs:15-36`):
1. Reads `LANG` environment variable
2. Strips `.UTF-8` suffix
3. Exact-matches against available locales, falls back to language-prefix match (e.g. `en_GB` matches `en_US`)
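A hedged sketch of that lookup logic (not the actual implementation):
```rust
fn locale_from_env(available: &[&str]) -> Option<String> {
    let lang = std::env::var("LANG").ok()?;      // 1. read LANG
    let lang = lang.trim_end_matches(".UTF-8");  // 2. strip .UTF-8 suffix
    if available.contains(&lang) {
        return Some(lang.to_string());           // 3. exact match
    }
    let prefix = lang.split('_').next()?;        //    else match language prefix,
    available                                    //    e.g. en_GB -> en_US
        .iter()
        .find(|l| l.starts_with(prefix))
        .map(|l| l.to_string())
}
```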
## Adding New Keys
1. Add the key to `core/locales/i18n.yaml` with all 5 language translations
2. Use the `t!("your.key.name")` macro in Rust code
3. Follow existing namespace conventions — match the module path where the key is used
4. Use kebab-case for multi-word segments
5. Translations are validated at compile time

226
core/rpc-toolkit.md Normal file
View File

@@ -0,0 +1,226 @@
# rpc-toolkit
StartOS uses [rpc-toolkit](https://github.com/Start9Labs/rpc-toolkit) for its JSON-RPC API. This document covers the patterns used in this codebase.
## Overview
The API is JSON-RPC (not REST). All endpoints are RPC methods organized in a hierarchical command structure.
## Handler Functions
There are four types of handler functions, chosen based on the function's characteristics:
### `from_fn_async` - Async handlers
For standard async functions. Most handlers use this.
```rust
pub async fn my_handler(ctx: RpcContext, params: MyParams) -> Result<MyResponse, Error> {
// Can use .await
}
from_fn_async(my_handler)
```
### `from_fn_async_local` - Non-thread-safe async handlers
For async functions that are not `Send` (cannot be safely moved between threads). Use when working with non-thread-safe types.
```rust
pub async fn cli_download(ctx: CliContext, params: Params) -> Result<(), Error> {
// Non-Send async operations
}
from_fn_async_local(cli_download)
```
### `from_fn_blocking` - Sync blocking handlers
For synchronous functions that perform blocking I/O or long computations.
```rust
pub fn query_dns(ctx: RpcContext, params: DnsParams) -> Result<DnsResponse, Error> {
// Blocking operations (file I/O, DNS lookup, etc.)
}
from_fn_blocking(query_dns)
```
### `from_fn` - Sync non-blocking handlers
For pure functions or quick synchronous operations with no I/O.
```rust
pub fn echo(ctx: RpcContext, params: EchoParams) -> Result<String, Error> {
Ok(params.message)
}
from_fn(echo)
```
## ParentHandler
Groups related RPC methods into a hierarchy:
```rust
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
pub fn my_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand("list", from_fn_async(list_handler).with_call_remote::<CliContext>())
.subcommand("create", from_fn_async(create_handler).with_call_remote::<CliContext>())
}
```
## Handler Extensions
Chain methods to configure handler behavior.
**Ordering rules:**
1. `with_about()` must come AFTER other CLI modifiers (`no_display()`, `with_custom_display_fn()`, etc.)
2. `with_call_remote()` must be the LAST adapter in the chain
| Method | Purpose |
|--------|---------|
| `.with_metadata("key", Value)` | Attach metadata for middleware |
| `.no_cli()` | RPC-only, not available via CLI |
| `.no_display()` | No CLI output |
| `.with_display_serializable()` | Default JSON/YAML output for CLI |
| `.with_custom_display_fn(\|_, res\| ...)` | Custom CLI output formatting |
| `.with_about("about.description")` | Add help text (i18n key) - **after CLI modifiers** |
| `.with_call_remote::<CliContext>()` | Enable CLI to call remotely - **must be last** |
### Correct ordering example:
```rust
from_fn_async(my_handler)
.with_metadata("sync_db", Value::Bool(true)) // metadata early
.no_display() // CLI modifier
.with_about("about.my-handler") // after CLI modifiers
.with_call_remote::<CliContext>() // always last
```
## Metadata by Middleware
Metadata tags are processed by different middleware. Group them logically:
### Auth Middleware (`middleware/auth/mod.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `authenticated` | `true` | Whether endpoint requires authentication. Set to `false` for public endpoints. |
### Session Auth Middleware (`middleware/auth/session.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `login` | `false` | Special handling for login endpoints (rate limiting, cookie setting) |
| `get_session` | `false` | Inject session ID into params as `__Auth_session` |
### Signature Auth Middleware (`middleware/auth/signature.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `get_signer` | `false` | Inject signer public key into params as `__Auth_signer` |
### Registry Auth (extends Signature Auth)
| Metadata | Default | Description |
|----------|---------|-------------|
| `admin` | `false` | Require admin privileges (signer must be in admin list) |
| `get_device_info` | `false` | Inject device info header for hardware filtering |
### Database Middleware (`middleware/db.rs`)
| Metadata | Default | Description |
|----------|---------|-------------|
| `sync_db` | `false` | Sync database after mutation, add `X-Patch-Sequence` header |
## Context Types
Different contexts for different execution environments:
- `RpcContext` - Web/RPC requests with full service access
- `CliContext` - CLI operations, calls remote RPC
- `InitContext` - During system initialization
- `DiagnosticContext` - Diagnostic/recovery mode
- `RegistryContext` - Registry daemon context
- `EffectContext` - Service effects context (container-to-host calls)
## Parameter Structs
Parameters use derive macros for JSON-RPC, CLI parsing, and TypeScript generation:
```rust
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")] // JSON-RPC uses camelCase
#[command(rename_all = "kebab-case")] // CLI uses kebab-case
#[ts(export)] // Generate TypeScript types
pub struct MyParams {
pub package_id: PackageId,
}
```
### Middleware Injection
Auth middleware can inject values into params using special field names:
```rust
#[derive(Deserialize, Serialize, Parser, TS)]
pub struct MyParams {
#[ts(skip)]
#[serde(rename = "__Auth_session")] // Injected by session auth
session: InternedString,
#[ts(skip)]
#[serde(rename = "__Auth_signer")] // Injected by signature auth
signer: AnyVerifyingKey,
#[ts(skip)]
#[serde(rename = "__Auth_userAgent")] // Injected during login
user_agent: Option<String>,
}
```
## Common Patterns
### Adding a New RPC Endpoint
1. Define params struct with `Deserialize, Serialize, Parser, TS`
2. Choose handler type based on sync/async and thread-safety
3. Write handler function taking `(Context, Params) -> Result<Response, Error>`
4. Add to parent handler with appropriate extensions (display modifiers before `with_about`)
5. TypeScript types auto-generated via `make ts-bindings`
### Public (Unauthenticated) Endpoint
```rust
from_fn_async(get_info)
.with_metadata("authenticated", Value::Bool(false))
.with_display_serializable()
.with_about("about.get-info")
.with_call_remote::<CliContext>() // last
```
### Mutating Endpoint with DB Sync
```rust
from_fn_async(update_config)
.with_metadata("sync_db", Value::Bool(true))
.no_display()
.with_about("about.update-config")
.with_call_remote::<CliContext>() // last
```
### Session-Aware Endpoint
```rust
from_fn_async(logout)
.with_metadata("get_session", Value::Bool(true))
.no_display()
.with_about("about.logout")
.with_call_remote::<CliContext>() // last
```
## File Locations
- Handler definitions: Throughout `core/src/` modules
- Main API tree: `core/src/lib.rs` (`main_api()`, `server()`, `package()`)
- Auth middleware: `core/src/middleware/auth/`
- DB middleware: `core/src/middleware/db.rs`
- Context types: `core/src/context/`

122
core/s9pk-structure.md Normal file
View File

@@ -0,0 +1,122 @@
# S9PK Package Format
S9PK is the package format for StartOS services. Version 2 uses a merkle archive structure for efficient downloading and cryptographic verification.
## File Format
S9PK files begin with a 3-byte header: `0x3b 0x3b 0x02` (magic bytes + version 2).
The archive is cryptographically signed using Ed25519 with prehashed content (SHA-512 over blake3 merkle root hash).
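For illustration, a hedged sketch that checks the 3-byte header described above:
```rust
use std::fs::File;
use std::io::Read;

// Returns true if the file starts with the S9PK v2 magic bytes.
fn is_s9pk_v2(path: &str) -> std::io::Result<bool> {
    let mut header = [0u8; 3];
    File::open(path)?.read_exact(&mut header)?;
    Ok(header == [0x3b, 0x3b, 0x02])
}
```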
## Archive Structure
```
/
├── manifest.json # Package metadata (required)
├── icon.<ext> # Package icon - any image/* format (required)
├── LICENSE.md # License text (required)
├── dependencies/ # Dependency metadata (optional)
│ └── <package-id>/
│ ├── metadata.json # DependencyMetadata
│ └── icon.<ext> # Dependency icon
├── javascript.squashfs # Package JavaScript code (required)
├── assets.squashfs # Static assets (optional, legacy: assets/ directory)
└── images/ # Container images by architecture
└── <arch>/ # e.g., x86_64, aarch64, riscv64
├── <image-id>.squashfs # Container filesystem
├── <image-id>.json # Image metadata
└── <image-id>.env # Environment variables
```
## Components
### manifest.json
The package manifest contains all metadata:
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Package identifier (e.g., `bitcoind`) |
| `title` | string | Display name |
| `version` | string | Extended version string |
| `satisfies` | string[] | Version ranges this version satisfies |
| `releaseNotes` | string/object | Release notes (localized) |
| `canMigrateTo` | string | Version range for forward migration |
| `canMigrateFrom` | string | Version range for backward migration |
| `license` | string | License type |
| `wrapperRepo` | string | StartOS wrapper repository URL |
| `upstreamRepo` | string | Upstream project URL |
| `supportSite` | string | Support site URL |
| `marketingSite` | string | Marketing site URL |
| `donationUrl` | string? | Optional donation URL |
| `docsUrl` | string? | Optional documentation URL |
| `description` | object | Short and long descriptions (localized) |
| `images` | object | Image configurations by image ID |
| `volumes` | string[] | Volume IDs for persistent data |
| `alerts` | object | User alerts for lifecycle events |
| `dependencies` | object | Package dependencies |
| `hardwareRequirements` | object | Hardware requirements (arch, RAM, devices) |
| `hardwareAcceleration` | boolean | Whether package uses hardware acceleration |
| `gitHash` | string? | Git commit hash |
| `osVersion` | string | Minimum StartOS version |
| `sdkVersion` | string? | SDK version used to build |
### javascript.squashfs
Contains the package JavaScript that implements the `ABI` interface from `@start9labs/start-sdk-base`. This code runs in the container runtime and manages the package lifecycle.
The squashfs is mounted at `/usr/lib/startos/package/` and the runtime loads `index.js`.
### images/
Container images organized by architecture:
- **`<image-id>.squashfs`** - Container root filesystem
- **`<image-id>.json`** - Image metadata (entrypoint, user, workdir, etc.)
- **`<image-id>.env`** - Environment variables for the container
Images are built from Docker/Podman and converted to squashfs. The `ImageConfig` in manifest specifies:
- `arch` - Supported architectures
- `emulateMissingAs` - Fallback architecture for emulation
- `nvidiaContainer` - Whether to enable NVIDIA container support
### assets.squashfs
Static assets accessible to the package, mounted read-only at `/media/startos/assets/` in the container.
### dependencies/
Metadata for dependencies displayed in the UI:
- `metadata.json` - Just title for now
- `icon.<ext>` - Icon for the dependency
## Merkle Archive
The S9PK uses a merkle tree structure where each file and directory has a blake3 hash. This enables:
1. **Partial downloads** - Download and verify individual files
2. **Integrity verification** - Verify any subset of the archive
3. **Efficient updates** - Only download changed portions
4. **DOS protection** - Size limits enforced before downloading content
Files are sorted by priority for streaming (manifest first, then icon, license, dependencies, javascript, assets, images).
## Building S9PK
Use `start-cli s9pk pack` to build packages:
```bash
start-cli s9pk pack <manifest-path> -o <output.s9pk>
```
Images can be sourced from:
- Docker/Podman build (`--docker-build`)
- Existing Docker tag (`--docker-tag`)
- Pre-built squashfs files
## Related Code
- `core/src/s9pk/v2/mod.rs` - S9pk struct and serialization
- `core/src/s9pk/v2/manifest.rs` - Manifest types
- `core/src/s9pk/v2/pack.rs` - Packing logic
- `core/src/s9pk/merkle_archive/` - Merkle archive implementation

View File

@@ -8,7 +8,6 @@ use openssl::x509::X509;
use crate::db::model::DatabaseModel;
use crate::hostname::{Hostname, generate_hostname, generate_id};
use crate::net::ssl::{gen_nistp256, make_root_cert};
use crate::net::tor::TorSecretKey;
use crate::prelude::*;
use crate::util::serde::Pem;
@@ -26,7 +25,6 @@ pub struct AccountInfo {
pub server_id: String,
pub hostname: Hostname,
pub password: String,
pub tor_keys: Vec<TorSecretKey>,
pub root_ca_key: PKey<Private>,
pub root_ca_cert: X509,
pub ssh_key: ssh_key::PrivateKey,
@@ -36,7 +34,6 @@ impl AccountInfo {
pub fn new(password: &str, start_time: SystemTime) -> Result<Self, Error> {
let server_id = generate_id();
let hostname = generate_hostname();
let tor_key = vec![TorSecretKey::generate()];
let root_ca_key = gen_nistp256()?;
let root_ca_cert = make_root_cert(&root_ca_key, &hostname, start_time)?;
let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
@@ -48,7 +45,6 @@ impl AccountInfo {
server_id,
hostname,
password: hash_password(password)?,
tor_keys: tor_key,
root_ca_key,
root_ca_cert,
ssh_key,
@@ -61,17 +57,6 @@ impl AccountInfo {
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let password = db.as_private().as_password().de()?;
let key_store = db.as_private().as_key_store();
let tor_addrs = db
.as_public()
.as_server_info()
.as_network()
.as_host()
.as_onions()
.de()?;
let tor_keys = tor_addrs
.into_iter()
.map(|tor_addr| key_store.as_onion().get_key(&tor_addr))
.collect::<Result<_, _>>()?;
let cert_store = key_store.as_local_certs();
let root_ca_key = cert_store.as_root_key().de()?.0;
let root_ca_cert = cert_store.as_root_cert().de()?.0;
@@ -82,7 +67,6 @@ impl AccountInfo {
server_id,
hostname,
password,
tor_keys,
root_ca_key,
root_ca_cert,
ssh_key,
@@ -97,17 +81,6 @@ impl AccountInfo {
server_info
.as_pubkey_mut()
.ser(&self.ssh_key.public_key().to_openssh()?)?;
server_info
.as_network_mut()
.as_host_mut()
.as_onions_mut()
.ser(
&self
.tor_keys
.iter()
.map(|tor_key| tor_key.onion_address())
.collect(),
)?;
server_info.as_password_hash_mut().ser(&self.password)?;
db.as_private_mut().as_password_mut().ser(&self.password)?;
db.as_private_mut()
@@ -117,9 +90,6 @@ impl AccountInfo {
.as_developer_key_mut()
.ser(Pem::new_ref(&self.developer_key))?;
let key_store = db.as_private_mut().as_key_store_mut();
for tor_key in &self.tor_keys {
key_store.as_onion_mut().insert_key(tor_key)?;
}
let cert_store = key_store.as_local_certs_mut();
if cert_store.as_root_cert().de()?.0 != self.root_ca_cert {
cert_store
@@ -148,11 +118,5 @@ impl AccountInfo {
self.hostname.no_dot_host_name(),
self.hostname.local_domain_name(),
]
.into_iter()
.chain(
self.tor_keys
.iter()
.map(|k| InternedString::from_display(&k.onion_address())),
)
}
}

View File

@@ -271,6 +271,7 @@ pub fn display_action_result<T: Serialize>(
}
#[derive(Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct RunActionParams {
pub package_id: PackageId,
@@ -362,6 +363,7 @@ pub async fn run_action(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ClearTaskParams {

View File

@@ -418,6 +418,7 @@ impl AsLogoutSessionId for KillSessionId {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct KillParams {
@@ -435,6 +436,7 @@ pub async fn kill<C: SessionAuthContext>(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ResetPasswordParams {

View File

@@ -30,6 +30,7 @@ use crate::util::serde::IoFormat;
use crate::version::VersionT;
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct BackupParams {

View File

@@ -2,6 +2,7 @@ use std::collections::BTreeMap;
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::PackageId;
use crate::context::CliContext;
@@ -13,19 +14,22 @@ pub mod os;
pub mod restore;
pub mod target;
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
pub struct BackupReport {
server: ServerBackupReport,
packages: BTreeMap<PackageId, PackageBackupReport>,
}
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
pub struct ServerBackupReport {
attempted: bool,
error: Option<String>,
}
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
pub struct PackageBackupReport {
pub error: Option<String>,
}

View File

@@ -7,9 +7,7 @@ use ssh_key::private::Ed25519Keypair;
use crate::account::AccountInfo;
use crate::hostname::{Hostname, generate_hostname, generate_id};
use crate::net::tor::TorSecretKey;
use crate::prelude::*;
use crate::util::crypto::ed25519_expand_key;
use crate::util::serde::{Base32, Base64, Pem};
pub struct OsBackup {
@@ -85,10 +83,6 @@ impl OsBackupV0 {
&mut ssh_key::rand_core::OsRng::default(),
ssh_key::Algorithm::Ed25519,
)?,
tor_keys: TorSecretKey::from_bytes(self.tor_key.0)
.ok()
.into_iter()
.collect(),
developer_key: ed25519_dalek::SigningKey::generate(
&mut ssh_key::rand_core::OsRng::default(),
),
@@ -119,10 +113,6 @@ impl OsBackupV1 {
root_ca_key: self.root_ca_key.0,
root_ca_cert: self.root_ca_cert.0,
ssh_key: ssh_key::PrivateKey::from(Ed25519Keypair::from_seed(&self.net_key.0)),
tor_keys: TorSecretKey::from_bytes(ed25519_expand_key(&self.net_key.0))
.ok()
.into_iter()
.collect(),
developer_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
},
ui: self.ui,
@@ -140,7 +130,6 @@ struct OsBackupV2 {
root_ca_key: Pem<PKey<Private>>, // PEM Encoded OpenSSL Key
root_ca_cert: Pem<X509>, // PEM Encoded OpenSSL X509 Certificate
ssh_key: Pem<ssh_key::PrivateKey>, // PEM Encoded OpenSSH Key
tor_keys: Vec<TorSecretKey>, // Base64 Encoded Ed25519 Expanded Secret Key
compat_s9pk_key: Pem<ed25519_dalek::SigningKey>, // PEM Encoded ED25519 Key
ui: Value, // JSON Value
}
@@ -154,7 +143,6 @@ impl OsBackupV2 {
root_ca_key: self.root_ca_key.0,
root_ca_cert: self.root_ca_cert.0,
ssh_key: self.ssh_key.0,
tor_keys: self.tor_keys,
developer_key: self.compat_s9pk_key.0,
},
ui: self.ui,
@@ -167,7 +155,6 @@ impl OsBackupV2 {
root_ca_key: Pem(backup.account.root_ca_key.clone()),
root_ca_cert: Pem(backup.account.root_ca_cert.clone()),
ssh_key: Pem(backup.account.ssh_key.clone()),
tor_keys: backup.account.tor_keys.clone(),
compat_s9pk_key: Pem(backup.account.developer_key.clone()),
ui: backup.ui.clone(),
}

View File

@@ -30,6 +30,7 @@ use crate::{PLATFORM, PackageId};
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
#[ts(export)]
pub struct RestorePackageParams {
#[arg(help = "help.arg.package-ids")]
pub ids: Vec<PackageId>,

View File

@@ -36,7 +36,8 @@ impl Map for CifsTargets {
}
}
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct CifsBackupTarget {
hostname: String,
@@ -72,9 +73,10 @@ pub fn cifs<C: Context>() -> ParentHandler<C> {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct AddParams {
pub struct CifsAddParams {
#[arg(help = "help.arg.cifs-hostname")]
pub hostname: String,
#[arg(help = "help.arg.cifs-path")]
@@ -87,12 +89,12 @@ pub struct AddParams {
pub async fn add(
ctx: RpcContext,
AddParams {
CifsAddParams {
hostname,
path,
username,
password,
}: AddParams,
}: CifsAddParams,
) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
let cifs = Cifs {
hostname,
@@ -131,9 +133,10 @@ pub async fn add(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct UpdateParams {
pub struct CifsUpdateParams {
#[arg(help = "help.arg.backup-target-id")]
pub id: BackupTargetId,
#[arg(help = "help.arg.cifs-hostname")]
@@ -148,13 +151,13 @@ pub struct UpdateParams {
pub async fn update(
ctx: RpcContext,
UpdateParams {
CifsUpdateParams {
id,
hostname,
path,
username,
password,
}: UpdateParams,
}: CifsUpdateParams,
) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
let id = if let BackupTargetId::Cifs { id } = id {
id
@@ -207,14 +210,15 @@ pub async fn update(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct RemoveParams {
pub struct CifsRemoveParams {
#[arg(help = "help.arg.backup-target-id")]
pub id: BackupTargetId,
}
pub async fn remove(ctx: RpcContext, RemoveParams { id }: RemoveParams) -> Result<(), Error> {
pub async fn remove(ctx: RpcContext, CifsRemoveParams { id }: CifsRemoveParams) -> Result<(), Error> {
let id = if let BackupTargetId::Cifs { id } = id {
id
} else {

View File

@@ -34,7 +34,8 @@ use crate::util::{FromStrParser, VersionString};
pub mod cifs;
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(tag = "type")]
#[serde(rename_all = "camelCase")]
pub enum BackupTarget {
@@ -49,7 +50,7 @@ pub enum BackupTarget {
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, TS)]
#[ts(type = "string")]
#[ts(export, type = "string")]
pub enum BackupTargetId {
Disk { logicalname: PathBuf },
Cifs { id: u32 },
@@ -111,6 +112,7 @@ impl Serialize for BackupTargetId {
}
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(tag = "type")]
#[serde(rename_all = "camelCase")]
pub enum BackupTargetFS {
@@ -210,20 +212,26 @@ pub async fn list(ctx: RpcContext) -> Result<BTreeMap<BackupTargetId, BackupTarg
.collect())
}
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[derive(Clone, Debug, Default, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct BackupInfo {
#[ts(type = "string")]
pub version: Version,
#[ts(type = "string | null")]
pub timestamp: Option<DateTime<Utc>>,
pub package_backups: BTreeMap<PackageId, PackageBackupInfo>,
}
#[derive(Clone, Debug, Deserialize, Serialize)]
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct PackageBackupInfo {
pub title: InternedString,
pub version: VersionString,
#[ts(type = "string")]
pub os_version: Version,
#[ts(type = "string")]
pub timestamp: DateTime<Utc>,
}
@@ -265,6 +273,7 @@ fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) -> Re
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct InfoParams {
@@ -387,6 +396,7 @@ pub async fn mount(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct UmountParams {

View File

@@ -9,7 +9,7 @@ use crate::disk::fsck::RepairStrategy;
use crate::disk::main::DEFAULT_PASSWORD;
use crate::firmware::{check_for_firmware_update, update_firmware};
use crate::init::{InitPhases, STANDBY_MODE_PATH};
use crate::net::gateway::UpgradableListener;
use crate::net::gateway::WildcardListener;
use crate::net::web_server::WebServer;
use crate::prelude::*;
use crate::progress::FullProgressTracker;
@@ -19,7 +19,7 @@ use crate::{DATA_DIR, PLATFORM};
#[instrument(skip_all)]
async fn setup_or_init(
server: &mut WebServer<UpgradableListener>,
server: &mut WebServer<WildcardListener>,
config: &ServerConfig,
) -> Result<Result<(RpcContext, FullProgressTracker), Shutdown>, Error> {
if let Some(firmware) = check_for_firmware_update()
@@ -204,7 +204,7 @@ async fn setup_or_init(
#[instrument(skip_all)]
pub async fn main(
server: &mut WebServer<UpgradableListener>,
server: &mut WebServer<WildcardListener>,
config: &ServerConfig,
) -> Result<Result<(RpcContext, FullProgressTracker), Shutdown>, Error> {
if &*PLATFORM == "raspberrypi" && tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() {

View File

@@ -12,7 +12,7 @@ use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::context::rpc::InitRpcContextPhases;
use crate::context::{DiagnosticContext, InitContext, RpcContext};
use crate::net::gateway::{BindTcp, SelfContainedNetworkInterfaceListener, UpgradableListener};
use crate::net::gateway::WildcardListener;
use crate::net::static_server::refresher;
use crate::net::web_server::{Acceptor, WebServer};
use crate::prelude::*;
@@ -23,7 +23,7 @@ use crate::util::logger::LOGGER;
#[instrument(skip_all)]
async fn inner_main(
server: &mut WebServer<UpgradableListener>,
server: &mut WebServer<WildcardListener>,
config: &ServerConfig,
) -> Result<Option<Shutdown>, Error> {
let rpc_ctx = if !tokio::fs::metadata("/run/startos/initialized")
@@ -148,7 +148,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
.expect(&t!("bins.startd.failed-to-initialize-runtime"));
let res = rt.block_on(async {
let mut server = WebServer::new(
Acceptor::bind_upgradable(SelfContainedNetworkInterfaceListener::bind(BindTcp, 80)),
Acceptor::new(WildcardListener::new(80)?),
refresher(),
);
match inner_main(&mut server, &config).await {

View File

@@ -13,7 +13,7 @@ use visit_rs::Visit;
use crate::context::CliContext;
use crate::context::config::ClientConfig;
use crate::net::gateway::{Bind, BindTcp};
use tokio::net::TcpListener;
use crate::net::tls::TlsListener;
use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
use crate::prelude::*;
@@ -57,7 +57,12 @@ async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
if !a.contains_key(&key) {
match (|| {
Ok::<_, Error>(TlsListener::new(
BindTcp.bind(addr)?,
TcpListener::from_std(
mio::net::TcpListener::bind(addr)
.with_kind(ErrorKind::Network)?
.into(),
)
.with_kind(ErrorKind::Network)?,
TunnelCertHandler {
db: https_db.clone(),
crypto_provider: Arc::new(tokio_rustls::rustls::crypto::ring::default_provider()),

View File

@@ -34,7 +34,7 @@ use crate::disk::mount::guard::MountGuard;
use crate::init::{InitResult, check_time_is_synchronized};
use crate::install::PKG_ARCHIVE_DIR;
use crate::lxc::LxcManager;
use crate::net::gateway::UpgradableListener;
use crate::net::gateway::WildcardListener;
use crate::net::net_controller::{NetController, NetService};
use crate::net::socks::DEFAULT_SOCKS_LISTEN;
use crate::net::utils::{find_eth_iface, find_wifi_iface};
@@ -132,7 +132,7 @@ pub struct RpcContext(Arc<RpcContextSeed>);
impl RpcContext {
#[instrument(skip_all)]
pub async fn init(
webserver: &WebServerAcceptorSetter<UpgradableListener>,
webserver: &WebServerAcceptorSetter<WildcardListener>,
config: &ServerConfig,
disk_guid: InternedString,
init_result: Option<InitResult>,
@@ -167,7 +167,7 @@ impl RpcContext {
} else {
let net_ctrl =
Arc::new(NetController::init(db.clone(), &account.hostname, socks_proxy).await?);
webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
let os_net_service = net_ctrl.os_bindings().await?;
(net_ctrl, os_net_service)
};

View File

@@ -20,7 +20,7 @@ use crate::context::RpcContext;
use crate::context::config::ServerConfig;
use crate::disk::mount::guard::{MountGuard, TmpMountGuard};
use crate::hostname::Hostname;
use crate::net::gateway::UpgradableListener;
use crate::net::gateway::WildcardListener;
use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
use crate::prelude::*;
use crate::progress::FullProgressTracker;
@@ -51,7 +51,7 @@ pub struct SetupResult {
}
pub struct SetupContextSeed {
pub webserver: WebServerAcceptorSetter<UpgradableListener>,
pub webserver: WebServerAcceptorSetter<WildcardListener>,
pub config: SyncMutex<ServerConfig>,
pub disable_encryption: bool,
pub progress: FullProgressTracker,
@@ -70,7 +70,7 @@ pub struct SetupContext(Arc<SetupContextSeed>);
impl SetupContext {
#[instrument(skip_all)]
pub fn init(
webserver: &WebServer<UpgradableListener>,
webserver: &WebServer<WildcardListener>,
config: ServerConfig,
) -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);

View File

@@ -8,6 +8,7 @@ use crate::prelude::*;
use crate::{Error, PackageId};
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ControlParams {

View File

@@ -18,7 +18,7 @@ use crate::s9pk::manifest::{LocaleString, Manifest};
use crate::status::StatusInfo;
use crate::util::DataUrl;
use crate::util::serde::{Pem, is_partial_of};
use crate::{ActionId, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
use crate::{ActionId, GatewayId, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
#[derive(Debug, Default, Deserialize, Serialize, TS)]
#[ts(export)]
@@ -381,6 +381,9 @@ pub struct PackageDataEntry {
pub hosts: Hosts,
#[ts(type = "string[]")]
pub store_exposed_dependents: Vec<JsonPointer>,
#[serde(default)]
#[ts(type = "string | null")]
pub outbound_gateway: Option<GatewayId>,
}
impl AsRef<PackageDataEntry> for PackageDataEntry {
fn as_ref(&self) -> &PackageDataEntry {

View File

@@ -20,8 +20,9 @@ use crate::db::model::Database;
use crate::db::model::package::AllPackageData;
use crate::net::acme::AcmeProvider;
use crate::net::host::Host;
use crate::net::host::binding::{AddSslOptions, BindInfo, BindOptions, NetInfo};
use crate::net::utils::ipv6_is_local;
use crate::net::host::binding::{
AddSslOptions, BindInfo, BindOptions, Bindings, DerivedAddressInfo, NetInfo,
};
use crate::net::vhost::AlpnInfo;
use crate::prelude::*;
use crate::progress::FullProgress;
@@ -63,36 +64,36 @@ impl Public {
post_init_migration_todos: BTreeMap::new(),
network: NetworkInfo {
host: Host {
bindings: [(
80,
BindInfo {
enabled: false,
options: BindOptions {
preferred_external_port: 80,
add_ssl: Some(AddSslOptions {
preferred_external_port: 443,
add_x_forwarded_headers: false,
alpn: Some(AlpnInfo::Specified(vec![
MaybeUtf8String("h2".into()),
MaybeUtf8String("http/1.1".into()),
])),
}),
secure: None,
bindings: Bindings(
[(
80,
BindInfo {
enabled: false,
options: BindOptions {
preferred_external_port: 80,
add_ssl: Some(AddSslOptions {
preferred_external_port: 443,
add_x_forwarded_headers: false,
alpn: Some(AlpnInfo::Specified(vec![
MaybeUtf8String("h2".into()),
MaybeUtf8String("http/1.1".into()),
])),
}),
secure: None,
},
net: NetInfo {
assigned_port: None,
assigned_ssl_port: Some(443),
},
addresses: DerivedAddressInfo::default(),
},
net: NetInfo {
assigned_port: None,
assigned_ssl_port: Some(443),
private_disabled: OrdSet::new(),
public_enabled: OrdSet::new(),
},
},
)]
.into_iter()
.collect(),
onions: account.tor_keys.iter().map(|k| k.onion_address()).collect(),
)]
.into_iter()
.collect(),
),
public_domains: BTreeMap::new(),
private_domains: BTreeSet::new(),
hostname_info: BTreeMap::new(),
private_domains: BTreeMap::new(),
port_forwards: BTreeSet::new(),
},
wifi: WifiInfo {
enabled: true,
@@ -117,6 +118,7 @@ impl Public {
acme
},
dns: Default::default(),
default_outbound: None,
},
status_info: ServerStatus {
backup_progress: None,
@@ -220,6 +222,9 @@ pub struct NetworkInfo {
pub acme: BTreeMap<AcmeProvider, AcmeSettings>,
#[serde(default)]
pub dns: DnsSettings,
#[serde(default)]
#[ts(type = "string | null")]
pub default_outbound: Option<GatewayId>,
}
#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
@@ -239,41 +244,12 @@ pub struct DnsSettings {
#[ts(export)]
pub struct NetworkInterfaceInfo {
pub name: Option<InternedString>,
pub public: Option<bool>,
pub secure: Option<bool>,
pub ip_info: Option<Arc<IpInfo>>,
#[serde(default, rename = "type")]
pub gateway_type: Option<GatewayType>,
}
impl NetworkInterfaceInfo {
pub fn public(&self) -> bool {
self.public.unwrap_or_else(|| {
!self.ip_info.as_ref().map_or(true, |ip_info| {
let ip4s = ip_info
.subnets
.iter()
.filter_map(|ipnet| {
if let IpAddr::V4(ip4) = ipnet.addr() {
Some(ip4)
} else {
None
}
})
.collect::<BTreeSet<_>>();
if !ip4s.is_empty() {
return ip4s
.iter()
.all(|ip4| ip4.is_loopback() || ip4.is_private() || ip4.is_link_local());
}
ip_info.subnets.iter().all(|ipnet| {
if let IpAddr::V6(ip6) = ipnet.addr() {
ipv6_is_local(ip6)
} else {
true
}
})
})
})
}
pub fn secure(&self) -> bool {
self.secure.unwrap_or(false)
}
@@ -310,6 +286,28 @@ pub enum NetworkInterfaceType {
Loopback,
}
#[derive(
Clone,
Copy,
Debug,
Default,
PartialEq,
Eq,
PartialOrd,
Ord,
Deserialize,
Serialize,
TS,
clap::ValueEnum,
)]
#[ts(export)]
#[serde(rename_all = "kebab-case")]
pub enum GatewayType {
#[default]
InboundOutbound,
OutboundOnly,
}
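
This replaces the removed public() subnet heuristic: whether a gateway is public is now declared per gateway rather than inferred from its addresses. A minimal serde round-trip sketch, assuming only what the derive attributes above declare (the test name is hypothetical):

#[derive(Debug, Default, PartialEq, serde::Deserialize, serde::Serialize)]
#[serde(rename_all = "kebab-case")]
pub enum GatewayType {
    #[default]
    InboundOutbound,
    OutboundOnly,
}

#[test]
fn gateway_type_uses_kebab_case() {
    // kebab-case rename: variants travel as "inbound-outbound" / "outbound-only"
    assert_eq!(
        serde_json::to_string(&GatewayType::OutboundOnly).unwrap(),
        "\"outbound-only\""
    );
    assert_eq!(
        serde_json::from_str::<GatewayType>("\"inbound-outbound\"").unwrap(),
        GatewayType::InboundOutbound
    );
}
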
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]

View File

@@ -43,22 +43,28 @@ pub struct DiskInfo {
pub guid: Option<InternedString>,
}
#[derive(Clone, Debug, Deserialize, Serialize)]
#[derive(Clone, Debug, Deserialize, Serialize, ts_rs::TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct PartitionInfo {
pub logicalname: PathBuf,
pub label: Option<String>,
#[ts(type = "number")]
pub capacity: u64,
#[ts(type = "number | null")]
pub used: Option<u64>,
pub start_os: BTreeMap<String, StartOsRecoveryInfo>,
pub guid: Option<InternedString>,
}
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[derive(Clone, Debug, Default, Deserialize, Serialize, ts_rs::TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct StartOsRecoveryInfo {
pub hostname: Hostname,
#[ts(type = "string")]
pub version: exver::Version,
#[ts(type = "string")]
pub timestamp: DateTime<Utc>,
pub password_hash: Option<String>,
pub wrapped_key: Option<String>,

View File

@@ -42,11 +42,11 @@ pub enum ErrorKind {
ParseUrl = 19,
DiskNotAvailable = 20,
BlockDevice = 21,
InvalidOnionAddress = 22,
// InvalidOnionAddress = 22,
Pack = 23,
ValidateS9pk = 24,
DiskCorrupted = 25, // Remove
Tor = 26,
// Tor = 26,
ConfigGen = 27,
ParseNumber = 28,
Database = 29,
@@ -126,11 +126,11 @@ impl ErrorKind {
ParseUrl => t!("error.parse-url"),
DiskNotAvailable => t!("error.disk-not-available"),
BlockDevice => t!("error.block-device"),
InvalidOnionAddress => t!("error.invalid-onion-address"),
// InvalidOnionAddress => t!("error.invalid-onion-address"),
Pack => t!("error.pack"),
ValidateS9pk => t!("error.validate-s9pk"),
DiskCorrupted => t!("error.disk-corrupted"), // Remove
Tor => t!("error.tor"),
// Tor => t!("error.tor"),
ConfigGen => t!("error.config-gen"),
ParseNumber => t!("error.parse-number"),
Database => t!("error.database"),
@@ -370,17 +370,6 @@ impl From<reqwest::Error> for Error {
Error::new(e, kind)
}
}
#[cfg(feature = "arti")]
impl From<arti_client::Error> for Error {
fn from(e: arti_client::Error) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<zbus::Error> for Error {
fn from(e: zbus::Error) -> Self {
Error::new(e, ErrorKind::DBus)
@@ -508,8 +497,10 @@ impl From<Error> for RpcError {
}
impl From<RpcError> for Error {
fn from(e: RpcError) -> Self {
let data = ErrorData::from(&e);
let info = data.info.clone();
Error::new(
ErrorData::from(&e),
data,
if let Ok(kind) = e.code.try_into() {
kind
} else if e.code == METHOD_NOT_FOUND_ERROR.code {
@@ -523,6 +514,7 @@ impl From<RpcError> for Error {
ErrorKind::Unknown
},
)
.with_info(info)
}
}

View File

@@ -6,7 +6,8 @@ use tracing::instrument;
use crate::util::Invoke;
use crate::{Error, ErrorKind};
#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize)]
#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize, ts_rs::TS)]
#[ts(type = "string")]
pub struct Hostname(pub InternedString);
lazy_static::lazy_static! {

View File

@@ -20,7 +20,7 @@ use crate::db::model::public::ServerStatus;
use crate::developer::OS_DEVELOPER_KEY_PATH;
use crate::hostname::Hostname;
use crate::middleware::auth::local::LocalAuthContext;
use crate::net::gateway::UpgradableListener;
use crate::net::gateway::WildcardListener;
use crate::net::net_controller::{NetController, NetService};
use crate::net::socks::DEFAULT_SOCKS_LISTEN;
use crate::net::utils::find_wifi_iface;
@@ -144,7 +144,7 @@ pub async fn run_script<P: AsRef<Path>>(path: P, mut progress: PhaseProgressTrac
#[instrument(skip_all)]
pub async fn init(
webserver: &WebServerAcceptorSetter<UpgradableListener>,
webserver: &WebServerAcceptorSetter<WildcardListener>,
cfg: &ServerConfig,
InitPhases {
preinit,
@@ -218,7 +218,7 @@ pub async fn init(
)
.await?,
);
webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
let os_net_service = net_ctrl.os_bindings().await?;
start_net.complete();

View File

@@ -137,6 +137,7 @@ pub async fn install(
json!({
"id": id,
"targetVersion": VersionRange::exactly(version.deref().clone()),
"otherVersions": "none",
}),
RegistryUrlParams {
registry: registry.clone(),
@@ -176,6 +177,7 @@ pub async fn install(
}
#[derive(Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct SideloadParams {
#[ts(skip)]
@@ -184,6 +186,7 @@ pub struct SideloadParams {
}
#[derive(Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct SideloadResponse {
pub upload: Guid,
@@ -283,6 +286,7 @@ pub async fn sideload(
}
#[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct CancelInstallParams {
@@ -484,7 +488,7 @@ pub async fn cli_install(
let mut packages: GetPackageResponse = from_value(
ctx.call_remote::<RegistryContext>(
"package.get",
json!({ "id": &id, "targetVersion": version, "sourceVersion": source_version }),
json!({ "id": &id, "targetVersion": version, "sourceVersion": source_version, "otherVersions": "none" }),
)
.await?,
)?;
@@ -520,6 +524,7 @@ pub async fn cli_install(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct UninstallParams {

View File

@@ -24,6 +24,7 @@ use tokio::process::{Child, Command};
use tokio_stream::wrappers::LinesStream;
use tokio_tungstenite::tungstenite::Message;
use tracing::instrument;
use ts_rs::TS;
use crate::PackageId;
use crate::context::{CliContext, RpcContext};
@@ -109,23 +110,28 @@ async fn ws_handler(
}
}
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct LogResponse {
#[ts(as = "Vec<LogEntry>")]
pub entries: Reversible<LogEntry>,
start_cursor: Option<String>,
end_cursor: Option<String>,
}
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct LogFollowResponse {
start_cursor: Option<String>,
guid: Guid,
}
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct LogEntry {
#[ts(type = "string")]
timestamp: DateTime<Utc>,
message: String,
boot_id: String,
@@ -321,14 +327,17 @@ impl From<BootIdentifier> for String {
}
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export, concrete(Extra = Empty), bound = "")]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct LogsParams<Extra: FromArgMatches + Args = Empty> {
#[command(flatten)]
#[serde(flatten)]
#[ts(skip)]
extra: Extra,
#[arg(short = 'l', long = "limit", help = "help.arg.log-limit")]
#[ts(optional)]
limit: Option<usize>,
#[arg(
short = 'c',
@@ -336,9 +345,11 @@ pub struct LogsParams<Extra: FromArgMatches + Args = Empty> {
conflicts_with = "follow",
help = "help.arg.log-cursor"
)]
#[ts(optional)]
cursor: Option<String>,
#[arg(short = 'b', long = "boot", help = "help.arg.log-boot")]
#[serde(default)]
#[ts(optional, type = "number | string")]
boot: Option<BootIdentifier>,
#[arg(
short = 'B',

View File

@@ -71,7 +71,7 @@ impl SignatureAuthContext for RpcContext {
.as_network()
.as_host()
.as_private_domains()
.de()
.keys()
.map(|k| k.into_iter())
.transpose(),
)

View File

@@ -461,7 +461,8 @@ impl ValueParserFactory for AcmeProvider {
}
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct InitAcmeParams {
#[arg(long, help = "help.arg.acme-provider")]
pub provider: AcmeProvider,
@@ -486,7 +487,8 @@ pub async fn init(
Ok(())
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct RemoveAcmeParams {
#[arg(long, help = "help.arg.acme-provider")]
pub provider: AcmeProvider,

View File

@@ -10,7 +10,7 @@ use color_eyre::eyre::eyre;
use futures::{FutureExt, StreamExt, TryStreamExt};
use hickory_server::authority::{AuthorityObject, Catalog, MessageResponseBuilder};
use hickory_server::proto::op::{Header, ResponseCode};
use hickory_server::proto::rr::{LowerName, Name, Record, RecordType};
use hickory_server::proto::rr::{Name, Record, RecordType};
use hickory_server::resolver::config::{ResolverConfig, ResolverOpts};
use hickory_server::server::{Request, RequestHandler, ResponseHandler, ResponseInfo};
use hickory_server::store::forwarder::{ForwardAuthority, ForwardConfig};
@@ -25,6 +25,7 @@ use serde::{Deserialize, Serialize};
use tokio::net::{TcpListener, UdpSocket};
use tokio::sync::RwLock;
use tracing::instrument;
use ts_rs::TS;
use crate::context::{CliContext, RpcContext};
use crate::db::model::Database;
@@ -93,7 +94,8 @@ pub fn dns_api<C: Context>() -> ParentHandler<C> {
)
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct QueryDnsParams {
#[arg(help = "help.arg.fqdn")]
pub fqdn: InternedString,
@@ -133,7 +135,8 @@ pub fn query_dns<C: Context>(
.map_err(Error::from)
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct SetStaticDnsParams {
#[arg(help = "help.arg.dns-servers")]
pub servers: Option<Vec<String>>,

View File

@@ -3,9 +3,11 @@ use std::net::{IpAddr, SocketAddrV4};
use std::sync::{Arc, Weak};
use std::time::Duration;
use ipnet::IpNet;
use futures::channel::oneshot;
use id_pool::IdPool;
use iddqd::{IdOrdItem, IdOrdMap};
use rand::Rng;
use imbl::OrdMap;
use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
@@ -15,7 +17,6 @@ use tokio::sync::mpsc;
use crate::GatewayId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::public::NetworkInterfaceInfo;
use crate::net::gateway::{DynInterfaceFilter, InterfaceFilter};
use crate::prelude::*;
use crate::util::Invoke;
use crate::util::future::NonDetachingJoinHandle;
@@ -23,25 +24,66 @@ use crate::util::serde::{HandlerExtSerde, display_serializable};
use crate::util::sync::Watch;
pub const START9_BRIDGE_IFACE: &str = "lxcbr0";
pub const FIRST_DYNAMIC_PRIVATE_PORT: u16 = 49152;
const EPHEMERAL_PORT_START: u16 = 49152;
// Per vhost.rs:89, not allowed: ports <= 1024, ports >= 32768, and 5353, 5355, 5432, 6010, 9050, 9051
const RESTRICTED_PORTS: &[u16] = &[5353, 5355, 5432, 6010, 9050, 9051];
fn is_restricted(port: u16) -> bool {
port <= 1024 || RESTRICTED_PORTS.contains(&port)
}
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct ForwardRequirements {
pub public_gateways: BTreeSet<GatewayId>,
pub private_ips: BTreeSet<IpAddr>,
pub secure: bool,
}
impl std::fmt::Display for ForwardRequirements {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"ForwardRequirements {{ public: {:?}, private: {:?}, secure: {} }}",
self.public_gateways, self.private_ips, self.secure
)
}
}
#[derive(Debug, Deserialize, Serialize)]
pub struct AvailablePorts(IdPool);
pub struct AvailablePorts(BTreeMap<u16, bool>);
impl AvailablePorts {
pub fn new() -> Self {
Self(IdPool::new_ranged(FIRST_DYNAMIC_PRIVATE_PORT..u16::MAX))
Self(BTreeMap::new())
}
pub fn alloc(&mut self) -> Result<u16, Error> {
self.0.request_id().ok_or_else(|| {
Error::new(
eyre!("{}", t!("net.forward.no-dynamic-ports-available")),
ErrorKind::Network,
)
})
pub fn alloc(&mut self, ssl: bool) -> Result<u16, Error> {
let mut rng = rand::rng();
for _ in 0..1000 {
let port = rng.random_range(EPHEMERAL_PORT_START..u16::MAX);
if !self.0.contains_key(&port) {
self.0.insert(port, ssl);
return Ok(port);
}
}
Err(Error::new(
eyre!("{}", t!("net.forward.no-dynamic-ports-available")),
ErrorKind::Network,
))
}
/// Try to allocate a specific port. Returns Some(port) if available, None if taken/restricted.
pub fn try_alloc(&mut self, port: u16, ssl: bool) -> Option<u16> {
if is_restricted(port) || self.0.contains_key(&port) {
return None;
}
self.0.insert(port, ssl);
Some(port)
}
/// Returns whether a given allocated port is SSL.
pub fn is_ssl(&self, port: u16) -> bool {
self.0.get(&port).copied().unwrap_or(false)
}
pub fn free(&mut self, ports: impl IntoIterator<Item = u16>) {
for port in ports {
self.0.return_id(port).unwrap_or_default();
self.0.remove(&port);
}
}
}
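
AvailablePorts now picks randomly from the ephemeral range instead of handing out sequential IdPool ids, and remembers whether each port was allocated for SSL. A usage sketch under those assumptions (port numbers are examples):

let mut ports = AvailablePorts::new();

// A preferred port is honored when it is free and unrestricted...
assert_eq!(ports.try_alloc(8443, true), Some(8443));
// ...while restricted ports (here: <= 1024) are refused outright...
assert_eq!(ports.try_alloc(443, true), None);
// ...and callers then fall back to a random ephemeral port.
let fallback = ports.alloc(true).unwrap();
assert!(fallback >= EPHEMERAL_PORT_START);

// The ssl flag is retained for later queries, e.g. by update_addresses.
assert!(ports.is_ssl(8443));
assert!(!ports.is_ssl(80)); // unallocated ports report false

ports.free([8443, fallback]);
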
@@ -61,10 +103,10 @@ pub fn forward_api<C: Context>() -> ParentHandler<C> {
}
let mut table = Table::new();
table.add_row(row![bc => "FROM", "TO", "FILTER"]);
table.add_row(row![bc => "FROM", "TO", "REQS"]);
for (external, target) in res.0 {
table.add_row(row![external, target.target, target.filter]);
table.add_row(row![external, target.target, target.reqs]);
}
table.print_tty(false)?;
@@ -79,6 +121,7 @@ struct ForwardMapping {
source: SocketAddrV4,
target: SocketAddrV4,
target_prefix: u8,
src_filter: Option<IpNet>,
rc: Weak<()>,
}
@@ -93,9 +136,10 @@ impl PortForwardState {
source: SocketAddrV4,
target: SocketAddrV4,
target_prefix: u8,
src_filter: Option<IpNet>,
) -> Result<Arc<()>, Error> {
if let Some(existing) = self.mappings.get_mut(&source) {
if existing.target == target {
if existing.target == target && existing.src_filter == src_filter {
if let Some(existing_rc) = existing.rc.upgrade() {
return Ok(existing_rc);
} else {
@@ -104,21 +148,28 @@ impl PortForwardState {
return Ok(rc);
}
} else {
// Different target, need to remove old and add new
// Different target or src_filter, need to remove old and add new
if let Some(mapping) = self.mappings.remove(&source) {
unforward(mapping.source, mapping.target, mapping.target_prefix).await?;
unforward(
mapping.source,
mapping.target,
mapping.target_prefix,
mapping.src_filter.as_ref(),
)
.await?;
}
}
}
let rc = Arc::new(());
forward(source, target, target_prefix).await?;
forward(source, target, target_prefix, src_filter.as_ref()).await?;
self.mappings.insert(
source,
ForwardMapping {
source,
target,
target_prefix,
src_filter,
rc: Arc::downgrade(&rc),
},
);
@@ -136,7 +187,13 @@ impl PortForwardState {
for source in to_remove {
if let Some(mapping) = self.mappings.remove(&source) {
unforward(mapping.source, mapping.target, mapping.target_prefix).await?;
unforward(
mapping.source,
mapping.target,
mapping.target_prefix,
mapping.src_filter.as_ref(),
)
.await?;
}
}
Ok(())
@@ -157,9 +214,14 @@ impl Drop for PortForwardState {
let mappings = std::mem::take(&mut self.mappings);
tokio::spawn(async move {
for (_, mapping) in mappings {
unforward(mapping.source, mapping.target, mapping.target_prefix)
.await
.log_err();
unforward(
mapping.source,
mapping.target,
mapping.target_prefix,
mapping.src_filter.as_ref(),
)
.await
.log_err();
}
});
}
@@ -171,6 +233,7 @@ enum PortForwardCommand {
source: SocketAddrV4,
target: SocketAddrV4,
target_prefix: u8,
src_filter: Option<IpNet>,
respond: oneshot::Sender<Result<Arc<()>, Error>>,
},
Gc {
@@ -257,9 +320,12 @@ impl PortForwardController {
source,
target,
target_prefix,
src_filter,
respond,
} => {
let result = state.add_forward(source, target, target_prefix).await;
let result = state
.add_forward(source, target, target_prefix, src_filter)
.await;
respond.send(result).ok();
}
PortForwardCommand::Gc { respond } => {
@@ -284,6 +350,7 @@ impl PortForwardController {
source: SocketAddrV4,
target: SocketAddrV4,
target_prefix: u8,
src_filter: Option<IpNet>,
) -> Result<Arc<()>, Error> {
let (send, recv) = oneshot::channel();
self.req
@@ -291,6 +358,7 @@ impl PortForwardController {
source,
target,
target_prefix,
src_filter,
respond: send,
})
.map_err(err_has_exited)?;
@@ -321,14 +389,14 @@ struct InterfaceForwardRequest {
external: u16,
target: SocketAddrV4,
target_prefix: u8,
filter: DynInterfaceFilter,
reqs: ForwardRequirements,
rc: Arc<()>,
}
#[derive(Clone)]
struct InterfaceForwardEntry {
external: u16,
filter: BTreeMap<DynInterfaceFilter, (SocketAddrV4, u8, Weak<()>)>,
targets: BTreeMap<ForwardRequirements, (SocketAddrV4, u8, Weak<()>)>,
// Maps source SocketAddr -> strong reference for the forward created in PortForwardController
forwards: BTreeMap<SocketAddrV4, Arc<()>>,
}
@@ -346,7 +414,7 @@ impl InterfaceForwardEntry {
fn new(external: u16) -> Self {
Self {
external,
filter: BTreeMap::new(),
targets: BTreeMap::new(),
forwards: BTreeMap::new(),
}
}
@@ -358,28 +426,38 @@ impl InterfaceForwardEntry {
) -> Result<(), Error> {
let mut keep = BTreeSet::<SocketAddrV4>::new();
for (iface, info) in ip_info.iter() {
if let Some((target, target_prefix)) = self
.filter
.iter()
.filter(|(_, (_, _, rc))| rc.strong_count() > 0)
.find(|(filter, _)| filter.filter(iface, info))
.map(|(_, (target, target_prefix, _))| (*target, *target_prefix))
{
if let Some(ip_info) = &info.ip_info {
for addr in ip_info.subnets.iter().filter_map(|net| {
if let IpAddr::V4(ip) = net.addr() {
Some(SocketAddrV4::new(ip, self.external))
} else {
None
for (gw_id, info) in ip_info.iter() {
if let Some(ip_info) = &info.ip_info {
for subnet in ip_info.subnets.iter() {
if let IpAddr::V4(ip) = subnet.addr() {
let addr = SocketAddrV4::new(ip, self.external);
if keep.contains(&addr) {
continue;
}
}) {
keep.insert(addr);
if !self.forwards.contains_key(&addr) {
let rc = port_forward
.add_forward(addr, target, target_prefix)
for (reqs, (target, target_prefix, rc)) in self.targets.iter() {
if rc.strong_count() == 0 {
continue;
}
if !reqs.secure && !info.secure() {
continue;
}
let src_filter =
if reqs.public_gateways.contains(gw_id) {
None
} else if reqs.private_ips.contains(&IpAddr::V4(ip)) {
Some(subnet.trunc())
} else {
continue;
};
keep.insert(addr);
let fwd_rc = port_forward
.add_forward(addr, *target, *target_prefix, src_filter)
.await?;
self.forwards.insert(addr, rc);
self.forwards.insert(addr, fwd_rc);
break;
}
}
}
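
Distilled, the selection above is: plaintext traffic (!reqs.secure) only rides on interfaces marked secure; a gateway named in public_gateways gets an unfiltered forward; a private IP named in private_ips gets a forward scoped to its own subnet; everything else is skipped. A sketch of just the filter decision (helper name hypothetical; imports as at the top of the file):

use std::net::{IpAddr, Ipv4Addr};
use ipnet::IpNet;

// Hypothetical helper mirroring the src_filter logic above.
// Outer None = skip this (gateway, address) pair entirely.
fn src_filter_for(
    reqs: &ForwardRequirements,
    gw_id: &GatewayId,
    ip: Ipv4Addr,
    subnet: &IpNet,
) -> Option<Option<IpNet>> {
    if reqs.public_gateways.contains(gw_id) {
        Some(None) // public: accept any source
    } else if reqs.private_ips.contains(&IpAddr::V4(ip)) {
        Some(Some(subnet.trunc())) // private: accept only the local subnet
    } else {
        None
    }
}
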
@@ -398,7 +476,7 @@ impl InterfaceForwardEntry {
external,
target,
target_prefix,
filter,
reqs,
mut rc,
}: InterfaceForwardRequest,
ip_info: &OrdMap<GatewayId, NetworkInterfaceInfo>,
@@ -412,8 +490,8 @@ impl InterfaceForwardEntry {
}
let entry = self
.filter
.entry(filter)
.targets
.entry(reqs)
.or_insert_with(|| (target, target_prefix, Arc::downgrade(&rc)));
if entry.0 != target {
entry.0 = target;
@@ -436,7 +514,7 @@ impl InterfaceForwardEntry {
ip_info: &OrdMap<GatewayId, NetworkInterfaceInfo>,
port_forward: &PortForwardController,
) -> Result<(), Error> {
self.filter.retain(|_, (_, _, rc)| rc.strong_count() > 0);
self.targets.retain(|_, (_, _, rc)| rc.strong_count() > 0);
self.update(ip_info, port_forward).await
}
@@ -495,7 +573,7 @@ pub struct ForwardTable(pub BTreeMap<u16, ForwardTarget>);
pub struct ForwardTarget {
pub target: SocketAddrV4,
pub target_prefix: u8,
pub filter: String,
pub reqs: String,
}
impl From<&InterfaceForwardState> for ForwardTable {
@@ -506,16 +584,16 @@ impl From<&InterfaceForwardState> for ForwardTable {
.iter()
.flat_map(|entry| {
entry
.filter
.targets
.iter()
.filter(|(_, (_, _, rc))| rc.strong_count() > 0)
.map(|(filter, (target, target_prefix, _))| {
.map(|(reqs, (target, target_prefix, _))| {
(
entry.external,
ForwardTarget {
target: *target,
target_prefix: *target_prefix,
filter: format!("{:#?}", filter),
reqs: format!("{reqs}"),
},
)
})
@@ -534,16 +612,6 @@ enum InterfaceForwardCommand {
DumpTable(oneshot::Sender<ForwardTable>),
}
#[test]
fn test() {
use crate::net::gateway::SecureFilter;
assert_ne!(
false.into_dyn(),
SecureFilter { secure: false }.into_dyn().into_dyn()
);
}
pub struct InterfacePortForwardController {
req: mpsc::UnboundedSender<InterfaceForwardCommand>,
_thread: NonDetachingJoinHandle<()>,
@@ -593,7 +661,7 @@ impl InterfacePortForwardController {
pub async fn add(
&self,
external: u16,
filter: DynInterfaceFilter,
reqs: ForwardRequirements,
target: SocketAddrV4,
target_prefix: u8,
) -> Result<Arc<()>, Error> {
@@ -605,7 +673,7 @@ impl InterfacePortForwardController {
external,
target,
target_prefix,
filter,
reqs,
rc,
},
send,
@@ -637,15 +705,18 @@ async fn forward(
source: SocketAddrV4,
target: SocketAddrV4,
target_prefix: u8,
src_filter: Option<&IpNet>,
) -> Result<(), Error> {
Command::new("/usr/lib/startos/scripts/forward-port")
.env("sip", source.ip().to_string())
let mut cmd = Command::new("/usr/lib/startos/scripts/forward-port");
cmd.env("sip", source.ip().to_string())
.env("dip", target.ip().to_string())
.env("dprefix", target_prefix.to_string())
.env("sport", source.port().to_string())
.env("dport", target.port().to_string())
.invoke(ErrorKind::Network)
.await?;
.env("dport", target.port().to_string());
if let Some(subnet) = src_filter {
cmd.env("src_subnet", subnet.to_string());
}
cmd.invoke(ErrorKind::Network).await?;
Ok(())
}
@@ -653,15 +724,18 @@ async fn unforward(
source: SocketAddrV4,
target: SocketAddrV4,
target_prefix: u8,
src_filter: Option<&IpNet>,
) -> Result<(), Error> {
Command::new("/usr/lib/startos/scripts/forward-port")
.env("UNDO", "1")
let mut cmd = Command::new("/usr/lib/startos/scripts/forward-port");
cmd.env("UNDO", "1")
.env("sip", source.ip().to_string())
.env("dip", target.ip().to_string())
.env("dprefix", target_prefix.to_string())
.env("sport", source.port().to_string())
.env("dport", target.port().to_string())
.invoke(ErrorKind::Network)
.await?;
.env("dport", target.port().to_string());
if let Some(subnet) = src_filter {
cmd.env("src_subnet", subnet.to_string());
}
cmd.invoke(ErrorKind::Network).await?;
Ok(())
}
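
Both helpers drive /usr/lib/startos/scripts/forward-port purely through environment variables; UNDO selects teardown and src_subnet is only set when a filter applies. A call-site sketch (addresses are examples; assumes an async context):

use std::net::{Ipv4Addr, SocketAddrV4};
use ipnet::IpNet;

// Forward WAN 203.0.113.7:8080 to 10.0.0.5:80, but only for sources
// inside 10.0.0.0/24, then tear the rule down with identical arguments.
let src = SocketAddrV4::new(Ipv4Addr::new(203, 0, 113, 7), 8080);
let dst = SocketAddrV4::new(Ipv4Addr::new(10, 0, 0, 5), 80);
let filter: IpNet = "10.0.0.0/24".parse().unwrap();
forward(src, dst, 24, Some(&filter)).await?;
unforward(src, dst, 24, Some(&filter)).await?;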

File diff suppressed because it is too large

View File

@@ -10,46 +10,29 @@ use ts_rs::TS;
use crate::GatewayId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
use crate::hostname::Hostname;
use crate::net::acme::AcmeProvider;
use crate::net::host::{HostApiKind, all_hosts};
use crate::net::tor::OnionAddress;
use crate::prelude::*;
use crate::util::serde::{HandlerExtSerde, display_serializable};
#[derive(Clone, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
#[serde(rename_all_fields = "camelCase")]
#[serde(tag = "kind")]
pub enum HostAddress {
Onion {
address: OnionAddress,
},
Domain {
address: InternedString,
public: Option<PublicDomainConfig>,
private: bool,
},
#[serde(rename_all = "camelCase")]
pub struct HostAddress {
pub address: InternedString,
pub public: Option<PublicDomainConfig>,
pub private: Option<BTreeSet<GatewayId>>,
}
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
#[ts(export)]
pub struct PublicDomainConfig {
pub gateway: GatewayId,
pub acme: Option<AcmeProvider>,
}
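
HostAddress collapses the old Onion/Domain enum: public carries the gateway and ACME settings when the domain is publicly exposed, while private lists the gateways the domain resolves on privately (None when public-only). Given the camelCase rename above, a public-only entry would serialize roughly as sketched here (constructors assumed; values are placeholders):

let addr = HostAddress {
    address: "cloud.example.com".into(),
    public: Some(PublicDomainConfig {
        gateway: "eth0".parse().unwrap(), // GatewayId FromStr assumed
        acme: None,
    }),
    private: None, // public-only: not served on any private gateway
};
// Expected wire shape (pretty-printed):
// {
//   "address": "cloud.example.com",
//   "public": { "gateway": "eth0", "acme": null },
//   "private": null
// }
println!("{}", serde_json::to_string_pretty(&addr).unwrap());
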
fn handle_duplicates(db: &mut DatabaseModel) -> Result<(), Error> {
let mut onions = BTreeSet::<OnionAddress>::new();
let mut domains = BTreeSet::<InternedString>::new();
let check_onion = |onions: &mut BTreeSet<OnionAddress>, onion: OnionAddress| {
if onions.contains(&onion) {
return Err(Error::new(
eyre!("onion address {onion} is already in use"),
ErrorKind::InvalidRequest,
));
}
onions.insert(onion);
Ok(())
};
let check_domain = |domains: &mut BTreeSet<InternedString>, domain: InternedString| {
if domains.contains(&domain) {
return Err(Error::new(
@@ -68,35 +51,27 @@ fn handle_duplicates(db: &mut DatabaseModel) -> Result<(), Error> {
not_in_use.push(host);
continue;
}
for onion in host.as_onions().de()? {
check_onion(&mut onions, onion)?;
}
let public = host.as_public_domains().keys()?;
for domain in &public {
check_domain(&mut domains, domain.clone())?;
}
for domain in host.as_private_domains().de()? {
for domain in host.as_private_domains().keys()? {
if !public.contains(&domain) {
check_domain(&mut domains, domain)?;
}
}
}
for host in not_in_use {
host.as_onions_mut()
.mutate(|o| Ok(o.retain(|o| !onions.contains(o))))?;
host.as_public_domains_mut()
.mutate(|d| Ok(d.retain(|d, _| !domains.contains(d))))?;
host.as_private_domains_mut()
.mutate(|d| Ok(d.retain(|d| !domains.contains(d))))?;
.mutate(|d| Ok(d.retain(|d, _| !domains.contains(d))))?;
for onion in host.as_onions().de()? {
check_onion(&mut onions, onion)?;
}
let public = host.as_public_domains().keys()?;
for domain in &public {
check_domain(&mut domains, domain.clone())?;
}
for domain in host.as_private_domains().de()? {
for domain in host.as_private_domains().keys()? {
if !public.contains(&domain) {
check_domain(&mut domains, domain)?;
}
@@ -159,29 +134,6 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
)
.with_inherited(Kind::inheritance),
)
.subcommand(
"onion",
ParentHandler::<C, Empty, Kind::Inheritance>::new()
.subcommand(
"add",
from_fn_async(add_onion::<Kind>)
.with_metadata("sync_db", Value::Bool(true))
.with_inherited(|_, a| a)
.no_display()
.with_about("about.add-address-to-host")
.with_call_remote::<CliContext>(),
)
.subcommand(
"remove",
from_fn_async(remove_onion::<Kind>)
.with_metadata("sync_db", Value::Bool(true))
.with_inherited(|_, a| a)
.no_display()
.with_about("about.remove-address-from-host")
.with_call_remote::<CliContext>(),
)
.with_inherited(Kind::inheritance),
)
.subcommand(
"list",
from_fn_async(list_addresses::<Kind>)
@@ -196,35 +148,7 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
}
let mut table = Table::new();
table.add_row(row![bc => "ADDRESS", "PUBLIC", "ACME PROVIDER"]);
for address in &res {
match address {
HostAddress::Onion { address } => {
table.add_row(row![address, true, "N/A"]);
}
HostAddress::Domain {
address,
public: Some(PublicDomainConfig { gateway, acme }),
private,
} => {
table.add_row(row![
address,
&format!(
"{} ({gateway})",
if *private { "YES" } else { "ONLY" }
),
acme.as_ref().map(|a| a.0.as_str()).unwrap_or("NONE")
]);
}
HostAddress::Domain {
address,
public: None,
..
} => {
table.add_row(row![address, &format!("NO"), "N/A"]);
}
}
}
todo!("find a good way to represent this");
table.print_tty(false)?;
@@ -235,7 +159,8 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
)
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct AddPublicDomainParams {
#[arg(help = "help.arg.fqdn")]
pub fqdn: InternedString,
@@ -271,7 +196,11 @@ pub async fn add_public_domain<Kind: HostApiKind>(
Kind::host_for(&inheritance, db)?
.as_public_domains_mut()
.insert(&fqdn, &PublicDomainConfig { acme, gateway })?;
handle_duplicates(db)
handle_duplicates(db)?;
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let ports = db.as_private().as_available_ports().de()?;
Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
})
.await
.result?;
@@ -284,7 +213,8 @@ pub async fn add_public_domain<Kind: HostApiKind>(
.with_kind(ErrorKind::Unknown)?
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct RemoveDomainParams {
#[arg(help = "help.arg.fqdn")]
pub fqdn: InternedString,
@@ -299,7 +229,11 @@ pub async fn remove_public_domain<Kind: HostApiKind>(
.mutate(|db| {
Kind::host_for(&inheritance, db)?
.as_public_domains_mut()
.remove(&fqdn)
.remove(&fqdn)?;
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let ports = db.as_private().as_available_ports().de()?;
Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
})
.await
.result?;
@@ -308,23 +242,30 @@ pub async fn remove_public_domain<Kind: HostApiKind>(
Ok(())
}
#[derive(Deserialize, Serialize, Parser)]
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct AddPrivateDomainParams {
#[arg(help = "help.arg.fqdn")]
pub fqdn: InternedString,
pub gateway: GatewayId,
}
pub async fn add_private_domain<Kind: HostApiKind>(
ctx: RpcContext,
AddPrivateDomainParams { fqdn }: AddPrivateDomainParams,
AddPrivateDomainParams { fqdn, gateway }: AddPrivateDomainParams,
inheritance: Kind::Inheritance,
) -> Result<(), Error> {
ctx.db
.mutate(|db| {
Kind::host_for(&inheritance, db)?
.as_private_domains_mut()
.mutate(|d| Ok(d.insert(fqdn)))?;
handle_duplicates(db)
.upsert(&fqdn, || Ok(BTreeSet::new()))?
.mutate(|d| Ok(d.insert(gateway)))?;
handle_duplicates(db)?;
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let ports = db.as_private().as_available_ports().de()?;
Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
})
.await
.result?;
@@ -342,7 +283,11 @@ pub async fn remove_private_domain<Kind: HostApiKind>(
.mutate(|db| {
Kind::host_for(&inheritance, db)?
.as_private_domains_mut()
.mutate(|d| Ok(d.remove(&domain)))
.mutate(|d| Ok(d.remove(&domain)))?;
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let ports = db.as_private().as_available_ports().de()?;
Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
})
.await
.result?;
@@ -351,55 +296,6 @@ pub async fn remove_private_domain<Kind: HostApiKind>(
Ok(())
}
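
Note that add_public_domain, remove_public_domain, add_private_domain, and remove_private_domain all close their mutate blocks with the same recompute tail. A hypothetical shared helper, shown only to make the repeated pattern explicit:

fn recompute_addresses<Kind: HostApiKind>(
    db: &mut DatabaseModel,
    inheritance: &Kind::Inheritance,
) -> Result<(), Error> {
    let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
    let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
    let ports = db.as_private().as_available_ports().de()?;
    Kind::host_for(inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
}
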
#[derive(Deserialize, Serialize, Parser)]
pub struct OnionParams {
#[arg(help = "help.arg.onion-address")]
pub onion: String,
}
pub async fn add_onion<Kind: HostApiKind>(
ctx: RpcContext,
OnionParams { onion }: OnionParams,
inheritance: Kind::Inheritance,
) -> Result<(), Error> {
let onion = onion.parse::<OnionAddress>()?;
ctx.db
.mutate(|db| {
db.as_private().as_key_store().as_onion().get_key(&onion)?;
Kind::host_for(&inheritance, db)?
.as_onions_mut()
.mutate(|a| Ok(a.insert(onion)))?;
handle_duplicates(db)
})
.await
.result?;
Kind::sync_host(&ctx, inheritance).await?;
Ok(())
}
pub async fn remove_onion<Kind: HostApiKind>(
ctx: RpcContext,
OnionParams { onion }: OnionParams,
inheritance: Kind::Inheritance,
) -> Result<(), Error> {
let onion = onion.parse::<OnionAddress>()?;
ctx.db
.mutate(|db| {
Kind::host_for(&inheritance, db)?
.as_onions_mut()
.mutate(|a| Ok(a.remove(&onion)))
})
.await
.result?;
Kind::sync_host(&ctx, inheritance).await?;
Ok(())
}
pub async fn list_addresses<Kind: HostApiKind>(
ctx: RpcContext,
_: Empty,

View File

@@ -1,23 +1,23 @@
use std::collections::{BTreeMap, BTreeSet};
use std::net::SocketAddr;
use std::str::FromStr;
use clap::Parser;
use clap::builder::ValueParserFactory;
use imbl::OrdSet;
use rpc_toolkit::{Context, Empty, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::HostId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::public::NetworkInterfaceInfo;
use crate::db::prelude::Map;
use crate::net::forward::AvailablePorts;
use crate::net::gateway::InterfaceFilter;
use crate::net::host::HostApiKind;
use crate::net::service_interface::HostnameInfo;
use crate::net::vhost::AlpnInfo;
use crate::prelude::*;
use crate::util::FromStrParser;
use crate::util::serde::{HandlerExtSerde, display_serializable};
use crate::{GatewayId, HostId};
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, TS)]
#[ts(export)]
@@ -45,25 +45,87 @@ impl FromStr for BindId {
}
}
#[derive(Debug, Deserialize, Serialize, TS)]
#[derive(Debug, Default, Clone, Deserialize, Serialize, TS, HasModel)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
#[model = "Model<Self>"]
pub struct DerivedAddressInfo {
/// User override: enable these addresses (only for public IP & port)
pub enabled: BTreeSet<SocketAddr>,
/// User override: disable these addresses (only for domains and private IP & port)
pub disabled: BTreeSet<(InternedString, u16)>,
/// COMPUTED: NetServiceData::update — all possible addresses for this binding
pub available: BTreeSet<HostnameInfo>,
}
impl DerivedAddressInfo {
/// Returns addresses that are currently enabled after applying overrides.
/// Default: public IPs are disabled, everything else is enabled.
/// Explicit `enabled`/`disabled` overrides take precedence.
pub fn enabled(&self) -> BTreeSet<&HostnameInfo> {
self.available
.iter()
.filter(|h| {
if h.public && h.metadata.is_ip() {
// Public IPs: disabled by default, explicitly enabled via SocketAddr
h.to_socket_addr().map_or(
true, // should never happen; if it does, surface the address rather than hide it
|sa| self.enabled.contains(&sa),
)
} else {
!self
.disabled
.contains(&(h.host.clone(), h.port.unwrap_or_default())) // addresses that can be disabled always carry a port
}
})
.collect()
}
}
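
So a fresh DerivedAddressInfo shows every private address and hides every public IP; the two override sets are keyed differently because only IP toggles need a full SocketAddr. A behavioral sketch (field values are placeholders; gw is some GatewayId in scope):

let mut info = DerivedAddressInfo::default();
info.available.insert(HostnameInfo {
    ssl: true,
    public: true,
    host: "203.0.113.7".into(),
    port: Some(5555),
    metadata: HostnameMetadata::Ipv4 { gateway: gw.clone() },
});

// Public IPs start hidden...
assert!(info.enabled().is_empty());
// ...until explicitly enabled by their socket address.
info.enabled.insert("203.0.113.7:5555".parse().unwrap());
assert_eq!(info.enabled().len(), 1);
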
#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
#[model = "Model<Self>"]
#[ts(export)]
pub struct Bindings(pub BTreeMap<u16, BindInfo>);
impl Map for Bindings {
type Key = u16;
type Value = BindInfo;
fn key_str(key: &Self::Key) -> Result<impl AsRef<str>, Error> {
Self::key_string(key)
}
fn key_string(key: &Self::Key) -> Result<InternedString, Error> {
Ok(InternedString::from_display(key))
}
}
impl std::ops::Deref for Bindings {
type Target = BTreeMap<u16, BindInfo>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl std::ops::DerefMut for Bindings {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
#[ts(export)]
pub struct BindInfo {
pub enabled: bool,
pub options: BindOptions,
pub net: NetInfo,
pub addresses: DerivedAddressInfo,
}
#[derive(Clone, Debug, Deserialize, Serialize, TS, PartialEq, Eq, PartialOrd, Ord)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct NetInfo {
#[ts(as = "BTreeSet::<GatewayId>")]
#[serde(default)]
pub private_disabled: OrdSet<GatewayId>,
#[ts(as = "BTreeSet::<GatewayId>")]
#[serde(default)]
pub public_enabled: OrdSet<GatewayId>,
pub assigned_port: Option<u16>,
pub assigned_ssl_port: Option<u16>,
}
@@ -71,25 +133,28 @@ impl BindInfo {
pub fn new(available_ports: &mut AvailablePorts, options: BindOptions) -> Result<Self, Error> {
let mut assigned_port = None;
let mut assigned_ssl_port = None;
if options.add_ssl.is_some() {
assigned_ssl_port = Some(available_ports.alloc()?);
if let Some(ssl) = &options.add_ssl {
assigned_ssl_port = available_ports
.try_alloc(ssl.preferred_external_port, true)
.or_else(|| Some(available_ports.alloc(true).ok()?));
}
if options
.secure
.map_or(true, |s| !(s.ssl && options.add_ssl.is_some()))
{
assigned_port = Some(available_ports.alloc()?);
assigned_port = available_ports
.try_alloc(options.preferred_external_port, false)
.or_else(|| Some(available_ports.alloc(false).ok()?));
}
Ok(Self {
enabled: true,
options,
net: NetInfo {
private_disabled: OrdSet::new(),
public_enabled: OrdSet::new(),
assigned_port,
assigned_ssl_port,
},
addresses: DerivedAddressInfo::default(),
})
}
pub fn update(
@@ -97,7 +162,11 @@ impl BindInfo {
available_ports: &mut AvailablePorts,
options: BindOptions,
) -> Result<Self, Error> {
let Self { net: mut lan, .. } = self;
let Self {
net: mut lan,
addresses,
..
} = self;
if options
.secure
.map_or(true, |s| !(s.ssl && options.add_ssl.is_some()))
@@ -105,19 +174,26 @@ impl BindInfo {
{
lan.assigned_port = if let Some(port) = lan.assigned_port.take() {
Some(port)
} else if let Some(port) =
available_ports.try_alloc(options.preferred_external_port, false)
{
Some(port)
} else {
Some(available_ports.alloc()?)
Some(available_ports.alloc(false)?)
};
} else {
if let Some(port) = lan.assigned_port.take() {
available_ports.free([port]);
}
}
if options.add_ssl.is_some() {
if let Some(ssl) = &options.add_ssl {
lan.assigned_ssl_port = if let Some(port) = lan.assigned_ssl_port.take() {
Some(port)
} else if let Some(port) = available_ports.try_alloc(ssl.preferred_external_port, true)
{
Some(port)
} else {
Some(available_ports.alloc()?)
Some(available_ports.alloc(true)?)
};
} else {
if let Some(port) = lan.assigned_ssl_port.take() {
@@ -128,22 +204,17 @@ impl BindInfo {
enabled: true,
options,
net: lan,
addresses: DerivedAddressInfo {
enabled: addresses.enabled,
disabled: addresses.disabled,
available: BTreeSet::new(),
},
})
}
pub fn disable(&mut self) {
self.enabled = false;
}
}
impl InterfaceFilter for NetInfo {
fn filter(&self, id: &GatewayId, info: &NetworkInterfaceInfo) -> bool {
info.ip_info.is_some()
&& if info.public() {
self.public_enabled.contains(id)
} else {
!self.private_disabled.contains(id)
}
}
}
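
Both BindInfo::new and BindInfo::update now allocate with the same try-preferred-then-random dance. A distilled sketch (helper hypothetical; note the code above maps a failed fallback alloc() to None via ok()?, whereas this sketch propagates the error):

fn alloc_preferring(
    ports: &mut AvailablePorts,
    preferred: u16,
    ssl: bool,
) -> Result<u16, Error> {
    match ports.try_alloc(preferred, ssl) {
        // Keep the author-requested external port when free...
        Some(port) => Ok(port),
        // ...otherwise fall back to a random ephemeral port.
        None => ports.alloc(ssl),
    }
}
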
#[derive(Debug, Clone, Copy, serde::Serialize, serde::Deserialize, TS)]
#[ts(export)]
@@ -188,7 +259,7 @@ pub fn binding<C: Context, Kind: HostApiKind>()
let mut table = Table::new();
table.add_row(row![bc => "INTERNAL PORT", "ENABLED", "EXTERNAL PORT", "EXTERNAL SSL PORT"]);
for (internal, info) in res {
for (internal, info) in res.iter() {
table.add_row(row![
internal,
info.enabled,
@@ -213,12 +284,12 @@ pub fn binding<C: Context, Kind: HostApiKind>()
.with_call_remote::<CliContext>(),
)
.subcommand(
"set-gateway-enabled",
from_fn_async(set_gateway_enabled::<Kind>)
"set-address-enabled",
from_fn_async(set_address_enabled::<Kind>)
.with_metadata("sync_db", Value::Bool(true))
.with_inherited(Kind::inheritance)
.no_display()
.with_about("about.set-gateway-enabled-for-binding")
.with_about("about.set-address-enabled-for-binding")
.with_call_remote::<CliContext>(),
)
}
@@ -227,7 +298,7 @@ pub async fn list_bindings<Kind: HostApiKind>(
ctx: RpcContext,
_: Empty,
inheritance: Kind::Inheritance,
) -> Result<BTreeMap<u16, BindInfo>, Error> {
) -> Result<Bindings, Error> {
Kind::host_for(&inheritance, &mut ctx.db.peek().await)?
.as_bindings()
.de()
@@ -236,50 +307,54 @@ pub async fn list_bindings<Kind: HostApiKind>(
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct BindingGatewaySetEnabledParams {
pub struct BindingSetAddressEnabledParams {
#[arg(help = "help.arg.internal-port")]
internal_port: u16,
#[arg(help = "help.arg.gateway-id")]
gateway: GatewayId,
#[arg(long, help = "help.arg.address")]
address: String,
#[arg(long, help = "help.arg.binding-enabled")]
enabled: Option<bool>,
}
pub async fn set_gateway_enabled<Kind: HostApiKind>(
pub async fn set_address_enabled<Kind: HostApiKind>(
ctx: RpcContext,
BindingGatewaySetEnabledParams {
BindingSetAddressEnabledParams {
internal_port,
gateway,
address,
enabled,
}: BindingGatewaySetEnabledParams,
}: BindingSetAddressEnabledParams,
inheritance: Kind::Inheritance,
) -> Result<(), Error> {
let enabled = enabled.unwrap_or(true);
let gateway_public = ctx
.net_controller
.net_iface
.watcher
.ip_info()
.get(&gateway)
.or_not_found(&gateway)?
.public();
let address: HostnameInfo =
serde_json::from_str(&address).with_kind(ErrorKind::Deserialization)?;
ctx.db
.mutate(|db| {
Kind::host_for(&inheritance, db)?
.as_bindings_mut()
.mutate(|b| {
let net = &mut b.get_mut(&internal_port).or_not_found(internal_port)?.net;
if gateway_public {
let bind = b.get_mut(&internal_port).or_not_found(internal_port)?;
if address.public && address.metadata.is_ip() {
// Public IPs: toggle via SocketAddr in `enabled` set
let sa = address.to_socket_addr().ok_or_else(|| {
Error::new(
eyre!("cannot convert address to socket addr"),
ErrorKind::InvalidRequest,
)
})?;
if enabled {
net.public_enabled.insert(gateway);
bind.addresses.enabled.insert(sa);
} else {
net.public_enabled.remove(&gateway);
bind.addresses.enabled.remove(&sa);
}
} else {
// Domains and private IPs: toggle via (host, port) in `disabled` set
let port = address.port.unwrap_or(if address.ssl { 443 } else { 80 });
let key = (address.host.clone(), port);
if enabled {
net.private_disabled.remove(&gateway);
bind.addresses.disabled.remove(&key);
} else {
net.private_disabled.insert(gateway);
bind.addresses.disabled.insert(key);
}
}
Ok(())

View File

@@ -1,23 +1,27 @@
use std::collections::{BTreeMap, BTreeSet};
use std::future::Future;
use std::net::{IpAddr, SocketAddrV4};
use std::panic::RefUnwindSafe;
use clap::Parser;
use imbl::OrdMap;
use imbl_value::InternedString;
use itertools::Itertools;
use patch_db::DestructureMut;
use rpc_toolkit::{Context, Empty, HandlerExt, OrEmpty, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::context::RpcContext;
use crate::db::model::DatabaseModel;
use crate::db::model::public::{NetworkInterfaceInfo, NetworkInterfaceType};
use crate::hostname::Hostname;
use crate::net::forward::AvailablePorts;
use crate::net::host::address::{HostAddress, PublicDomainConfig, address_api};
use crate::net::host::binding::{BindInfo, BindOptions, binding};
use crate::net::service_interface::HostnameInfo;
use crate::net::tor::OnionAddress;
use crate::net::host::binding::{BindInfo, BindOptions, Bindings, binding};
use crate::net::service_interface::{HostnameInfo, HostnameMetadata};
use crate::prelude::*;
use crate::{HostId, PackageId};
use crate::{GatewayId, HostId, PackageId};
pub mod address;
pub mod binding;
@@ -27,13 +31,23 @@ pub mod binding;
#[model = "Model<Self>"]
#[ts(export)]
pub struct Host {
pub bindings: BTreeMap<u16, BindInfo>,
#[ts(type = "string[]")]
pub onions: BTreeSet<OnionAddress>,
pub bindings: Bindings,
pub public_domains: BTreeMap<InternedString, PublicDomainConfig>,
pub private_domains: BTreeSet<InternedString>,
/// COMPUTED: NetService::update
pub hostname_info: BTreeMap<u16, Vec<HostnameInfo>>, // internal port -> Hostnames
pub private_domains: BTreeMap<InternedString, BTreeSet<GatewayId>>,
/// COMPUTED: port forwarding rules needed on gateways for public addresses to work.
#[serde(default)]
pub port_forwards: BTreeSet<PortForward>,
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct PortForward {
#[ts(type = "string")]
pub src: SocketAddrV4,
#[ts(type = "string")]
pub dst: SocketAddrV4,
pub gateway: GatewayId,
}
impl AsRef<Host> for Host {
@@ -46,31 +60,273 @@ impl Host {
Self::default()
}
pub fn addresses<'a>(&'a self) -> impl Iterator<Item = HostAddress> + 'a {
self.onions
self.public_domains
.iter()
.cloned()
.map(|address| HostAddress::Onion { address })
.chain(
self.public_domains
.iter()
.map(|(address, config)| HostAddress::Domain {
address: address.clone(),
public: Some(config.clone()),
private: self.private_domains.contains(address),
}),
)
.map(|(address, config)| HostAddress {
address: address.clone(),
public: Some(config.clone()),
private: self.private_domains.get(address).cloned(),
})
.chain(
self.private_domains
.iter()
.filter(|a| !self.public_domains.contains_key(*a))
.map(|address| HostAddress::Domain {
address: address.clone(),
.filter(|(domain, _)| !self.public_domains.contains_key(*domain))
.map(|(domain, gateways)| HostAddress {
address: domain.clone(),
public: None,
private: true,
private: Some(gateways.clone()),
}),
)
}
}
impl Model<Host> {
pub fn update_addresses(
&mut self,
mdns: &Hostname,
gateways: &OrdMap<GatewayId, NetworkInterfaceInfo>,
available_ports: &AvailablePorts,
) -> Result<(), Error> {
let this = self.destructure_mut();
// ips
for (_, bind) in this.bindings.as_entries_mut()? {
let net = bind.as_net().de()?;
let opt = bind.as_options().de()?;
let mut available = BTreeSet::new();
for (gid, g) in gateways {
let Some(ip_info) = &g.ip_info else {
continue;
};
let gateway_secure = g.secure();
for subnet in &ip_info.subnets {
let host = InternedString::from_display(&subnet.addr());
let metadata = if subnet.addr().is_ipv4() {
HostnameMetadata::Ipv4 {
gateway: gid.clone(),
}
} else {
HostnameMetadata::Ipv6 {
gateway: gid.clone(),
scope_id: ip_info.scope_id,
}
};
if let Some(port) = net.assigned_port.filter(|_| {
opt.secure
.map_or(gateway_secure, |s| !(s.ssl && opt.add_ssl.is_some()))
}) {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: false,
host: host.clone(),
port: Some(port),
metadata: metadata.clone(),
});
}
if let Some(port) = net.assigned_ssl_port {
available.insert(HostnameInfo {
ssl: true,
public: false,
host: host.clone(),
port: Some(port),
metadata,
});
}
}
if let Some(wan_ip) = &ip_info.wan_ip {
let host = InternedString::from_display(&wan_ip);
let metadata = HostnameMetadata::Ipv4 {
gateway: gid.clone(),
};
if let Some(port) = net.assigned_port.filter(|_| {
opt.secure.map_or(
false, // the public internet is never secure
|s| !(s.ssl && opt.add_ssl.is_some()),
)
}) {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: true,
host: host.clone(),
port: Some(port),
metadata: metadata.clone(),
});
}
if let Some(port) = net.assigned_ssl_port {
available.insert(HostnameInfo {
ssl: true,
public: true,
host: host.clone(),
port: Some(port),
metadata,
});
}
}
}
// mdns
let mdns_host = mdns.local_domain_name();
let mdns_gateways: BTreeSet<GatewayId> = gateways
.iter()
.filter(|(_, g)| {
matches!(
g.ip_info.as_ref().and_then(|i| i.device_type),
Some(NetworkInterfaceType::Ethernet | NetworkInterfaceType::Wireless)
)
})
.map(|(id, _)| id.clone())
.collect();
if let Some(port) = net.assigned_port.filter(|_| {
opt.secure
.map_or(true, |s| !(s.ssl && opt.add_ssl.is_some()))
}) {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: false,
host: mdns_host.clone(),
port: Some(port),
metadata: HostnameMetadata::Mdns {
gateways: mdns_gateways.clone(),
},
});
}
if let Some(port) = net.assigned_ssl_port {
available.insert(HostnameInfo {
ssl: true,
public: false,
host: mdns_host,
port: Some(port),
metadata: HostnameMetadata::Mdns {
gateways: mdns_gateways,
},
});
}
// public domains
for (domain, info) in this.public_domains.de()? {
let metadata = HostnameMetadata::PublicDomain {
gateway: info.gateway.clone(),
};
if let Some(port) = net.assigned_port.filter(|_| {
opt.secure.map_or(
false, // the public internet is never secure
|s| !(s.ssl && opt.add_ssl.is_some()),
)
}) {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: true,
host: domain.clone(),
port: Some(port),
metadata: metadata.clone(),
});
}
if let Some(mut port) = net.assigned_ssl_port {
if let Some(preferred) = opt
.add_ssl
.as_ref()
.map(|s| s.preferred_external_port)
.filter(|p| available_ports.is_ssl(*p))
{
port = preferred;
}
available.insert(HostnameInfo {
ssl: true,
public: true,
host: domain,
port: Some(port),
metadata,
});
}
}
// private domains
for (domain, domain_gateways) in this.private_domains.de()? {
if let Some(port) = net.assigned_port.filter(|_| {
opt.secure
.map_or(true, |s| !(s.ssl && opt.add_ssl.is_some()))
}) {
let gateways = if opt.secure.is_some() {
domain_gateways.clone()
} else {
domain_gateways
.iter()
.cloned()
.filter(|g| gateways.get(g).map_or(false, |g| g.secure()))
.collect()
};
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: true,
host: domain.clone(),
port: Some(port),
metadata: HostnameMetadata::PrivateDomain { gateways },
});
}
if let Some(mut port) = net.assigned_ssl_port {
if let Some(preferred) = opt
.add_ssl
.as_ref()
.map(|s| s.preferred_external_port)
.filter(|p| available_ports.is_ssl(*p))
{
port = preferred;
}
available.insert(HostnameInfo {
ssl: true,
public: true,
host: domain,
port: Some(port),
metadata: HostnameMetadata::PrivateDomain {
gateways: domain_gateways,
},
});
}
}
bind.as_addresses_mut().as_available_mut().ser(&available)?;
}
// compute port forwards from available public addresses
let bindings: Bindings = this.bindings.de()?;
let mut port_forwards = BTreeSet::new();
for bind in bindings.values() {
for addr in &bind.addresses.available {
if !addr.public {
continue;
}
let Some(port) = addr.port else {
continue;
};
let gw_id = match &addr.metadata {
HostnameMetadata::Ipv4 { gateway }
| HostnameMetadata::PublicDomain { gateway } => gateway,
_ => continue,
};
let Some(gw_info) = gateways.get(gw_id) else {
continue;
};
let Some(ip_info) = &gw_info.ip_info else {
continue;
};
let Some(wan_ip) = ip_info.wan_ip else {
continue;
};
for subnet in &ip_info.subnets {
let IpAddr::V4(addr) = subnet.addr() else {
continue;
};
port_forwards.insert(PortForward {
src: SocketAddrV4::new(wan_ip, port),
dst: SocketAddrV4::new(addr, port),
gateway: gw_id.clone(),
});
}
}
}
this.port_forwards.ser(&port_forwards)?;
Ok(())
}
}
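
The loop above means every public address whose metadata names a single gateway yields one PortForward per IPv4 subnet address on that gateway, always port-preserving. For a gateway with WAN IP 203.0.113.7 and LAN address 192.168.1.50 serving SSL port 5443, the derived rule would be roughly (addresses are examples):

PortForward {
    src: "203.0.113.7:5443".parse().unwrap(),  // wan_ip, external port
    dst: "192.168.1.50:5443".parse().unwrap(), // gateway-local address, same port
    gateway: gw_id.clone(),                    // the gateway that owns both
}
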
#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
#[model = "Model<Self>"]
@@ -112,22 +368,7 @@ pub fn host_for<'a>(
.as_hosts_mut(),
)
}
let tor_key = if host_info(db, package_id)?.as_idx(host_id).is_none() {
Some(
db.as_private_mut()
.as_key_store_mut()
.as_onion_mut()
.new_key()?,
)
} else {
None
};
host_info(db, package_id)?.upsert(host_id, || {
let mut h = Host::new();
h.onions
.insert(tor_key.or_not_found("generated tor key")?.onion_address());
Ok(h)
})
host_info(db, package_id)?.upsert(host_id, || Ok(Host::new()))
}
pub fn all_hosts(db: &mut DatabaseModel) -> impl Iterator<Item = Result<&mut Model<Host>, Error>> {

View File

@@ -3,28 +3,21 @@ use serde::{Deserialize, Serialize};
use crate::account::AccountInfo;
use crate::net::acme::AcmeCertStore;
use crate::net::ssl::CertStore;
use crate::net::tor::OnionStore;
use crate::prelude::*;
#[derive(Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
#[serde(rename_all = "camelCase")]
pub struct KeyStore {
pub onion: OnionStore,
pub local_certs: CertStore,
#[serde(default)]
pub acme: AcmeCertStore,
}
impl KeyStore {
pub fn new(account: &AccountInfo) -> Result<Self, Error> {
let mut res = Self {
onion: OnionStore::new(),
Ok(Self {
local_certs: CertStore::new(account)?,
acme: AcmeCertStore::new(),
};
for tor_key in account.tor_keys.iter().cloned() {
res.onion.insert(tor_key);
}
Ok(res)
})
}
}

View File

@@ -14,7 +14,6 @@ pub mod socks;
pub mod ssl;
pub mod static_server;
pub mod tls;
pub mod tor;
pub mod tunnel;
pub mod utils;
pub mod vhost;
@@ -23,7 +22,6 @@ pub mod wifi;
pub fn net_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand("tor", tor::tor_api::<C>().with_about("about.tor-commands"))
.subcommand(
"acme",
acme::acme_api::<C>().with_about("about.setup-acme-certificate"),

File diff suppressed because it is too large

View File

@@ -1,36 +1,84 @@
use std::net::{Ipv4Addr, Ipv6Addr};
use std::collections::BTreeSet;
use std::net::SocketAddr;
use imbl_value::InternedString;
use imbl_value::{InOMap, InternedString};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::{GatewayId, HostId, ServiceInterfaceId};
use crate::prelude::*;
use crate::{GatewayId, HostId, PackageId, ServiceInterfaceId};
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct HostnameInfo {
pub ssl: bool,
pub public: bool,
pub host: InternedString,
pub port: Option<u16>,
pub metadata: HostnameMetadata,
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "kebab-case")]
#[serde(rename_all_fields = "camelCase")]
#[serde(tag = "kind")]
pub enum HostnameInfo {
Ip {
gateway: GatewayInfo,
public: bool,
hostname: IpHostname,
pub enum HostnameMetadata {
Ipv4 {
gateway: GatewayId,
},
Onion {
hostname: OnionHostname,
Ipv6 {
gateway: GatewayId,
scope_id: u32,
},
Mdns {
gateways: BTreeSet<GatewayId>,
},
PrivateDomain {
gateways: BTreeSet<GatewayId>,
},
PublicDomain {
gateway: GatewayId,
},
Plugin {
package: PackageId,
#[serde(flatten)]
#[ts(skip)]
extra: InOMap<InternedString, Value>,
},
}
impl HostnameInfo {
pub fn to_socket_addr(&self) -> Option<SocketAddr> {
let ip = self.host.parse().ok()?;
Some(SocketAddr::new(ip, self.port?))
}
pub fn to_san_hostname(&self) -> InternedString {
self.host.clone()
}
}
impl HostnameMetadata {
pub fn is_ip(&self) -> bool {
matches!(self, Self::Ipv4 { .. } | Self::Ipv6 { .. })
}
pub fn gateways(&self) -> Box<dyn Iterator<Item = &GatewayId> + '_> {
match self {
Self::Ip { hostname, .. } => hostname.to_san_hostname(),
Self::Onion { hostname } => hostname.to_san_hostname(),
Self::Ipv4 { gateway }
| Self::Ipv6 { gateway, .. }
| Self::PublicDomain { gateway } => Box::new(std::iter::once(gateway)),
Self::PrivateDomain { gateways } | Self::Mdns { gateways } => {
Box::new(gateways.iter())
}
Self::Plugin { .. } => Box::new(std::iter::empty()),
}
}
}
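
gateways() flattens the variant shapes so callers never match on the enum; Plugin deliberately yields an empty iterator. A usage sketch (helper name hypothetical):

// Is this hostname reachable via the given local gateway?
fn reachable_on(meta: &HostnameMetadata, candidate: &GatewayId) -> bool {
    // Plugin-provided hostnames never match a local gateway here.
    meta.gateways().any(|g| g == candidate)
}
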
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct GatewayInfo {
@@ -39,63 +87,6 @@ pub struct GatewayInfo {
pub public: bool,
}
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct OnionHostname {
#[ts(type = "string")]
pub value: InternedString,
pub port: Option<u16>,
pub ssl_port: Option<u16>,
}
impl OnionHostname {
pub fn to_san_hostname(&self) -> InternedString {
self.value.clone()
}
}
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[serde(rename_all_fields = "camelCase")]
#[serde(tag = "kind")]
pub enum IpHostname {
Ipv4 {
value: Ipv4Addr,
port: Option<u16>,
ssl_port: Option<u16>,
},
Ipv6 {
value: Ipv6Addr,
#[serde(default)]
scope_id: u32,
port: Option<u16>,
ssl_port: Option<u16>,
},
Local {
#[ts(type = "string")]
value: InternedString,
port: Option<u16>,
ssl_port: Option<u16>,
},
Domain {
#[ts(type = "string")]
value: InternedString,
port: Option<u16>,
ssl_port: Option<u16>,
},
}
impl IpHostname {
pub fn to_san_hostname(&self) -> InternedString {
match self {
Self::Ipv4 { value, .. } => InternedString::from_display(value),
Self::Ipv6 { value, .. } => InternedString::from_display(value),
Self::Local { value, .. } => value.clone(),
Self::Domain { value, .. } => value.clone(),
}
}
}
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]

View File

@@ -8,7 +8,6 @@ use socks5_impl::server::{AuthAdaptor, ClientConnection, Server};
use tokio::net::{TcpListener, TcpStream};
use crate::HOST_IP;
use crate::net::tor::TorController;
use crate::prelude::*;
use crate::util::actor::background::BackgroundJobQueue;
use crate::util::future::NonDetachingJoinHandle;
@@ -22,7 +21,7 @@ pub struct SocksController {
_thread: NonDetachingJoinHandle<()>,
}
impl SocksController {
pub fn new(listen: SocketAddr, tor: TorController) -> Result<Self, Error> {
pub fn new(listen: SocketAddr) -> Result<Self, Error> {
Ok(Self {
_thread: tokio::spawn(async move {
let auth: AuthAdaptor<()> = Arc::new(NoAuth);
@@ -45,7 +44,6 @@ impl SocksController {
loop {
match server.accept().await {
Ok((stream, _)) => {
let tor = tor.clone();
bg.add_job(async move {
if let Err(e) = async {
match stream
@@ -57,40 +55,6 @@ impl SocksController {
.await
.with_kind(ErrorKind::Network)?
{
ClientConnection::Connect(
reply,
Address::DomainAddress(domain, port),
) if domain.ends_with(".onion") => {
if let Ok(mut target) = tor
.connect_onion(&domain.parse()?, port)
.await
{
let mut sock = reply
.reply(
Reply::Succeeded,
Address::unspecified(),
)
.await
.with_kind(ErrorKind::Network)?;
tokio::io::copy_bidirectional(
&mut sock,
&mut target,
)
.await
.with_kind(ErrorKind::Network)?;
} else {
let mut sock = reply
.reply(
Reply::HostUnreachable,
Address::unspecified(),
)
.await
.with_kind(ErrorKind::Network)?;
sock.shutdown()
.await
.with_kind(ErrorKind::Network)?;
}
}
ClientConnection::Connect(reply, addr) => {
if let Ok(mut target) = match addr {
Address::DomainAddress(domain, port) => {

View File

@@ -9,14 +9,14 @@ use async_compression::tokio::bufread::GzipEncoder;
use axum::Router;
use axum::body::Body;
use axum::extract::{self as x, Request};
use axum::response::{IntoResponse, Redirect, Response};
use axum::response::{IntoResponse, Response};
use axum::routing::{any, get};
use base64::display::Base64Display;
use digest::Digest;
use futures::future::ready;
use http::header::{
ACCEPT_ENCODING, ACCEPT_RANGES, CACHE_CONTROL, CONNECTION, CONTENT_ENCODING, CONTENT_LENGTH,
CONTENT_RANGE, CONTENT_TYPE, ETAG, HOST, RANGE,
CONTENT_RANGE, CONTENT_TYPE, ETAG, RANGE,
};
use http::request::Parts as RequestParts;
use http::{HeaderValue, Method, StatusCode};
@@ -36,8 +36,6 @@ use crate::middleware::auth::Auth;
use crate::middleware::auth::session::ValidSessionToken;
use crate::middleware::cors::Cors;
use crate::middleware::db::SyncDb;
use crate::net::gateway::GatewayInfo;
use crate::net::tls::TlsHandshakeInfo;
use crate::prelude::*;
use crate::rpc_continuations::{Guid, RpcContinuations};
use crate::s9pk::S9pk;
@@ -89,30 +87,6 @@ impl UiContext for RpcContext {
.middleware(SyncDb::new())
}
fn extend_router(self, router: Router) -> Router {
async fn https_redirect_if_public_http(
req: Request,
next: axum::middleware::Next,
) -> Response {
if req
.extensions()
.get::<GatewayInfo>()
.map_or(false, |p| p.info.public())
&& req.extensions().get::<TlsHandshakeInfo>().is_none()
{
Redirect::temporary(&format!(
"https://{}{}",
req.headers()
.get(HOST)
.and_then(|s| s.to_str().ok())
.unwrap_or("localhost"),
req.uri()
))
.into_response()
} else {
next.run(req).await
}
}
router
.route("/proxy/{url}", {
let ctx = self.clone();
@@ -136,7 +110,6 @@ impl UiContext for RpcContext {
}
}),
)
.layer(axum::middleware::from_fn(https_redirect_if_public_http))
}
}

View File

@@ -1,964 +0,0 @@
use std::borrow::Cow;
use std::collections::{BTreeMap, BTreeSet};
use std::net::SocketAddr;
use std::str::FromStr;
use std::sync::{Arc, Weak};
use std::time::{Duration, Instant};
use arti_client::config::onion_service::OnionServiceConfigBuilder;
use arti_client::{TorClient, TorClientConfig};
use base64::Engine;
use clap::Parser;
use color_eyre::eyre::eyre;
use futures::{FutureExt, StreamExt};
use imbl_value::InternedString;
use itertools::Itertools;
use rpc_toolkit::{Context, Empty, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio::sync::Notify;
use tor_cell::relaycell::msg::Connected;
use tor_hscrypto::pk::{HsId, HsIdKeypair};
use tor_hsservice::status::State as ArtiOnionServiceState;
use tor_hsservice::{HsNickname, RunningOnionService};
use tor_keymgr::config::ArtiKeystoreKind;
use tor_proto::client::stream::IncomingStreamRequest;
use tor_rtcompat::tokio::TokioRustlsRuntime;
use ts_rs::TS;
use crate::context::{CliContext, RpcContext};
use crate::prelude::*;
use crate::util::actor::background::BackgroundJobQueue;
use crate::util::future::{NonDetachingJoinHandle, Until};
use crate::util::io::ReadWriter;
use crate::util::serde::{
BASE64, Base64, HandlerExtSerde, WithIoFormat, deserialize_from_str, display_serializable,
serialize_display,
};
use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
const BOOTSTRAP_PROGRESS_TIMEOUT: Duration = Duration::from_secs(300);
const HS_BOOTSTRAP_TIMEOUT: Duration = Duration::from_secs(300);
const RETRY_COOLDOWN: Duration = Duration::from_secs(15);
const HEALTH_CHECK_FAILURE_ALLOWANCE: usize = 5;
const HEALTH_CHECK_COOLDOWN: Duration = Duration::from_secs(120);
#[derive(Debug, Clone, Copy)]
pub struct OnionAddress(pub HsId);
impl std::fmt::Display for OnionAddress {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
safelog::DisplayRedacted::fmt_unredacted(&self.0, f)
}
}
impl FromStr for OnionAddress {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(Self(
if s.ends_with(".onion") {
Cow::Borrowed(s)
} else {
Cow::Owned(format!("{s}.onion"))
}
.parse::<HsId>()
.with_kind(ErrorKind::Tor)?,
))
}
}
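The FromStr impl above normalizes the ".onion" suffix, so both spellings of an address parse to the same value. A minimal sketch, where addr56 stands in for any valid 56-character v3 onion address:

fn parse_either_spelling(addr56: &str) -> Result<(), Error> {
    let with_suffix: OnionAddress = format!("{addr56}.onion").parse()?;
    let without_suffix: OnionAddress = addr56.parse()?;
    assert_eq!(with_suffix, without_suffix);
    Ok(())
}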
impl Serialize for OnionAddress {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serialize_display(self, serializer)
}
}
impl<'de> Deserialize<'de> for OnionAddress {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
deserialize_from_str(deserializer)
}
}
impl PartialEq for OnionAddress {
fn eq(&self, other: &Self) -> bool {
self.0.as_ref() == other.0.as_ref()
}
}
impl Eq for OnionAddress {}
impl PartialOrd for OnionAddress {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
self.0.as_ref().partial_cmp(other.0.as_ref())
}
}
impl Ord for OnionAddress {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.0.as_ref().cmp(other.0.as_ref())
}
}
pub struct TorSecretKey(pub HsIdKeypair);
impl TorSecretKey {
pub fn onion_address(&self) -> OnionAddress {
OnionAddress(HsId::from(self.0.as_ref().public().to_bytes()))
}
pub fn from_bytes(bytes: [u8; 64]) -> Result<Self, Error> {
Ok(Self(
tor_llcrypto::pk::ed25519::ExpandedKeypair::from_secret_key_bytes(bytes)
.ok_or_else(|| {
Error::new(
eyre!("{}", t!("net.tor.invalid-ed25519-key")),
ErrorKind::Tor,
)
})?
.into(),
))
}
pub fn generate() -> Self {
Self(
tor_llcrypto::pk::ed25519::ExpandedKeypair::from(
&tor_llcrypto::pk::ed25519::Keypair::generate(&mut rand::rng()),
)
.into(),
)
}
}
impl Clone for TorSecretKey {
fn clone(&self) -> Self {
Self(HsIdKeypair::from(
tor_llcrypto::pk::ed25519::ExpandedKeypair::from_secret_key_bytes(
self.0.as_ref().to_secret_key_bytes(),
)
.unwrap(),
))
}
}
impl std::fmt::Display for TorSecretKey {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
BASE64.encode(self.0.as_ref().to_secret_key_bytes())
)
}
}
impl FromStr for TorSecretKey {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::from_bytes(Base64::<[u8; 64]>::from_str(s)?.0)
}
}
impl Serialize for TorSecretKey {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serialize_display(self, serializer)
}
}
impl<'de> Deserialize<'de> for TorSecretKey {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
deserialize_from_str(deserializer)
}
}
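The Display and FromStr impls above give the key a stable base64 text form, so a key round-trips through its string encoding without losing the derived onion address. A minimal sketch:

fn key_round_trip() -> Result<(), Error> {
    let key = TorSecretKey::generate();
    let restored: TorSecretKey = key.to_string().parse()?;
    assert_eq!(key.onion_address(), restored.onion_address());
    Ok(())
}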
#[derive(Default, Deserialize, Serialize)]
pub struct OnionStore(BTreeMap<OnionAddress, TorSecretKey>);
impl Map for OnionStore {
type Key = OnionAddress;
type Value = TorSecretKey;
fn key_str(key: &Self::Key) -> Result<impl AsRef<str>, Error> {
Self::key_string(key)
}
fn key_string(key: &Self::Key) -> Result<imbl_value::InternedString, Error> {
Ok(InternedString::from_display(key))
}
}
impl OnionStore {
pub fn new() -> Self {
Self::default()
}
pub fn insert(&mut self, key: TorSecretKey) {
self.0.insert(key.onion_address(), key);
}
}
impl Model<OnionStore> {
pub fn new_key(&mut self) -> Result<TorSecretKey, Error> {
let key = TorSecretKey::generate();
self.insert(&key.onion_address(), &key)?;
Ok(key)
}
pub fn insert_key(&mut self, key: &TorSecretKey) -> Result<(), Error> {
self.insert(&key.onion_address(), &key)
}
pub fn get_key(&self, address: &OnionAddress) -> Result<TorSecretKey, Error> {
self.as_idx(address)
.or_not_found(lazy_format!("private key for {address}"))?
.de()
}
}
impl std::fmt::Debug for OnionStore {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
struct OnionStoreMap<'a>(&'a BTreeMap<OnionAddress, TorSecretKey>);
impl<'a> std::fmt::Debug for OnionStoreMap<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
#[derive(Debug)]
struct KeyFor(#[allow(unused)] OnionAddress);
let mut map = f.debug_map();
for (k, v) in self.0 {
map.key(k);
map.value(&KeyFor(v.onion_address()));
}
map.finish()
}
}
f.debug_tuple("OnionStore")
.field(&OnionStoreMap(&self.0))
.finish()
}
}
pub fn tor_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand(
"list-services",
from_fn_async(list_services)
.with_display_serializable()
.with_custom_display_fn(|handle, result| display_services(handle.params, result))
.with_about("about.display-tor-v3-onion-addresses")
.with_call_remote::<CliContext>(),
)
.subcommand(
"reset",
from_fn_async(reset)
.no_display()
.with_about("about.reset-tor-daemon")
.with_call_remote::<CliContext>(),
)
.subcommand(
"key",
key::<C>().with_about("about.manage-onion-service-key-store"),
)
}
pub fn key<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand(
"generate",
from_fn_async(generate_key)
.with_about("about.generate-onion-service-key-add-to-store")
.with_call_remote::<CliContext>(),
)
.subcommand(
"add",
from_fn_async(add_key)
.with_about("about.add-onion-service-key-to-store")
.with_call_remote::<CliContext>(),
)
.subcommand(
"list",
from_fn_async(list_keys)
.with_custom_display_fn(|_, res| {
for addr in res {
println!("{addr}");
}
Ok(())
})
.with_about("about.list-onion-services-with-keys-in-store")
.with_call_remote::<CliContext>(),
)
}
pub async fn generate_key(ctx: RpcContext) -> Result<OnionAddress, Error> {
ctx.db
.mutate(|db| {
Ok(db
.as_private_mut()
.as_key_store_mut()
.as_onion_mut()
.new_key()?
.onion_address())
})
.await
.result
}
#[derive(Deserialize, Serialize, Parser)]
pub struct AddKeyParams {
#[arg(help = "help.arg.onion-secret-key")]
pub key: Base64<[u8; 64]>,
}
pub async fn add_key(
ctx: RpcContext,
AddKeyParams { key }: AddKeyParams,
) -> Result<OnionAddress, Error> {
let key = TorSecretKey::from_bytes(key.0)?;
ctx.db
.mutate(|db| {
db.as_private_mut()
.as_key_store_mut()
.as_onion_mut()
.insert_key(&key)
})
.await
.result?;
Ok(key.onion_address())
}
pub async fn list_keys(ctx: RpcContext) -> Result<BTreeSet<OnionAddress>, Error> {
ctx.db
.peek()
.await
.into_private()
.into_key_store()
.into_onion()
.keys()
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ResetParams {
#[arg(
name = "wipe-state",
short = 'w',
long = "wipe-state",
help = "help.arg.wipe-tor-state"
)]
wipe_state: bool,
}
pub async fn reset(ctx: RpcContext, ResetParams { wipe_state }: ResetParams) -> Result<(), Error> {
ctx.net_controller.tor.reset(wipe_state).await
}
pub fn display_services(
params: WithIoFormat<Empty>,
services: BTreeMap<OnionAddress, OnionServiceInfo>,
) -> Result<(), Error> {
use prettytable::*;
if let Some(format) = params.format {
return display_serializable(format, services);
}
let mut table = Table::new();
table.add_row(row![bc => "ADDRESS", "STATE", "BINDINGS"]);
for (service, info) in services {
let row = row![
&service.to_string(),
&format!("{:?}", info.state),
&info
.bindings
.into_iter()
.map(|(port, addr)| lazy_format!("{port} -> {addr}"))
.join("; ")
];
table.add_row(row);
}
table.print_tty(false)?;
Ok(())
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub enum OnionServiceState {
Shutdown,
Bootstrapping,
DegradedReachable,
DegradedUnreachable,
Running,
Recovering,
Broken,
}
impl From<ArtiOnionServiceState> for OnionServiceState {
fn from(value: ArtiOnionServiceState) -> Self {
match value {
ArtiOnionServiceState::Shutdown => Self::Shutdown,
ArtiOnionServiceState::Bootstrapping => Self::Bootstrapping,
ArtiOnionServiceState::DegradedReachable => Self::DegradedReachable,
ArtiOnionServiceState::DegradedUnreachable => Self::DegradedUnreachable,
ArtiOnionServiceState::Running => Self::Running,
ArtiOnionServiceState::Recovering => Self::Recovering,
ArtiOnionServiceState::Broken => Self::Broken,
_ => unreachable!(),
}
}
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct OnionServiceInfo {
pub state: OnionServiceState,
pub bindings: BTreeMap<u16, SocketAddr>,
}
pub async fn list_services(
ctx: RpcContext,
_: Empty,
) -> Result<BTreeMap<OnionAddress, OnionServiceInfo>, Error> {
ctx.net_controller.tor.list_services().await
}
#[derive(Clone)]
pub struct TorController(Arc<TorControllerInner>);
struct TorControllerInner {
client: Watch<(usize, TorClient<TokioRustlsRuntime>)>,
_bootstrapper: NonDetachingJoinHandle<()>,
services: SyncMutex<BTreeMap<OnionAddress, OnionService>>,
reset: Arc<Notify>,
}
impl TorController {
pub fn new() -> Result<Self, Error> {
let mut config = TorClientConfig::builder();
config
.storage()
.keystore()
.primary()
.kind(ArtiKeystoreKind::Ephemeral.into());
let client = Watch::new((
0,
TorClient::with_runtime(TokioRustlsRuntime::current()?)
.config(config.build().with_kind(ErrorKind::Tor)?)
.local_resource_timeout(Duration::from_secs(0))
.create_unbootstrapped()?,
));
let reset = Arc::new(Notify::new());
let bootstrapper_reset = reset.clone();
let bootstrapper_client = client.clone();
let bootstrapper = tokio::spawn(async move {
loop {
let (epoch, client): (usize, _) = bootstrapper_client.read();
if let Err(e) = Until::new()
.with_async_fn(|| bootstrapper_reset.notified().map(Ok))
.run(async {
let mut events = client.bootstrap_events();
let bootstrap_fut =
client.bootstrap().map(|res| res.with_kind(ErrorKind::Tor));
let failure_fut = async {
let mut prev_frac = 0_f32;
let mut prev_inst = Instant::now();
while let Some(event) =
tokio::time::timeout(BOOTSTRAP_PROGRESS_TIMEOUT, events.next())
.await
.with_kind(ErrorKind::Tor)?
{
if event.ready_for_traffic() {
return Ok::<_, Error>(());
}
let frac = event.as_frac();
if frac == prev_frac {
if prev_inst.elapsed() > BOOTSTRAP_PROGRESS_TIMEOUT {
return Err(Error::new(
eyre!(
"{}",
t!(
"net.tor.bootstrap-no-progress",
duration = crate::util::serde::Duration::from(
BOOTSTRAP_PROGRESS_TIMEOUT
)
.to_string()
)
),
ErrorKind::Tor,
));
}
} else {
prev_frac = frac;
prev_inst = Instant::now();
}
}
futures::future::pending().await
};
if let Err::<(), Error>(e) = tokio::select! {
res = bootstrap_fut => res,
res = failure_fut => res,
} {
tracing::error!(
"{}",
t!("net.tor.bootstrap-error", error = e.to_string())
);
tracing::debug!("{e:?}");
} else {
bootstrapper_client.send_modify(|_| ());
for _ in 0..HEALTH_CHECK_FAILURE_ALLOWANCE {
if let Err::<(), Error>(e) = async {
loop {
let (bg, mut runner) = BackgroundJobQueue::new();
runner
.run_while(async {
const PING_BUF_LEN: usize = 8;
let key = TorSecretKey::generate();
let onion = key.onion_address();
let (hs, stream) = client
.launch_onion_service_with_hsid(
OnionServiceConfigBuilder::default()
.nickname(
onion
.to_string()
.trim_end_matches(".onion")
.parse::<HsNickname>()
.with_kind(ErrorKind::Tor)?,
)
.build()
.with_kind(ErrorKind::Tor)?,
key.clone().0,
)
.with_kind(ErrorKind::Tor)?;
bg.add_job(async move {
if let Err(e) = async {
let mut stream =
tor_hsservice::handle_rend_requests(
stream,
);
while let Some(req) = stream.next().await {
let mut stream = req
.accept(Connected::new_empty())
.await
.with_kind(ErrorKind::Tor)?;
let mut buf = [0; PING_BUF_LEN];
stream.read_exact(&mut buf).await?;
stream.write_all(&buf).await?;
stream.flush().await?;
stream.shutdown().await?;
}
Ok::<_, Error>(())
}
.await
{
tracing::error!(
"{}",
t!(
"net.tor.health-error",
error = e.to_string()
)
);
tracing::debug!("{e:?}");
}
});
tokio::time::timeout(HS_BOOTSTRAP_TIMEOUT, async {
let mut status = hs.status_events();
while let Some(status) = status.next().await {
if status.state().is_fully_reachable() {
return Ok(());
}
}
Err(Error::new(
eyre!(
"{}",
t!("net.tor.status-stream-ended")
),
ErrorKind::Tor,
))
})
.await
.with_kind(ErrorKind::Tor)??;
let mut stream = client
.connect((onion.to_string(), 8080))
.await?;
let mut ping_buf = [0; PING_BUF_LEN];
rand::fill(&mut ping_buf);
stream.write_all(&ping_buf).await?;
stream.flush().await?;
let mut ping_res = [0; PING_BUF_LEN];
stream.read_exact(&mut ping_res).await?;
ensure_code!(
ping_buf == ping_res,
ErrorKind::Tor,
"ping buffer mismatch"
);
stream.shutdown().await?;
Ok::<_, Error>(())
})
.await?;
tokio::time::sleep(HEALTH_CHECK_COOLDOWN).await;
}
}
.await
{
tracing::error!(
"{}",
t!("net.tor.client-health-error", error = e.to_string())
);
tracing::debug!("{e:?}");
}
}
tracing::error!(
"{}",
t!(
"net.tor.health-check-failed-recycling",
count = HEALTH_CHECK_FAILURE_ALLOWANCE
)
);
}
Ok(())
})
.await
{
tracing::error!(
"{}",
t!("net.tor.bootstrapper-error", error = e.to_string())
);
tracing::debug!("{e:?}");
}
if let Err::<(), Error>(e) = async {
tokio::time::sleep(RETRY_COOLDOWN).await;
bootstrapper_client.send((
epoch.wrapping_add(1),
TorClient::with_runtime(TokioRustlsRuntime::current()?)
.config(config.build().with_kind(ErrorKind::Tor)?)
.local_resource_timeout(Duration::from_secs(0))
.create_unbootstrapped_async()
.await?,
));
tracing::debug!("TorClient recycled");
Ok(())
}
.await
{
tracing::error!(
"{}",
t!("net.tor.client-creation-error", error = e.to_string())
);
tracing::debug!("{e:?}");
}
}
})
.into();
Ok(Self(Arc::new(TorControllerInner {
client,
_bootstrapper: bootstrapper,
services: SyncMutex::new(BTreeMap::new()),
reset,
})))
}
pub fn service(&self, key: TorSecretKey) -> Result<OnionService, Error> {
self.0.services.mutate(|s| {
use std::collections::btree_map::Entry;
let addr = key.onion_address();
match s.entry(addr) {
Entry::Occupied(e) => Ok(e.get().clone()),
Entry::Vacant(e) => Ok(e
.insert(OnionService::launch(self.0.client.clone(), key)?)
.clone()),
}
})
}
pub async fn gc(&self, addr: Option<OnionAddress>) -> Result<(), Error> {
if let Some(addr) = addr {
if let Some(s) = self.0.services.mutate(|s| {
let rm = if let Some(s) = s.get(&addr) {
!s.gc()
} else {
false
};
if rm { s.remove(&addr) } else { None }
}) {
s.shutdown().await
} else {
Ok(())
}
} else {
for s in self.0.services.mutate(|s| {
let mut rm = Vec::new();
s.retain(|_, s| {
if s.gc() {
true
} else {
rm.push(s.clone());
false
}
});
rm
}) {
s.shutdown().await?;
}
Ok(())
}
}
pub async fn reset(&self, wipe_state: bool) -> Result<(), Error> {
self.0.reset.notify_waiters();
Ok(())
}
pub async fn list_services(&self) -> Result<BTreeMap<OnionAddress, OnionServiceInfo>, Error> {
Ok(self
.0
.services
.peek(|s| s.iter().map(|(a, s)| (a.clone(), s.info())).collect()))
}
pub async fn connect_onion(
&self,
addr: &OnionAddress,
port: u16,
) -> Result<Box<dyn ReadWriter + Unpin + Send + Sync + 'static>, Error> {
if let Some(target) = self.0.services.peek(|s| {
s.get(addr).and_then(|s| {
s.0.bindings.peek(|b| {
b.get(&port).and_then(|b| {
b.iter()
.find(|(_, rc)| rc.strong_count() > 0)
.map(|(a, _)| *a)
})
})
})
}) {
let tcp_stream = TcpStream::connect(target)
.await
.with_kind(ErrorKind::Network)?;
if let Err(e) = socket2::SockRef::from(&tcp_stream).set_keepalive(true) {
tracing::error!(
"{}",
t!("net.tor.failed-to-set-tcp-keepalive", error = e.to_string())
);
tracing::debug!("{e:?}");
}
Ok(Box::new(tcp_stream))
} else {
let mut client = self.0.client.clone();
client
.wait_for(|(_, c)| c.bootstrap_status().ready_for_traffic())
.await;
let stream = client
.read()
.1
.connect((addr.to_string(), port))
.await
.with_kind(ErrorKind::Tor)?;
Ok(Box::new(stream))
}
}
}
#[derive(Clone)]
pub struct OnionService(Arc<OnionServiceData>);
struct OnionServiceData {
service: Arc<SyncMutex<Option<Arc<RunningOnionService>>>>,
bindings: Arc<SyncRwLock<BTreeMap<u16, BTreeMap<SocketAddr, Weak<()>>>>>,
_thread: NonDetachingJoinHandle<()>,
}
impl OnionService {
fn launch(
mut client: Watch<(usize, TorClient<TokioRustlsRuntime>)>,
key: TorSecretKey,
) -> Result<Self, Error> {
let service = Arc::new(SyncMutex::new(None));
let bindings = Arc::new(SyncRwLock::new(BTreeMap::<
u16,
BTreeMap<SocketAddr, Weak<()>>,
>::new()));
Ok(Self(Arc::new(OnionServiceData {
service: service.clone(),
bindings: bindings.clone(),
_thread: tokio::spawn(async move {
let (bg, mut runner) = BackgroundJobQueue::new();
runner
.run_while(async {
loop {
if let Err(e) = async {
client.wait_for(|(_, c)| c.bootstrap_status().ready_for_traffic()).await;
let epoch = client.peek(|(e, c)| {
ensure_code!(c.bootstrap_status().ready_for_traffic(), ErrorKind::Tor, "TorClient recycled");
Ok::<_, Error>(*e)
})?;
let addr = key.onion_address();
let (new_service, stream) = client.peek(|(_, c)| {
c.launch_onion_service_with_hsid(
OnionServiceConfigBuilder::default()
.nickname(
addr
.to_string()
.trim_end_matches(".onion")
.parse::<HsNickname>()
.with_kind(ErrorKind::Tor)?,
)
.build()
.with_kind(ErrorKind::Tor)?,
key.clone().0,
)
.with_kind(ErrorKind::Tor)
})?;
let mut status_stream = new_service.status_events();
let mut status = new_service.status();
if status.state().is_fully_reachable() {
tracing::debug!("{addr} is fully reachable");
} else {
tracing::debug!("{addr} is not fully reachable");
}
bg.add_job(async move {
while let Some(new_status) = status_stream.next().await {
if status.state().is_fully_reachable() && !new_status.state().is_fully_reachable() {
tracing::debug!("{addr} is no longer fully reachable");
} else if !status.state().is_fully_reachable() && new_status.state().is_fully_reachable() {
tracing::debug!("{addr} is now fully reachable");
}
status = new_status;
// TODO: health daemon?
}
});
service.replace(Some(new_service));
let mut stream = tor_hsservice::handle_rend_requests(stream);
while let Some(req) = tokio::select! {
req = stream.next() => req,
_ = client.wait_for(|(e, _)| *e != epoch) => None
} {
bg.add_job({
let bg = bg.clone();
let bindings = bindings.clone();
async move {
if let Err(e) = async {
let IncomingStreamRequest::Begin(begin) =
req.request()
else {
return req
.reject(tor_cell::relaycell::msg::End::new_with_reason(
tor_cell::relaycell::msg::EndReason::DONE,
))
.await
.with_kind(ErrorKind::Tor);
};
let Some(target) = bindings.peek(|b| {
b.get(&begin.port()).and_then(|a| {
a.iter()
.find(|(_, rc)| rc.strong_count() > 0)
.map(|(addr, _)| *addr)
})
}) else {
return req
.reject(tor_cell::relaycell::msg::End::new_with_reason(
tor_cell::relaycell::msg::EndReason::DONE,
))
.await
.with_kind(ErrorKind::Tor);
};
bg.add_job(async move {
if let Err(e) = async {
let mut outgoing =
TcpStream::connect(target)
.await
.with_kind(ErrorKind::Network)?;
if let Err(e) = socket2::SockRef::from(&outgoing).set_keepalive(true) {
tracing::error!("{}", t!("net.tor.failed-to-set-tcp-keepalive", error = e.to_string()));
tracing::debug!("{e:?}");
}
let mut incoming = req
.accept(Connected::new_empty())
.await
.with_kind(ErrorKind::Tor)?;
if let Err(e) =
tokio::io::copy_bidirectional(
&mut outgoing,
&mut incoming,
)
.await
{
tracing::trace!("Tor Stream Error: {e}");
tracing::trace!("{e:?}");
}
Ok::<_, Error>(())
}
.await
{
tracing::trace!("Tor Stream Error: {e}");
tracing::trace!("{e:?}");
}
});
Ok::<_, Error>(())
}
.await
{
tracing::trace!("Tor Request Error: {e}");
tracing::trace!("{e:?}");
}
}
});
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("{}", t!("net.tor.client-error", error = e.to_string()));
tracing::debug!("{e:?}");
}
}
})
.await
})
.into(),
})))
}
pub async fn proxy_all<Rcs: FromIterator<Arc<()>>>(
&self,
bindings: impl IntoIterator<Item = (u16, SocketAddr)>,
) -> Result<Rcs, Error> {
Ok(self.0.bindings.mutate(|b| {
bindings
.into_iter()
.map(|(port, target)| {
let entry = b.entry(port).or_default().entry(target).or_default();
if let Some(rc) = entry.upgrade() {
rc
} else {
let rc = Arc::new(());
*entry = Arc::downgrade(&rc);
rc
}
})
.collect()
}))
}
pub fn gc(&self) -> bool {
self.0.bindings.mutate(|b| {
b.retain(|_, targets| {
targets.retain(|_, rc| rc.strong_count() > 0);
!targets.is_empty()
});
!b.is_empty()
})
}
pub async fn shutdown(self) -> Result<(), Error> {
self.0.service.replace(None);
self.0._thread.abort();
Ok(())
}
pub fn state(&self) -> OnionServiceState {
self.0
.service
.peek(|s| s.as_ref().map(|s| s.status().state().into()))
.unwrap_or(OnionServiceState::Bootstrapping)
}
pub fn info(&self) -> OnionServiceInfo {
OnionServiceInfo {
state: self.state(),
bindings: self.0.bindings.peek(|b| {
b.iter()
.filter_map(|(port, b)| {
b.iter()
.find(|(_, rc)| rc.strong_count() > 0)
.map(|(addr, _)| (*port, *addr))
})
.collect()
}),
}
}
}
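proxy_all hands out Arc<()> guards and gc prunes any binding whose guards have all been dropped, so binding lifetime is purely reference-counted. A minimal sketch of that lifecycle, assuming an already-launched service:

async fn binding_lifecycle(svc: &OnionService) -> Result<(), Error> {
    // Map virtual port 80 on the onion service to a local target.
    let guards: Vec<Arc<()>> = svc
        .proxy_all([(80u16, "127.0.0.1:8080".parse::<SocketAddr>().unwrap())])
        .await?;
    assert!(svc.gc()); // guard still held: binding retained
    drop(guards);
    assert!(!svc.gc()); // all guards dropped: service is empty and may be shut down
    Ok(())
}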

File diff suppressed because it is too large

View File

@@ -1,10 +0,0 @@
#[cfg(feature = "arti")]
mod arti;
#[cfg(not(feature = "arti"))]
mod ctor;
#[cfg(feature = "arti")]
pub use arti::{OnionAddress, OnionStore, TorController, TorSecretKey, tor_api};
#[cfg(not(feature = "arti"))]
pub use ctor::{OnionAddress, OnionStore, TorController, TorSecretKey, tor_api};

View File

@@ -8,7 +8,7 @@ use ts_rs::TS;
use crate::GatewayId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::public::{NetworkInterfaceInfo, NetworkInterfaceType};
use crate::db::model::public::{GatewayType, NetworkInterfaceInfo, NetworkInterfaceType};
use crate::net::host::all_hosts;
use crate::prelude::*;
use crate::util::Invoke;
@@ -32,14 +32,19 @@ pub fn tunnel_api<C: Context>() -> ParentHandler<C> {
}
#[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct AddTunnelParams {
#[arg(help = "help.arg.tunnel-name")]
name: InternedString,
#[arg(help = "help.arg.wireguard-config")]
config: String,
#[arg(help = "help.arg.is-public")]
public: bool,
#[arg(help = "help.arg.gateway-type")]
#[serde(default, rename = "type")]
gateway_type: Option<GatewayType>,
#[arg(help = "help.arg.set-as-default-outbound")]
#[serde(default)]
set_as_default_outbound: bool,
}
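Given the serde attributes above (camelCase field names, gateway_type exposed as "type", and both new fields defaulted), the accepted JSON looks like the sketch below. The "vpn" variant name is an assumption for illustration; GatewayType's variants are not shown in this diff.

fn example_params() -> Result<AddTunnelParams, serde_json::Error> {
    serde_json::from_value(serde_json::json!({
        "name": "office-wg",
        "config": "[Interface]\nPrivateKey = <redacted>\n",
        "type": "vpn", // assumed GatewayType variant
        "setAsDefaultOutbound": true
    }))
}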
fn sanitize_config(config: &str) -> String {
@@ -64,7 +69,8 @@ pub async fn add_tunnel(
AddTunnelParams {
name,
config,
public,
gateway_type,
set_as_default_outbound,
}: AddTunnelParams,
) -> Result<GatewayId, Error> {
let ifaces = ctx.net_controller.net_iface.watcher.subscribe();
@@ -76,9 +82,9 @@ pub async fn add_tunnel(
iface.clone(),
NetworkInterfaceInfo {
name: Some(name),
public: Some(public),
secure: None,
ip_info: None,
gateway_type,
},
);
return true;
@@ -120,6 +126,19 @@ pub async fn add_tunnel(
sub.recv().await;
if set_as_default_outbound {
ctx.db
.mutate(|db| {
db.as_public_mut()
.as_server_info_mut()
.as_network_mut()
.as_default_outbound_mut()
.ser(&Some(iface.clone()))
})
.await
.result?;
}
Ok(iface)
}
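Since add_tunnel records the default outbound by writing Some(iface) into /public/serverInfo/network/defaultOutbound, clearing it should presumably be the symmetric write. A hedged sketch using only the accessors shown above:

async fn clear_default_outbound(ctx: &RpcContext) -> Result<(), Error> {
    ctx.db
        .mutate(|db| {
            db.as_public_mut()
                .as_server_info_mut()
                .as_network_mut()
                .as_default_outbound_mut()
                .ser(&None)
        })
        .await
        .result
}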
@@ -156,10 +175,14 @@ pub async fn remove_tunnel(
ctx.db
.mutate(|db| {
let hostname = crate::hostname::Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let ports = db.as_private().as_available_ports().de()?;
for host in all_hosts(db) {
let host = host?;
host.as_public_domains_mut()
.mutate(|p| Ok(p.retain(|_, v| v.gateway != id)))?;
host.update_addresses(&hostname, &gateways, &ports)?;
}
Ok(())
@@ -171,14 +194,19 @@ pub async fn remove_tunnel(
ctx.db
.mutate(|db| {
let hostname = crate::hostname::Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let ports = db.as_private().as_available_ports().de()?;
for host in all_hosts(db) {
let host = host?;
host.as_bindings_mut().mutate(|b| {
Ok(b.values_mut().for_each(|v| {
v.net.private_disabled.remove(&id);
v.net.public_enabled.remove(&id);
}))
host.as_private_domains_mut().mutate(|d| {
for gateways in d.values_mut() {
gateways.remove(&id);
}
d.retain(|_, gateways| !gateways.is_empty());
Ok(())
})?;
host.update_addresses(&hostname, &gateways, &ports)?;
}
Ok(())

View File

@@ -1,19 +1,19 @@
use std::any::Any;
use std::collections::{BTreeMap, BTreeSet};
use std::fmt;
use std::net::{IpAddr, SocketAddr};
use std::net::{IpAddr, SocketAddr, SocketAddrV6};
use std::sync::{Arc, Weak};
use std::task::{Poll, ready};
use std::time::Duration;
use async_acme::acme::ACME_TLS_ALPN_NAME;
use color_eyre::eyre::eyre;
use futures::FutureExt;
use futures::future::BoxFuture;
use imbl::OrdMap;
use imbl_value::{InOMap, InternedString};
use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn};
use serde::{Deserialize, Serialize};
use tokio::net::TcpStream;
use tokio::net::{TcpListener, TcpStream};
use tokio_rustls::TlsConnector;
use tokio_rustls::rustls::crypto::CryptoProvider;
use tokio_rustls::rustls::pki_types::ServerName;
@@ -23,28 +23,28 @@ use tracing::instrument;
use ts_rs::TS;
use visit_rs::Visit;
use crate::ResultExt;
use crate::context::{CliContext, RpcContext};
use crate::db::model::Database;
use crate::db::model::public::AcmeSettings;
use crate::db::model::public::{AcmeSettings, NetworkInterfaceInfo};
use crate::db::{DbAccessByKey, DbAccessMut};
use crate::net::acme::{
AcmeCertStore, AcmeProvider, AcmeTlsAlpnCache, AcmeTlsHandler, GetAcmeProvider,
};
use crate::net::gateway::{
AnyFilter, BindTcp, DynInterfaceFilter, GatewayInfo, InterfaceFilter,
NetworkInterfaceController, NetworkInterfaceListener,
GatewayInfo, NetworkInterfaceController, NetworkInterfaceListenerAcceptMetadata,
};
use crate::net::ssl::{CertStore, RootCaTlsHandler};
use crate::net::tls::{
ChainedHandler, TlsHandlerWrapper, TlsListener, TlsMetadata, WrapTlsHandler,
};
use crate::net::utils::ipv6_is_link_local;
use crate::net::web_server::{Accept, AcceptStream, ExtractVisitor, TcpMetadata, extract};
use crate::prelude::*;
use crate::util::collections::EqSet;
use crate::util::future::{NonDetachingJoinHandle, WeakFuture};
use crate::util::serde::{HandlerExtSerde, MaybeUtf8String, display_serializable};
use crate::util::sync::{SyncMutex, Watch};
use crate::{GatewayId, ResultExt};
pub fn vhost_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new().subcommand(
@@ -93,7 +93,7 @@ pub struct VHostController {
interfaces: Arc<NetworkInterfaceController>,
crypto_provider: Arc<CryptoProvider>,
acme_cache: AcmeTlsAlpnCache,
servers: SyncMutex<BTreeMap<u16, VHostServer<NetworkInterfaceListener>>>,
servers: SyncMutex<BTreeMap<u16, VHostServer<VHostBindListener>>>,
}
impl VHostController {
pub fn new(
@@ -114,14 +114,22 @@ impl VHostController {
&self,
hostname: Option<InternedString>,
external: u16,
target: DynVHostTarget<NetworkInterfaceListener>,
target: DynVHostTarget<VHostBindListener>,
) -> Result<Arc<()>, Error> {
self.servers.mutate(|writable| {
let server = if let Some(server) = writable.remove(&external) {
server
} else {
let bind_reqs = Watch::new(VHostBindRequirements::default());
let listener = VHostBindListener {
ip_info: self.interfaces.watcher.subscribe(),
port: external,
bind_reqs: bind_reqs.clone_unseen(),
listeners: BTreeMap::new(),
};
VHostServer::new(
self.interfaces.watcher.bind(BindTcp, external)?,
listener,
bind_reqs,
self.db.clone(),
self.crypto_provider.clone(),
self.acme_cache.clone(),
@@ -173,6 +181,142 @@ impl VHostController {
}
}
/// Union of all ProxyTargets' bind requirements for a VHostServer.
#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub struct VHostBindRequirements {
pub public_gateways: BTreeSet<GatewayId>,
pub private_ips: BTreeSet<IpAddr>,
}
fn compute_bind_reqs<A: Accept + 'static>(mapping: &Mapping<A>) -> VHostBindRequirements {
let mut reqs = VHostBindRequirements::default();
for (_, targets) in mapping {
for (target, rc) in targets {
if rc.strong_count() > 0 {
let (pub_gw, priv_ip) = target.0.bind_requirements();
reqs.public_gateways.extend(pub_gw);
reqs.private_ips.extend(priv_ip);
}
}
}
reqs
}
/// Listener that manages its own TCP listeners with IP-level precision.
/// Binds ALL IPs of public gateways and ONLY matching private IPs.
pub struct VHostBindListener {
ip_info: Watch<OrdMap<GatewayId, NetworkInterfaceInfo>>,
port: u16,
bind_reqs: Watch<VHostBindRequirements>,
listeners: BTreeMap<SocketAddr, (TcpListener, GatewayInfo)>,
}
fn update_vhost_listeners(
listeners: &mut BTreeMap<SocketAddr, (TcpListener, GatewayInfo)>,
port: u16,
ip_info: &OrdMap<GatewayId, NetworkInterfaceInfo>,
reqs: &VHostBindRequirements,
) -> Result<(), Error> {
let mut keep = BTreeSet::<SocketAddr>::new();
for (gw_id, info) in ip_info {
if let Some(ip_info) = &info.ip_info {
for ipnet in &ip_info.subnets {
let ip = ipnet.addr();
let should_bind =
reqs.public_gateways.contains(gw_id) || reqs.private_ips.contains(&ip);
if should_bind {
let addr = match ip {
IpAddr::V6(ip6) => SocketAddrV6::new(
ip6,
port,
0,
if ipv6_is_link_local(ip6) {
ip_info.scope_id
} else {
0
},
)
.into(),
ip => SocketAddr::new(ip, port),
};
keep.insert(addr);
if let Some((_, existing_info)) = listeners.get_mut(&addr) {
*existing_info = GatewayInfo {
id: gw_id.clone(),
info: info.clone(),
};
} else {
let tcp = TcpListener::from_std(
mio::net::TcpListener::bind(addr)
.with_kind(ErrorKind::Network)?
.into(),
)
.with_kind(ErrorKind::Network)?;
listeners.insert(
addr,
(
tcp,
GatewayInfo {
id: gw_id.clone(),
info: info.clone(),
},
),
);
}
}
}
}
}
listeners.retain(|key, _| keep.contains(key));
Ok(())
}
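The bind decision above reduces to one predicate: an address is bound when its gateway is publicly required, or when that exact IP is privately required by some target. Isolated for clarity:

// The decision used inside update_vhost_listeners, extracted verbatim.
fn should_bind(reqs: &VHostBindRequirements, gw_id: &GatewayId, ip: IpAddr) -> bool {
    reqs.public_gateways.contains(gw_id) || reqs.private_ips.contains(&ip)
}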
impl Accept for VHostBindListener {
type Metadata = NetworkInterfaceListenerAcceptMetadata;
fn poll_accept(
&mut self,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(Self::Metadata, AcceptStream), Error>> {
// Update listeners when ip_info or bind_reqs change
while self.ip_info.poll_changed(cx).is_ready() || self.bind_reqs.poll_changed(cx).is_ready()
{
let reqs = self.bind_reqs.read_and_mark_seen();
let listeners = &mut self.listeners;
let port = self.port;
self.ip_info.peek_and_mark_seen(|ip_info| {
update_vhost_listeners(listeners, port, ip_info, &reqs)
})?;
}
// Poll each listener for incoming connections
for (&addr, (listener, gw_info)) in &self.listeners {
match listener.poll_accept(cx) {
Poll::Ready(Ok((stream, peer_addr))) => {
if let Err(e) = socket2::SockRef::from(&stream).set_keepalive(true) {
tracing::error!("Failed to set tcp keepalive: {e}");
tracing::debug!("{e:?}");
}
return Poll::Ready(Ok((
NetworkInterfaceListenerAcceptMetadata {
inner: TcpMetadata {
local_addr: addr,
peer_addr,
},
info: gw_info.clone(),
},
Box::pin(stream),
)));
}
Poll::Ready(Err(e)) => {
tracing::trace!("VHostBindListener accept error on {addr}: {e}");
}
Poll::Pending => {}
}
}
Poll::Pending
}
}
pub trait VHostTarget<A: Accept>: std::fmt::Debug + Eq {
type PreprocessRes: Send + 'static;
#[allow(unused_variables)]
@@ -182,6 +326,10 @@ pub trait VHostTarget<A: Accept>: std::fmt::Debug + Eq {
fn acme(&self) -> Option<&AcmeProvider> {
None
}
/// Returns (public_gateways, private_ips) this target needs the listener to bind on.
fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>) {
(BTreeSet::new(), BTreeSet::new())
}
fn preprocess<'a>(
&'a self,
prev: ServerConfig,
@@ -200,6 +348,7 @@ pub trait VHostTarget<A: Accept>: std::fmt::Debug + Eq {
pub trait DynVHostTargetT<A: Accept>: std::fmt::Debug + Any {
fn filter(&self, metadata: &<A as Accept>::Metadata) -> bool;
fn acme(&self) -> Option<&AcmeProvider>;
fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>);
fn preprocess<'a>(
&'a self,
prev: ServerConfig,
@@ -224,6 +373,9 @@ impl<A: Accept, T: VHostTarget<A> + 'static> DynVHostTargetT<A> for T {
fn acme(&self) -> Option<&AcmeProvider> {
VHostTarget::acme(self)
}
fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>) {
VHostTarget::bind_requirements(self)
}
fn preprocess<'a>(
&'a self,
prev: ServerConfig,
@@ -301,7 +453,8 @@ impl<A: Accept + 'static> Preprocessed<A> {
#[derive(Clone)]
pub struct ProxyTarget {
pub filter: DynInterfaceFilter,
pub public: BTreeSet<GatewayId>,
pub private: BTreeSet<IpAddr>,
pub acme: Option<AcmeProvider>,
pub addr: SocketAddr,
pub add_x_forwarded_headers: bool,
@@ -309,7 +462,8 @@ pub struct ProxyTarget {
}
impl PartialEq for ProxyTarget {
fn eq(&self, other: &Self) -> bool {
self.filter == other.filter
self.public == other.public
&& self.private == other.private
&& self.acme == other.acme
&& self.addr == other.addr
&& self.connect_ssl.as_ref().map(Arc::as_ptr)
@@ -320,7 +474,8 @@ impl Eq for ProxyTarget {}
impl fmt::Debug for ProxyTarget {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("ProxyTarget")
.field("filter", &self.filter)
.field("public", &self.public)
.field("private", &self.private)
.field("acme", &self.acme)
.field("addr", &self.addr)
.field("add_x_forwarded_headers", &self.add_x_forwarded_headers)
@@ -340,16 +495,35 @@ where
{
type PreprocessRes = AcceptStream;
fn filter(&self, metadata: &<A as Accept>::Metadata) -> bool {
let info = extract::<GatewayInfo, _>(metadata);
if info.is_none() {
tracing::warn!("No GatewayInfo on metadata");
let gw = extract::<GatewayInfo, _>(metadata);
let tcp = extract::<TcpMetadata, _>(metadata);
let (Some(gw), Some(tcp)) = (gw, tcp) else {
return false;
};
let Some(ip_info) = &gw.info.ip_info else {
return false;
};
let src = tcp.peer_addr.ip();
// Public: source is outside all known subnets (direct internet)
let is_public = !ip_info.subnets.iter().any(|s| s.contains(&src));
if is_public {
self.public.contains(&gw.id)
} else {
// Private: accept if connection arrived on an interface with a matching IP
ip_info
.subnets
.iter()
.any(|s| self.private.contains(&s.addr()))
}
info.as_ref()
.map_or(true, |i| self.filter.filter(&i.id, &i.info))
}
fn acme(&self) -> Option<&AcmeProvider> {
self.acme.as_ref()
}
fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>) {
(self.public.clone(), self.private.clone())
}
async fn preprocess<'a>(
&'a self,
mut prev: ServerConfig,
@@ -518,6 +692,7 @@ where
let (target, rc) = self.0.peek(|m| {
m.get(&hello.server_name().map(InternedString::from))
.or_else(|| m.get(&None))
.into_iter()
.flatten()
.filter(|(_, rc)| rc.strong_count() > 0)
@@ -634,28 +809,15 @@ where
struct VHostServer<A: Accept + 'static> {
mapping: Watch<Mapping<A>>,
bind_reqs: Watch<VHostBindRequirements>,
_thread: NonDetachingJoinHandle<()>,
}
impl<'a> From<&'a BTreeMap<Option<InternedString>, BTreeMap<ProxyTarget, Weak<()>>>> for AnyFilter {
fn from(value: &'a BTreeMap<Option<InternedString>, BTreeMap<ProxyTarget, Weak<()>>>) -> Self {
Self(
value
.iter()
.flat_map(|(_, v)| {
v.iter()
.filter(|(_, r)| r.strong_count() > 0)
.map(|(t, _)| t.filter.clone())
})
.collect(),
)
}
}
impl<A: Accept> VHostServer<A> {
#[instrument(skip_all)]
fn new<M: HasModel>(
listener: A,
bind_reqs: Watch<VHostBindRequirements>,
db: TypedPatchDb<M>,
crypto_provider: Arc<CryptoProvider>,
acme_cache: AcmeTlsAlpnCache,
@@ -679,6 +841,7 @@ impl<A: Accept> VHostServer<A> {
let mapping = Watch::new(BTreeMap::new());
Self {
mapping: mapping.clone(),
bind_reqs,
_thread: tokio::spawn(async move {
let mut listener = VHostListener(TlsListener::new(
listener,
@@ -729,6 +892,9 @@ impl<A: Accept> VHostServer<A> {
targets.insert(target, Arc::downgrade(&rc));
writable.insert(hostname, targets);
res = Ok(rc);
if changed {
self.update_bind_reqs(writable);
}
changed
});
if self.mapping.watcher_count() > 1 {
@@ -752,9 +918,23 @@ impl<A: Accept> VHostServer<A> {
if !targets.is_empty() {
writable.insert(hostname, targets);
}
if pre != post {
self.update_bind_reqs(writable);
}
pre == post
});
}
fn update_bind_reqs(&self, mapping: &Mapping<A>) {
let new_reqs = compute_bind_reqs(mapping);
self.bind_reqs.send_if_modified(|reqs| {
if *reqs != new_reqs {
*reqs = new_reqs;
true
} else {
false
}
});
}
fn is_empty(&self) -> bool {
self.mapping.peek(|m| m.is_empty())
}
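For reference, the new ProxyTarget::filter classifies a connection as public exactly when its source IP falls outside every subnet known on the receiving gateway. A standalone sketch of that test, assuming the ipnet crate and an illustrative slice of subnets:

fn is_public_source(subnets: &[ipnet::IpNet], src: IpAddr) -> bool {
    // Outside every local subnet: treated as direct-internet ("public") traffic.
    !subnets.iter().any(|s| s.contains(&src))
}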

View File

@@ -366,28 +366,6 @@ where
pub struct WebServerAcceptorSetter<A: Accept> {
acceptor: Watch<A>,
}
impl<A, B> WebServerAcceptorSetter<Option<Either<A, B>>>
where
A: Accept,
B: Accept<Metadata = A::Metadata>,
{
pub fn try_upgrade<F: FnOnce(A) -> Result<B, Error>>(&self, f: F) -> Result<(), Error> {
let mut res = Ok(());
self.acceptor.send_modify(|a| {
*a = match a.take() {
Some(Either::Left(a)) => match f(a) {
Ok(b) => Some(Either::Right(b)),
Err(e) => {
res = Err(e);
None
}
},
x => x,
}
});
res
}
}
impl<A: Accept> Deref for WebServerAcceptorSetter<A> {
type Target = Watch<A>;
fn deref(&self) -> &Self::Target {

View File

@@ -85,6 +85,7 @@ pub fn wifi<C: Context>() -> ParentHandler<C> {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct SetWifiEnabledParams {
@@ -150,16 +151,17 @@ pub fn country<C: Context>() -> ParentHandler<C> {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct AddParams {
pub struct WifiAddParams {
#[arg(help = "help.arg.wifi-ssid")]
ssid: String,
#[arg(help = "help.arg.wifi-password")]
password: String,
}
#[instrument(skip_all)]
pub async fn add(ctx: RpcContext, AddParams { ssid, password }: AddParams) -> Result<(), Error> {
pub async fn add(ctx: RpcContext, WifiAddParams { ssid, password }: WifiAddParams) -> Result<(), Error> {
let wifi_manager = ctx.wifi_manager.clone();
if !ssid.is_ascii() {
return Err(Error::new(
@@ -229,15 +231,16 @@ pub async fn add(ctx: RpcContext, AddParams { ssid, password }: AddParams) -> Re
Ok(())
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct SsidParams {
pub struct WifiSsidParams {
#[arg(help = "help.arg.wifi-ssid")]
ssid: String,
}
#[instrument(skip_all)]
pub async fn connect(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result<(), Error> {
pub async fn connect(ctx: RpcContext, WifiSsidParams { ssid }: WifiSsidParams) -> Result<(), Error> {
let wifi_manager = ctx.wifi_manager.clone();
if !ssid.is_ascii() {
return Err(Error::new(
@@ -311,7 +314,7 @@ pub async fn connect(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result
}
#[instrument(skip_all)]
pub async fn remove(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result<(), Error> {
pub async fn remove(ctx: RpcContext, WifiSsidParams { ssid }: WifiSsidParams) -> Result<(), Error> {
let wifi_manager = ctx.wifi_manager.clone();
if !ssid.is_ascii() {
return Err(Error::new(
@@ -359,11 +362,13 @@ pub async fn remove(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result<
.result?;
Ok(())
}
#[derive(serde::Serialize, serde::Deserialize)]
#[derive(serde::Serialize, serde::Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct WifiListInfo {
ssids: HashMap<Ssid, SignalStrength>,
connected: Option<Ssid>,
#[ts(type = "string | null")]
country: Option<CountryCode>,
ethernet: bool,
available_wifi: Vec<WifiListOut>,
@@ -374,7 +379,8 @@ pub struct WifiListInfoLow {
strength: SignalStrength,
security: Vec<String>,
}
#[derive(serde::Serialize, serde::Deserialize)]
#[derive(serde::Serialize, serde::Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct WifiListOut {
ssid: Ssid,
@@ -560,6 +566,7 @@ pub async fn get_available(ctx: RpcContext, _: Empty) -> Result<Vec<WifiListOut>
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct SetCountryParams {
@@ -605,7 +612,7 @@ pub struct NetworkId(String);
/// Ssids are the names of wifi networks, usually human readable.
#[derive(
Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, serde::Serialize, serde::Deserialize,
Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, serde::Serialize, serde::Deserialize, TS,
)]
pub struct Ssid(String);
@@ -622,6 +629,7 @@ pub struct Ssid(String);
Hash,
serde::Serialize,
serde::Deserialize,
TS,
)]
pub struct SignalStrength(u8);

View File

@@ -75,6 +75,7 @@ pub fn notification<C: Context>() -> ParentHandler<C> {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ListNotificationParams {
@@ -140,6 +141,7 @@ pub async fn list(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ModifyNotificationParams {
@@ -175,6 +177,7 @@ pub async fn remove(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ModifyNotificationBeforeParams {
@@ -326,6 +329,7 @@ pub async fn create(
}
#[derive(Debug, Clone, PartialEq, Eq, Hash, serde::Serialize, serde::Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub enum NotificationLevel {
Success,
@@ -396,26 +400,31 @@ impl Map for Notifications {
}
}
#[derive(Debug, Serialize, Deserialize, HasModel)]
#[derive(Debug, Serialize, Deserialize, HasModel, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
pub struct Notification {
pub package_id: Option<PackageId>,
#[ts(type = "string")]
pub created_at: DateTime<Utc>,
pub code: u32,
pub level: NotificationLevel,
pub title: String,
pub message: String,
#[ts(type = "any")]
pub data: Value,
#[serde(default = "const_true")]
pub seen: bool,
}
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct NotificationWithId {
id: u32,
#[serde(flatten)]
#[ts(flatten)]
notification: Notification,
}

View File

@@ -15,6 +15,7 @@ use crate::progress::{FullProgressTracker, ProgressUnits};
use crate::registry::context::RegistryContext;
use crate::registry::device_info::DeviceInfo;
use crate::registry::package::index::{PackageIndex, PackageVersionInfo};
use crate::s9pk::manifest::LocaleString;
use crate::s9pk::merkle_archive::source::ArchiveSource;
use crate::s9pk::v2::SIG_CONTEXT;
use crate::util::VersionString;
@@ -38,11 +39,11 @@ impl Default for PackageDetailLevel {
}
}
#[derive(Debug, Deserialize, Serialize, TS)]
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct PackageInfoShort {
pub release_notes: String,
pub release_notes: LocaleString,
}
#[derive(Debug, Deserialize, Serialize, TS, Parser, HasModel)]
@@ -89,17 +90,20 @@ impl GetPackageResponse {
let lesser_versions: BTreeMap<_, _> = self
.other_versions
.as_ref()
.clone()
.into_iter()
.flatten()
.filter(|(v, _)| ***v < *version)
.filter(|(v, _)| **v < *version)
.collect();
if !lesser_versions.is_empty() {
table.add_row(row![bc => "OLDER VERSIONS"]);
table.add_row(row![bc => "VERSION", "RELEASE NOTES"]);
for (version, info) in lesser_versions {
table.add_row(row![AsRef::<str>::as_ref(version), &info.release_notes]);
table.add_row(row![
AsRef::<str>::as_ref(&version),
&info.release_notes.localized()
]);
}
}
@@ -147,6 +151,7 @@ fn get_matching_models(
id,
source_version,
device_info,
target_version,
..
}: &GetPackageParams,
) -> Result<Vec<(PackageId, ExtendedVersion, Model<PackageVersionInfo>)>, Error> {
@@ -165,26 +170,29 @@ fn get_matching_models(
.as_entries()?
.into_iter()
.map(|(v, info)| {
let ev = ExtendedVersion::from(v);
Ok::<_, Error>(
if source_version.as_ref().map_or(Ok(true), |source_version| {
Ok::<_, Error>(
source_version.satisfies(
&info
.as_source_version()
.de()?
.unwrap_or(VersionRange::any()),
),
)
})? {
if target_version.as_ref().map_or(true, |tv| ev.satisfies(tv))
&& source_version.as_ref().map_or(Ok(true), |source_version| {
Ok::<_, Error>(
source_version.satisfies(
&info
.as_source_version()
.de()?
.unwrap_or(VersionRange::any()),
),
)
})?
{
let mut info = info.clone();
if let Some(device_info) = &device_info {
if info.for_device(device_info)? {
Some((k.clone(), ExtendedVersion::from(v), info))
Some((k.clone(), ev, info))
} else {
None
}
} else {
Some((k.clone(), ExtendedVersion::from(v), info))
Some((k.clone(), ev, info))
}
} else {
None
@@ -207,12 +215,7 @@ pub async fn get_package(ctx: RegistryContext, params: GetPackageParams) -> Resu
for (id, version, info) in get_matching_models(&peek.as_index().as_package(), &params)? {
let package_best = best.entry(id.clone()).or_default();
let package_other = other.entry(id.clone()).or_default();
if params
.target_version
.as_ref()
.map_or(true, |v| version.satisfies(v))
&& package_best.keys().all(|k| !(**k > version))
{
if package_best.keys().all(|k| !(**k > version)) {
for worse_version in package_best
.keys()
.filter(|k| ***k < version)
@@ -569,3 +572,42 @@ pub async fn cli_download(
Ok(())
}
#[test]
fn check_matching_info_short() {
use crate::registry::package::index::PackageMetadata;
use crate::s9pk::manifest::{Alerts, Description};
use crate::util::DataUrl;
let lang_map = |s: &str| {
LocaleString::LanguageMap([("en".into(), s.into())].into_iter().collect())
};
let info = PackageVersionInfo {
metadata: PackageMetadata {
title: "Test Package".into(),
icon: DataUrl::from_vec("image/png", vec![]),
description: Description {
short: lang_map("A short description"),
long: lang_map("A longer description of the test package"),
},
release_notes: lang_map("Initial release"),
git_hash: None,
license: "MIT".into(),
wrapper_repo: "https://github.com/example/wrapper".parse().unwrap(),
upstream_repo: "https://github.com/example/upstream".parse().unwrap(),
support_site: "https://example.com/support".parse().unwrap(),
marketing_site: "https://example.com".parse().unwrap(),
donation_url: None,
docs_url: None,
alerts: Alerts::default(),
dependency_metadata: BTreeMap::new(),
os_version: exver::Version::new([0, 3, 6], []),
sdk_version: None,
hardware_acceleration: false,
},
source_version: None,
s9pks: Vec::new(),
};
from_value::<PackageInfoShort>(to_value(&info).unwrap()).unwrap();
}

View File

@@ -240,7 +240,7 @@ impl LocaleString {
pub fn localize(&mut self) {
self.localize_for(&*rust_i18n::locale());
}
pub fn localized(mut self) -> String {
pub fn localized(self) -> String {
self.localized_for(&*rust_i18n::locale())
}
}
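A minimal sketch of how the consuming localized() above resolves a LanguageMap, assuming the active rust_i18n locale is "en":

fn localize_release_notes() -> String {
    let notes = LocaleString::LanguageMap(
        [("en".into(), "Initial release".into())].into_iter().collect(),
    );
    notes.localized() // -> "Initial release"
}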

View File

@@ -151,7 +151,7 @@ async fn get_action_input(
#[derive(Debug, Clone, Serialize, Deserialize, TS, Parser)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
#[ts(export, rename = "EffectsRunActionParams")]
pub struct RunActionParams {
#[serde(default)]
#[ts(skip)]

View File

@@ -11,6 +11,9 @@ use serde::{Deserialize, Serialize};
use tracing::warn;
use ts_rs::TS;
use patch_db::json_ptr::JsonPointer;
use crate::db::model::Database;
use crate::net::ssl::FullchainCertData;
use crate::prelude::*;
use crate::service::effects::context::EffectContext;
@@ -29,7 +32,7 @@ struct ServiceCallbackMap {
get_service_interface: BTreeMap<(PackageId, ServiceInterfaceId), Vec<CallbackHandler>>,
list_service_interfaces: BTreeMap<PackageId, Vec<CallbackHandler>>,
get_system_smtp: Vec<CallbackHandler>,
get_host_info: BTreeMap<(PackageId, HostId), Vec<CallbackHandler>>,
get_host_info: BTreeMap<(PackageId, HostId), (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>,
get_ssl_certificate: EqMap<
(BTreeSet<InternedString>, FullchainCertData, Algorithm),
(NonDetachingJoinHandle<()>, Vec<CallbackHandler>),
@@ -57,7 +60,7 @@ impl ServiceCallbacks {
});
this.get_system_smtp
.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
this.get_host_info.retain(|_, v| {
this.get_host_info.retain(|_, (_, v)| {
v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
!v.is_empty()
});
@@ -141,29 +144,57 @@ impl ServiceCallbacks {
}
pub(super) fn add_get_host_info(
&self,
self: &Arc<Self>,
db: &TypedPatchDb<Database>,
package_id: PackageId,
host_id: HostId,
handler: CallbackHandler,
) {
self.mutate(|this| {
this.get_host_info
.entry((package_id, host_id))
.or_default()
.entry((package_id.clone(), host_id.clone()))
.or_insert_with(|| {
let ptr: JsonPointer = format!(
"/public/packageData/{}/hosts/{}",
package_id, host_id
)
.parse()
.expect("valid json pointer");
let db = db.clone();
let callbacks = Arc::clone(self);
let key = (package_id, host_id);
(
tokio::spawn(async move {
let mut sub = db.subscribe(ptr).await;
while sub.recv().await.is_some() {
if let Some(cbs) = callbacks.mutate(|this| {
this.get_host_info
.remove(&key)
.map(|(_, handlers)| CallbackHandlers(handlers))
.filter(|cb| !cb.0.is_empty())
}) {
if let Err(e) = cbs.call(vector![]).await {
tracing::error!(
"Error in host info callback: {e}"
);
tracing::debug!("{e:?}");
}
}
// entry was removed when we consumed handlers,
// so stop watching — a new subscription will be
// created if the service re-registers
break;
}
})
.into(),
Vec::new(),
)
})
.1
.push(handler);
})
}
#[must_use]
pub fn get_host_info(&self, id: &(PackageId, HostId)) -> Option<CallbackHandlers> {
self.mutate(|this| {
Some(CallbackHandlers(
this.get_host_info.remove(id).unwrap_or_default(),
))
.filter(|cb| !cb.0.is_empty())
})
}
pub(super) fn add_get_ssl_certificate(
&self,
ctx: EffectContext,

View File

@@ -29,6 +29,7 @@ pub async fn get_host_info(
if let Some(callback) = callback {
let callback = callback.register(&context.seed.persistent_container);
context.seed.ctx.callbacks.add_get_host_info(
&context.seed.ctx.db,
package_id.clone(),
host_id.clone(),
CallbackHandler::new(&context, callback),

View File

@@ -55,20 +55,18 @@ pub async fn get_ssl_certificate(
.map(|(_, m)| m.as_hosts().as_entries())
.flatten_ok()
.map_ok(|(_, m)| {
Ok(m.as_onions()
.de()?
.iter()
.map(InternedString::from_display)
.chain(m.as_public_domains().keys()?)
.chain(m.as_private_domains().de()?)
Ok(m.as_public_domains()
.keys()?
.into_iter()
.chain(m.as_private_domains().keys()?)
.chain(
m.as_hostname_info()
m.as_bindings()
.de()?
.values()
.flatten()
.flat_map(|b| b.addresses.available.iter().cloned())
.map(|h| h.to_san_hostname()),
)
.collect::<Vec<_>>())
.collect::<Vec<InternedString>>())
})
.map(|a| a.and_then(|a| a))
.flatten_ok()
@@ -181,20 +179,18 @@ pub async fn get_ssl_key(
.map(|m| m.as_hosts().as_entries())
.flatten_ok()
.map_ok(|(_, m)| {
Ok(m.as_onions()
.de()?
.iter()
.map(InternedString::from_display)
.chain(m.as_public_domains().keys()?)
.chain(m.as_private_domains().de()?)
Ok(m.as_public_domains()
.keys()?
.into_iter()
.chain(m.as_private_domains().keys()?)
.chain(
m.as_hostname_info()
m.as_bindings()
.de()?
.values()
.flatten()
.flat_map(|b| b.addresses.available.iter().cloned())
.map(|h| h.to_san_hostname()),
)
.collect::<Vec<_>>())
.collect::<Vec<InternedString>>())
})
.map(|a| a.and_then(|a| a))
.flatten_ok()

View File

@@ -701,6 +701,7 @@ struct ServiceActorSeed {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
pub struct RebuildParams {
#[arg(help = "help.arg.package-id")]
pub id: PackageId,

View File

@@ -1,4 +1,4 @@
use std::collections::{BTreeMap, BTreeSet};
use std::collections::BTreeMap;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
@@ -88,77 +88,17 @@ impl ServiceMap {
#[instrument(skip_all)]
pub async fn init(&self, ctx: &RpcContext) -> Result<(), Error> {
let db = ctx.db.peek().await;
let ids = db.as_public().as_package_data().keys()?;
// Build dependency map for all packages
let mut dependencies: BTreeMap<PackageId, BTreeSet<PackageId>> = BTreeMap::new();
let ids = ctx.db.peek().await.as_public().as_package_data().keys()?;
let mut jobs = FuturesUnordered::new();
for id in &ids {
if let Some(pde) = db.as_public().as_package_data().as_idx(&id) {
let deps: BTreeSet<PackageId> = pde
.as_current_dependencies()
.de()?
.0
.keys()
.cloned()
.collect();
dependencies.insert(id.clone(), deps);
jobs.push(self.load(ctx, id, LoadDisposition::Retry));
}
while let Some(res) = jobs.next().await {
if let Err(e) = res {
tracing::error!("Error loading installed package as service: {e}");
tracing::debug!("{e:?}");
}
}
let mut remaining: BTreeSet<PackageId> = ids.iter().cloned().collect();
while !remaining.is_empty() {
// Find packages with no remaining dependencies
let can_load: Vec<PackageId> = remaining
.iter()
.filter(|pkg_id| {
// A package can be loaded if none of its dependencies are still in the remaining set
if let Some(deps) = dependencies.get(*pkg_id) {
!deps.iter().any(|dep| remaining.contains(dep))
} else {
true
}
})
.cloned()
.collect();
if can_load.is_empty() {
// Dependency cycle detected, load remaining packages anyway
tracing::warn!(
"Dependency cycle detected, loading remaining packages in arbitrary order"
);
let mut jobs = FuturesUnordered::new();
for id in &remaining {
jobs.push(self.load(ctx, id, LoadDisposition::Retry));
}
while let Some(res) = jobs.next().await {
if let Err(e) = res {
tracing::error!("Error loading installed package as service: {e}");
tracing::debug!("{e:?}");
}
}
break;
}
// Remove from remaining set
for id in &can_load {
remaining.remove(id);
}
// Load packages with no remaining dependencies concurrently
let mut jobs = FuturesUnordered::new();
for id in &can_load {
jobs.push(self.load(ctx, id, LoadDisposition::Retry));
}
while let Some(res) = jobs.next().await {
if let Err(e) = res {
tracing::error!("Error loading installed package as service: {e}");
tracing::debug!("{e:?}");
}
}
}
Ok(())
}
@@ -319,6 +259,7 @@ impl ServiceMap {
service_interfaces: Default::default(),
hosts: Default::default(),
store_exposed_dependents: Default::default(),
outbound_gateway: None,
},
)?;
};

View File

@@ -58,7 +58,8 @@ impl ValueParserFactory for SshPubKey {
}
}
#[derive(serde::Serialize, serde::Deserialize)]
#[derive(serde::Serialize, serde::Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct SshKeyResponse {
pub alg: String,
@@ -115,15 +116,16 @@ pub fn ssh<C: Context>() -> ParentHandler<C> {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct AddParams {
pub struct SshAddParams {
#[arg(help = "help.arg.ssh-public-key")]
key: SshPubKey,
}
#[instrument(skip_all)]
pub async fn add(ctx: RpcContext, AddParams { key }: AddParams) -> Result<SshKeyResponse, Error> {
pub async fn add(ctx: RpcContext, SshAddParams { key }: SshAddParams) -> Result<SshKeyResponse, Error> {
let mut key = WithTimeData::new(key);
let fingerprint = InternedString::intern(key.0.fingerprint_md5());
let (keys, res) = ctx
@@ -150,9 +152,10 @@ pub async fn add(ctx: RpcContext, AddParams { key }: AddParams) -> Result<SshKey
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct DeleteParams {
pub struct SshDeleteParams {
#[arg(help = "help.arg.ssh-fingerprint")]
#[ts(type = "string")]
fingerprint: InternedString,
@@ -161,7 +164,7 @@ pub struct DeleteParams {
#[instrument(skip_all)]
pub async fn remove(
ctx: RpcContext,
DeleteParams { fingerprint }: DeleteParams,
SshDeleteParams { fingerprint }: SshDeleteParams,
) -> Result<(), Error> {
let keys = ctx
.db

View File

@@ -191,7 +191,9 @@ pub async fn governor(
Ok(GovernorInfo { current, available })
}
#[derive(Serialize, Deserialize)]
#[derive(Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct TimeInfo {
now: String,
uptime: u64,
@@ -331,6 +333,7 @@ pub struct MetricLeaf<T> {
}
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd, TS)]
#[ts(type = "{ value: string, unit: string }")]
pub struct Celsius(f64);
impl fmt::Display for Celsius {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
@@ -359,6 +362,7 @@ impl<'de> Deserialize<'de> for Celsius {
}
}
#[derive(Clone, Debug, PartialEq, PartialOrd, TS)]
#[ts(type = "{ value: string, unit: string }")]
pub struct Percentage(f64);
impl Serialize for Percentage {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
@@ -385,6 +389,7 @@ impl<'de> Deserialize<'de> for Percentage {
}
#[derive(Clone, Debug, TS)]
#[ts(type = "{ value: string, unit: string }")]
pub struct MebiBytes(pub f64);
impl Serialize for MebiBytes {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
@@ -411,6 +416,7 @@ impl<'de> Deserialize<'de> for MebiBytes {
}
#[derive(Clone, Debug, PartialEq, PartialOrd, TS)]
#[ts(type = "{ value: string, unit: string }")]
pub struct GigaBytes(f64);
impl Serialize for GigaBytes {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
@@ -490,6 +496,7 @@ pub async fn metrics(ctx: RpcContext) -> Result<Metrics, Error> {
#[derive(Deserialize, Serialize, Clone, Debug, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct MetricsFollowResponse {
pub guid: Guid,
pub metrics: Metrics,
@@ -1211,6 +1218,7 @@ pub async fn set_keyboard(ctx: RpcContext, options: KeyboardOptions) -> Result<(
}
#[derive(Debug, Clone, Deserialize, Serialize, TS, Parser)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct SetLanguageParams {
#[arg(help = "help.arg.language-code")]


@@ -414,14 +414,11 @@ pub async fn show_config(
i.iter().find_map(|(_, info)| {
info.ip_info
.as_ref()
.filter(|_| info.public())
.iter()
.find_map(|info| info.subnets.iter().next())
.copied()
.and_then(|ip_info| ip_info.wan_ip)
.map(IpAddr::from)
})
})
.or_not_found("a public IP address")?
.addr()
};
Ok(client
.client_config(
@@ -459,7 +456,7 @@ pub async fn add_forward(
})
.map(|s| s.prefix_len())
.unwrap_or(32);
let rc = ctx.forward.add_forward(source, target, prefix).await?;
let rc = ctx.forward.add_forward(source, target, prefix, None).await?;
ctx.active_forwards.mutate(|m| {
m.insert(source, rc);
});


@@ -199,7 +199,7 @@ impl TunnelContext {
})
.map(|s| s.prefix_len())
.unwrap_or(32);
active_forwards.insert(from, forward.add_forward(from, to, prefix).await?);
active_forwards.insert(from, forward.add_forward(from, to, prefix, None).await?);
}
Ok(Self(Arc::new(TunnelContextSeed {


@@ -523,27 +523,27 @@ pub async fn init_web(ctx: CliContext) -> Result<(), Error> {
println!(concat!(
"To access your Web URL securely, trust your Root CA (displayed above) on your client device(s):\n",
" - MacOS\n",
" 1. Open the Terminal app\n",
" 2. Paste the following command (**DO NOTt** click Return): pbcopy < ~/Desktop/ca.crt\n",
" 3. Copy your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
" 4. Back in Terminal, click Return. ca.crt is saved to your Desktop\n",
" 5. Complete by trusting your Root CA: https://https://staging.docs.start9.com/device-guides/mac/ca.html\n",
" 1. Open the Terminal app\n",
" 2. Paste the following command (**DO NOT** click Return): pbcopy < ~/Desktop/ca.crt\n",
" 3. Copy your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
" 4. Back in Terminal, click Return. ca.crt is saved to your Desktop\n",
" 5. Complete by trusting your Root CA: https://docs.start9.com/device-guides/mac/ca.html\n",
" - Linux\n",
" 1. Open gedit, nano, or any editor\n",
" 2. Copy/paste your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
" 3. Name the file ca.crt and save as plaintext\n",
" 5. Complete by trusting your Root CA: https://https://staging.docs.start9.com/device-guides/linux/ca.html\n",
" 1. Open gedit, nano, or any editor\n",
" 2. Copy/paste your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
" 3. Name the file ca.crt and save as plaintext\n",
" 4. Complete by trusting your Root CA: https://docs.start9.com/device-guides/linux/ca.html\n",
" - Windows\n",
" 1. Open the Notepad app\n",
" 2. Copy/paste your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
" 3. Name the file ca.crt and save as plaintext\n",
" 5. Complete by trusting your Root CA: https://https://staging.docs.start9.com/device-guides/windows/ca.html\n",
" 1. Open the Notepad app\n",
" 2. Copy/paste your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
" 3. Name the file ca.crt and save as plaintext\n",
" 4. Complete by trusting your Root CA: https://docs.start9.com/device-guides/windows/ca.html\n",
" - Android/Graphene\n",
" 1. Send the ca.crt file (created above) to yourself\n",
" 2. Complete by trusting your Root CA: https://https://staging.docs.start9.com/device-guides/android/ca.html\n",
" 1. Send the ca.crt file (created above) to yourself\n",
" 2. Complete by trusting your Root CA: https://docs.start9.com/device-guides/android/ca.html\n",
" - iOS\n",
" 1. Send the ca.crt file (created above) to yourself\n",
" 2. Complete by trusting your Root CA: https://https://staging.docs.start9.com/device-guides/ios/ca.html\n",
" 1. Send the ca.crt file (created above) to yourself\n",
" 2. Complete by trusting your Root CA: https://docs.start9.com/device-guides/ios/ca.html\n",
));
return Ok(());


@@ -59,8 +59,9 @@ mod v0_4_0_alpha_16;
mod v0_4_0_alpha_17;
mod v0_4_0_alpha_18;
mod v0_4_0_alpha_19;
mod v0_4_0_alpha_20;
pub type Current = v0_4_0_alpha_19::Version; // VERSION_BUMP
pub type Current = v0_4_0_alpha_20::Version; // VERSION_BUMP
impl Current {
#[instrument(skip(self, db))]
@@ -181,7 +182,8 @@ enum Version {
V0_4_0_alpha_16(Wrapper<v0_4_0_alpha_16::Version>),
V0_4_0_alpha_17(Wrapper<v0_4_0_alpha_17::Version>),
V0_4_0_alpha_18(Wrapper<v0_4_0_alpha_18::Version>),
V0_4_0_alpha_19(Wrapper<v0_4_0_alpha_19::Version>), // VERSION_BUMP
V0_4_0_alpha_19(Wrapper<v0_4_0_alpha_19::Version>),
V0_4_0_alpha_20(Wrapper<v0_4_0_alpha_20::Version>), // VERSION_BUMP
Other(exver::Version),
}
@@ -243,7 +245,8 @@ impl Version {
Self::V0_4_0_alpha_16(v) => DynVersion(Box::new(v.0)),
Self::V0_4_0_alpha_17(v) => DynVersion(Box::new(v.0)),
Self::V0_4_0_alpha_18(v) => DynVersion(Box::new(v.0)),
Self::V0_4_0_alpha_19(v) => DynVersion(Box::new(v.0)), // VERSION_BUMP
Self::V0_4_0_alpha_19(v) => DynVersion(Box::new(v.0)),
Self::V0_4_0_alpha_20(v) => DynVersion(Box::new(v.0)), // VERSION_BUMP
Self::Other(v) => {
return Err(Error::new(
eyre!("unknown version {v}"),
@@ -297,7 +300,8 @@ impl Version {
Version::V0_4_0_alpha_16(Wrapper(x)) => x.semver(),
Version::V0_4_0_alpha_17(Wrapper(x)) => x.semver(),
Version::V0_4_0_alpha_18(Wrapper(x)) => x.semver(),
Version::V0_4_0_alpha_19(Wrapper(x)) => x.semver(), // VERSION_BUMP
Version::V0_4_0_alpha_19(Wrapper(x)) => x.semver(),
Version::V0_4_0_alpha_20(Wrapper(x)) => x.semver(), // VERSION_BUMP
Version::Other(x) => x.clone(),
}
}


@@ -10,13 +10,13 @@ A server is not a toy. It is a critical component of the computing paradigm, and
Start9 is paving new ground with StartOS, trying to create what most developers and IT professionals thought impossible; namely, an OS and user experience that affords a normal person the same independent control over their data and communications as an experienced Linux sysadmin.
The difficulty of our endeavor requires making mistakes; and our integrity and dedication to excellence require that we correct them. This means a willingness to discard bad ideas and broken parts, and if absolutely necessary, to tear it all down and start over. That is exactly what we did with StartOS v0.2.0 in 2020. It is what we did with StartOS v0.3.0 in 2022. And we are doing it now with StartOS v0.4.0 in 2025.
The difficulty of our endeavor requires making mistakes; and our integrity and dedication to excellence require that we correct them. This means a willingness to discard bad ideas and broken parts, and if absolutely necessary, to tear it all down and start over. That is exactly what we did with StartOS v0.2.0 in 2020. It is what we did with StartOS v0.3.0 in 2022. And we are doing it now with StartOS v0.4.0 in 2026.
v0.4.0 is a complete rewrite of StartOS; almost nothing survived. After nearly six years of building StartOS, we believe that we have finally arrived at the correct architecture and foundation that will allow us to deliver on the promise of sovereign computing.
## Changelog
### Improved User interface
### New User interface
We rewrote the StartOS UI to be more performant, more intuitive, and better-looking on both mobile and desktop. Enjoy.
@@ -28,6 +28,10 @@ StartOS v0.4.0 supports multiple languages and also makes it easy to add more la
Neither Docker nor Podman offers the reliability and flexibility needed for StartOS. Instead, v0.4.0 uses a nested container paradigm based on LXC for the outer container and Linux namespaces for sub-containers. This architecture naturally supports multi-container setups.
### Hardware Acceleration
Services can take advantage of (and require) the presence of certain hardware modules, such as Nvidia GPUs, for transcoding or inference purposes. For example, StartOS and Ollama can run natively on the Nvidia DGX Spark and take full advantage of the hardware/firmware stack to perform local inference against open-source models.
### New S9PK archive format
The S9PK archive format has been overhauled to allow signature verification of partial downloads and direct mounting of container images without unpacking the s9pk.
@@ -80,13 +84,13 @@ The new start-fs fuse module unifies file system expectations for various platfo
StartOS now uses Extended Versioning (Exver), which consists of three parts: (1) a Semver-compliant upstream version, (2) a Semver-compliant wrapper version, and (3) an optional "flavor" prefix. Flavors can be thought of as alternative implementations of services, where a user would only want one or the other installed, and data can feasibly be migrated between the two. Another common characteristic of flavors is that they satisfy the same API requirements of dependents, though this is not strictly necessary. A valid Exver looks something like this: `#knots:29.0:1.0-beta.1`. This would translate to "the first beta release of StartOS wrapper version 1.0 of Bitcoin Knots version 29.0".
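To make the three-part layout concrete, here is a minimal sketch of how such a string decomposes. `split_exver` is hypothetical; the real grammar and parser live in the `exver` crate and may differ:

```rust
// Decompose "#flavor:upstream:wrapper" (flavor prefix optional) into its parts.
fn split_exver(s: &str) -> Option<(Option<&str>, &str, &str)> {
    let (flavor, rest) = match s.strip_prefix('#') {
        Some(rest) => {
            let (flavor, rest) = rest.split_once(':')?;
            (Some(flavor), rest)
        }
        None => (None, s),
    };
    let (upstream, wrapper) = rest.split_once(':')?;
    Some((flavor, upstream, wrapper))
}

fn main() {
    // "Bitcoin Knots 29.0, StartOS wrapper 1.0-beta.1"
    assert_eq!(
        split_exver("#knots:29.0:1.0-beta.1"),
        Some((Some("knots"), "29.0", "1.0-beta.1"))
    );
}
```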
### ACME
### Let's Encrypt
StartOS now supports using ACME protocol to automatically obtain SSL/TLS certificates from widely trusted certificate authorities, such as Let's Encrypt, for your public domains. This means people visiting your public websites and APIs will not need to download and trust your server's Root CA.
StartOS now supports Let's Encrypt to automatically obtain SSL/TLS certificates for public domains. This means people visiting your public websites and APIs will not need to download and trust your server's Root CA.
### Gateways
Gateways connect your server to the Internet. They process outbound traffic, and under certain conditions, they also permit inbound traffic. For example, your router is a gateway. It is now possible add gateways to StartOS, such as StartTunnel, in order to more granularly control how your installed services are exposed to the Internet.
Gateways connect your server to the Internet, facilitating inbound and outbound traffic. Your router is a gateway. It is now possible to add Wireguard VPN gateways to your server to control how devices outside the LAN connect to your server and how your server connects out to the Internet.
### Static DNS Servers


@@ -1,4 +1,4 @@
use std::collections::{BTreeMap, BTreeSet};
use std::collections::BTreeMap;
use std::ffi::OsStr;
use std::path::Path;
@@ -23,17 +23,14 @@ use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::util::unmount;
use crate::hostname::Hostname;
use crate::net::forward::AvailablePorts;
use crate::net::host::Host;
use crate::net::keys::KeyStore;
use crate::net::tor::{OnionAddress, TorSecretKey};
use crate::notifications::Notifications;
use crate::prelude::*;
use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile;
use crate::ssh::{SshKeys, SshPubKey};
use crate::util::Invoke;
use crate::util::crypto::ed25519_expand_key;
use crate::util::serde::Pem;
use crate::{DATA_DIR, HostId, Id, PACKAGE_DATA, PackageId, ReplayId};
use crate::{DATA_DIR, PACKAGE_DATA, PackageId, ReplayId};
lazy_static::lazy_static! {
static ref V0_3_6_alpha_0: exver::Version = exver::Version::new(
@@ -146,12 +143,7 @@ pub struct Version;
impl VersionT for Version {
type Previous = v0_3_5_2::Version;
type PreUpRes = (
AccountInfo,
SshKeys,
CifsTargets,
BTreeMap<PackageId, BTreeMap<HostId, TorSecretKey>>,
);
type PreUpRes = (AccountInfo, SshKeys, CifsTargets);
fn semver(self) -> exver::Version {
V0_3_6_alpha_0.clone()
}
@@ -166,20 +158,18 @@ impl VersionT for Version {
let cifs = previous_cifs(&pg).await?;
let tor_keys = previous_tor_keys(&pg).await?;
Command::new("systemctl")
.arg("stop")
.arg("postgresql@*.service")
.invoke(crate::ErrorKind::Database)
.await?;
Ok((account, ssh_keys, cifs, tor_keys))
Ok((account, ssh_keys, cifs))
}
fn up(
self,
db: &mut Value,
(account, ssh_keys, cifs, tor_keys): Self::PreUpRes,
(account, ssh_keys, cifs): Self::PreUpRes,
) -> Result<Value, Error> {
let prev_package_data = db["package-data"].clone();
@@ -242,11 +232,7 @@ impl VersionT for Version {
"ui": db["ui"],
});
let mut keystore = KeyStore::new(&account)?;
for key in tor_keys.values().flat_map(|v| v.values()) {
assert!(key.is_valid());
keystore.onion.insert(key.clone());
}
let keystore = KeyStore::new(&account)?;
let private = {
let mut value = json!({});
@@ -350,20 +336,6 @@ impl VersionT for Version {
false
};
let onions = input[&*id]["installed"]["interface-addresses"]
.as_object()
.into_iter()
.flatten()
.filter_map(|(id, addrs)| {
addrs["tor-address"].as_str().map(|addr| {
Ok((
HostId::from(Id::try_from(id.clone())?),
addr.parse::<OnionAddress>()?,
))
})
})
.collect::<Result<BTreeMap<_, _>, Error>>()?;
if let Err(e) = async {
let package_s9pk = tokio::fs::File::open(path).await?;
let file = MultiCursorFile::open(&package_s9pk).await?;
@@ -381,11 +353,8 @@ impl VersionT for Version {
.await?
.await?;
let to_sync = ctx
.db
ctx.db
.mutate(|db| {
let mut to_sync = BTreeSet::new();
let package = db
.as_public_mut()
.as_package_data_mut()
@@ -396,29 +365,11 @@ impl VersionT for Version {
.as_tasks_mut()
.remove(&ReplayId::from("needs-config"))?;
}
for (id, onion) in onions {
package
.as_hosts_mut()
.upsert(&id, || Ok(Host::new()))?
.as_onions_mut()
.mutate(|o| {
o.clear();
o.insert(onion);
Ok(())
})?;
to_sync.insert(id);
}
Ok(to_sync)
Ok(())
})
.await
.result?;
if let Some(service) = &*ctx.services.get(&id).await {
for host_id in to_sync {
service.sync_host(host_id.clone()).await?;
}
}
Ok::<_, Error>(())
}
.await
@@ -481,33 +432,6 @@ async fn previous_account_info(pg: &sqlx::Pool<sqlx::Postgres>) -> Result<Accoun
password: account_query
.try_get("password")
.with_ctx(|_| (ErrorKind::Database, "password"))?,
tor_keys: vec![TorSecretKey::from_bytes(
if let Some(bytes) = account_query
.try_get::<Option<Vec<u8>>, _>("tor_key")
.with_ctx(|_| (ErrorKind::Database, "tor_key"))?
{
<[u8; 64]>::try_from(bytes).map_err(|e| {
Error::new(
eyre!("expected vec of len 64, got len {}", e.len()),
ErrorKind::ParseDbField,
)
})?
} else {
ed25519_expand_key(
&<[u8; 32]>::try_from(
account_query
.try_get::<Vec<u8>, _>("network_key")
.with_kind(ErrorKind::Database)?,
)
.map_err(|e| {
Error::new(
eyre!("expected vec of len 32, got len {}", e.len()),
ErrorKind::ParseDbField,
)
})?,
)
},
)?],
server_id: account_query
.try_get("server_id")
.with_ctx(|_| (ErrorKind::Database, "server_id"))?,
@@ -579,68 +503,3 @@ async fn previous_ssh_keys(pg: &sqlx::Pool<sqlx::Postgres>) -> Result<SshKeys, E
Ok(ssh_keys)
}
#[tracing::instrument(skip_all)]
async fn previous_tor_keys(
pg: &sqlx::Pool<sqlx::Postgres>,
) -> Result<BTreeMap<PackageId, BTreeMap<HostId, TorSecretKey>>, Error> {
let mut res = BTreeMap::<PackageId, BTreeMap<HostId, TorSecretKey>>::new();
let net_key_query = sqlx::query(r#"SELECT * FROM network_keys"#)
.fetch_all(pg)
.await
.with_kind(ErrorKind::Database)?;
for row in net_key_query {
let package_id: PackageId = row
.try_get::<String, _>("package")
.with_ctx(|_| (ErrorKind::Database, "network_keys::package"))?
.parse()?;
let interface_id: HostId = row
.try_get::<String, _>("interface")
.with_ctx(|_| (ErrorKind::Database, "network_keys::interface"))?
.parse()?;
let key = TorSecretKey::from_bytes(ed25519_expand_key(
&<[u8; 32]>::try_from(
row.try_get::<Vec<u8>, _>("key")
.with_ctx(|_| (ErrorKind::Database, "network_keys::key"))?,
)
.map_err(|e| {
Error::new(
eyre!("expected vec of len 32, got len {}", e.len()),
ErrorKind::ParseDbField,
)
})?,
))?;
res.entry(package_id).or_default().insert(interface_id, key);
}
let tor_key_query = sqlx::query(r#"SELECT * FROM tor"#)
.fetch_all(pg)
.await
.with_kind(ErrorKind::Database)?;
for row in tor_key_query {
let package_id: PackageId = row
.try_get::<String, _>("package")
.with_ctx(|_| (ErrorKind::Database, "tor::package"))?
.parse()?;
let interface_id: HostId = row
.try_get::<String, _>("interface")
.with_ctx(|_| (ErrorKind::Database, "tor::interface"))?
.parse()?;
let key = TorSecretKey::from_bytes(
<[u8; 64]>::try_from(
row.try_get::<Vec<u8>, _>("key")
.with_ctx(|_| (ErrorKind::Database, "tor::key"))?,
)
.map_err(|e| {
Error::new(
eyre!("expected vec of len 64, got len {}", e.len()),
ErrorKind::ParseDbField,
)
})?,
)?;
res.entry(package_id).or_default().insert(interface_id, key);
}
Ok(res)
}


@@ -8,7 +8,6 @@ use super::v0_3_5::V0_3_0_COMPAT;
use super::{VersionT, v0_3_6_alpha_9};
use crate::GatewayId;
use crate::net::host::address::PublicDomainConfig;
use crate::net::tor::OnionAddress;
use crate::prelude::*;
lazy_static::lazy_static! {
@@ -22,7 +21,7 @@ lazy_static::lazy_static! {
#[serde(rename_all = "camelCase")]
#[serde(tag = "kind")]
enum HostAddress {
Onion { address: OnionAddress },
Onion { address: String },
Domain { address: InternedString },
}


@@ -1,11 +1,7 @@
use std::collections::BTreeSet;
use exver::{PreReleaseSegment, VersionRange};
use imbl_value::InternedString;
use super::v0_3_5::V0_3_0_COMPAT;
use super::{VersionT, v0_4_0_alpha_11};
use crate::net::tor::TorSecretKey;
use crate::prelude::*;
lazy_static::lazy_static! {
@@ -33,48 +29,6 @@ impl VersionT for Version {
}
#[instrument(skip_all)]
fn up(self, db: &mut Value, _: Self::PreUpRes) -> Result<Value, Error> {
let mut err = None;
let onion_store = db["private"]["keyStore"]["onion"]
.as_object_mut()
.or_not_found("private.keyStore.onion")?;
onion_store.retain(|o, v| match from_value::<TorSecretKey>(v.clone()) {
Ok(k) => k.is_valid() && &InternedString::from_display(&k.onion_address()) == o,
Err(e) => {
err = Some(e);
true
}
});
if let Some(e) = err {
return Err(e);
}
let allowed_addresses = onion_store.keys().cloned().collect::<BTreeSet<_>>();
let fix_host = |host: &mut Value| {
Ok::<_, Error>(
host["onions"]
.as_array_mut()
.or_not_found("host.onions")?
.retain(|addr| {
addr.as_str()
.map(|s| allowed_addresses.contains(s))
.unwrap_or(false)
}),
)
};
for (_, pde) in db["public"]["packageData"]
.as_object_mut()
.or_not_found("public.packageData")?
.iter_mut()
{
for (_, host) in pde["hosts"]
.as_object_mut()
.or_not_found("public.packageData[].hosts")?
.iter_mut()
{
fix_host(host)?;
}
}
fix_host(&mut db["public"]["serverInfo"]["network"]["host"])?;
if db["private"]["keyStore"]["localCerts"].is_null() {
db["private"]["keyStore"]["localCerts"] =
db["private"]["keyStore"]["local_certs"].clone();


@@ -0,0 +1,205 @@
use exver::{PreReleaseSegment, VersionRange};
use super::v0_3_5::V0_3_0_COMPAT;
use super::{VersionT, v0_4_0_alpha_19};
use crate::prelude::*;
lazy_static::lazy_static! {
static ref V0_4_0_alpha_20: exver::Version = exver::Version::new(
[0, 4, 0],
[PreReleaseSegment::String("alpha".into()), 20.into()]
);
}
#[derive(Clone, Copy, Debug, Default)]
pub struct Version;
impl VersionT for Version {
type Previous = v0_4_0_alpha_19::Version;
type PreUpRes = ();
async fn pre_up(self) -> Result<Self::PreUpRes, Error> {
Ok(())
}
fn semver(self) -> exver::Version {
V0_4_0_alpha_20.clone()
}
fn compat(self) -> &'static VersionRange {
&V0_3_0_COMPAT
}
#[instrument(skip_all)]
fn up(self, db: &mut Value, _: Self::PreUpRes) -> Result<Value, Error> {
// Remove onions and tor-related fields from server host
if let Some(host) = db
.get_mut("public")
.and_then(|p| p.get_mut("serverInfo"))
.and_then(|s| s.get_mut("network"))
.and_then(|n| n.get_mut("host"))
.and_then(|h| h.as_object_mut())
{
host.remove("onions");
}
// Remove onions from all package hosts
if let Some(packages) = db
.get_mut("public")
.and_then(|p| p.get_mut("packageData"))
.and_then(|p| p.as_object_mut())
{
for (_, package) in packages.iter_mut() {
if let Some(hosts) = package.get_mut("hosts").and_then(|h| h.as_object_mut()) {
for (_, host) in hosts.iter_mut() {
if let Some(host_obj) = host.as_object_mut() {
host_obj.remove("onions");
}
}
}
}
}
// Remove onion store from private keyStore
if let Some(key_store) = db
.get_mut("private")
.and_then(|p| p.get_mut("keyStore"))
.and_then(|k| k.as_object_mut())
{
key_store.remove("onion");
}
// Migrate server host: remove hostnameInfo, add addresses to bindings, clean net
migrate_host(
db.get_mut("public")
.and_then(|p| p.get_mut("serverInfo"))
.and_then(|s| s.get_mut("network"))
.and_then(|n| n.get_mut("host")),
);
// Migrate all package hosts
if let Some(packages) = db
.get_mut("public")
.and_then(|p| p.get_mut("packageData"))
.and_then(|p| p.as_object_mut())
{
for (_, package) in packages.iter_mut() {
if let Some(hosts) = package.get_mut("hosts").and_then(|h| h.as_object_mut()) {
for (_, host) in hosts.iter_mut() {
migrate_host(Some(host));
}
}
}
}
// Migrate availablePorts from IdPool format to BTreeMap<u16, bool>
// Rebuild from actual assigned ports in all bindings
migrate_available_ports(db);
Ok(Value::Null)
}
fn down(self, _db: &mut Value) -> Result<(), Error> {
Ok(())
}
}
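// Maps each port assigned in a host's bindings to an ssl flag:
// false for net.assignedPort, true for net.assignedSslPort.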
fn collect_ports_from_host(host: Option<&Value>, ports: &mut Value) {
let Some(bindings) = host
.and_then(|h| h.get("bindings"))
.and_then(|b| b.as_object())
else {
return;
};
for (_, binding) in bindings.iter() {
if let Some(net) = binding.get("net") {
if let Some(port) = net.get("assignedPort").and_then(|p| p.as_u64()) {
if let Some(obj) = ports.as_object_mut() {
obj.insert(port.to_string().into(), Value::from(false));
}
}
if let Some(port) = net.get("assignedSslPort").and_then(|p| p.as_u64()) {
if let Some(obj) = ports.as_object_mut() {
obj.insert(port.to_string().into(), Value::from(true));
}
}
}
}
}
fn migrate_available_ports(db: &mut Value) {
let mut new_ports: Value = serde_json::json!({}).into();
// Collect from server host
let server_host = db
.get("public")
.and_then(|p| p.get("serverInfo"))
.and_then(|s| s.get("network"))
.and_then(|n| n.get("host"))
.cloned();
collect_ports_from_host(server_host.as_ref(), &mut new_ports);
// Collect from all package hosts
if let Some(packages) = db
.get("public")
.and_then(|p| p.get("packageData"))
.and_then(|p| p.as_object())
{
for (_, package) in packages.iter() {
if let Some(hosts) = package.get("hosts").and_then(|h| h.as_object()) {
for (_, host) in hosts.iter() {
collect_ports_from_host(Some(host), &mut new_ports);
}
}
}
}
// Replace private.availablePorts
if let Some(private) = db.get_mut("private").and_then(|p| p.as_object_mut()) {
private.insert("availablePorts".into(), new_ports);
}
}
fn migrate_host(host: Option<&mut Value>) {
let Some(host) = host.and_then(|h| h.as_object_mut()) else {
return;
};
// Remove hostnameInfo from host
host.remove("hostnameInfo");
// Migrate privateDomains from array to object (BTreeSet -> BTreeMap<_, BTreeSet<GatewayId>>)
if let Some(private_domains) = host.get("privateDomains").and_then(|v| v.as_array()).cloned() {
let mut new_pd: Value = serde_json::json!({}).into();
for domain in private_domains {
if let Some(d) = domain.as_str() {
if let Some(obj) = new_pd.as_object_mut() {
obj.insert(d.into(), serde_json::json!([]).into());
}
}
}
host.insert("privateDomains".into(), new_pd);
}
// For each binding: add "addresses" field, remove gateway-level fields from "net"
if let Some(bindings) = host.get_mut("bindings").and_then(|b| b.as_object_mut()) {
for (_, binding) in bindings.iter_mut() {
if let Some(binding_obj) = binding.as_object_mut() {
// Add addresses if not present
if !binding_obj.contains_key("addresses") {
binding_obj.insert(
"addresses".into(),
serde_json::json!({
"enabled": [],
"disabled": [],
"available": []
})
.into(),
);
}
// Remove gateway-level privateDisabled/publicEnabled from net
if let Some(net) = binding_obj.get_mut("net").and_then(|n| n.as_object_mut()) {
net.remove("privateDisabled");
net.remove("publicEnabled");
}
}
}
}
}
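For reference, a rough before/after sketch of the shape change `migrate_host` performs on a single host value. The field names come from the migration above; the sample values and the use of `serde_json` (rather than the `imbl_value::Value` the migration actually operates on) are illustrative only:

```rust
use serde_json::json;

fn main() {
    // Before alpha.20: privateDomains is an array, and bindings carry
    // gateway-level flags inside "net" (sketch).
    let before = json!({
        "hostnameInfo": {},
        "privateDomains": ["example.local"],
        "bindings": {
            "443": {
                "net": {
                    "assignedSslPort": 5443,
                    "privateDisabled": false,
                    "publicEnabled": true
                }
            }
        }
    });

    // After migrate_host: hostnameInfo is dropped, privateDomains becomes a
    // map of domain -> gateway IDs, each binding gains an "addresses" object,
    // and the gateway-level flags are stripped from "net".
    let after = json!({
        "privateDomains": { "example.local": [] },
        "bindings": {
            "443": {
                "net": { "assignedSslPort": 5443 },
                "addresses": { "enabled": [], "disabled": [], "available": [] }
            }
        }
    });

    println!("{before:#}\n->\n{after:#}");
}
```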


@@ -23,7 +23,7 @@ if [ "${PROJECT}" = "startos" ]; then
else
INSTALL_TARGET="install-${PROJECT#start-}"
fi
make "${INSTALL_TARGET}" DESTDIR=dpkg-workdir/$BASENAME
make "${INSTALL_TARGET}" DESTDIR=dpkg-workdir/$BASENAME REMOTE=
if [ -f dpkg-workdir/$BASENAME/usr/lib/$PROJECT/depends ]; then
if [ -n "$DEPENDS" ]; then
