Compare commits


6 Commits

Author SHA1 Message Date
Start9
6ec2feb230 Fix docs URLs in start-tunnel installer output 2026-03-11 13:36:22 -06:00
Alex Inkin
be921b7865 chore: update packages (#3132)
* chore: update packages

* start tunnel messaging

* chore: standalone

* pbpaste instead

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2026-03-09 09:53:47 -06:00
Matt Hill
a4bae73592 rpc/v1 for polling 2026-03-06 11:25:03 -07:00
Matt Hill
8b89f016ad task fix and keyboard fix (#3130)
* task fix and keyboard fix

* fixes for build scripts

* passthrough feature

* feat: inline domain health checks and improve address UX

- addPublicDomain returns DNS query + port check results (AddPublicDomainRes)
  so frontend skips separate API calls after adding a domain
- addPrivateDomain returns check_dns result for the gateway
- Support multiple ports per domain in validation modal (deduplicated)
- Run port checks concurrently via futures::future::join_all
- Add note to add-domain dialog showing other interfaces on same host
- Add addXForwardedHeaders to knownProtocols in SDK Host.ts
- Add plugin filter kind, pluginId filter, matchesAny, and docs to
  getServiceInterface.ts
- Add PassthroughInfo type and passthroughs field to NetworkInfo
- Pluralize "port forwarding rules" in i18n dictionaries

* feat: add shared host note to private domain dialog with i18n

* fix: scope public domain to single binding and return single port check

Accept internalPort in AddPublicDomainParams to target a specific
binding. Disable the domain on all other bindings. Return a single
CheckPortRes instead of Vec. Revert multi-port UI to singular port
display from 0f8a66b35.

* better shared hostname approach and improved look and feel of addresses tables

* fix starttls

* preserve usb as top efi boot option

* fix race condition in wan ip check

* sdk beta.56

* various bug fixes, improve smtp

* multiple bugs, better outbound gateway UX

* remove non option from smtp for better package compat

* bump sdk

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2026-03-06 00:30:06 -07:00
Aiden McClelland
3320391fcc feat: support preferred external ports besides 443 (#3117)
* docs: update preferred external port design in TODO

* docs: add user-controlled public/private and port forward mapping to design

* docs: overhaul interfaces page design with view/manage split and per-address controls

* docs: move address enable/disable to overflow menu, add SSL indicator, defer UI placement decisions

* chore: remove tor from startos core

Tor is being moved from a built-in OS feature to a service. This removes
the Arti-based Tor client, onion address management, hidden service
creation, and all related code from the core backend, frontend, and SDK.

- Delete core/src/net/tor/ module (~2060 lines)
- Remove OnionAddress, TorSecretKey, TorController from all consumers
- Remove HostnameInfo::Onion and HostAddress::Onion variants
- Remove onion CRUD RPC endpoints and tor subcommand
- Remove tor key handling from account and backup/restore
- Remove ~12 tor-related Cargo dependencies (arti-client, torut, etc.)
- Remove tor UI components, API methods, mock data, and routes
- Remove OnionHostname and tor patterns/regexes from SDK
- Add v0_4_0_alpha_20 database migration to strip onion data
- Bump version to 0.4.0-alpha.20

* chore: flatten HostnameInfo from enum to struct

HostnameInfo only had one variant (Ip) after removing Tor. Flatten it
into a plain struct with fields gateway, public, hostname. Remove all
kind === 'ip' type guards and narrowing across SDK, frontend, and
container runtime. Update DB migration to strip the kind field.

* chore: format RPCSpec.md markdown table

* docs: update TODO.md with DerivedAddressInfo design, remove completed tor task

* feat: implement preferred port allocation and per-address enable/disable

- Add AvailablePorts::try_alloc() with SSL tracking (BTreeMap<u16, bool>)
- Add DerivedAddressInfo on BindInfo with private_disabled/public_enabled/possible sets
- Add Bindings wrapper with Map impl for patchdb indexed access
- Flatten HostAddress from single-variant enum to struct
- Replace set-gateway-enabled RPC with set-address-enabled
- Remove hostname_info from Host; computed addresses now in BindInfo.addresses.possible
- Compute possible addresses inline in NetServiceData::update()
- Update DB migration, SDK types, frontend, and container-runtime
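The `AvailablePorts::try_alloc()` bullet above can be sketched roughly as follows. This is an illustrative, std-only approximation of the `BTreeMap<u16, bool>` shape named in the commit message, not the actual StartOS implementation; the fallback scan range is an assumption:

```rust
use std::collections::BTreeMap;

/// Illustrative sketch: tracks allocated external ports and whether each
/// was allocated with SSL, matching the `BTreeMap<u16, bool>` shape above.
struct AvailablePorts {
    allocated: BTreeMap<u16, bool>,
}

impl AvailablePorts {
    fn new() -> Self {
        AvailablePorts { allocated: BTreeMap::new() }
    }

    /// Prefer the requested port; fall back to the first free high port.
    fn try_alloc(&mut self, preferred: u16, ssl: bool) -> Option<u16> {
        if !self.allocated.contains_key(&preferred) {
            self.allocated.insert(preferred, ssl);
            return Some(preferred);
        }
        let fallback = (1024..=u16::MAX).find(|p| !self.allocated.contains_key(p))?;
        self.allocated.insert(fallback, ssl);
        Some(fallback)
    }
}

fn main() {
    let mut ports = AvailablePorts::new();
    assert_eq!(ports.try_alloc(443, true), Some(443));
    // 443 is now taken, so the next caller gets a fallback port.
    let fallback = ports.try_alloc(443, true).unwrap();
    assert_ne!(fallback, 443);
    println!("preferred 443 taken; fell back to {fallback}");
}
```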

* feat: replace InterfaceFilter with ForwardRequirements, add WildcardListener, complete alpha.20 bump

- Replace DynInterfaceFilter with ForwardRequirements for per-IP forward
  precision with source-subnet iptables filtering for private forwards
- Add WildcardListener (binds [::]:port) to replace the per-gateway
  NetworkInterfaceListener/SelfContainedNetworkInterfaceListener/
  UpgradableListener infrastructure
- Update forward-port script with src_subnet and excluded_src env vars
- Remove unused filter types and listener infrastructure from gateway.rs
- Add availablePorts migration (IdPool -> BTreeMap<u16, bool>) to alpha.20
- Complete version bump to 0.4.0-alpha.20 in SDK and web

* outbound gateway support (#3120)

* Multiple (#3111)

* fix alerts i18n, fix status display, better, remove usb media, hide shutdown for install complete

* trigger change detection for localize pipe and round out implementing localize pipe for consistency even though not needed

* Fix PackageInfoShort to handle LocaleString on releaseNotes (#3112)

* Fix PackageInfoShort to handle LocaleString on releaseNotes

* fix: filter by target_version in get_matching_models and pass otherVersions from install

* chore: add exver documentation for ai agents

* frontend plus some be types

---------

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* feat: replace SourceFilter with IpNet, add policy routing, remove MASQUERADE

* build ts types and fix i18n

* fix license display in marketplace

* wip refactor

* chore: update ts bindings for preferred port design

* feat: refactor NetService to watch DB and reconcile network state

- NetService sync task now uses PatchDB DbWatch instead of being called
  directly after DB mutations
- Read gateways from DB instead of network interface context when
  updating host addresses
- gateway sync updates all host addresses in the DB
- Add Watch<u64> channel for callers to wait on sync completion
- Fix ts-rs codegen bug with #[ts(skip)] on flattened Plugin field
- Update SDK getServiceInterface.ts for new HostnameInfo shape
- Remove unnecessary HTTPS redirect in static_server.rs
- Fix tunnel/api.rs to filter for WAN IPv4 address

* re-arrange (#3123)

* new service interface page

* feat: add mdns hostname metadata variant and fix vhost routing

- Add HostnameMetadata::Mdns variant to distinguish mDNS from private domains
- Mark mDNS addresses as private (public: false) since mDNS is local-only
- Fall back to null SNI entry when hostname not found in vhost mapping
- Simplify public detection in ProxyTarget filter
- Pass hostname to update_addresses for mDNS domain name generation

* looking good

* feat: add port_forwards field to Host for tracking gateway forwarding rules

* update bindings for API types, add ARCHITECTURE (#3124)

* update binding for API types, add ARCHITECTURE

* translations

* fix: add CONNMARK restore-mark to mangle OUTPUT chain

The CONNMARK --restore-mark rule was only in PREROUTING, which handles
forwarded packets. Locally-bound listeners (e.g. vhost) generate replies
through the OUTPUT chain, where the fwmark was never restored. This
caused response packets to route via the default table instead of back
through the originating interface.

* chore: reserialize db on equal version, update bindings and docs

- Run de/ser roundtrip in pre_init even when db version matches, ensuring
  all #[serde(default)] fields are populated before any typed access
- Add patchdb.md documentation for TypedDbWatch patterns
- Update TS bindings for CheckPortParams, CheckPortRes, ifconfigUrl
- Update CLAUDE.md docs with patchdb and component-level references

* fix: include public gateways for IP-based addresses in vhost targets

The server hostname vhost construction only collected private IPs,
always setting public to empty. Public IP addresses (Ipv4/Ipv6 metadata
with public=true) were never added to the vhost target's public gateway
set, causing the vhost filter to reject public traffic for IP-based
addresses.

* fix: add TLS handshake timeout and fix accept loop deadlock

Two issues in TlsListener::poll_accept:

1. No timeout on TLS handshakes: LazyConfigAcceptor waits indefinitely
   for ClientHello. Attackers that complete TCP handshake but never send
   TLS data create zombie futures in `in_progress` that never complete.
   Fix: wrap the entire handshake in tokio::time::timeout(15s).

2. Missing waker on new-connection pending path: when a TCP connection
   is accepted and the TLS handshake is pending, poll_accept returned
   Pending without calling wake_by_ref(). Since the TcpListener returned
   Ready (not Pending), no waker was registered for it. With edge-
   triggered epoll and no other wakeup source, the task sleeps forever
   and remaining connections in the kernel accept queue are never
   drained. Fix: add cx.waker().wake_by_ref() so the task immediately
   re-polls and continues draining the accept queue.

* fix: switch BackgroundJobRunner from Vec to FuturesUnordered

BackgroundJobRunner stored active jobs in a Vec<BoxFuture> and polled
ALL of them on every wakeup — O(n) per poll. Since this runs in the
same tokio::select! as the WebServer accept loop, polling overhead from
active connections directly delayed acceptance of new connections.

FuturesUnordered only polls woken futures — O(woken) instead of O(n).

* chore: update bindings and use typed params for outbound gateway API

* feat: per-service and default outbound gateway routing

Add set-outbound-gateway RPC for packages and set-default-outbound RPC
for the server, with policy routing enforcement via ip rules. Fix
connmark restore to skip packets with existing fwmarks, add bridge
subnet routes to per-interface tables, and fix squashfs path in
update-image-local.sh.

* refactor: manifest wraps PackageMetadata, move dependency_metadata to PackageVersionInfo

Manifest now embeds PackageMetadata via #[serde(flatten)] instead of
duplicating ~14 fields. icon and dependency_metadata moved from
PackageMetadata to PackageVersionInfo since they are registry-enrichment
data loaded from the S9PK archive. merge_with now returns errors on
metadata/icon/dependency_metadata mismatches instead of silently ignoring
them.
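The merge behavior described above (fill missing fields, error on conflicts rather than silently ignoring them) can be sketched with a hedged, std-only example; the field names here are hypothetical stand-ins, not the real `merge_with` signature:

```rust
/// Illustrative: registry enrichment fills missing fields, but
/// conflicting values become errors instead of being silently ignored.
#[derive(Clone, Debug, PartialEq)]
struct VersionInfo {
    icon: Option<String>,
    dependency_metadata: Option<String>,
}

fn merge_field(
    ours: &mut Option<String>,
    other: &Option<String>,
    name: &str,
) -> Result<(), String> {
    match (ours.as_ref(), other) {
        // Fill a missing field from the other side.
        (None, Some(v)) => {
            *ours = Some(v.clone());
            Ok(())
        }
        // Both present and different: surface the mismatch as an error.
        (Some(a), Some(b)) if a != b => Err(format!("{name} mismatch: {a} != {b}")),
        _ => Ok(()),
    }
}

fn merge_with(ours: &mut VersionInfo, other: &VersionInfo) -> Result<(), String> {
    merge_field(&mut ours.icon, &other.icon, "icon")?;
    merge_field(&mut ours.dependency_metadata, &other.dependency_metadata, "dependency_metadata")
}

fn main() {
    let mut base = VersionInfo { icon: None, dependency_metadata: Some("x".into()) };
    let enriched = VersionInfo { icon: Some("icon.png".into()), dependency_metadata: Some("x".into()) };
    assert!(merge_with(&mut base, &enriched).is_ok());
    assert_eq!(base.icon.as_deref(), Some("icon.png"));

    let conflicting = VersionInfo { icon: Some("other.png".into()), dependency_metadata: None };
    assert!(merge_with(&mut base, &conflicting).is_err());
    println!("merge ok, conflict rejected");
}
```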

* fix: replace .status() with .invoke() for iptables/ip commands

Using .status() leaks stderr directly to system logs, causing noisy
iptables error messages. Switch all networking CLI invocations to use
.invoke() which captures stderr properly. For check-then-act patterns
(iptables -C), use .invoke().await.is_err() instead of
.status().await.map_or(false, |s| s.success()).

* feat: add check-dns gateway endpoint and fix per-interface routing tables

Add a `check-dns` RPC endpoint that verifies whether a gateway's DNS
is properly configured for private domain resolution. Uses a three-tier
check: direct match (DNS == server IP), TXT challenge probe (DNS on
LAN), or failure (DNS off-subnet).

Fix per-interface routing tables to clone all non-default routes from
the main table instead of only the interface's own subnets. This
preserves LAN reachability when the priority-75 catch-all overrides
default routing. Filter out status-only flags (linkdown, dead) that
are invalid for `ip route add`.
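The three-tier decision described above can be illustrated as a pure function. This is a sketch of the decision logic only, under assumed inputs (the DNS answer, this host's address, and the outcome of the TXT challenge probe), not the real `check-dns` endpoint:

```rust
/// Outcome tiers for the gateway DNS check described in the commit message.
#[derive(Debug, PartialEq)]
enum DnsCheckResult {
    DirectMatch,   // DNS answer == server IP
    LanResolver,   // answer differs, but the TXT challenge round-tripped on LAN
    Misconfigured, // DNS off-subnet or not pointing at us
}

fn check_dns(resolved: Option<&str>, server_ip: &str, txt_probe_ok: bool) -> DnsCheckResult {
    match resolved {
        Some(ip) if ip == server_ip => DnsCheckResult::DirectMatch,
        _ if txt_probe_ok => DnsCheckResult::LanResolver,
        _ => DnsCheckResult::Misconfigured,
    }
}

fn main() {
    assert_eq!(check_dns(Some("192.168.1.9"), "192.168.1.9", false), DnsCheckResult::DirectMatch);
    assert_eq!(check_dns(Some("192.168.1.1"), "192.168.1.9", true), DnsCheckResult::LanResolver);
    assert_eq!(check_dns(None, "192.168.1.9", false), DnsCheckResult::Misconfigured);
    println!("all three tiers behave as described");
}
```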

* refactor: rename manifest metadata fields and improve error display

Rename wrapperRepo→packageRepo, marketingSite→marketingUrl,
docsUrl→docsUrls (array), remove supportSite. Add display_src/display_dbg
helpers to Error. Fix DepInfo description type to LocaleString. Update
web UI, SDK bindings, tests, and fixtures to match. Clean up cli_attach
error handling and remove dead commented code.

* chore: bump sdk version to 0.4.0-beta.49

* chore: add createTask decoupling TODO

* chore: add TODO to clear service error state on install/update

* round out dns check, dns server check, port forward check, and gateway port forwards

* chore: add TODOs for URL plugins, NAT hairpinning, and start-tunnel OTA updates

* version instead of os query param

* interface row clickable again, but now with a chevron!

* feat: implement URL plugins with table/row actions and prefill support

- Add URL plugin effects (register, export_url, clear_urls) in core
- Add PluginHostnameInfo, HostnameMetadata::Plugin, and plugin registration types
- Implement plugin URL table in web UI with tableAction button and rowAction overflow menus
- Thread urlPluginMetadata (packageId, hostId, interfaceId, internalPort) as prefill to actions
- Add prefill support to PackageActionData so metadata passes through form dialogs
- Add i18n translations for plugin error messages
- Clean up plugin URLs on package uninstall

* feat: split row_actions into remove_action and overflow_actions for URL plugins

* touch up URL plugins table

* show table even when no addresses

* feat: NAT hairpinning, DNS static servers, clear service error on install

- Add POSTROUTING MASQUERADE rules for container and host hairpin NAT
- Allow bridge subnet containers to reach private forwards via LAN IPs
- Pass bridge_subnet env var from forward.rs to forward-port script
- Use DB-configured static DNS servers in resolver with DB watcher
- Fall back to resolv.conf servers when no static servers configured
- Clear service error state when install/update completes successfully
- Remove completed TODO items

* feat: builder-style InputSpec API, prefill plumbing, and port forward fix

- Add addKey() and add() builder methods to InputSpec with InputSpecTools
- Move OuterType to last generic param on Value, List, and all dynamic methods
- Plumb prefill through getActionInput end-to-end (core → container-runtime → SDK)
- Filter port_forwards to enabled addresses only
- Bump SDK to 0.4.0-beta.50

* fix: propagate host locale into LXC containers and write locale.conf

* chore: remove completed URL plugins TODO

* feat: OTA updates for start-tunnel via apt repository (untested)

- Add apt repo publish script (build/apt/publish-deb.sh) for S3-hosted repo
- Add apt source config and GPG key placeholder (apt/)
- Add tunnel.update.check and tunnel.update.apply RPC endpoints
- Wire up update API in tunnel frontend (api service + mock)
- Uses systemd-run --scope to survive service restart during update

* fix: publish script dpkg-name, s3cfg fallback, and --reinstall for apply

* chore: replace OTA updates TODO with UI TODO for MattDHill

* feat: add getOutboundGateway effect and simplify VersionGraph init/uninit

Add getOutboundGateway effect across core, container-runtime, and SDK
to let services query their effective outbound gateway with callback
support. Remove preInstall/uninstall hooks from VersionGraph as they
are no longer needed.

* frontend start-tunnel updates

* chore: remove completed TODO

* feat: tor hidden service key migration

* chore: migrate from ts-matches to zod across all TypeScript packages

* feat(core): allow setting server hostname

* send prefill for tasks and hide operations to hidden fields

* fix(core): preserve plugin URLs across binding updates

BindInfo::update was replacing addresses with a new DerivedAddressInfo
that cleared the available set, wiping plugin-exported URLs whenever
bind() was called. Also simplify update_addresses plugin preservation
to use retain in place rather than collecting into a separate set.

* minor cleanup from patch-db audit

* clean up prefill flow

* frontend support for setting and changing hostname

* feat(core): refactor hostname to ServerHostnameInfo with name/hostname pair

- Rename Hostname to ServerHostnameInfo, add name + hostname fields
- Add set_hostname_rpc for changing hostname at runtime
- Migrate alpha_20: generate serverInfo.name from hostname, delete ui.name
- Extract gateway.rs helpers to fix rustfmt nesting depth issue
- Add i18n key for hostname validation error
- Update SDK bindings

* add comments to everything potentially consumer facing (#3127)

* add comments to everything potentially consumer facing

* rework smtp

---------

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* implement server name

* setup changes

* clean up copy around addresses table

* feat: add zod-deep-partial, partialValidator on InputSpec, and z.deepPartial re-export

* fix: header color in zoom (#3128)

* fix: merge version ranges when adding existing package signer (#3125)

* fix: merge version ranges when adding existing package signer

   Previously, add_package_signer unconditionally inserted the new
   version range, overwriting any existing authorization for that signer.
   Now it OR-merges the new range with the existing one, so running
   signer add multiple times accumulates permissions rather than
   replacing them.
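The OR-merge on repeated `signer add` can be sketched as follows, with the version range simplified to a set of allowed major versions; the real `VersionRange` type is richer, and only the merge-on-insert behavior is the point:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Illustrative: adding a signer that already exists merges the new range
/// into the existing one instead of overwriting it.
fn add_package_signer(
    signers: &mut BTreeMap<String, BTreeSet<u32>>,
    signer: &str,
    range: BTreeSet<u32>,
) {
    signers
        .entry(signer.to_string())
        // OR-merge with the existing range rather than replacing it
        .and_modify(|existing| existing.extend(range.iter().copied()))
        .or_insert(range);
}

fn main() {
    let mut signers = BTreeMap::new();
    add_package_signer(&mut signers, "alice", BTreeSet::from([1]));
    add_package_signer(&mut signers, "alice", BTreeSet::from([2]));
    // Running `signer add` twice accumulates permissions.
    assert_eq!(signers["alice"], BTreeSet::from([1, 2]));
    println!("alice may sign versions {:?}", signers["alice"]);
}
```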

* add --merge flag to registry package signer add

  Default behavior remains overwrite. When --merge is passed, the new
  version range is OR-merged with the existing one, allowing admins to
  accumulate permissions incrementally.

* add missing attribute to TS type

* make merge optional

* upsert instead of insert

* VersionRange::None on upsert

* fix: header color in zoom

---------

Co-authored-by: Dominion5254 <musashidisciple@proton.me>

* update snake and add about this server to system general

* chore: bump sdk to beta.53, wrap z.deepPartial with passthrough

* reset instead of reset defaults

* action failure show dialog

* chore: bump sdk to beta.54, add device-info RPC, improve SDK abort handling and InputSpec filtering

- Bump SDK version to 0.4.0-beta.54
- Add `server.device-info` RPC endpoint and `s9pk select` CLI command
- Extract `HardwareRequirements::is_compatible()` method, reuse in registry filtering
- Add `AbortedError` class with `muteUnhandled` flag, replace generic abort errors
- Handle unhandled promise rejections in container-runtime with mute support
- Improve `InputSpec.filter()` with `keepByDefault` param and boolean filter values
- Accept readonly tuples in `CommandType` and `splitCommand`
- Remove `sync_host` calls from host API handlers (binding/address changes)
- Filter mDNS hostnames by secure gateway availability
- Derive mDNS enabled state from LAN IPs in web UI
- Add "Open UI" action to address table, disable mDNS toggle
- Hide debug details in service error component
- Update rpc-toolkit docs for no-params handlers

* fix: add --no-nvram to efi grub-install to preserve built-in boot order

* update snake

* disable actions when in error state

* chore: split out nvidia variant

* misc bugfixes

* create manage-release script (untested)

* fix: preserve z namespace types for sdk consumers

* sdk version bump

* new checkPort types

* multiple bugs and better port forward ux

* fix link

* chore: todos and formatting

* fix build

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Dominion5254 <musashidisciple@proton.me>
2026-03-04 04:37:31 -07:00
Dominion5254
26a68afdef fix: merge version ranges when adding existing package signer (#3125)
* fix: merge version ranges when adding existing package signer

   Previously, add_package_signer unconditionally inserted the new
   version range, overwriting any existing authorization for that signer.
   Now it OR-merges the new range with the existing one, so running
   signer add multiple times accumulates permissions rather than
   replacing them.

* add --merge flag to registry package signer add

  Default behavior remains overwrite. When --merge is passed, the new
  version range is OR-merged with the existing one, allowing admins to
  accumulate permissions incrementally.

* add missing attribute to TS type

* make merge optional

* upsert instead of insert

* VersionRange::None on upsert
2026-02-18 13:21:33 -07:00
545 changed files with 21655 additions and 16374 deletions


@@ -1,6 +1 @@
-{
-  "attribution": {
-    "commit": "",
-    "pr": ""
-  }
-}
+{}


@@ -25,10 +25,13 @@ on:
           - ALL
           - x86_64
           - x86_64-nonfree
+          - x86_64-nvidia
           - aarch64
           - aarch64-nonfree
+          - aarch64-nvidia
           # - raspberrypi
           - riscv64
+          - riscv64-nonfree
       deploy:
         type: choice
         description: Deploy
@@ -65,10 +68,13 @@ jobs:
         fromJson('{
           "x86_64": ["x86_64"],
           "x86_64-nonfree": ["x86_64"],
+          "x86_64-nvidia": ["x86_64"],
           "aarch64": ["aarch64"],
           "aarch64-nonfree": ["aarch64"],
+          "aarch64-nvidia": ["aarch64"],
           "raspberrypi": ["aarch64"],
           "riscv64": ["riscv64"],
+          "riscv64-nonfree": ["riscv64"],
           "ALL": ["x86_64", "aarch64", "riscv64"]
         }')[github.event.inputs.platform || 'ALL']
       }}
@@ -125,7 +131,7 @@ jobs:
         format(
           '[
             ["{0}"],
-            ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64"]
+            ["x86_64", "x86_64-nonfree", "x86_64-nvidia", "aarch64", "aarch64-nonfree", "aarch64-nvidia", "riscv64", "riscv64-nonfree"]
           ]',
           github.event.inputs.platform || 'ALL'
         )
@@ -139,18 +145,24 @@ jobs:
         fromJson('{
           "x86_64": "ubuntu-latest",
           "x86_64-nonfree": "ubuntu-latest",
+          "x86_64-nvidia": "ubuntu-latest",
           "aarch64": "ubuntu-24.04-arm",
           "aarch64-nonfree": "ubuntu-24.04-arm",
+          "aarch64-nvidia": "ubuntu-24.04-arm",
           "raspberrypi": "ubuntu-24.04-arm",
           "riscv64": "ubuntu-24.04-arm",
+          "riscv64-nonfree": "ubuntu-24.04-arm",
         }')[matrix.platform],
         fromJson('{
           "x86_64": "buildjet-8vcpu-ubuntu-2204",
           "x86_64-nonfree": "buildjet-8vcpu-ubuntu-2204",
+          "x86_64-nvidia": "buildjet-8vcpu-ubuntu-2204",
           "aarch64": "buildjet-8vcpu-ubuntu-2204-arm",
           "aarch64-nonfree": "buildjet-8vcpu-ubuntu-2204-arm",
+          "aarch64-nvidia": "buildjet-8vcpu-ubuntu-2204-arm",
           "raspberrypi": "buildjet-8vcpu-ubuntu-2204-arm",
           "riscv64": "buildjet-8vcpu-ubuntu-2204",
+          "riscv64-nonfree": "buildjet-8vcpu-ubuntu-2204",
         }')[matrix.platform]
       )
     )[github.event.inputs.runner == 'fast']
@@ -161,10 +173,13 @@ jobs:
         fromJson('{
           "x86_64": "x86_64",
           "x86_64-nonfree": "x86_64",
+          "x86_64-nvidia": "x86_64",
           "aarch64": "aarch64",
           "aarch64-nonfree": "aarch64",
+          "aarch64-nvidia": "aarch64",
           "raspberrypi": "aarch64",
           "riscv64": "riscv64",
+          "riscv64-nonfree": "riscv64",
         }')[matrix.platform]
       }}
     steps:

ARCHITECTURE.md (new file, 101 lines)

@@ -0,0 +1,101 @@
# Architecture
StartOS is an open-source Linux distribution for running personal servers. It manages discovery, installation, network configuration, backups, and health monitoring of self-hosted services.
## Tech Stack
- Backend: Rust (async/Tokio, Axum web framework)
- Frontend: Angular 20 + TypeScript + TaigaUI
- Container runtime: Node.js/TypeScript with LXC
- Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
- API: JSON-RPC via rpc-toolkit (see `core/rpc-toolkit.md`)
- Auth: Password + session cookie, public/private key signatures, local authcookie (see `core/src/middleware/auth/`)
## Project Structure
```bash
/
├── assets/ # Screenshots for README
├── build/ # Auxiliary files and scripts for deployed images
├── container-runtime/ # Node.js program managing package containers
├── core/ # Rust backend: API, daemon (startd), CLI (start-cli)
├── debian/ # Debian package maintainer scripts
├── image-recipe/ # Scripts for building StartOS images
├── patch-db/ # (submodule) Diff-based data store for frontend sync
├── sdk/ # TypeScript SDK for building StartOS packages
└── web/ # Web UIs (Angular)
```
## Components
- **`core/`** — Rust backend daemon. Produces a single binary `startbox` that is symlinked as `startd` (main daemon), `start-cli` (CLI), `start-container` (runs inside LXC containers), `registrybox` (package registry), and `tunnelbox` (VPN/tunnel). Handles all backend logic: RPC API, service lifecycle, networking (DNS, ACME, WiFi, Tor, WireGuard), backups, and database state management. See [core/ARCHITECTURE.md](core/ARCHITECTURE.md).
- **`web/`** — Angular 20 + TypeScript workspace using Taiga UI. Contains three applications (admin UI, setup wizard, VPN management) and two shared libraries (common components/services, marketplace). Communicates with the backend exclusively via JSON-RPC. See [web/ARCHITECTURE.md](web/ARCHITECTURE.md).
- **`container-runtime/`** — Node.js runtime that runs inside each service's LXC container. Loads the service's JavaScript from its S9PK package and manages subcontainers. Communicates with the host daemon via JSON-RPC over Unix socket. See [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md).
- **`sdk/`** — TypeScript SDK for packaging services for StartOS (`@start9labs/start-sdk`). Split into `base/` (core types, ABI definitions, effects interface, consumed by web as `@start9labs/start-sdk-base`) and `package/` (full SDK for service developers, consumed by container-runtime as `@start9labs/start-sdk`).
- **`patch-db/`** — Git submodule providing diff-based state synchronization. Uses CBOR encoding. Backend mutations produce diffs that are pushed to the frontend via WebSocket, enabling reactive UI updates without polling. See [patch-db repo](https://github.com/Start9Labs/patch-db).
## Build Pipeline
Components have a strict dependency chain. Changes flow in one direction:
```
Rust (core/)
→ cargo test exports ts-rs types to core/bindings/
→ rsync copies to sdk/base/lib/osBindings/
→ SDK build produces baseDist/ and dist/
→ web/ consumes baseDist/ (via @start9labs/start-sdk-base)
→ container-runtime/ consumes dist/ (via @start9labs/start-sdk)
```
Key make targets along this chain:
| Step | Command | What it does |
|---|---|---|
| 1 | `cargo check -p start-os` | Verify Rust compiles |
| 2 | `make ts-bindings` | Export ts-rs types → rsync to SDK |
| 3 | `cd sdk && make baseDist dist` | Build SDK packages |
| 4 | `cd web && npm run check` | Type-check Angular projects |
| 5 | `cd container-runtime && npm run check` | Type-check runtime |
**Important**: Editing `sdk/base/lib/osBindings/*.ts` alone is NOT sufficient — you must rebuild the SDK bundle (step 3) before web/container-runtime can see the changes.
## Cross-Layer Verification
When making changes across multiple layers (Rust, SDK, web, container-runtime), verify in this order:
1. **Rust**: `cargo check -p start-os` — verifies core compiles
2. **TS bindings**: `make ts-bindings` — regenerates TypeScript types from Rust `#[ts(export)]` structs
- Runs `./core/build/build-ts.sh` to export ts-rs types to `core/bindings/`
- Syncs `core/bindings/` → `sdk/base/lib/osBindings/` via rsync
- If you manually edit files in `sdk/base/lib/osBindings/`, you must still rebuild the SDK (step 3)
3. **SDK bundle**: `cd sdk && make baseDist dist` — compiles SDK source into packages
- `baseDist/` is consumed by `/web` (via `@start9labs/start-sdk-base`)
- `dist/` is consumed by `/container-runtime` (via `@start9labs/start-sdk`)
- Web and container-runtime reference the **built** SDK, not source files
4. **Web type check**: `cd web && npm run check` — type-checks all Angular projects
5. **Container runtime type check**: `cd container-runtime && npm run check` — type-checks the runtime
## Data Flow: Backend to Frontend
StartOS uses Patch-DB for reactive state synchronization:
1. The backend mutates state via `db.mutate()`, producing CBOR diffs
2. Diffs are pushed to the frontend over a persistent WebSocket connection
3. The frontend applies diffs to its local state copy and notifies observers
4. Components watch specific database paths via `PatchDB.watch$()`, receiving updates reactively
This means the UI is always eventually consistent with the backend — after any mutating API call, the frontend waits for the corresponding PatchDB diff before resolving, so the UI reflects the result immediately.
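The four steps above can be illustrated with a deliberately simplified, std-only sketch: real patch-db diffs are CBOR-encoded structural patches pushed over WebSocket, whereas this flat path-to-value model only shows the apply-diffs-to-a-local-copy idea:

```rust
use std::collections::BTreeMap;

/// Minimal stand-in for a patch-db diff: set or remove a value at a path.
#[derive(Debug)]
enum Diff {
    Set(String, String),
    Remove(String),
}

/// The frontend applies each incoming diff to its local state copy.
fn apply(state: &mut BTreeMap<String, String>, diffs: &[Diff]) {
    for diff in diffs {
        match diff {
            Diff::Set(path, value) => {
                state.insert(path.clone(), value.clone());
            }
            Diff::Remove(path) => {
                state.remove(path);
            }
        }
    }
}

fn main() {
    let mut frontend = BTreeMap::new();
    apply(&mut frontend, &[Diff::Set("server-info/hostname".into(), "start9".into())]);
    assert_eq!(frontend["server-info/hostname"], "start9");
    apply(&mut frontend, &[Diff::Remove("server-info/hostname".into())]);
    assert!(frontend.is_empty());
    println!("local state converged after applying diffs");
}
```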
## Further Reading
- [core/ARCHITECTURE.md](core/ARCHITECTURE.md) — Rust backend architecture
- [web/ARCHITECTURE.md](web/ARCHITECTURE.md) — Angular frontend architecture
- [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md) — Container runtime details
- [core/rpc-toolkit.md](core/rpc-toolkit.md) — JSON-RPC handler patterns
- [core/s9pk-structure.md](core/s9pk-structure.md) — S9PK package format
- [docs/exver.md](docs/exver.md) — Extended versioning format
- [docs/VERSION_BUMP.md](docs/VERSION_BUMP.md) — Version bumping guide


@@ -2,60 +2,35 @@
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview ## Architecture
StartOS is an open-source Linux distribution for running personal servers. It manages discovery, installation, network configuration, backups, and health monitoring of self-hosted services. See [ARCHITECTURE.md](ARCHITECTURE.md) for the full system architecture, component map, build pipeline, and cross-layer verification order.
**Tech Stack:** Each major component has its own `CLAUDE.md` with detailed guidance: `core/`, `web/`, `container-runtime/`, `sdk/`.
- Backend: Rust (async/Tokio, Axum web framework)
- Frontend: Angular 20 + TypeScript + TaigaUI
- Container runtime: Node.js/TypeScript with LXC
- Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
- API: JSON-RPC via rpc-toolkit (see `core/rpc-toolkit.md`)
- Auth: Password + session cookie, public/private key signatures, local authcookie (see `core/src/middleware/auth/`)
## Build & Development ## Build & Development
See [CONTRIBUTING.md](CONTRIBUTING.md) for: See [CONTRIBUTING.md](CONTRIBUTING.md) for:
- Environment setup and requirements - Environment setup and requirements
- Build commands and make targets - Build commands and make targets
- Testing and formatting commands - Testing and formatting commands
- Environment variables - Environment variables
**Quick reference:** **Quick reference:**
```bash ```bash
. ./devmode.sh # Enable dev mode . ./devmode.sh # Enable dev mode
make update-startbox REMOTE=start9@<ip> # Fastest iteration (binary + UI) make update-startbox REMOTE=start9@<ip> # Fastest iteration (binary + UI)
make test-core # Run Rust tests make test-core # Run Rust tests
``` ```
## Operating Rules
- Always verify cross-layer changes using the order described in [ARCHITECTURE.md](ARCHITECTURE.md#cross-layer-verification)
- Check component-level CLAUDE.md files for component-specific conventions. ALWAYS read one before operating on that component.
- Follow existing patterns before inventing new ones
- Always use `make` recipes when they exist for testing builds rather than manually invoking build commands
### Verifying code changes
When making changes across multiple layers (Rust, SDK, web, container-runtime), verify in this order:
1. **Rust**: `cargo check -p start-os` — verifies core compiles
2. **TS bindings**: `make ts-bindings` — regenerates TypeScript types from Rust `#[ts(export)]` structs
- Runs `./core/build/build-ts.sh` to export ts-rs types to `core/bindings/`
- Syncs `core/bindings/` → `sdk/base/lib/osBindings/` via rsync
- If you manually edit files in `sdk/base/lib/osBindings/`, you must still rebuild the SDK (step 3)
3. **SDK bundle**: `cd sdk && make baseDist dist` — compiles SDK source into packages
- `baseDist/` is consumed by `/web` (via `@start9labs/start-sdk-base`)
- `dist/` is consumed by `/container-runtime` (via `@start9labs/start-sdk`)
- Web and container-runtime reference the **built** SDK, not source files
4. **Web type check**: `cd web && npm run check` — type-checks all Angular projects
5. **Container runtime type check**: `cd container-runtime && npm run check` — type-checks the runtime
**Important**: Editing `sdk/base/lib/osBindings/*.ts` alone is NOT sufficient — you must rebuild the SDK bundle (step 3) before web/container-runtime can see the changes.
## Architecture
Each major component has its own `CLAUDE.md` with detailed guidance.
- **`core/`** — Rust backend daemon (startbox, start-cli, start-container, registrybox, tunnelbox)
- **`web/`** — Angular frontend workspace (admin UI, setup wizard, marketplace, shared library)
- **`container-runtime/`** — Node.js runtime managing service containers via JSON-RPC
- **`sdk/`** — TypeScript SDK for packaging services (`@start9labs/start-sdk`)
- **`patch-db/`** — Git submodule providing diff-based state synchronization
## Supplementary Documentation
@@ -75,6 +50,7 @@ On startup:
1. **Check for `docs/USER.md`** - If it doesn't exist, prompt the user for their name/identifier and create it. This file is gitignored since it varies per developer.
2. **Check `docs/TODO.md` for relevant tasks** - Show TODOs that either:
   - Have no `@username` tag (relevant to everyone)
   - Are tagged with the current user's identifier


@@ -6,27 +6,7 @@ This guide is for contributing to the StartOS. If you are interested in packagin
- [Matrix](https://matrix.to/#/#dev-startos:matrix.start9labs.com)
## Project Structure
For project structure and system architecture, see [ARCHITECTURE.md](ARCHITECTURE.md).
```bash
/
├── assets/ # Screenshots for README
├── build/ # Auxiliary files and scripts for deployed images
├── container-runtime/ # Node.js program managing package containers
├── core/ # Rust backend: API, daemon (startd), CLI (start-cli)
├── debian/ # Debian package maintainer scripts
├── image-recipe/ # Scripts for building StartOS images
├── patch-db/ # (submodule) Diff-based data store for frontend sync
├── sdk/ # TypeScript SDK for building StartOS packages
└── web/ # Web UIs (Angular)
```
See component READMEs for details:
- [`core`](core/README.md)
- [`web`](web/README.md)
- [`build`](build/README.md)
- [`patch-db`](https://github.com/Start9Labs/patch-db)
## Environment Setup


@@ -7,7 +7,7 @@ GIT_HASH_FILE := $(shell ./build/env/check-git-hash.sh)
VERSION_FILE := $(shell ./build/env/check-version.sh)
BASENAME := $(shell PROJECT=startos ./build/env/basename.sh)
PLATFORM := $(shell if [ -f $(PLATFORM_FILE) ]; then cat $(PLATFORM_FILE); else echo unknown; fi)
-ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
+ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; elif [ "$(PLATFORM)" = "rockchip64" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g; s/-nvidia$$//g'; fi)
RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi)
REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./build/env/basename.sh)
TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./build/env/basename.sh)
@@ -139,6 +139,11 @@ install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
$(call mkdir,$(DESTDIR)/etc/apt/sources.list.d)
$(call cp,apt/start9.list,$(DESTDIR)/etc/apt/sources.list.d/start9.list)
$(call mkdir,$(DESTDIR)/usr/share/keyrings)
$(call cp,apt/start9.gpg,$(DESTDIR)/usr/share/keyrings/start9.gpg)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-tunnelbox.sh
@@ -278,7 +283,7 @@ core/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
rm -rf core/bindings
./core/build/build-ts.sh
ls core/bindings/*.ts | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/bindings/index.ts
-npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/bindings/*.ts
+npm --prefix sdk/base exec -- prettier --config=./sdk/base/package.json -w './core/bindings/**/*.ts'
touch core/bindings/index.ts
sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts


@@ -52,7 +52,7 @@ The easiest path. [Buy a server](https://store.start9.com) from Start9 and plug
### Build your own
-Install StartOS on your own hardware. Follow one of the [DIY guides](https://start9.com/latest/diy). Reasons to go this route:
+Follow the [install guide](https://docs.start9.com/start-os/installing.html) to install StartOS on your own hardware. Reasons to go this route:
1. You already have compatible hardware
2. You want to save on shipping costs

TODO.md

@@ -1,261 +0,0 @@
# AI Agent TODOs
Pending tasks for AI agents. Remove items when completed.
## Unreviewed CLAUDE.md Sections
- [ ] Architecture - Web (`/web`) - @MattDHill
## Features
- [ ] Support preferred external ports besides 443 - @dr-bonez
**Problem**: Currently, port 443 is the only preferred external port that is actually honored. When a
service requests `preferred_external_port: 8443` (or any non-443 value) for SSL, the system ignores
the preference and assigns a dynamic-range port (49152-65535). The `preferred_external_port` is only
used as a label for Tor mappings and as a trigger for the port-443 special case in `update()`.
**Goal**: Honor `preferred_external_port` for both SSL and non-SSL binds when the requested port is
available, with proper conflict resolution and fallback to dynamic-range allocation.
### Design
**Key distinction**: There are two separate concepts for SSL port usage:
1. **Port ownership** (`assigned_ssl_port`) — A port exclusively owned by a binding, allocated from
`AvailablePorts`. Used for server hostnames (`.local`, mDNS, etc.) and iptables forwards.
2. **Domain SSL port** — The port used for domain-based vhost entries. A binding does NOT need to own
a port to have a domain vhost on it. The VHostController already supports multiple hostnames on the
same port via SNI. Any binding can create a domain vhost entry on any SSL port that the
VHostController has a listener for, regardless of who "owns" that port.
For example: the OS owns port 443 as its `assigned_ssl_port`. A service with
`preferred_external_port: 443` won't get 443 as its `assigned_ssl_port` (it's taken), but it CAN
still have domain vhost entries on port 443 — SNI routes by hostname.
#### 1. Preferred Port Allocation for Ownership ✅ DONE
`AvailablePorts::try_alloc(port) -> Option<u16>` added to `forward.rs`. `BindInfo::new()` and
`BindInfo::update()` attempt the preferred port first, falling back to dynamic-range allocation.
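In TypeScript terms, the allocate-preferred-then-fall-back behavior can be sketched as follows. This is an illustrative sketch, not the Rust implementation in `forward.rs`; `PortPool`, `tryAlloc`, and `allocPort` are hypothetical names.

```ts
// Hypothetical sketch of AvailablePorts-style allocation.
const DYNAMIC_MIN = 49152
const DYNAMIC_MAX = 65535

class PortPool {
  private used = new Set<number>()

  // Claim a specific port if it is free; mirrors try_alloc(port) -> Option<u16>.
  tryAlloc(port: number): number | null {
    if (this.used.has(port)) return null
    this.used.add(port)
    return port
  }

  // Try the preferred port first, then fall back to the dynamic range.
  allocPort(preferred?: number): number {
    if (preferred !== undefined) {
      const p = this.tryAlloc(preferred)
      if (p !== null) return p
    }
    for (let p = DYNAMIC_MIN; p <= DYNAMIC_MAX; p++) {
      if (this.tryAlloc(p) !== null) return p
    }
    throw new Error('no ports available')
  }
}
```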
#### 2. Per-Address Enable/Disable ✅ DONE
Gateway-level `private_disabled`/`public_enabled` on `NetInfo` replaced with per-address
`DerivedAddressInfo` on `BindInfo`. `hostname_info` removed from `Host` — computed addresses now
live in `BindInfo.addresses.possible`.
**`DerivedAddressInfo` struct** (on `BindInfo`):
```rust
pub struct DerivedAddressInfo {
    pub private_disabled: BTreeSet<HostnameInfo>,
    pub public_enabled: BTreeSet<HostnameInfo>,
    pub possible: BTreeSet<HostnameInfo>, // COMPUTED by update()
}
```
`DerivedAddressInfo::enabled()` returns `possible` filtered by the two sets. `HostnameInfo` derives
`Ord` for `BTreeSet` usage. `AddressFilter` (implementing `InterfaceFilter`) derives the enabled
gateway set from `DerivedAddressInfo` for vhost/forward filtering.
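One plausible reading of that filtering, sketched in TypeScript (field and function names are illustrative, not the actual Rust API): private addresses are enabled unless listed in `private_disabled`, and public addresses are disabled unless listed in `public_enabled`.

```ts
// Hypothetical TS mirror of DerivedAddressInfo::enabled().
type Addr = string // stand-in for HostnameInfo

interface DerivedAddressInfo {
  privateDisabled: Set<Addr>
  publicEnabled: Set<Addr>
  possible: Set<Addr> // computed by update()
}

function enabled(info: DerivedAddressInfo, isPublic: (a: Addr) => boolean): Addr[] {
  return Array.from(info.possible).filter(a =>
    isPublic(a) ? info.publicEnabled.has(a) : !info.privateDisabled.has(a),
  )
}
```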
**RPC endpoint**: `set-gateway-enabled` replaced with `set-address-enabled` (on both
`server.host.binding` and `package.host.binding`).
**How disabling works per address type** (enforcement deferred to Section 3):
- **WAN/LAN IP:port**: Will be enforced via **source-IP gating** in the vhost layer (Section 3).
- **Hostname-based addresses** (`.local`, domains): Disabled by **not creating the vhost/SNI
entry** for that hostname.
#### 3. Eliminate the Port 5443 Hack: Source-IP-Based WAN Blocking (`vhost.rs`, `net_controller.rs`)
**Current problem**: The `if ssl.preferred_external_port == 443` branch (line 341 of
`net_controller.rs`) creates a bespoke dual-vhost setup: port 5443 for private-only access and port
443 for public (or public+private). This exists because both public and private traffic arrive on the
same port 443 listener, and the current `InterfaceFilter`/`PublicFilter` model distinguishes
public/private by which *network interface* the connection arrived on — which doesn't work when both
traffic types share a listener.
**Solution**: Determine public vs private based on **source IP** at the vhost level. Traffic arriving
from the gateway IP should be treated as public (the gateway may MASQUERADE/NAT internet traffic, so
anything from the gateway is potentially public). Traffic from LAN IPs is private.
This applies to **all** vhost targets, not just port 443:
- **Add a `public` field to `ProxyTarget`** (or an enum: `Public`, `Private`, `Both`) indicating
what traffic this target accepts, derived from the binding's user-controlled `public` field.
- **Modify `VHostTarget::filter()`** (`vhost.rs:342`): Instead of (or in addition to) checking the
network interface via `GatewayInfo`, check the source IP of the TCP connection against known gateway
IPs. If the source IP matches a gateway IP or lies outside the LAN subnet, the connection is public;
otherwise it is private. Use this to gate against the target's `public` field.
- **Eliminate the 5443 port entirely**: A single vhost entry on port 443 (or any shared SSL port) can
serve both public and private traffic, with per-target source-IP gating determining which backend
handles which connections.
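The source-IP rule can be sketched like this (hypothetical helper names; the actual check would live in `vhost.rs`): traffic from a known gateway IP, or from any IP outside the LAN subnet, is classified public.

```ts
// Minimal IPv4 helpers for subnet membership (illustrative only).
function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, o) => ((acc << 8) | parseInt(o, 10)) >>> 0, 0)
}

function inSubnet(ip: string, cidr: string): boolean {
  const [net, bitsStr] = cidr.split('/')
  const bits = parseInt(bitsStr, 10)
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(net) & mask) >>> 0)
}

// Gateway source, or source outside the LAN subnet => treat as public.
function isPublicSource(srcIp: string, gatewayIps: Set<string>, lanSubnet: string): boolean {
  if (gatewayIps.has(srcIp)) return true // gateway may be MASQUERADEing internet traffic
  return !inSubnet(srcIp, lanSubnet)
}
```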
#### 4. Port Forward Mapping in Patch-DB
When a binding is marked `public = true`, StartOS must record the required port forwards in patch-db
so the frontend can display them to the user. The user then configures these on their router manually.
For each public binding, store:
- The external port the router should forward (the actual vhost port used for domains, or the
`assigned_port` / `assigned_ssl_port` for non-domain access)
- The protocol (TCP/UDP)
- The StartOS LAN IP as the forward target
- Which service/binding this forward is for (for display purposes)
This mapping should be in the public database model so the frontend can read and display it.
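A hypothetical shape for such a record (field names are illustrative, not the actual patch-db model), with a helper showing the kind of instruction the frontend could render from it:

```ts
interface PortForwardEntry {
  externalPort: number // port the router should forward
  protocol: 'tcp' | 'udp'
  target: string // the StartOS LAN IP
  targetPort: number
  serviceId: string // which service/binding the forward is for
}

function describeForward(f: PortForwardEntry): string {
  return `Forward external ${f.externalPort}/${f.protocol} to ${f.target}:${f.targetPort} (${f.serviceId})`
}
```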
#### 5. Simplify `update()` Domain Vhost Logic (`net_controller.rs`)
With source-IP gating in the vhost controller:
- **Remove the `== 443` special case** and the 5443 secondary vhost.
- For **server hostnames** (`.local`, mDNS, embassy, startos, localhost): use `assigned_ssl_port`
(the port the binding owns).
- For **domain-based vhost entries**: attempt to use `preferred_external_port` as the vhost port.
This succeeds if the port is either unused or already has an SSL listener (SNI handles sharing).
It fails only if the port is already in use by a non-SSL binding, or is a restricted port. On
failure, fall back to `assigned_ssl_port`.
- The binding's `public` field determines the `ProxyTarget`'s public/private gating.
- Hostname info must exactly match the actual vhost port used: for server hostnames, report
`ssl_port: assigned_ssl_port`. For domains, report `ssl_port: preferred_external_port` if it was
successfully used for the domain vhost, otherwise report `ssl_port: assigned_ssl_port`.
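The port-selection fallback can be sketched as follows (hypothetical names; a rough model of the rule above, under the assumption that port usage can be summarized in four states):

```ts
// How the preferred port may already be used by the vhost controller.
type PortUse = 'free' | 'ssl-listener' | 'non-ssl' | 'restricted'

// Domains can share an SSL port via SNI, so 'free' and 'ssl-listener' both
// allow the preferred port; otherwise fall back to the owned SSL port.
function domainVhostPort(
  preferred: number,
  assignedSslPort: number,
  usage: (port: number) => PortUse,
): number {
  const use = usage(preferred)
  return use === 'free' || use === 'ssl-listener' ? preferred : assignedSslPort
}
```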
#### 6. Frontend: Interfaces Page Overhaul (View/Manage Split)
The current interfaces page is a single page showing gateways (with toggle), addresses, public
domains, and private domains. It gets split into two pages: **View** and **Manage**.
**SDK**: `preferredExternalPort` is already exposed. No additional SDK changes needed.
##### View Page
Displays all computed addresses for the interface (from `BindInfo.addresses`) as a flat list. For each
address, show: URL, type (IPv4, IPv6, .local, domain), access level (public/private),
gateway name, SSL indicator, enable/disable state, port forward info for public addresses, and a test button
for reachability (see Section 7).
No gateway-level toggles. The old `gateways.component.ts` toggle UI is removed.
**Note**: Exact UI element placement (where toggles, buttons, info badges go) is sensitive.
Prompt the user for specific placement decisions during implementation.
##### Manage Page
Simple CRUD interface for configuring which addresses exist. Two sections:
- **Public domains**: Add/remove. Uses existing RPC endpoints:
- `{server,package}.host.address.domain.public.add`
- `{server,package}.host.address.domain.public.remove`
- **Private domains**: Add/remove. Uses existing RPC endpoints:
- `{server,package}.host.address.domain.private.add`
- `{server,package}.host.address.domain.private.remove`
##### Key Frontend Files to Modify
| File | Change |
|------|--------|
| `web/projects/ui/src/app/routes/portal/components/interfaces/` | Overhaul: split into view/manage |
| `web/projects/ui/src/app/routes/portal/components/interfaces/gateways.component.ts` | Remove (replaced by per-address toggles on View page) |
| `web/projects/ui/src/app/routes/portal/components/interfaces/interface.service.ts` | Update `MappedServiceInterface` to compute enabled addresses from `DerivedAddressInfo` |
| `web/projects/ui/src/app/routes/portal/components/interfaces/addresses/` | Refactor for View page with overflow menu (enable/disable) and test buttons |
| `web/projects/ui/src/app/routes/portal/routes/services/services.routes.ts` | Add routes for view/manage sub-pages |
| `web/projects/ui/src/app/routes/portal/routes/system/system.routes.ts` | Add routes for view/manage sub-pages |
#### 7. Reachability Test Endpoint
New RPC endpoint that tests whether an address is actually reachable, with diagnostic info on
failure.
**RPC endpoint** (`binding.rs` or new file):
- **`test-address`** — Test reachability of a specific address.
```ts
interface BindingTestAddressParams {
  internalPort: number
  address: HostnameInfo
}
```
The backend simply performs the raw checks and returns the results. The **frontend** owns all
interpretation — it already knows the address type, expected IP, expected port, etc. from the
`HostnameInfo` data, so it can compare against the backend results and construct fix messaging.
```ts
interface TestAddressResult {
  dns: string[] | null // resolved IPs, null if not a domain address or lookup failed
  portOpen: boolean | null // TCP connect result, null if not applicable
}
```
This yields two RPC methods:
- `server.host.binding.test-address`
- `package.host.binding.test-address`
The frontend already has the full `HostnameInfo` context (expected IP, domain, port, gateway,
public/private). It compares the backend's raw results against the expected state and constructs
localized fix instructions. For example:
- `dns` returned but doesn't contain the expected WAN IP → "Update DNS A record for {domain}
to {wanIp}"
- `dns` is `null` for a domain address → "DNS lookup failed for {domain}"
- `portOpen` is `false` → "Configure port forward on your router: external {port} TCP →
{lanIp}:{port}"
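That interpretation step might look like this (a sketch with hypothetical names; real messages would go through i18n):

```ts
interface TestAddressResult {
  dns: string[] | null // resolved IPs, null if not a domain address or lookup failed
  portOpen: boolean | null // TCP connect result, null if not applicable
}

// Expected state the frontend already knows from HostnameInfo.
interface Expected {
  domain: string | null // null for non-domain addresses
  wanIp: string
  lanIp: string
  port: number
}

function fixMessages(r: TestAddressResult, e: Expected): string[] {
  const msgs: string[] = []
  if (e.domain !== null) {
    if (r.dns === null) msgs.push(`DNS lookup failed for ${e.domain}`)
    else if (!r.dns.includes(e.wanIp)) msgs.push(`Update DNS A record for ${e.domain} to ${e.wanIp}`)
  }
  if (r.portOpen === false)
    msgs.push(`Configure port forward on your router: external ${e.port} TCP to ${e.lanIp}:${e.port}`)
  return msgs
}
```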
### Key Files
| File | Role |
|------|------|
| `core/src/net/forward.rs` | `AvailablePorts` — port pool allocation, `try_alloc()` for preferred ports |
| `core/src/net/host/binding.rs` | `Bindings` (Map wrapper for patchdb), `BindInfo`/`NetInfo`/`DerivedAddressInfo`/`AddressFilter` — per-address enable/disable, `set-address-enabled` RPC |
| `core/src/net/net_controller.rs:259` | `NetServiceData::update()` — computes `DerivedAddressInfo.possible`, vhost/forward/DNS reconciliation, 5443 hack removal |
| `core/src/net/vhost.rs` | `VHostController` / `ProxyTarget` — source-IP gating for public/private |
| `core/src/net/gateway.rs` | `InterfaceFilter` trait and filter types (`AddressFilter`, `PublicFilter`, etc.) |
| `core/src/net/service_interface.rs` | `HostnameInfo` — derives `Ord` for `BTreeSet` usage |
| `core/src/net/host/address.rs` | `HostAddress` (flattened struct), domain CRUD endpoints |
| `sdk/base/lib/interfaces/Host.ts` | SDK `MultiHost.bindPort()` — no changes needed |
| `core/src/db/model/public.rs` | Public DB model — port forward mapping |
- [ ] Extract TS-exported types into a lightweight sub-crate for fast binding generation
**Problem**: `make ts-bindings` compiles the entire `start-os` crate (with all dependencies: tokio,
axum, openssl, etc.) just to run test functions that serialize type definitions to `.ts` files.
Even in debug mode, this takes minutes. The generated output is pure type info — no runtime code
is needed.
**Goal**: Generate TS bindings in seconds by isolating exported types in a small crate with minimal
dependencies.
**Approach**: Create a `core/bindings-types/` sub-crate containing (or re-exporting) all 168
`#[ts(export)]` types. This crate depends only on `serde`, `ts-rs`, `exver`, and other type-only
crates — not on tokio, axum, openssl, etc. Then `build-ts.sh` runs `cargo test -p bindings-types`
instead of `cargo test -p start-os`.
**Challenge**: The exported types are scattered across `core/src/` and reference each other and
other crate types. Extracting them requires either moving the type definitions into the sub-crate
(and importing them back into `start-os`) or restructuring to share a common types crate.
- [ ] Use auto-generated RPC types in the frontend instead of manual duplicates
**Problem**: The web frontend manually defines ~755 lines of API request/response types in
`web/projects/ui/src/app/services/api/api.types.ts` that can drift from the actual Rust types.
**Current state**: The Rust backend already has `#[ts(export)]` on RPC param types (e.g.
`AddTunnelParams`, `SetWifiEnabledParams`, `LoginParams`), and they are generated into
`core/bindings/`. However, commit `71b83245b` ("Chore/unexport api ts #2585", April 2024)
deliberately stopped building them into the SDK and had the frontend maintain its own types.
**Goal**: Reverse that decision — pipe the generated RPC types through the SDK into the frontend
so `api.types.ts` can import them instead of duplicating them. This eliminates drift between
backend and frontend API contracts.
- [ ] Auto-configure port forwards via UPnP/NAT-PMP/PCP - @dr-bonez
**Blocked by**: "Support preferred external ports besides 443" (must be implemented and tested
end-to-end first).
**Goal**: When a binding is marked public, automatically configure port forwards on the user's router
using UPnP, NAT-PMP, or PCP, instead of requiring manual router configuration. Fall back to
displaying manual instructions (the port forward mapping from patch-db) when auto-configuration is
unavailable or fails.

apt/start9.gpg Normal file
Binary file not shown.

apt/start9.list Normal file

@@ -0,0 +1 @@
deb [arch=amd64,arm64,riscv64 signed-by=/usr/share/keyrings/start9.gpg] https://start9-debs.nyc3.cdn.digitaloceanspaces.com stable main

build/apt/publish-deb.sh Executable file

@@ -0,0 +1,138 @@
#!/bin/bash
#
# Publish .deb files to an S3-hosted apt repository.
#
# Usage: publish-deb.sh <deb-file-or-directory> [<deb-file-or-directory> ...]
#
# Environment variables:
# GPG_PRIVATE_KEY - Armored GPG private key (imported if set)
# GPG_KEY_ID - GPG key ID for signing
# S3_ACCESS_KEY - S3 access key
# S3_SECRET_KEY - S3 secret key
# S3_ENDPOINT - S3 endpoint (default: https://nyc3.digitaloceanspaces.com)
# S3_BUCKET - S3 bucket name (default: start9-debs)
# SUITE - Apt suite name (default: stable)
# COMPONENT - Apt component name (default: main)
set -e
if [ $# -eq 0 ]; then
    echo "Usage: $0 <deb-file-or-directory> [...]" >&2
    exit 1
fi
BUCKET="${S3_BUCKET:-start9-debs}"
ENDPOINT="${S3_ENDPOINT:-https://nyc3.digitaloceanspaces.com}"
SUITE="${SUITE:-stable}"
COMPONENT="${COMPONENT:-main}"
REPO_DIR="$(mktemp -d)"
cleanup() {
    rm -rf "$REPO_DIR"
}
trap cleanup EXIT
# Import GPG key if provided
if [ -n "$GPG_PRIVATE_KEY" ]; then
    echo "$GPG_PRIVATE_KEY" | gpg --batch --import 2>/dev/null
fi
# Configure s3cmd
if [ -n "$S3_ACCESS_KEY" ] && [ -n "$S3_SECRET_KEY" ]; then
    S3CMD_CONFIG="$(mktemp)"
    cat > "$S3CMD_CONFIG" <<EOF
[default]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
host_base = $(echo "$ENDPOINT" | sed 's|https://||')
host_bucket = %(bucket)s.$(echo "$ENDPOINT" | sed 's|https://||')
use_https = True
EOF
    s3() {
        s3cmd -c "$S3CMD_CONFIG" "$@"
    }
else
    # Fall back to default ~/.s3cfg
    S3CMD_CONFIG=""
    s3() {
        s3cmd "$@"
    }
fi
# Sync existing repo from S3
echo "Syncing existing repo from s3://${BUCKET}/ ..."
s3 sync --no-mime-magic "s3://${BUCKET}/" "$REPO_DIR/" 2>/dev/null || true
# Collect all .deb files from arguments
DEB_FILES=()
for arg in "$@"; do
    if [ -d "$arg" ]; then
        while IFS= read -r -d '' f; do
            DEB_FILES+=("$f")
        done < <(find "$arg" -name '*.deb' -print0)
    elif [ -f "$arg" ]; then
        DEB_FILES+=("$arg")
    else
        echo "Warning: $arg is not a file or directory, skipping" >&2
    fi
done
if [ ${#DEB_FILES[@]} -eq 0 ]; then
    echo "No .deb files found" >&2
    exit 1
fi
# Copy each deb to the pool, renaming to standard format
for deb in "${DEB_FILES[@]}"; do
    PKG_NAME="$(dpkg-deb --field "$deb" Package)"
    POOL_DIR="$REPO_DIR/pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}"
    mkdir -p "$POOL_DIR"
    cp "$deb" "$POOL_DIR/"
    dpkg-name -o "$POOL_DIR/$(basename "$deb")" 2>/dev/null || true
    echo "Added: $(basename "$deb") -> pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}/"
done
# Generate Packages indices for each architecture
for arch in amd64 arm64 riscv64; do
    BINARY_DIR="$REPO_DIR/dists/${SUITE}/${COMPONENT}/binary-${arch}"
    mkdir -p "$BINARY_DIR"
    (
        cd "$REPO_DIR"
        dpkg-scanpackages --arch "$arch" pool/ > "$BINARY_DIR/Packages"
        gzip -k -f "$BINARY_DIR/Packages"
    )
    echo "Generated Packages index for ${arch}"
done
# Generate Release file
(
    cd "$REPO_DIR/dists/${SUITE}"
    apt-ftparchive release \
        -o "APT::FTPArchive::Release::Origin=Start9" \
        -o "APT::FTPArchive::Release::Label=Start9" \
        -o "APT::FTPArchive::Release::Suite=${SUITE}" \
        -o "APT::FTPArchive::Release::Codename=${SUITE}" \
        -o "APT::FTPArchive::Release::Architectures=amd64 arm64 riscv64" \
        -o "APT::FTPArchive::Release::Components=${COMPONENT}" \
        . > Release
)
echo "Generated Release file"
# Sign if GPG key is available
if [ -n "$GPG_KEY_ID" ]; then
    (
        cd "$REPO_DIR/dists/${SUITE}"
        gpg --default-key "$GPG_KEY_ID" --batch --yes --detach-sign -o Release.gpg Release
        gpg --default-key "$GPG_KEY_ID" --batch --yes --clearsign -o InRelease Release
    )
    echo "Signed Release file with key ${GPG_KEY_ID}"
else
    echo "Warning: GPG_KEY_ID not set, Release file is unsigned" >&2
fi
# Upload to S3
echo "Uploading to s3://${BUCKET}/ ..."
s3 sync --acl-public --no-mime-magic "$REPO_DIR/" "s3://${BUCKET}/"
[ -n "$S3CMD_CONFIG" ] && rm -f "$S3CMD_CONFIG"
echo "Done."


@@ -55,6 +55,7 @@ socat
sqlite3
squashfs-tools
squashfs-tools-ng
ssl-cert
sudo
systemd
systemd-resolved


@@ -0,0 +1 @@
+ nmap


@@ -12,6 +12,10 @@ fi
if [[ "$PLATFORM" =~ -nonfree$ ]]; then
    FEATURES+=("nonfree")
fi
if [[ "$PLATFORM" =~ -nvidia$ ]]; then
    FEATURES+=("nonfree")
    FEATURES+=("nvidia")
fi
feature_file_checker='
/^#/ { next }


@@ -4,7 +4,4 @@
+ firmware-iwlwifi
+ firmware-libertas
+ firmware-misc-nonfree
+ firmware-realtek
+ nvidia-container-toolkit
# + nvidia-driver
# + nvidia-kernel-dkms


@@ -0,0 +1 @@
+ nvidia-container-toolkit


@@ -34,14 +34,14 @@ fi
IMAGE_BASENAME=startos-${VERSION_FULL}_${IB_TARGET_PLATFORM}
BOOTLOADERS=grub-efi
-if [ "$IB_TARGET_PLATFORM" = "x86_64" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nonfree" ]; then
+if [ "$IB_TARGET_PLATFORM" = "x86_64" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nvidia" ]; then
IB_TARGET_ARCH=amd64
QEMU_ARCH=x86_64
BOOTLOADERS=grub-efi,syslinux
-elif [ "$IB_TARGET_PLATFORM" = "aarch64" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "raspberrypi" ] || [ "$IB_TARGET_PLATFORM" = "rockchip64" ]; then
+elif [ "$IB_TARGET_PLATFORM" = "aarch64" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nvidia" ] || [ "$IB_TARGET_PLATFORM" = "raspberrypi" ] || [ "$IB_TARGET_PLATFORM" = "rockchip64" ]; then
IB_TARGET_ARCH=arm64
QEMU_ARCH=aarch64
-elif [ "$IB_TARGET_PLATFORM" = "riscv64" ]; then
+elif [ "$IB_TARGET_PLATFORM" = "riscv64" ] || [ "$IB_TARGET_PLATFORM" = "riscv64-nonfree" ]; then
IB_TARGET_ARCH=riscv64
QEMU_ARCH=riscv64
else
@@ -60,9 +60,13 @@ mkdir -p $prep_results_dir
cd $prep_results_dir
NON_FREE=
-if [[ "${IB_TARGET_PLATFORM}" =~ -nonfree$ ]] || [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
+if [[ "${IB_TARGET_PLATFORM}" =~ -nonfree$ ]] || [[ "${IB_TARGET_PLATFORM}" =~ -nvidia$ ]] || [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
NON_FREE=1
fi
NVIDIA=
if [[ "${IB_TARGET_PLATFORM}" =~ -nvidia$ ]]; then
NVIDIA=1
fi
IMAGE_TYPE=iso
if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ] || [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
IMAGE_TYPE=img
@@ -101,7 +105,7 @@ lb config \
--iso-preparer "START9 LABS; HTTPS://START9.COM" \
--iso-publisher "START9 LABS; HTTPS://START9.COM" \
--backports true \
---bootappend-live "boot=live noautologin" \
+--bootappend-live "boot=live noautologin console=tty0" \
--bootloaders $BOOTLOADERS \
--cache false \
--mirror-bootstrap "https://deb.debian.org/debian/" \
@@ -177,7 +181,7 @@ if [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
echo "deb https://apt.armbian.com/ ${IB_SUITE} main" > config/archives/armbian.list
fi
-if [ "$NON_FREE" = 1 ]; then
+if [ "$NVIDIA" = 1 ]; then
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o config/archives/nvidia-container-toolkit.key
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
| sed 's#deb https://#deb [signed-by=/etc/apt/trusted.gpg.d/nvidia-container-toolkit.key.gpg] https://#g' \
@@ -205,11 +209,11 @@ cat > config/hooks/normal/9000-install-startos.hook.chroot << EOF
set -e
-if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
+if [ "${NVIDIA}" = "1" ]; then
# install a specific NVIDIA driver version
# ---------------- configuration ----------------
-NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.119.02}"
+NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.126.09}"
BASE_URL="https://download.nvidia.com/XFree86/Linux-${QEMU_ARCH}"
@@ -259,12 +263,15 @@ if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
echo "[nvidia-hook] Running NVIDIA installer for kernel \${KVER}" >&2
-sh "\${RUN_PATH}" \
+if ! sh "\${RUN_PATH}" \
--silent \
--kernel-name="\${KVER}" \
--no-x-check \
--no-nouveau-check \
---no-runlevel-check
+--no-runlevel-check; then
cat /var/log/nvidia-installer.log
exit 1
fi
# Rebuild module metadata
echo "[nvidia-hook] Running depmod for \${KVER}" >&2


@@ -13,7 +13,7 @@ for kind in INPUT FORWARD ACCEPT; do
iptables -A $kind -j "${NAME}_${kind}"
fi
done
-for kind in PREROUTING OUTPUT; do
+for kind in PREROUTING OUTPUT POSTROUTING; do
if ! iptables -t nat -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
iptables -t nat -N "${NAME}_${kind}" 2> /dev/null
iptables -t nat -A $kind -j "${NAME}_${kind}"
@@ -26,7 +26,7 @@ trap 'err=1' ERR
for kind in INPUT FORWARD ACCEPT; do
iptables -F "${NAME}_${kind}" 2> /dev/null
done
-for kind in PREROUTING OUTPUT; do
+for kind in PREROUTING OUTPUT POSTROUTING; do
iptables -t nat -F "${NAME}_${kind}" 2> /dev/null
done
if [ "$UNDO" = 1 ]; then
@@ -40,6 +40,11 @@ fi
if [ -n "$src_subnet" ]; then if [ -n "$src_subnet" ]; then
iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport" iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport" iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
# Also allow containers on the bridge subnet to reach this forward
if [ -n "$bridge_subnet" ]; then
iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
fi
else else
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport" iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport" iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
@@ -53,4 +58,15 @@ iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to
iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
# NAT hairpin: masquerade traffic from the bridge subnet or host to the DNAT
# target, so replies route back through the host for proper NAT reversal.
# Container-to-container hairpin (source is on the bridge subnet)
if [ -n "$bridge_subnet" ]; then
iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
fi
# Host-to-container hairpin (host connects to its own gateway IP, source is sip)
iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
exit $err exit $err
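The POSTROUTING rules added in this hunk implement hairpin NAT: DNAT rewrites the destination, and MASQUERADE rewrites the source so reply packets flow back through the host rather than short-circuiting across the bridge. A minimal sketch of the rule set the script builds for one tcp forward, with hypothetical chain and address values (`fwd0`, `10.0.0.1`, `172.18.0.2`) and the rules printed rather than installed:

```shell
#!/bin/bash
# Sketch only: prints the tcp rules the script above would install for one
# forward. Chain name, IPs, and ports are made up for illustration.
emit_forward_rules() {
  local name=$1 sip=$2 sport=$3 dip=$4 dport=$5 bridge_subnet=$6
  # DNAT: redirect traffic aimed at the host address/port to the container
  echo "iptables -t nat -A ${name}_PREROUTING -d $sip -p tcp --dport $sport -j DNAT --to-destination $dip:$dport"
  if [ -n "$bridge_subnet" ]; then
    # container-to-container hairpin: masquerade so replies return via the host
    echo "iptables -t nat -A ${name}_POSTROUTING -s $bridge_subnet -d $dip -p tcp --dport $dport -j MASQUERADE"
  fi
  # host-to-container hairpin: the host connecting to its own forwarded port
  echo "iptables -t nat -A ${name}_POSTROUTING -s $sip -d $dip -p tcp --dport $dport -j MASQUERADE"
}

emit_forward_rules fwd0 10.0.0.1 8080 172.18.0.2 80 172.18.0.0/16
```

Without the MASQUERADE step, a container on the bridge that connects to the host's forwarded port would receive replies directly from the target container with an unexpected source address, and the connection would stall.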


@@ -68,6 +68,21 @@ fi
EOF
+# Promote the USB installer boot entry back to first in EFI boot order.
+# The entry number was saved during initial OS install.
+if [ -d /sys/firmware/efi ] && [ -f /media/startos/config/efi-installer-entry ]; then
+  USB_ENTRY=$(cat /media/startos/config/efi-installer-entry)
+  if [ -n "$USB_ENTRY" ]; then
+    CURRENT_ORDER=$(efibootmgr | grep BootOrder | sed 's/BootOrder: //')
+    OTHER_ENTRIES=$(echo "$CURRENT_ORDER" | tr ',' '\n' | grep -v "$USB_ENTRY" | tr '\n' ',' | sed 's/,$//')
+    if [ -n "$OTHER_ENTRIES" ]; then
+      efibootmgr -o "$USB_ENTRY,$OTHER_ENTRIES"
+    else
+      efibootmgr -o "$USB_ENTRY"
+    fi
+  fi
+fi
sync
umount -Rl /media/startos/next
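The boot-order rewrite in the hook above can be exercised in isolation: remove the saved entry from the comma-separated BootOrder list, then prepend it. A standalone sketch on sample strings (the entry numbers are hypothetical; real input comes from `efibootmgr`):

```shell
#!/bin/bash
# Reorders a BootOrder string so usb_entry comes first; mirrors the logic in
# the installer hook above. "0003" etc. are made-up entry numbers.
promote_entry() {
  local usb_entry=$1 current_order=$2
  local others
  others=$(echo "$current_order" | tr ',' '\n' | grep -v "$usb_entry" | tr '\n' ',' | sed 's/,$//')
  if [ -n "$others" ]; then
    echo "$usb_entry,$others"
  else
    echo "$usb_entry"
  fi
}

promote_entry 0003 "0001,0002,0003,0004" # → 0003,0001,0002,0004
```

Note that `grep -v "$usb_entry"` filters by substring, which is safe here only because EFI BootOrder entries are fixed-width four-digit values, so a substring match on a whole line is effectively an equality check.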

build/manage-release.sh Executable file

@@ -0,0 +1,364 @@
#!/bin/bash
set -e

REPO="Start9Labs/start-os"
REGISTRY="https://alpha-registry-x.start9.com"
S3_BUCKET="s3://startos-images"
S3_CDN="https://startos-images.nyc3.cdn.digitaloceanspaces.com"
START9_GPG_KEY="2D63C217"
ARCHES="aarch64 aarch64-nonfree aarch64-nvidia riscv64 riscv64-nonfree x86_64 x86_64-nonfree x86_64-nvidia"
CLI_ARCHES="aarch64 riscv64 x86_64"

parse_run_id() {
  local val="$1"
  if [[ "$val" =~ /actions/runs/([0-9]+) ]]; then
    echo "${BASH_REMATCH[1]}"
  else
    echo "$val"
  fi
}

require_version() {
  if [ -z "${VERSION:-}" ]; then
    read -rp "VERSION: " VERSION
    if [ -z "$VERSION" ]; then
      >&2 echo '$VERSION required'
      exit 2
    fi
  fi
}

release_dir() {
  echo "$HOME/Downloads/v$VERSION"
}

ensure_release_dir() {
  local dir
  dir=$(release_dir)
  if [ "$CLEAN" = "1" ]; then
    rm -rf "$dir"
  fi
  mkdir -p "$dir"
  cd "$dir"
}

enter_release_dir() {
  local dir
  dir=$(release_dir)
  if [ ! -d "$dir" ]; then
    >&2 echo "Release directory $dir does not exist. Run 'download' or 'pull' first."
    exit 1
  fi
  cd "$dir"
}

cli_target_for() {
  local arch=$1 os=$2
  local pair="${arch}-${os}"
  if [ "$pair" = "riscv64-linux" ]; then
    echo "riscv64gc-unknown-linux-musl"
  elif [ "$pair" = "riscv64-macos" ]; then
    return 1
  elif [ "$os" = "linux" ]; then
    echo "${arch}-unknown-linux-musl"
  elif [ "$os" = "macos" ]; then
    echo "${arch}-apple-darwin"
  fi
}

release_files() {
  for file in *.iso *.squashfs *.deb; do
    [ -f "$file" ] && echo "$file"
  done
  for file in start-cli_*; do
    [[ "$file" == *.asc ]] && continue
    [ -f "$file" ] && echo "$file"
  done
}

resolve_gh_user() {
  GH_USER=${GH_USER:-$(gh api user -q .login 2>/dev/null || true)}
  GH_GPG_KEY=$(git config user.signingkey 2>/dev/null || true)
}

# --- Subcommands ---

cmd_download() {
  require_version
  if [ -z "${RUN_ID:-}" ]; then
    read -rp "RUN_ID (OS images, leave blank to skip): " RUN_ID
  fi
  RUN_ID=$(parse_run_id "${RUN_ID:-}")
  if [ -z "${ST_RUN_ID:-}" ]; then
    read -rp "ST_RUN_ID (start-tunnel, leave blank to skip): " ST_RUN_ID
  fi
  ST_RUN_ID=$(parse_run_id "${ST_RUN_ID:-}")
  if [ -z "${CLI_RUN_ID:-}" ]; then
    read -rp "CLI_RUN_ID (start-cli, leave blank to skip): " CLI_RUN_ID
  fi
  CLI_RUN_ID=$(parse_run_id "${CLI_RUN_ID:-}")
  ensure_release_dir
  if [ -n "$RUN_ID" ]; then
    for arch in $ARCHES; do
      while ! gh run download -R $REPO "$RUN_ID" -n "$arch.squashfs" -D "$(pwd)"; do sleep 1; done
    done
    for arch in $ARCHES; do
      while ! gh run download -R $REPO "$RUN_ID" -n "$arch.iso" -D "$(pwd)"; do sleep 1; done
    done
  fi
  if [ -n "$ST_RUN_ID" ]; then
    for arch in $CLI_ARCHES; do
      while ! gh run download -R $REPO "$ST_RUN_ID" -n "start-tunnel_$arch.deb" -D "$(pwd)"; do sleep 1; done
    done
  fi
  if [ -n "$CLI_RUN_ID" ]; then
    for arch in $CLI_ARCHES; do
      for os in linux macos; do
        local target
        target=$(cli_target_for "$arch" "$os") || continue
        while ! gh run download -R $REPO "$CLI_RUN_ID" -n "start-cli_$target" -D "$(pwd)"; do sleep 1; done
        mv start-cli "start-cli_${arch}-${os}"
      done
    done
  fi
}

cmd_pull() {
  require_version
  ensure_release_dir
  echo "Downloading release assets from tag v$VERSION..."
  # Download debs and CLI binaries from the GH release
  for file in $(gh release view -R $REPO "v$VERSION" --json assets -q '.assets[].name' | grep -E '\.(deb)$|^start-cli_'); do
    gh release download -R $REPO "v$VERSION" -p "$file" -D "$(pwd)" --clobber
  done
  # Download ISOs and squashfs from S3 CDN
  for arch in $ARCHES; do
    for ext in squashfs iso; do
      # Get the actual filename from the GH release asset list or body
      local filename
      filename=$(gh release view -R $REPO "v$VERSION" --json assets -q ".assets[].name" | grep "_${arch}\\.${ext}$" || true)
      if [ -z "$filename" ]; then
        filename=$(gh release view -R $REPO "v$VERSION" --json body -q .body | grep -oP "[^ ]*_${arch}\\.${ext}" | head -1 || true)
      fi
      if [ -n "$filename" ]; then
        echo "Downloading $filename from S3..."
        curl -fSL -o "$filename" "$S3_CDN/v$VERSION/$filename"
      fi
    done
  done
}

cmd_register() {
  require_version
  enter_release_dir
  start-cli --registry=$REGISTRY registry os version add "$VERSION" "v$VERSION" '' ">=0.3.5 <=$VERSION"
}

cmd_upload() {
  require_version
  enter_release_dir
  for file in $(release_files); do
    case "$file" in
      *.iso|*.squashfs)
        s3cmd put -P "$file" "$S3_BUCKET/v$VERSION/$file"
        ;;
      *)
        gh release upload -R $REPO "v$VERSION" "$file"
        ;;
    esac
  done
}

cmd_index() {
  require_version
  enter_release_dir
  for arch in $ARCHES; do
    for file in *_"$arch".squashfs *_"$arch".iso; do
      start-cli --registry=$REGISTRY registry os asset add --platform="$arch" --version="$VERSION" "$file" "$S3_CDN/v$VERSION/$file"
    done
  done
}

cmd_sign() {
  require_version
  enter_release_dir
  resolve_gh_user
  for file in $(release_files); do
    gpg -u $START9_GPG_KEY --detach-sign --armor -o "${file}.start9.asc" "$file"
    if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
      gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "${file}.${GH_USER}.asc" "$file"
    fi
  done
  gpg --export -a $START9_GPG_KEY > start9.key.asc
  if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
    gpg --export -a "$GH_GPG_KEY" > "${GH_USER}.key.asc"
  else
    >&2 echo 'Warning: could not determine GitHub user or GPG signing key, skipping personal signature'
  fi
  tar -czvf signatures.tar.gz *.asc
  gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
}

cmd_cosign() {
  require_version
  enter_release_dir
  resolve_gh_user
  if [ -z "$GH_USER" ] || [ -z "$GH_GPG_KEY" ]; then
    >&2 echo 'Error: could not determine GitHub user or GPG signing key'
    >&2 echo "Set GH_USER and/or configure git user.signingkey"
    exit 1
  fi
  echo "Downloading existing signatures..."
  gh release download -R $REPO "v$VERSION" -p "signatures.tar.gz" -D "$(pwd)" --clobber
  tar -xzf signatures.tar.gz
  echo "Adding personal signatures as $GH_USER..."
  for file in $(release_files); do
    gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "${file}.${GH_USER}.asc" "$file"
  done
  gpg --export -a "$GH_GPG_KEY" > "${GH_USER}.key.asc"
  echo "Re-packing signatures..."
  tar -czvf signatures.tar.gz *.asc
  gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
  echo "Done. Personal signatures for $GH_USER added to v$VERSION."
}

cmd_notes() {
  require_version
  enter_release_dir
  cat << EOF
# ISO Downloads
- [x86_64/AMD64]($S3_CDN/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64 + NVIDIA]($S3_CDN/v$VERSION/$(ls *_x86_64-nvidia.iso))
- [x86_64/AMD64-slim (FOSS-only)]($S3_CDN/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64]($S3_CDN/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64 + NVIDIA]($S3_CDN/v$VERSION/$(ls *_aarch64-nvidia.iso))
- [aarch64/ARM64-slim (FOSS-Only)]($S3_CDN/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)]($S3_CDN/v$VERSION/$(ls *_riscv64-nonfree.iso))
- [RISCV64 (RVA23)-slim (FOSS-only)]($S3_CDN/v$VERSION/$(ls *_riscv64.iso) "Without proprietary software or drivers")
EOF
  cat << 'EOF'
# StartOS Checksums
## SHA-256
```
EOF
  sha256sum *.iso *.squashfs
  cat << 'EOF'
```
## BLAKE-3
```
EOF
  b3sum *.iso *.squashfs
  cat << 'EOF'
```
# Start-Tunnel Checksums
## SHA-256
```
EOF
  sha256sum start-tunnel*.deb
  cat << 'EOF'
```
## BLAKE-3
```
EOF
  b3sum start-tunnel*.deb
  cat << 'EOF'
```
# start-cli Checksums
## SHA-256
```
EOF
  release_files | grep '^start-cli_' | xargs sha256sum
  cat << 'EOF'
```
## BLAKE-3
```
EOF
  release_files | grep '^start-cli_' | xargs b3sum
  cat << 'EOF'
```
EOF
}

cmd_full_release() {
  cmd_download
  cmd_register
  cmd_upload
  cmd_index
  cmd_sign
  cmd_notes
}

usage() {
  cat << 'EOF'
Usage: manage-release.sh <subcommand>

Subcommands:
  download      Download artifacts from GitHub Actions runs
                Requires: RUN_ID, ST_RUN_ID, CLI_RUN_ID (any combination)
  pull          Download an existing release from the GH tag and S3
  register      Register the version in the Start9 registry
  upload        Upload artifacts to GitHub Releases and S3
  index         Add assets to the registry index
  sign          Sign all artifacts with Start9 org key (+ personal key if available)
                and upload signatures.tar.gz
  cosign        Add personal GPG signature to an existing release's signatures
                (requires 'pull' first so you can verify assets before signing)
  notes         Print release notes with download links and checksums
  full-release  Run: download → register → upload → index → sign → notes

Environment variables:
  VERSION     (required) Release version
  RUN_ID      GitHub Actions run ID for OS images (download subcommand)
  ST_RUN_ID   GitHub Actions run ID for start-tunnel (download subcommand)
  CLI_RUN_ID  GitHub Actions run ID for start-cli (download subcommand)
  GH_USER     Override GitHub username (default: autodetected via gh cli)
  CLEAN       Set to 1 to wipe and recreate the release directory
EOF
}

case "${1:-}" in
  download) cmd_download ;;
  pull) cmd_pull ;;
  register) cmd_register ;;
  upload) cmd_upload ;;
  index) cmd_index ;;
  sign) cmd_sign ;;
  cosign) cmd_cosign ;;
  notes) cmd_notes ;;
  full-release) cmd_full_release ;;
  *) usage; exit 1 ;;
esac
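`parse_run_id` above lets the interactive prompts accept either a bare numeric run ID or a pasted GitHub Actions URL. Extracted on its own for illustration:

```shell
#!/bin/bash
# Standalone copy of parse_run_id from manage-release.sh: accepts a bare
# run ID or a full GitHub Actions run URL and returns the numeric ID.
parse_run_id() {
  local val="$1"
  if [[ "$val" =~ /actions/runs/([0-9]+) ]]; then
    echo "${BASH_REMATCH[1]}" # extract the numeric ID from the URL
  else
    echo "$val"               # already a bare ID (or empty)
  fi
}

parse_run_id "https://github.com/Start9Labs/start-os/actions/runs/1234567890" # → 1234567890
parse_run_id 42                                                               # → 42
```

Because a blank answer passes through unchanged, the later `[ -n "$RUN_ID" ]` checks still work when a prompt is skipped.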


@@ -1,142 +0,0 @@
#!/bin/bash
if [ -z "$VERSION" ]; then
  >&2 echo '$VERSION required'
  exit 2
fi
set -e
if [ "$SKIP_DL" != "1" ]; then
  if [ "$SKIP_CLEAN" != "1" ]; then
    rm -rf ~/Downloads/v$VERSION
    mkdir ~/Downloads/v$VERSION
    cd ~/Downloads/v$VERSION
  fi
  if [ -n "$RUN_ID" ]; then
    for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
      while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.squashfs -D $(pwd); do sleep 1; done
    done
    for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
      while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.iso -D $(pwd); do sleep 1; done
    done
  fi
  if [ -n "$ST_RUN_ID" ]; then
    for arch in aarch64 riscv64 x86_64; do
      while ! gh run download -R Start9Labs/start-os $ST_RUN_ID -n start-tunnel_$arch.deb -D $(pwd); do sleep 1; done
    done
  fi
  if [ -n "$CLI_RUN_ID" ]; then
    for arch in aarch64 riscv64 x86_64; do
      for os in linux macos; do
        pair=${arch}-${os}
        if [ "${pair}" = "riscv64-linux" ]; then
          target=riscv64gc-unknown-linux-musl
        elif [ "${pair}" = "riscv64-macos" ]; then
          continue
        elif [ "${os}" = "linux" ]; then
          target="${arch}-unknown-linux-musl"
        elif [ "${os}" = "macos" ]; then
          target="${arch}-apple-darwin"
        fi
        while ! gh run download -R Start9Labs/start-os $CLI_RUN_ID -n start-cli_$target -D $(pwd); do sleep 1; done
        mv start-cli "start-cli_${pair}"
      done
    done
  fi
else
  cd ~/Downloads/v$VERSION
fi
start-cli --registry=https://alpha-registry-x.start9.com registry os version add $VERSION "v$VERSION" '' ">=0.3.5 <=$VERSION"
if [ "$SKIP_UL" = "2" ]; then
  exit 2
elif [ "$SKIP_UL" != "1" ]; then
  for file in *.deb start-cli_*; do
    gh release upload -R Start9Labs/start-os v$VERSION $file
  done
  for file in *.iso *.squashfs; do
    s3cmd put -P $file s3://startos-images/v$VERSION/$file
  done
fi
if [ "$SKIP_INDEX" != "1" ]; then
  for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
    for file in *_$arch.squashfs *_$arch.iso; do
      start-cli --registry=https://alpha-registry-x.start9.com registry os asset add --platform=$arch --version=$VERSION $file https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$file
    done
  done
fi
for file in *.iso *.squashfs *.deb start-cli_*; do
  gpg -u 7CFFDA41CA66056A --detach-sign --armor -o "${file}.asc" "$file"
done
gpg --export -a 7CFFDA41CA66056A > dr-bonez.key.asc
tar -czvf signatures.tar.gz *.asc
gh release upload -R Start9Labs/start-os v$VERSION signatures.tar.gz
cat << EOF
# ISO Downloads
- [x86_64/AMD64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64-slim (FOSS-only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64-slim (FOSS-Only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_riscv64.iso))
EOF
cat << 'EOF'
# StartOS Checksums
## SHA-256
```
EOF
sha256sum *.iso *.squashfs
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum *.iso *.squashfs
cat << 'EOF'
```
# Start-Tunnel Checksums
## SHA-256
```
EOF
sha256sum start-tunnel*.deb
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum start-tunnel*.deb
cat << 'EOF'
```
# start-cli Checksums
## SHA-256
```
EOF
sha256sum start-cli_*
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum start-cli_*
cat << 'EOF'
```
EOF


@@ -16,16 +16,16 @@ The container runtime communicates with the host via JSON-RPC over Unix socket.

## `/media/startos/` Directory (mounted by host into container)

| Path                 | Description                                           |
| -------------------- | ----------------------------------------------------- |
| `volumes/<name>/`    | Package data volumes (id-mapped, persistent)          |
| `assets/`            | Read-only assets from s9pk `assets.squashfs`          |
| `images/<name>/`     | Container images (squashfs, used for subcontainers)   |
| `images/<name>.env`  | Environment variables for image                       |
| `images/<name>.json` | Image metadata                                        |
| `backup/`            | Backup mount point (mounted during backup operations) |
| `rpc/service.sock`   | RPC socket (container runtime listens here)           |
| `rpc/host.sock`      | Host RPC socket (for effects callbacks to host)       |

## S9PK Structure


@@ -0,0 +1,30 @@
// Mock for ESM-only mime package — Jest's module loader doesn't support require(esm)
const types = {
".png": "image/png",
".jpg": "image/jpeg",
".jpeg": "image/jpeg",
".gif": "image/gif",
".svg": "image/svg+xml",
".webp": "image/webp",
".ico": "image/x-icon",
".json": "application/json",
".js": "application/javascript",
".html": "text/html",
".css": "text/css",
".txt": "text/plain",
".md": "text/markdown",
}
module.exports = {
default: {
getType(path) {
const ext = "." + path.split(".").pop()
return types[ext] || null
},
getExtension(type) {
const entry = Object.entries(types).find(([, v]) => v === type)
return entry ? entry[0].slice(1) : null
},
},
__esModule: true,
}


@@ -5,4 +5,7 @@ module.exports = {
  testEnvironment: "node",
  rootDir: "./src/",
  modulePathIgnorePatterns: ["./dist/"],
+  moduleNameMapper: {
+    "^mime$": "<rootDir>/../__mocks__/mime.js",
+  },
}


@@ -19,7 +19,6 @@
        "lodash.merge": "^4.6.2",
        "mime": "^4.0.7",
        "node-fetch": "^3.1.0",
-       "ts-matches": "^6.3.2",
        "tslib": "^2.5.3",
        "typescript": "^5.1.3",
        "yaml": "^2.3.1"
@@ -38,7 +37,7 @@
      },
      "../sdk/dist": {
        "name": "@start9labs/start-sdk",
-       "version": "0.4.0-beta.48",
+       "version": "0.4.0-beta.58",
        "license": "MIT",
        "dependencies": {
          "@iarna/toml": "^3.0.0",
@@ -49,8 +48,9 @@
          "ini": "^5.0.0",
          "isomorphic-fetch": "^3.0.0",
          "mime": "^4.0.7",
-         "ts-matches": "^6.3.2",
-         "yaml": "^2.7.1"
+         "yaml": "^2.7.1",
+         "zod": "^4.3.6",
+         "zod-deep-partial": "^1.2.0"
        },
        "devDependencies": {
          "@types/jest": "^29.4.0",
@@ -6494,12 +6494,6 @@
          }
        }
      },
-     "node_modules/ts-matches": {
-       "version": "6.3.2",
-       "resolved": "https://registry.npmjs.org/ts-matches/-/ts-matches-6.3.2.tgz",
-       "integrity": "sha512-UhSgJymF8cLd4y0vV29qlKVCkQpUtekAaujXbQVc729FezS8HwqzepqvtjzQ3HboatIqN/Idor85O2RMwT7lIQ==",
-       "license": "MIT"
-     },
      "node_modules/tslib": {
        "version": "2.8.1",
        "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",


@@ -28,7 +28,6 @@
    "lodash.merge": "^4.6.2",
    "mime": "^4.0.7",
    "node-fetch": "^3.1.0",
-   "ts-matches": "^6.3.2",
    "tslib": "^2.5.3",
    "typescript": "^5.1.3",
    "yaml": "^2.3.1"


@@ -3,33 +3,39 @@ import {
  types as T,
  utils,
  VersionRange,
+  z,
} from "@start9labs/start-sdk"
import * as net from "net"
-import { object, string, number, literals, some, unknown } from "ts-matches"
import { Effects } from "../Models/Effects"
import { CallbackHolder } from "../Models/CallbackHolder"
import { asError } from "@start9labs/start-sdk/base/lib/util"

-const matchRpcError = object({
-  error: object({
-    code: number,
-    message: string,
-    data: some(
-      string,
-      object({
-        details: string,
-        debug: string.nullable().optional(),
-      }),
-    )
+const matchRpcError = z.object({
+  error: z.object({
+    code: z.number(),
+    message: z.string(),
+    data: z
+      .union([
+        z.string(),
+        z.object({
+          details: z.string(),
+          debug: z.string().nullable().optional(),
+        }),
+      ])
      .nullable()
      .optional(),
  }),
})
-const testRpcError = matchRpcError.test
-const testRpcResult = object({
-  result: unknown,
-}).test
-type RpcError = typeof matchRpcError._TYPE
+function testRpcError(v: unknown): v is RpcError {
+  return matchRpcError.safeParse(v).success
+}
+const matchRpcResult = z.object({
+  result: z.unknown(),
+})
+function testRpcResult(v: unknown): v is z.infer<typeof matchRpcResult> {
+  return matchRpcResult.safeParse(v).success
+}
+type RpcError = z.infer<typeof matchRpcError>

const SOCKET_PATH = "/media/startos/rpc/host.sock"
let hostSystemId = 0

@@ -71,7 +77,7 @@ const rpcRoundFor =
        "Error in host RPC:",
        utils.asError({ method, params, error: res.error }),
      )
-      if (string.test(res.error.data)) {
+      if (typeof res.error.data === "string") {
        message += ": " + res.error.data
        console.error(`Details: ${res.error.data}`)
      } else {

@@ -253,6 +259,14 @@ export function makeEffects(context: EffectContext): Effects {
        callback: context.callbacks?.addCallback(options.callback) || null,
      }) as ReturnType<T.Effects["getSystemSmtp"]>
    },
+    getOutboundGateway(
+      ...[options]: Parameters<T.Effects["getOutboundGateway"]>
+    ) {
+      return rpcRound("get-outbound-gateway", {
+        ...options,
+        callback: context.callbacks?.addCallback(options.callback) || null,
+      }) as ReturnType<T.Effects["getOutboundGateway"]>
+    },
    listServiceInterfaces(
      ...[options]: Parameters<T.Effects["listServiceInterfaces"]>
    ) {

@@ -316,6 +330,31 @@
        T.Effects["setDataVersion"]
      >
    },
+    plugin: {
+      url: {
+        register(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["register"]>
+        ) {
+          return rpcRound("plugin.url.register", options) as ReturnType<
+            T.Effects["plugin"]["url"]["register"]
+          >
+        },
+        exportUrl(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["exportUrl"]>
+        ) {
+          return rpcRound("plugin.url.export-url", options) as ReturnType<
+            T.Effects["plugin"]["url"]["exportUrl"]
+          >
+        },
+        clearUrls(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["clearUrls"]>
+        ) {
+          return rpcRound("plugin.url.clear-urls", options) as ReturnType<
+            T.Effects["plugin"]["url"]["clearUrls"]
+          >
+        },
+      },
+    },
  }
  if (context.callbacks?.onLeaveContext)
    self.onLeaveContext(() => {


@@ -1,25 +1,13 @@
// @ts-check
import * as net from "net"
-import {
-  object,
-  some,
-  string,
-  literal,
-  array,
-  number,
-  matches,
-  any,
-  shape,
-  anyOf,
-  literals,
-} from "ts-matches"
import {
  ExtendedVersion,
  types as T,
  utils,
  VersionRange,
+  z,
} from "@start9labs/start-sdk"
import * as fs from "fs"
@@ -29,89 +17,92 @@ import { jsonPath, unNestPath } from "../Models/JsonPath"
import { System } from "../Interfaces/System"
import { makeEffects } from "./EffectCreator"

type MaybePromise<T> = T | Promise<T>

-export const matchRpcResult = anyOf(
-  object({ result: any }),
-  object({
-    error: object({
-      code: number,
-      message: string,
-      data: object({
-        details: string.optional(),
-        debug: any.optional(),
-      })
-        .nullable()
-        .optional(),
-    }),
-  }),
-)
-export type RpcResult = typeof matchRpcResult._TYPE
+export const matchRpcResult = z.union([
+  z.object({ result: z.any() }),
+  z.object({
+    error: z.object({
+      code: z.number(),
+      message: z.string(),
+      data: z
+        .object({
+          details: z.string().optional(),
+          debug: z.any().optional(),
+        })
+        .nullable()
+        .optional(),
+    }),
+  }),
+])
+export type RpcResult = z.infer<typeof matchRpcResult>
type SocketResponse = ({ jsonrpc: "2.0"; id: IdType } & RpcResult) | null

const SOCKET_PARENT = "/media/startos/rpc"
const SOCKET_PATH = "/media/startos/rpc/service.sock"
const jsonrpc = "2.0" as const

-const isResult = object({ result: any }).test
+const isResultSchema = z.object({ result: z.any() })
+const isResult = (v: unknown): v is z.infer<typeof isResultSchema> =>
+  isResultSchema.safeParse(v).success
-const idType = some(string, number, literal(null))
+const idType = z.union([z.string(), z.number(), z.literal(null)])
type IdType = null | string | number | undefined
-const runType = object({
+const runType = z.object({
  id: idType.optional(),
-  method: literal("execute"),
-  params: object({
-    id: string,
-    procedure: string,
-    input: any,
-    timeout: number.nullable().optional(),
-  }),
+  method: z.literal("execute"),
+  params: z.object({
+    id: z.string(),
+    procedure: z.string(),
+    input: z.any(),
+    timeout: z.number().nullable().optional(),
+  }),
})
-const sandboxRunType = object({
+const sandboxRunType = z.object({
  id: idType.optional(),
-  method: literal("sandbox"),
-  params: object({
-    id: string,
-    procedure: string,
-    input: any,
-    timeout: number.nullable().optional(),
-  }),
+  method: z.literal("sandbox"),
+  params: z.object({
+    id: z.string(),
+    procedure: z.string(),
+    input: z.any(),
+    timeout: z.number().nullable().optional(),
+  }),
})
-const callbackType = object({
-  method: literal("callback"),
-  params: object({
-    id: number,
-    args: array,
-  }),
+const callbackType = z.object({
+  method: z.literal("callback"),
+  params: z.object({
+    id: z.number(),
+    args: z.array(z.unknown()),
+  }),
})
-const initType = object({
+const initType = z.object({
  id: idType.optional(),
-  method: literal("init"),
-  params: object({
-    id: string,
-    kind: literals("install", "update", "restore").nullable(),
-  }),
+  method: z.literal("init"),
+  params: z.object({
+    id: z.string(),
+    kind: z.enum(["install", "update", "restore"]).nullable(),
+  }),
})
-const startType = object({
+const startType = z.object({
  id: idType.optional(),
-  method: literal("start"),
+  method: z.literal("start"),
})
-const stopType = object({
+const stopType = z.object({
  id: idType.optional(),
-  method: literal("stop"),
+  method: z.literal("stop"),
})
-const exitType = object({
+const exitType = z.object({
  id: idType.optional(),
-  method: literal("exit"),
-  params: object({
-    id: string,
-    target: string.nullable(),
-  }),
+  method: z.literal("exit"),
+  params: z.object({
+    id: z.string(),
+    target: z.string().nullable(),
+  }),
})
-const evalType = object({
+const evalType = z.object({
  id: idType.optional(),
-  method: literal("eval"),
-  params: object({
-    script: string,
-  }),
+  method: z.literal("eval"),
+  params: z.object({
+    script: z.string(),
+  }),
})

@@ -144,7 +135,9 @@
  },
}))

-const hasId = object({ id: idType }).test
+const hasIdSchema = z.object({ id: idType })
+const hasId = (v: unknown): v is z.infer<typeof hasIdSchema> =>
+  hasIdSchema.safeParse(v).success

export class RpcListener {
  shouldExit = false
  unixSocketServer = net.createServer(async (server) => {})
@@ -246,40 +239,52 @@
  }

  private dealWithInput(input: unknown): MaybePromise<SocketResponse> {
-    return matches(input)
-      .when(runType, async ({ id, params }) => {
+    const parsed = z.object({ method: z.string() }).safeParse(input)
+    if (!parsed.success) {
+      console.warn(
+        `Couldn't parse the following input ${JSON.stringify(input)}`,
+      )
+      return {
+        jsonrpc,
+        id: (input as any)?.id,
+        error: {
+          code: -32602,
+          message: "invalid params",
+          data: {
+            details: JSON.stringify(input),
+          },
+        },
+      }
+    }
+    switch (parsed.data.method) {
+      case "execute": {
+        const { id, params } = runType.parse(input)
        const system = this.system
-        const procedure = jsonPath.unsafeCast(params.procedure)
-        const { input, timeout, id: eventId } = params
-        const result = this.getResult(
-          procedure,
-          system,
-          eventId,
-          timeout,
-          input,
-        )
+        const procedure = jsonPath.parse(params.procedure)
+        const { input: inp, timeout, id: eventId } = params
+        const result = this.getResult(procedure, system, eventId, timeout, inp)
        return handleRpc(id, result)
-      })
-      .when(sandboxRunType, async ({ id, params }) => {
+      }
+      case "sandbox": {
+        const { id, params } = sandboxRunType.parse(input)
        const system = this.system
-        const procedure = jsonPath.unsafeCast(params.procedure)
-        const { input, timeout, id: eventId } = params
-        const result = this.getResult(
-          procedure,
-          system,
-          eventId,
-          timeout,
-          input,
-        )
+        const procedure = jsonPath.parse(params.procedure)
+        const { input: inp, timeout, id: eventId } = params
+        const result = this.getResult(procedure, system, eventId, timeout, inp)
        return handleRpc(id, result)
-      })
-      .when(callbackType, async ({ params: { id, args } }) => {
+      }
+      case "callback": {
+        const {
+          params: { id, args },
+        } = callbackType.parse(input)
        this.callCallback(id, args)
        return null
-      })
-      .when(startType, async ({ id }) => {
+      }
+      case "start": {
+        const { id } = startType.parse(input)
        const callbacks =
          this.callbacks?.getChild("main") || this.callbacks?.child("main")
        const effects = makeEffects({
@@ -290,8 +295,9 @@
          id,
          this.system.start(effects).then((result) => ({ result })),
        )
-      })
-      .when(stopType, async ({ id }) => {
+      }
+      case "stop": {
+        const { id } = stopType.parse(input)
        return handleRpc(
          id,
          this.system.stop().then((result) => {
@@ -300,8 +306,9 @@
            return { result }
          }),
        )
-      })
-      .when(exitType, async ({ id, params }) => {
+      }
+      case "exit": {
+        const { id, params } = exitType.parse(input)
        return handleRpc(
          id,
          (async () => {
@@ -323,8 +330,9 @@
            }
          })().then((result) => ({ result })),
        )
-      })
-      .when(initType, async ({ id, params }) => {
+      }
+      case "init": {
+        const { id, params } = initType.parse(input)
        return handleRpc(
          id,
          (async () => {
@@ -349,8 +357,9 @@
            }
          })().then((result) => ({ result })),
        )
-      })
-      .when(evalType, async ({ id, params }) => {
+      }
+      case "eval": {
+        const { id, params } = evalType.parse(input)
        return handleRpc(
          id,
          (async () => {
@@ -375,41 +384,28 @@
            }
          })(),
        )
-      })
-      .when(
-        shape({ id: idType.optional(), method: string }),
-        ({ id, method }) => ({
+      }
+      default: {
+        const { id, method } = z
+          .object({ id: idType.optional(), method: z.string() })
+          .passthrough()
+          .parse(input)
+        return {
          jsonrpc,
          id,
          error: {
            code: -32601,
-            message: `Method not found`,
+            message: "Method not found",
            data: {
              details: method,
            },
          },
-        }),
-      )
-      .defaultToLazy(() => {
-        console.warn(
-          `Couldn't parse the following input ${JSON.stringify(input)}`,
-        )
-        return {
-          jsonrpc,
-          id: (input as any)?.id,
-          error: {
-            code: -32602,
-            message: "invalid params",
-            data: {
-              details: JSON.stringify(input),
-            },
-          },
-        }
-      })
+        }
+      }
+    }
  }

  private getResult(
-    procedure: typeof jsonPath._TYPE,
+    procedure: z.infer<typeof jsonPath>,
    system: System,
    eventId: string,
    timeout: number | null | undefined,
@@ -437,6 +433,7 @@ export class RpcListener {
return system.getActionInput( return system.getActionInput(
effects, effects,
procedures[2], procedures[2],
input?.prefill ?? null,
timeout || null, timeout || null,
) )
case procedures[1] === "actions" && procedures[3] === "run": case procedures[1] === "actions" && procedures[3] === "run":
@@ -448,26 +445,18 @@ export class RpcListener {
) )
} }
} }
})().then(ensureResultTypeShape, (error) => })().then(ensureResultTypeShape, (error) => {
matches(error) const errorSchema = z.object({
.when( error: z.string(),
object({ code: z.number().default(0),
error: string, })
code: number.defaultTo(0), const parsed = errorSchema.safeParse(error)
}), if (parsed.success) {
(error) => ({ return {
error: { error: { code: parsed.data.code, message: parsed.data.error },
code: error.code, }
message: error.error, }
}, return { error: { code: 0, message: String(error) } }
}), })
)
.defaultToLazy(() => ({
error: {
code: 0,
message: String(error),
},
})),
)
} }
} }


@@ -2,7 +2,7 @@ import * as fs from "fs/promises"
 import * as cp from "child_process"
 import { SubContainer, types as T } from "@start9labs/start-sdk"
 import { promisify } from "util"
-import { DockerProcedure, VolumeId } from "../../../Models/DockerProcedure"
+import { DockerProcedure } from "../../../Models/DockerProcedure"
 import { Volume } from "./matchVolume"
 import {
   CommandOptions,
@@ -28,7 +28,7 @@ export class DockerProcedureContainer extends Drop {
     effects: T.Effects,
     packageId: string,
     data: DockerProcedure,
-    volumes: { [id: VolumeId]: Volume },
+    volumes: { [id: string]: Volume },
     name: string,
     options: { subcontainer?: SubContainer<SDKManifest> } = {},
   ) {
@@ -47,7 +47,7 @@ export class DockerProcedureContainer extends Drop {
     effects: T.Effects,
     packageId: string,
     data: DockerProcedure,
-    volumes: { [id: VolumeId]: Volume },
+    volumes: { [id: string]: Volume },
     name: string,
   ) {
     const subcontainer = await SubContainerOwned.of(
@@ -64,7 +64,7 @@ export class DockerProcedureContainer extends Drop {
           ? `${subcontainer.rootfs}${mounts[mount]}`
           : `${subcontainer.rootfs}/${mounts[mount]}`
         await fs.mkdir(path, { recursive: true })
-        const volumeMount = volumes[mount]
+        const volumeMount: Volume = volumes[mount]
         if (volumeMount.type === "data") {
           await subcontainer.mount(
             Mounts.of().mountVolume({
@@ -89,8 +89,8 @@ export class DockerProcedureContainer extends Drop {
         `${packageId}.embassy`,
         ...new Set(
           Object.values(hostInfo?.bindings || {})
-            .flatMap((b) => b.addresses.possible)
-            .map((h) => h.hostname.value),
+            .flatMap((b) => b.addresses.available)
+            .map((h) => h.hostname),
         ).values(),
       ]
       const certChain = await effects.getSslCertificate({


@@ -15,26 +15,11 @@ import { System } from "../../../Interfaces/System"
 import { matchManifest, Manifest } from "./matchManifest"
 import * as childProcess from "node:child_process"
 import { DockerProcedureContainer } from "./DockerProcedureContainer"
+import { DockerProcedure } from "../../../Models/DockerProcedure"
 import { promisify } from "node:util"
 import * as U from "./oldEmbassyTypes"
 import { MainLoop } from "./MainLoop"
-import {
-  matches,
-  boolean,
-  dictionary,
-  literal,
-  literals,
-  object,
-  string,
-  unknown,
-  any,
-  tuple,
-  number,
-  anyOf,
-  deferred,
-  Parser,
-  array,
-} from "ts-matches"
+import { z } from "@start9labs/start-sdk"
 import { AddSslOptions } from "@start9labs/start-sdk/base/lib/osBindings"
 import {
   BindOptionsByProtocol,
@@ -57,6 +42,15 @@ function todo(): never {
   throw new Error("Not implemented")
 }
+/**
+ * Local type for procedure values from the manifest.
+ * The manifest's zod schemas use ZodTypeAny casts that produce `unknown` in zod v4.
+ * This type restores the expected shape for type-safe property access.
+ */
+type Procedure =
+  | (DockerProcedure & { type: "docker" })
+  | { type: "script"; args: unknown[] | null }
 const MANIFEST_LOCATION = "/usr/lib/startos/package/embassyManifest.json"
 export const EMBASSY_JS_LOCATION = "/usr/lib/startos/package/embassy.js"
@@ -65,26 +59,24 @@ const configFile = FileHelper.json(
     base: new Volume("embassy"),
     subpath: "config.json",
   },
-  matches.any,
+  z.any(),
 )
 const dependsOnFile = FileHelper.json(
   {
     base: new Volume("embassy"),
     subpath: "dependsOn.json",
   },
-  dictionary([string, array(string)]),
+  z.record(z.string(), z.array(z.string())),
 )
-const matchResult = object({
-  result: any,
+const matchResult = z.object({
+  result: z.any(),
 })
-const matchError = object({
-  error: string,
+const matchError = z.object({
+  error: z.string(),
 })
-const matchErrorCode = object<{
-  "error-code": [number, string] | readonly [number, string]
-}>({
-  "error-code": tuple(number, string),
+const matchErrorCode = z.object({
+  "error-code": z.tuple([z.number(), z.string()]),
 })
 const assertNever = (
@@ -96,29 +88,34 @@ const assertNever = (
 /**
 Should be changing the type for specific properties, and this is mostly a transformation for the old return types to the newer one.
 */
+function isMatchResult(a: unknown): a is z.infer<typeof matchResult> {
+  return matchResult.safeParse(a).success
+}
+function isMatchError(a: unknown): a is z.infer<typeof matchError> {
+  return matchError.safeParse(a).success
+}
+function isMatchErrorCode(a: unknown): a is z.infer<typeof matchErrorCode> {
+  return matchErrorCode.safeParse(a).success
+}
 const fromReturnType = <A>(a: U.ResultType<A>): A => {
-  if (matchResult.test(a)) {
+  if (isMatchResult(a)) {
     return a.result
   }
-  if (matchError.test(a)) {
+  if (isMatchError(a)) {
     console.info({ passedErrorStack: new Error().stack, error: a.error })
     throw { error: a.error }
   }
-  if (matchErrorCode.test(a)) {
+  if (isMatchErrorCode(a)) {
     const [code, message] = a["error-code"]
     throw { error: message, code }
   }
-  return assertNever(a)
+  return assertNever(a as never)
 }
-const matchSetResult = object({
-  "depends-on": dictionary([string, array(string)])
-    .nullable()
-    .optional(),
-  dependsOn: dictionary([string, array(string)])
-    .nullable()
-    .optional(),
-  signal: literals(
+const matchSetResult = z.object({
+  "depends-on": z.record(z.string(), z.array(z.string())).nullable().optional(),
+  dependsOn: z.record(z.string(), z.array(z.string())).nullable().optional(),
+  signal: z.enum([
     "SIGTERM",
     "SIGHUP",
     "SIGINT",
@@ -151,7 +148,7 @@ const matchSetResult = object({
     "SIGPWR",
     "SIGSYS",
     "SIGINFO",
-  ),
+  ]),
 })
 type OldGetConfigRes = {
@@ -233,33 +230,29 @@ const asProperty = (x: PackagePropertiesV2): PropertiesReturn =>
   Object.fromEntries(
     Object.entries(x).map(([key, value]) => [key, asProperty_(value)]),
   )
-const [matchPackageProperties, setMatchPackageProperties] =
-  deferred<PackagePropertiesV2>()
-const matchPackagePropertyObject: Parser<unknown, PackagePropertyObject> =
-  object({
-    value: matchPackageProperties,
-    type: literal("object"),
-    description: string,
-  })
-const matchPackagePropertyString: Parser<unknown, PackagePropertyString> =
-  object({
-    type: literal("string"),
-    description: string.nullable().optional(),
-    value: string,
-    copyable: boolean.nullable().optional(),
-    qr: boolean.nullable().optional(),
-    masked: boolean.nullable().optional(),
-  })
-setMatchPackageProperties(
-  dictionary([
-    string,
-    anyOf(matchPackagePropertyObject, matchPackagePropertyString),
-  ]),
+const matchPackagePropertyObject: z.ZodType<PackagePropertyObject> = z.object({
+  value: z.lazy(() => matchPackageProperties),
+  type: z.literal("object"),
+  description: z.string(),
+})
+const matchPackagePropertyString: z.ZodType<PackagePropertyString> = z.object({
+  type: z.literal("string"),
+  description: z.string().nullable().optional(),
+  value: z.string(),
+  copyable: z.boolean().nullable().optional(),
+  qr: z.boolean().nullable().optional(),
+  masked: z.boolean().nullable().optional(),
+})
+const matchPackageProperties: z.ZodType<PackagePropertiesV2> = z.lazy(() =>
+  z.record(
+    z.string(),
+    z.union([matchPackagePropertyObject, matchPackagePropertyString]),
+  ),
 )
-const matchProperties = object({
-  version: literal(2),
+const matchProperties = z.object({
+  version: z.literal(2),
   data: matchPackageProperties,
 })
@@ -303,7 +296,7 @@ export class SystemForEmbassy implements System {
     })
     const manifestData = await fs.readFile(manifestLocation, "utf-8")
     return new SystemForEmbassy(
-      matchManifest.unsafeCast(JSON.parse(manifestData)),
+      matchManifest.parse(JSON.parse(manifestData)),
       moduleCode,
     )
   }
@@ -389,7 +382,9 @@ export class SystemForEmbassy implements System {
     delete this.currentRunning
     if (currentRunning) {
       await currentRunning.clean({
-        timeout: fromDuration(this.manifest.main["sigterm-timeout"] || "30s"),
+        timeout: fromDuration(
+          (this.manifest.main["sigterm-timeout"] as any) || "30s",
+        ),
       })
     }
   }
@@ -510,6 +505,7 @@ export class SystemForEmbassy implements System {
   async getActionInput(
     effects: Effects,
     actionId: string,
+    _prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null> {
     if (actionId === "config") {
@@ -622,7 +618,7 @@ export class SystemForEmbassy implements System {
     effects: Effects,
     timeoutMs: number | null,
   ): Promise<void> {
-    const backup = this.manifest.backup.create
+    const backup = this.manifest.backup.create as Procedure
     if (backup.type === "docker") {
       const commands = [backup.entrypoint, ...backup.args]
       const container = await DockerProcedureContainer.of(
@@ -655,7 +651,7 @@ export class SystemForEmbassy implements System {
         encoding: "utf-8",
       })
       .catch((_) => null)
-    const restoreBackup = this.manifest.backup.restore
+    const restoreBackup = this.manifest.backup.restore as Procedure
     if (restoreBackup.type === "docker") {
       const commands = [restoreBackup.entrypoint, ...restoreBackup.args]
       const container = await DockerProcedureContainer.of(
@@ -688,7 +684,7 @@ export class SystemForEmbassy implements System {
     effects: Effects,
     timeoutMs: number | null,
   ): Promise<OldGetConfigRes> {
-    const config = this.manifest.config?.get
+    const config = this.manifest.config?.get as Procedure | undefined
     if (!config) return { spec: {} }
     if (config.type === "docker") {
       const commands = [config.entrypoint, ...config.args]
@@ -730,7 +726,7 @@ export class SystemForEmbassy implements System {
     )
     await updateConfig(effects, this.manifest, spec, newConfig)
     await configFile.write(effects, newConfig)
-    const setConfigValue = this.manifest.config?.set
+    const setConfigValue = this.manifest.config?.set as Procedure | undefined
     if (!setConfigValue) return
     if (setConfigValue.type === "docker") {
       const commands = [
@@ -745,7 +741,7 @@ export class SystemForEmbassy implements System {
         this.manifest.volumes,
         `Set Config - ${commands.join(" ")}`,
       )
-      const answer = matchSetResult.unsafeCast(
+      const answer = matchSetResult.parse(
         JSON.parse(
           (await container.execFail(commands, timeoutMs)).stdout.toString(),
         ),
@@ -758,7 +754,7 @@ export class SystemForEmbassy implements System {
       const method = moduleCode.setConfig
       if (!method) throw new Error("Expecting that the method setConfig exists")
-      const answer = matchSetResult.unsafeCast(
+      const answer = matchSetResult.parse(
         await method(
           polyfillEffects(effects, this.manifest),
           newConfig as U.Config,
@@ -787,7 +783,11 @@ export class SystemForEmbassy implements System {
     const requiredDeps = {
       ...Object.fromEntries(
         Object.entries(this.manifest.dependencies ?? {})
-          .filter(([k, v]) => v?.requirement.type === "required")
+          .filter(
+            ([k, v]) =>
+              (v?.requirement as { type: string } | undefined)?.type ===
+              "required",
+          )
           .map((x) => [x[0], []]) || [],
       ),
     }
@@ -855,7 +855,7 @@ export class SystemForEmbassy implements System {
     }
     if (migration) {
-      const [_, procedure] = migration
+      const [_, procedure] = migration as readonly [unknown, Procedure]
       if (procedure.type === "docker") {
         const commands = [procedure.entrypoint, ...procedure.args]
         const container = await DockerProcedureContainer.of(
@@ -893,7 +893,10 @@ export class SystemForEmbassy implements System {
     effects: Effects,
     timeoutMs: number | null,
   ): Promise<PropertiesReturn> {
-    const setConfigValue = this.manifest.properties
+    const setConfigValue = this.manifest.properties as
+      | Procedure
+      | null
+      | undefined
     if (!setConfigValue) throw new Error("There is no properties")
     if (setConfigValue.type === "docker") {
       const commands = [setConfigValue.entrypoint, ...setConfigValue.args]
@@ -904,7 +907,7 @@ export class SystemForEmbassy implements System {
         this.manifest.volumes,
         `Properties - ${commands.join(" ")}`,
       )
-      const properties = matchProperties.unsafeCast(
+      const properties = matchProperties.parse(
         JSON.parse(
           (await container.execFail(commands, timeoutMs)).stdout.toString(),
         ),
@@ -915,7 +918,7 @@ export class SystemForEmbassy implements System {
       const method = moduleCode.properties
       if (!method)
         throw new Error("Expecting that the method properties exists")
-      const properties = matchProperties.unsafeCast(
+      const properties = matchProperties.parse(
         await method(polyfillEffects(effects, this.manifest)).then(
           fromReturnType,
         ),
@@ -930,7 +933,8 @@ export class SystemForEmbassy implements System {
     formData: unknown,
     timeoutMs: number | null,
   ): Promise<T.ActionResult> {
-    const actionProcedure = this.manifest.actions?.[actionId]?.implementation
+    const actionProcedure = this.manifest.actions?.[actionId]
+      ?.implementation as Procedure | undefined
     const toActionResult = ({
       message,
       value,
@@ -997,7 +1001,9 @@ export class SystemForEmbassy implements System {
     oldConfig: unknown,
     timeoutMs: number | null,
   ): Promise<object> {
-    const actionProcedure = this.manifest.dependencies?.[id]?.config?.check
+    const actionProcedure = this.manifest.dependencies?.[id]?.config?.check as
+      | Procedure
+      | undefined
     if (!actionProcedure) return { message: "Action not found", value: null }
     if (actionProcedure.type === "docker") {
       const commands = [
@@ -1089,40 +1095,50 @@ export class SystemForEmbassy implements System {
   }
 }
-const matchPointer = object({
-  type: literal("pointer"),
+const matchPointer = z.object({
+  type: z.literal("pointer"),
 })
-const matchPointerPackage = object({
-  subtype: literal("package"),
-  target: literals("tor-key", "tor-address", "lan-address"),
-  "package-id": string,
-  interface: string,
+const matchPointerPackage = z.object({
+  subtype: z.literal("package"),
+  target: z.enum(["tor-key", "tor-address", "lan-address"]),
+  "package-id": z.string(),
+  interface: z.string(),
 })
-const matchPointerConfig = object({
-  subtype: literal("package"),
-  target: literals("config"),
-  "package-id": string,
-  selector: string,
-  multi: boolean,
+const matchPointerConfig = z.object({
+  subtype: z.literal("package"),
+  target: z.enum(["config"]),
+  "package-id": z.string(),
+  selector: z.string(),
+  multi: z.boolean(),
 })
-const matchSpec = object({
-  spec: object,
+const matchSpec = z.object({
+  spec: z.record(z.string(), z.unknown()),
 })
-const matchVariants = object({ variants: dictionary([string, unknown]) })
+const matchVariants = z.object({ variants: z.record(z.string(), z.unknown()) })
+function isMatchPointer(v: unknown): v is z.infer<typeof matchPointer> {
+  return matchPointer.safeParse(v).success
+}
+function isMatchSpec(v: unknown): v is z.infer<typeof matchSpec> {
+  return matchSpec.safeParse(v).success
+}
+function isMatchVariants(v: unknown): v is z.infer<typeof matchVariants> {
+  return matchVariants.safeParse(v).success
+}
 function cleanSpecOfPointers<T>(mutSpec: T): T {
-  if (!object.test(mutSpec)) return mutSpec
+  if (typeof mutSpec !== "object" || mutSpec === null) return mutSpec
   for (const key in mutSpec) {
     const value = mutSpec[key]
-    if (matchSpec.test(value)) value.spec = cleanSpecOfPointers(value.spec)
-    if (matchVariants.test(value))
+    if (isMatchSpec(value))
+      value.spec = cleanSpecOfPointers(value.spec) as Record<string, unknown>
+    if (isMatchVariants(value))
       value.variants = Object.fromEntries(
        Object.entries(value.variants).map(([key, value]) => [
          key,
          cleanSpecOfPointers(value),
        ]),
      )
-    if (!matchPointer.test(value)) continue
+    if (!isMatchPointer(value)) continue
     delete mutSpec[key]
     // // if (value.target === )
   }
@@ -1245,7 +1261,7 @@ async function updateConfig(
         : catchFn(
             () =>
              filled.addressInfo!.filter({ kind: "mdns" })!.hostnames[0]
-                .hostname.value,
+                .hostname,
          ) || ""
       mutConfigValue[key] = url
     }
@@ -1268,7 +1284,7 @@ function extractServiceInterfaceId(manifest: Manifest, specInterface: string) {
 }
 async function convertToNewConfig(value: OldGetConfigRes) {
   try {
-    const valueSpec: OldConfigSpec = matchOldConfigSpec.unsafeCast(value.spec)
+    const valueSpec: OldConfigSpec = matchOldConfigSpec.parse(value.spec)
     const spec = transformConfigSpec(valueSpec)
     if (!value.config) return { spec, config: null }
     const config = transformOldConfigToNew(valueSpec, value.config) ?? null


@@ -4,9 +4,9 @@ import synapseManifest from "./__fixtures__/synapseManifest"
 describe("matchManifest", () => {
   test("gittea", () => {
-    matchManifest.unsafeCast(giteaManifest)
+    matchManifest.parse(giteaManifest)
   })
   test("synapse", () => {
-    matchManifest.unsafeCast(synapseManifest)
+    matchManifest.parse(synapseManifest)
   })
 })


@@ -1,126 +1,121 @@
-import {
-  object,
-  literal,
-  string,
-  array,
-  boolean,
-  dictionary,
-  literals,
-  number,
-  unknown,
-  some,
-  every,
-} from "ts-matches"
+import { z } from "@start9labs/start-sdk"
 import { matchVolume } from "./matchVolume"
 import { matchDockerProcedure } from "../../../Models/DockerProcedure"
-const matchJsProcedure = object({
-  type: literal("script"),
-  args: array(unknown).nullable().optional().defaultTo([]),
+const matchJsProcedure = z.object({
+  type: z.literal("script"),
+  args: z.array(z.unknown()).nullable().optional().default([]),
 })
-const matchProcedure = some(matchDockerProcedure, matchJsProcedure)
-export type Procedure = typeof matchProcedure._TYPE
+const matchProcedure = z.union([matchDockerProcedure, matchJsProcedure])
+export type Procedure = z.infer<typeof matchProcedure>
-const matchAction = object({
-  name: string,
-  description: string,
-  warning: string.nullable().optional(),
+const matchAction = z.object({
+  name: z.string(),
+  description: z.string(),
+  warning: z.string().nullable().optional(),
   implementation: matchProcedure,
-  "allowed-statuses": array(literals("running", "stopped")),
-  "input-spec": unknown.nullable().optional(),
+  "allowed-statuses": z.array(z.enum(["running", "stopped"])),
+  "input-spec": z.unknown().nullable().optional(),
 })
-export const matchManifest = object({
-  id: string,
-  title: string,
-  version: string,
+export const matchManifest = z.object({
+  id: z.string(),
+  title: z.string(),
+  version: z.string(),
   main: matchDockerProcedure,
-  assets: object({
-    assets: string.nullable().optional(),
-    scripts: string.nullable().optional(),
-  })
+  assets: z
+    .object({
+      assets: z.string().nullable().optional(),
+      scripts: z.string().nullable().optional(),
+    })
     .nullable()
     .optional(),
-  "health-checks": dictionary([
-    string,
-    every(
+  "health-checks": z.record(
+    z.string(),
+    z.intersection(
       matchProcedure,
-      object({
-        name: string,
-        ["success-message"]: string.nullable().optional(),
+      z.object({
+        name: z.string(),
+        "success-message": z.string().nullable().optional(),
       }),
     ),
-  ]),
+  ),
-  config: object({
-    get: matchProcedure,
-    set: matchProcedure,
-  })
+  config: z
+    .object({
+      get: matchProcedure,
+      set: matchProcedure,
+    })
     .nullable()
     .optional(),
   properties: matchProcedure.nullable().optional(),
-  volumes: dictionary([string, matchVolume]),
+  volumes: z.record(z.string(), matchVolume),
-  interfaces: dictionary([
-    string,
-    object({
-      name: string,
-      description: string,
-      "tor-config": object({
-        "port-mapping": dictionary([string, string]),
-      })
+  interfaces: z.record(
+    z.string(),
+    z.object({
+      name: z.string(),
+      description: z.string(),
+      "tor-config": z
+        .object({
+          "port-mapping": z.record(z.string(), z.string()),
+        })
         .nullable()
         .optional(),
-      "lan-config": dictionary([
-        string,
-        object({
-          ssl: boolean,
-          internal: number,
-        }),
-      ])
+      "lan-config": z
+        .record(
+          z.string(),
+          z.object({
+            ssl: z.boolean(),
+            internal: z.number(),
+          }),
+        )
         .nullable()
         .optional(),
-      ui: boolean,
-      protocols: array(string),
+      ui: z.boolean(),
+      protocols: z.array(z.string()),
     }),
-  ]),
+  ),
-  backup: object({
+  backup: z.object({
     create: matchProcedure,
     restore: matchProcedure,
   }),
-  migrations: object({
-    to: dictionary([string, matchProcedure]),
-    from: dictionary([string, matchProcedure]),
-  })
+  migrations: z
+    .object({
+      to: z.record(z.string(), matchProcedure),
+      from: z.record(z.string(), matchProcedure),
+    })
     .nullable()
     .optional(),
-  dependencies: dictionary([
-    string,
-    object({
-      version: string,
-      requirement: some(
-        object({
-          type: literal("opt-in"),
-          how: string,
-        }),
-        object({
-          type: literal("opt-out"),
-          how: string,
-        }),
-        object({
-          type: literal("required"),
-        }),
-      ),
-      description: string.nullable().optional(),
-      config: object({
-        check: matchProcedure,
-        "auto-configure": matchProcedure,
-      })
-        .nullable()
-        .optional(),
-    })
+  dependencies: z.record(
+    z.string(),
+    z
+      .object({
+        version: z.string(),
+        requirement: z.union([
+          z.object({
+            type: z.literal("opt-in"),
+            how: z.string(),
+          }),
+          z.object({
+            type: z.literal("opt-out"),
+            how: z.string(),
+          }),
+          z.object({
+            type: z.literal("required"),
+          }),
+        ]),
+        description: z.string().nullable().optional(),
+        config: z
+          .object({
+            check: matchProcedure,
+            "auto-configure": matchProcedure,
+          })
+          .nullable()
+          .optional(),
+      })
       .nullable()
       .optional(),
-  ]),
+  ),
-  actions: dictionary([string, matchAction]),
+  actions: z.record(z.string(), matchAction),
 })
-export type Manifest = typeof matchManifest._TYPE
+export type Manifest = z.infer<typeof matchManifest>


@@ -1,32 +1,32 @@
-import { object, literal, string, boolean, some } from "ts-matches"
+import { z } from "@start9labs/start-sdk"
-const matchDataVolume = object({
-  type: literal("data"),
-  readonly: boolean.optional(),
+const matchDataVolume = z.object({
+  type: z.literal("data"),
+  readonly: z.boolean().optional(),
 })
-const matchAssetVolume = object({
-  type: literal("assets"),
+const matchAssetVolume = z.object({
+  type: z.literal("assets"),
 })
-const matchPointerVolume = object({
-  type: literal("pointer"),
-  "package-id": string,
-  "volume-id": string,
-  path: string,
-  readonly: boolean,
+const matchPointerVolume = z.object({
+  type: z.literal("pointer"),
+  "package-id": z.string(),
+  "volume-id": z.string(),
+  path: z.string(),
+  readonly: z.boolean(),
 })
-const matchCertificateVolume = object({
-  type: literal("certificate"),
-  "interface-id": string,
+const matchCertificateVolume = z.object({
+  type: z.literal("certificate"),
+  "interface-id": z.string(),
 })
-const matchBackupVolume = object({
-  type: literal("backup"),
-  readonly: boolean,
+const matchBackupVolume = z.object({
+  type: z.literal("backup"),
+  readonly: z.boolean(),
 })
-export const matchVolume = some(
+export const matchVolume = z.union([
   matchDataVolume,
   matchAssetVolume,
   matchPointerVolume,
   matchCertificateVolume,
   matchBackupVolume,
-)
+])
-export type Volume = typeof matchVolume._TYPE
+export type Volume = z.infer<typeof matchVolume>


@@ -12,43 +12,43 @@ import nostrConfig2 from "./__fixtures__/nostrConfig2"
describe("transformConfigSpec", () => { describe("transformConfigSpec", () => {
test("matchOldConfigSpec(embassyPages.homepage.variants[web-page])", () => { test("matchOldConfigSpec(embassyPages.homepage.variants[web-page])", () => {
-    matchOldConfigSpec.unsafeCast(
+    matchOldConfigSpec.parse(
       fixtureEmbassyPagesConfig.homepage.variants["web-page"],
     )
   })
   test("matchOldConfigSpec(embassyPages)", () => {
-    matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
+    matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
   })
   test("transformConfigSpec(embassyPages)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
+    const spec = matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("matchOldConfigSpec(RTL.nodes)", () => {
-    matchOldValueSpecList.unsafeCast(fixtureRTLConfig.nodes)
+    matchOldValueSpecList.parse(fixtureRTLConfig.nodes)
   })
   test("matchOldConfigSpec(RTL)", () => {
-    matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
+    matchOldConfigSpec.parse(fixtureRTLConfig)
   })
   test("transformConfigSpec(RTL)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
+    const spec = matchOldConfigSpec.parse(fixtureRTLConfig)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(searNXG)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(searNXG)
+    const spec = matchOldConfigSpec.parse(searNXG)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(bitcoind)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(bitcoind)
+    const spec = matchOldConfigSpec.parse(bitcoind)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(nostr)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(nostr)
+    const spec = matchOldConfigSpec.parse(nostr)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(nostr2)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(nostrConfig2)
+    const spec = matchOldConfigSpec.parse(nostrConfig2)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
 })
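The hunk above mechanically swaps ts-matches' throwing `unsafeCast` for zod's throwing `parse`. As a minimal hand-rolled sketch (none of these names are the actual ts-matches or zod APIs), the two validation styles in play look like this — a throwing `parse`, which both `unsafeCast` and zod's `parse` share, versus a result-returning `safeParse`:

```typescript
// Hand-rolled sketch of the two validator styles this migration bridges.
// Names here are illustrative, not the ts-matches/zod APIs themselves.
type Result<T> = { success: true; data: T } | { success: false; error: string }

const stringValidator = {
  // Throws on mismatch — the style shared by `unsafeCast` and zod's `parse`.
  parse(value: unknown): string {
    if (typeof value !== "string")
      throw new Error(`expected string, got ${typeof value}`)
    return value
  },
  // Non-throwing variant returning a tagged result instead.
  safeParse(value: unknown): Result<string> {
    return typeof value === "string"
      ? { success: true, data: value }
      : { success: false, error: "expected string" }
  },
}
```

Because both styles throw on bad input, the rename is behavior-preserving at each call site in the tests above.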


@@ -1,19 +1,4 @@
-import { IST } from "@start9labs/start-sdk"
-import {
-  dictionary,
-  object,
-  anyOf,
-  string,
-  literals,
-  array,
-  number,
-  boolean,
-  Parser,
-  deferred,
-  every,
-  nill,
-  literal,
-} from "ts-matches"
+import { IST, z } from "@start9labs/start-sdk"

 export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
   return Object.entries(oldSpec).reduce((inputSpec, [key, oldVal]) => {
@@ -82,7 +67,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
       name: oldVal.name,
       description: oldVal.description || null,
       warning: oldVal.warning || null,
-      spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(oldVal.spec)),
+      spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec)),
     }
   } else if (oldVal.type === "string") {
     newVal = {
@@ -121,7 +106,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
           ...obj,
           [id]: {
             name: oldVal.tag["variant-names"][id] || id,
-            spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(spec)),
+            spec: transformConfigSpec(matchOldConfigSpec.parse(spec)),
           },
         }),
         {} as Record<string, { name: string; spec: IST.InputSpec }>,
@@ -153,7 +138,7 @@ export function transformOldConfigToNew(
     if (isObject(val)) {
       newVal = transformOldConfigToNew(
-        matchOldConfigSpec.unsafeCast(val.spec),
+        matchOldConfigSpec.parse(val.spec),
         config[key],
       )
     }
@@ -172,7 +157,7 @@ export function transformOldConfigToNew(
       newVal = {
         selection,
         value: transformOldConfigToNew(
-          matchOldConfigSpec.unsafeCast(val.variants[selection]),
+          matchOldConfigSpec.parse(val.variants[selection]),
           config[key],
         ),
       }
@@ -183,10 +168,7 @@ export function transformOldConfigToNew(
     if (isObjectList(val)) {
       newVal = (config[key] as object[]).map((obj) =>
-        transformOldConfigToNew(
-          matchOldConfigSpec.unsafeCast(val.spec.spec),
-          obj,
-        ),
+        transformOldConfigToNew(matchOldConfigSpec.parse(val.spec.spec), obj),
       )
     } else if (isUnionList(val)) return obj
   }
@@ -212,7 +194,7 @@ export function transformNewConfigToOld(
     if (isObject(val)) {
       newVal = transformNewConfigToOld(
-        matchOldConfigSpec.unsafeCast(val.spec),
+        matchOldConfigSpec.parse(val.spec),
         config[key],
       )
     }
@@ -221,7 +203,7 @@ export function transformNewConfigToOld(
       newVal = {
         [val.tag.id]: config[key].selection,
         ...transformNewConfigToOld(
-          matchOldConfigSpec.unsafeCast(val.variants[config[key].selection]),
+          matchOldConfigSpec.parse(val.variants[config[key].selection]),
           config[key].value,
         ),
       }
@@ -230,10 +212,7 @@ export function transformNewConfigToOld(
   if (isList(val)) {
     if (isObjectList(val)) {
       newVal = (config[key] as object[]).map((obj) =>
-        transformNewConfigToOld(
-          matchOldConfigSpec.unsafeCast(val.spec.spec),
-          obj,
-        ),
+        transformNewConfigToOld(matchOldConfigSpec.parse(val.spec.spec), obj),
       )
     } else if (isUnionList(val)) return obj
   }
@@ -337,9 +316,7 @@ function getListSpec(
     default: oldVal.default as Record<string, unknown>[],
     spec: {
       type: "object",
-      spec: transformConfigSpec(
-        matchOldConfigSpec.unsafeCast(oldVal.spec.spec),
-      ),
+      spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec.spec)),
       uniqueBy: oldVal.spec["unique-by"] || null,
       displayAs: oldVal.spec["display-as"] || null,
     },
@@ -393,211 +370,281 @@ function isUnionList(
 }

 export type OldConfigSpec = Record<string, OldValueSpec>

-const [_matchOldConfigSpec, setMatchOldConfigSpec] = deferred<unknown>()
-export const matchOldConfigSpec = _matchOldConfigSpec as Parser<
-  unknown,
-  OldConfigSpec
->
+export const matchOldConfigSpec: z.ZodType<OldConfigSpec> = z.lazy(() =>
+  z.record(z.string(), matchOldValueSpec),
+)

-export const matchOldDefaultString = anyOf(
-  string,
-  object({ charset: string, len: number }),
-)
-type OldDefaultString = typeof matchOldDefaultString._TYPE
+export const matchOldDefaultString = z.union([
+  z.string(),
+  z.object({ charset: z.string(), len: z.number() }),
+])
+type OldDefaultString = z.infer<typeof matchOldDefaultString>

-export const matchOldValueSpecString = object({
-  type: literals("string"),
-  name: string,
-  masked: boolean.nullable().optional(),
-  copyable: boolean.nullable().optional(),
-  nullable: boolean.nullable().optional(),
-  placeholder: string.nullable().optional(),
-  pattern: string.nullable().optional(),
-  "pattern-description": string.nullable().optional(),
+export const matchOldValueSpecString = z.object({
+  type: z.enum(["string"]),
+  name: z.string(),
+  masked: z.boolean().nullable().optional(),
+  copyable: z.boolean().nullable().optional(),
+  nullable: z.boolean().nullable().optional(),
+  placeholder: z.string().nullable().optional(),
+  pattern: z.string().nullable().optional(),
+  "pattern-description": z.string().nullable().optional(),
   default: matchOldDefaultString.nullable().optional(),
-  textarea: boolean.nullable().optional(),
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+  textarea: z.boolean().nullable().optional(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })

-export const matchOldValueSpecNumber = object({
-  type: literals("number"),
-  nullable: boolean,
-  name: string,
-  range: string,
-  integral: boolean,
-  default: number.nullable().optional(),
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
-  units: string.nullable().optional(),
-  placeholder: anyOf(number, string).nullable().optional(),
+export const matchOldValueSpecNumber = z.object({
+  type: z.enum(["number"]),
+  nullable: z.boolean(),
+  name: z.string(),
+  range: z.string(),
+  integral: z.boolean(),
+  default: z.number().nullable().optional(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
+  units: z.string().nullable().optional(),
+  placeholder: z.union([z.number(), z.string()]).nullable().optional(),
 })
-type OldValueSpecNumber = typeof matchOldValueSpecNumber._TYPE
+type OldValueSpecNumber = z.infer<typeof matchOldValueSpecNumber>

-export const matchOldValueSpecBoolean = object({
-  type: literals("boolean"),
-  default: boolean,
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+export const matchOldValueSpecBoolean = z.object({
+  type: z.enum(["boolean"]),
+  default: z.boolean(),
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })
-type OldValueSpecBoolean = typeof matchOldValueSpecBoolean._TYPE
+type OldValueSpecBoolean = z.infer<typeof matchOldValueSpecBoolean>

-const matchOldValueSpecObject = object({
-  type: literals("object"),
-  spec: _matchOldConfigSpec,
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
-})
-type OldValueSpecObject = typeof matchOldValueSpecObject._TYPE
+type OldValueSpecObject = {
+  type: "object"
+  spec: OldConfigSpec
+  name: string
+  description?: string | null
+  warning?: string | null
+}
+const matchOldValueSpecObject: z.ZodType<OldValueSpecObject> = z.object({
+  type: z.enum(["object"]),
+  spec: z.lazy(() => matchOldConfigSpec),
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
+})

-const matchOldValueSpecEnum = object({
-  values: array(string),
-  "value-names": dictionary([string, string]),
-  type: literals("enum"),
-  default: string,
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+const matchOldValueSpecEnum = z.object({
+  values: z.array(z.string()),
+  "value-names": z.record(z.string(), z.string()),
+  type: z.enum(["enum"]),
+  default: z.string(),
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })
-type OldValueSpecEnum = typeof matchOldValueSpecEnum._TYPE
+type OldValueSpecEnum = z.infer<typeof matchOldValueSpecEnum>

-const matchOldUnionTagSpec = object({
-  id: string, // The name of the field containing one of the union variants
-  "variant-names": dictionary([string, string]), // The name of each variant
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+const matchOldUnionTagSpec = z.object({
+  id: z.string(), // The name of the field containing one of the union variants
+  "variant-names": z.record(z.string(), z.string()), // The name of each variant
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })

-const matchOldValueSpecUnion = object({
-  type: literals("union"),
+type OldValueSpecUnion = {
+  type: "union"
+  tag: z.infer<typeof matchOldUnionTagSpec>
+  variants: Record<string, OldConfigSpec>
+  default: string
+}
+const matchOldValueSpecUnion: z.ZodType<OldValueSpecUnion> = z.object({
+  type: z.enum(["union"]),
   tag: matchOldUnionTagSpec,
-  variants: dictionary([string, _matchOldConfigSpec]),
-  default: string,
+  variants: z.record(
+    z.string(),
+    z.lazy(() => matchOldConfigSpec),
+  ),
+  default: z.string(),
 })
-type OldValueSpecUnion = typeof matchOldValueSpecUnion._TYPE

-const [matchOldUniqueBy, setOldUniqueBy] = deferred<OldUniqueBy>()
 type OldUniqueBy =
   | null
   | string
   | { any: OldUniqueBy[] }
   | { all: OldUniqueBy[] }
-setOldUniqueBy(
-  anyOf(
-    nill,
-    string,
-    object({ any: array(matchOldUniqueBy) }),
-    object({ all: array(matchOldUniqueBy) }),
-  ),
+const matchOldUniqueBy: z.ZodType<OldUniqueBy> = z.lazy(() =>
+  z.union([
+    z.null(),
+    z.string(),
+    z.object({ any: z.array(matchOldUniqueBy) }),
+    z.object({ all: z.array(matchOldUniqueBy) }),
+  ]),
 )

-const matchOldListValueSpecObject = object({
-  spec: _matchOldConfigSpec, // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
-  "unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
-  "display-as": string.nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
-})
+type OldListValueSpecObject = {
+  spec: OldConfigSpec
+  "unique-by"?: OldUniqueBy | null
+  "display-as"?: string | null
+}
+const matchOldListValueSpecObject: z.ZodType<OldListValueSpecObject> = z.object(
+  {
+    spec: z.lazy(() => matchOldConfigSpec), // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
+    "unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
+    "display-as": z.string().nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
+  },
+)

-const matchOldListValueSpecUnion = object({
+type OldListValueSpecUnion = {
+  "unique-by"?: OldUniqueBy | null
+  "display-as"?: string | null
+  tag: z.infer<typeof matchOldUnionTagSpec>
+  variants: Record<string, OldConfigSpec>
+}
+const matchOldListValueSpecUnion: z.ZodType<OldListValueSpecUnion> = z.object({
   "unique-by": matchOldUniqueBy.nullable().optional(),
-  "display-as": string.nullable().optional(),
+  "display-as": z.string().nullable().optional(),
   tag: matchOldUnionTagSpec,
-  variants: dictionary([string, _matchOldConfigSpec]),
+  variants: z.record(
+    z.string(),
+    z.lazy(() => matchOldConfigSpec),
+  ),
 })

-const matchOldListValueSpecString = object({
-  masked: boolean.nullable().optional(),
-  copyable: boolean.nullable().optional(),
-  pattern: string.nullable().optional(),
-  "pattern-description": string.nullable().optional(),
-  placeholder: string.nullable().optional(),
+const matchOldListValueSpecString = z.object({
+  masked: z.boolean().nullable().optional(),
+  copyable: z.boolean().nullable().optional(),
+  pattern: z.string().nullable().optional(),
+  "pattern-description": z.string().nullable().optional(),
+  placeholder: z.string().nullable().optional(),
 })

-const matchOldListValueSpecEnum = object({
-  values: array(string),
-  "value-names": dictionary([string, string]),
+const matchOldListValueSpecEnum = z.object({
+  values: z.array(z.string()),
+  "value-names": z.record(z.string(), z.string()),
 })

-const matchOldListValueSpecNumber = object({
-  range: string,
-  integral: boolean,
-  units: string.nullable().optional(),
-  placeholder: anyOf(number, string).nullable().optional(),
+const matchOldListValueSpecNumber = z.object({
+  range: z.string(),
+  integral: z.boolean(),
+  units: z.string().nullable().optional(),
+  placeholder: z.union([z.number(), z.string()]).nullable().optional(),
 })

+type OldValueSpecListBase = {
+  type: "list"
+  range: string
+  default: string[] | number[] | OldDefaultString[] | Record<string, unknown>[]
+  name: string
+  description?: string | null
+  warning?: string | null
+}
+type OldValueSpecList = OldValueSpecListBase &
+  (
+    | { subtype: "string"; spec: z.infer<typeof matchOldListValueSpecString> }
+    | { subtype: "enum"; spec: z.infer<typeof matchOldListValueSpecEnum> }
+    | { subtype: "object"; spec: OldListValueSpecObject }
+    | { subtype: "number"; spec: z.infer<typeof matchOldListValueSpecNumber> }
+    | { subtype: "union"; spec: OldListValueSpecUnion }
+  )
+
 // represents a spec for a list
-export const matchOldValueSpecList = every(
-  object({
-    type: literals("list"),
-    range: string, // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
-    default: anyOf(
-      array(string),
-      array(number),
-      array(matchOldDefaultString),
-      array(object),
-    ),
-    name: string,
-    description: string.nullable().optional(),
-    warning: string.nullable().optional(),
-  }),
-  anyOf(
-    object({
-      subtype: literals("string"),
-      spec: matchOldListValueSpecString,
-    }),
-    object({
-      subtype: literals("enum"),
-      spec: matchOldListValueSpecEnum,
-    }),
-    object({
-      subtype: literals("object"),
-      spec: matchOldListValueSpecObject,
-    }),
-    object({
-      subtype: literals("number"),
-      spec: matchOldListValueSpecNumber,
-    }),
-    object({
-      subtype: literals("union"),
-      spec: matchOldListValueSpecUnion,
-    }),
-  ),
-)
-type OldValueSpecList = typeof matchOldValueSpecList._TYPE
+export const matchOldValueSpecList: z.ZodType<OldValueSpecList> =
+  z.intersection(
+    z.object({
+      type: z.enum(["list"]),
+      range: z.string(), // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
+      default: z.union([
+        z.array(z.string()),
+        z.array(z.number()),
+        z.array(matchOldDefaultString),
+        z.array(z.object({}).passthrough()),
+      ]),
+      name: z.string(),
+      description: z.string().nullable().optional(),
+      warning: z.string().nullable().optional(),
+    }),
+    z.union([
+      z.object({
+        subtype: z.enum(["string"]),
+        spec: matchOldListValueSpecString,
+      }),
+      z.object({
+        subtype: z.enum(["enum"]),
+        spec: matchOldListValueSpecEnum,
+      }),
+      z.object({
+        subtype: z.enum(["object"]),
+        spec: matchOldListValueSpecObject,
+      }),
+      z.object({
+        subtype: z.enum(["number"]),
+        spec: matchOldListValueSpecNumber,
+      }),
+      z.object({
+        subtype: z.enum(["union"]),
+        spec: matchOldListValueSpecUnion,
+      }),
+    ]),
+  ) as unknown as z.ZodType<OldValueSpecList>

-const matchOldValueSpecPointer = every(
-  object({
-    type: literal("pointer"),
-  }),
-  anyOf(
-    object({
-      subtype: literal("package"),
-      target: literals("tor-key", "tor-address", "lan-address"),
-      "package-id": string,
-      interface: string,
-    }),
-    object({
-      subtype: literal("package"),
-      target: literals("config"),
-      "package-id": string,
-      selector: string,
-      multi: boolean,
-    }),
-  ),
-)
-type OldValueSpecPointer = typeof matchOldValueSpecPointer._TYPE
+type OldValueSpecPointer = {
+  type: "pointer"
+} & (
+  | {
+      subtype: "package"
+      target: "tor-key" | "tor-address" | "lan-address"
+      "package-id": string
+      interface: string
+    }
+  | {
+      subtype: "package"
+      target: "config"
+      "package-id": string
+      selector: string
+      multi: boolean
+    }
+)
+const matchOldValueSpecPointer: z.ZodType<OldValueSpecPointer> = z.intersection(
+  z.object({
+    type: z.literal("pointer"),
+  }),
+  z.union([
+    z.object({
+      subtype: z.literal("package"),
+      target: z.enum(["tor-key", "tor-address", "lan-address"]),
+      "package-id": z.string(),
+      interface: z.string(),
+    }),
+    z.object({
+      subtype: z.literal("package"),
+      target: z.enum(["config"]),
+      "package-id": z.string(),
+      selector: z.string(),
+      multi: z.boolean(),
+    }),
+  ]),
+) as unknown as z.ZodType<OldValueSpecPointer>

-export const matchOldValueSpec = anyOf(
+type OldValueSpecString = z.infer<typeof matchOldValueSpecString>
+type OldValueSpec =
+  | OldValueSpecString
+  | OldValueSpecNumber
+  | OldValueSpecBoolean
+  | OldValueSpecObject
+  | OldValueSpecEnum
+  | OldValueSpecList
+  | OldValueSpecUnion
+  | OldValueSpecPointer
+export const matchOldValueSpec: z.ZodType<OldValueSpec> = z.union([
   matchOldValueSpecString,
   matchOldValueSpecNumber,
   matchOldValueSpecBoolean,
-  matchOldValueSpecObject,
+  matchOldValueSpecObject as z.ZodType<OldValueSpecObject>,
   matchOldValueSpecEnum,
-  matchOldValueSpecList,
-  matchOldValueSpecUnion,
-  matchOldValueSpecPointer,
-)
-type OldValueSpec = typeof matchOldValueSpec._TYPE
-
-setMatchOldConfigSpec(dictionary([string, matchOldValueSpec]))
+  matchOldValueSpecList as z.ZodType<OldValueSpecList>,
+  matchOldValueSpecUnion as z.ZodType<OldValueSpecUnion>,
+  matchOldValueSpecPointer as z.ZodType<OldValueSpecPointer>,
+])

 export class Range {
   min?: number
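The recursive schemas above needed ts-matches' `deferred` before and `z.lazy` after, because `matchOldConfigSpec` and `matchOldValueSpec` reference each other. A hand-rolled sketch (not the zod API) of why a lazy thunk makes a self-referential validator possible:

```typescript
// Hand-rolled sketch: a validator for arbitrarily nested string-keyed
// records, defined with a lazy thunk so the definition can mention itself
// before it is fully constructed — the role `deferred` played in
// ts-matches and `z.lazy` plays in zod.
type Spec = string | { [key: string]: Spec }

const lazy =
  <T>(thunk: () => (v: unknown) => T) =>
  (v: unknown): T =>
    thunk()(v) // resolve the real validator at call time, not definition time

const validateSpec: (v: unknown) => Spec = lazy(() => (v) => {
  if (typeof v === "string") return v
  if (v !== null && typeof v === "object" && !Array.isArray(v)) {
    const out: { [key: string]: Spec } = {}
    for (const [k, val] of Object.entries(v)) out[k] = validateSpec(val)
    return out
  }
  throw new Error("expected string or record")
})
```

Without the thunk, the definition would have to read `validateSpec` while the constant is still being initialized; deferring the lookup to call time breaks the cycle.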


@@ -47,11 +47,12 @@ export class SystemForStartOs implements System {
   getActionInput(
     effects: Effects,
     id: string,
+    prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null> {
     const action = this.abi.actions.get(id)
     if (!action) throw new Error(`Action ${id} not found`)
-    return action.getInput({ effects })
+    return action.getInput({ effects, prefill })
   }
   runAction(
     effects: Effects,


@@ -33,6 +33,7 @@ export type System = {
   getActionInput(
     effects: Effects,
     actionId: string,
+    prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null>


@@ -1,41 +1,19 @@
-import {
-  object,
-  literal,
-  string,
-  boolean,
-  array,
-  dictionary,
-  literals,
-  number,
-  Parser,
-  some,
-} from "ts-matches"
+import { z } from "@start9labs/start-sdk"
 import { matchDuration } from "./Duration"

-const VolumeId = string
-const Path = string
+export type VolumeId = string
+export type Path = string

-export const matchDockerProcedure = object({
-  type: literal("docker"),
-  image: string,
-  system: boolean.optional(),
-  entrypoint: string,
-  args: array(string).defaultTo([]),
-  mounts: dictionary([VolumeId, Path]).optional(),
-  "io-format": literals(
-    "json",
-    "json-pretty",
-    "yaml",
-    "cbor",
-    "toml",
-    "toml-pretty",
-  )
+export const matchDockerProcedure = z.object({
+  type: z.literal("docker"),
+  image: z.string(),
+  system: z.boolean().optional(),
+  entrypoint: z.string(),
+  args: z.array(z.string()).default([]),
+  mounts: z.record(z.string(), z.string()).optional(),
+  "io-format": z
+    .enum(["json", "json-pretty", "yaml", "cbor", "toml", "toml-pretty"])
     .nullable()
     .optional(),
-  "sigterm-timeout": some(number, matchDuration).onMismatch(30),
-  inject: boolean.defaultTo(false),
+  "sigterm-timeout": z.union([z.number(), matchDuration]).catch(30),
+  inject: z.boolean().default(false),
 })

-export type DockerProcedure = typeof matchDockerProcedure._TYPE
+export type DockerProcedure = z.infer<typeof matchDockerProcedure>
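Note the two fallback styles in the hunk above: `args` uses `default([])` (fill in when the field is absent) while `"sigterm-timeout"` uses `catch(30)`, zod's counterpart to ts-matches' `onMismatch` (fall back whenever validation fails). A hand-rolled sketch (not the zod API) of the distinction:

```typescript
// Hand-rolled sketch of the two fallback behaviors used above:
// `default`-style applies only when the value is missing;
// `catch`-style applies whenever validation fails.
const withDefault =
  <T>(validate: (v: unknown) => T, fallback: T) =>
  (v: unknown): T =>
    v === undefined ? fallback : validate(v)

const withCatch =
  <T>(validate: (v: unknown) => T, fallback: T) =>
  (v: unknown): T => {
    try {
      return validate(v)
    } catch {
      return fallback
    }
  }

const num = (v: unknown): number => {
  if (typeof v !== "number") throw new Error("expected number")
  return v
}

const sigtermTimeout = withCatch(num, 30) // invalid value -> 30
const args = withDefault(num, 0) // missing -> 0, invalid still throws
```

The practical difference: a malformed `sigterm-timeout` degrades to 30 rather than failing the whole parse, while a malformed `args` would still be rejected.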


@@ -1,11 +1,11 @@
-import { string } from "ts-matches"
+import { z } from "@start9labs/start-sdk"

 export type TimeUnit = "d" | "h" | "s" | "ms" | "m" | "µs" | "ns"
 export type Duration = `${number}${TimeUnit}`
 const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|d)$/

-export const matchDuration = string.refine(isDuration)
+export const matchDuration = z.string().refine(isDuration)

 export function isDuration(value: string): value is Duration {
   return durationRegex.test(value)
 }
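The `refine(isDuration)` call above attaches a type-guard predicate so a validated string is narrowed to the `Duration` template-literal type. A standalone sketch of the guard (same types and regex as the hunk; `toMillis` is a hypothetical helper added only to show the narrowed type in use):

```typescript
// Standalone sketch of the template-literal type guard used above.
type TimeUnit = "d" | "h" | "s" | "ms" | "m" | "µs" | "ns"
type Duration = `${number}${TimeUnit}`

const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|d)$/

// `value is Duration` lets callers treat a checked string as a Duration.
function isDuration(value: string): value is Duration {
  return durationRegex.test(value)
}

// Hypothetical helper, not from the diff: consumes the narrowed type.
function toMillis(d: Duration): number {
  const match = /^([0-9.]+)([a-zµ]+)$/.exec(d)!
  const scale: Record<string, number> = {
    ns: 1e-6, "µs": 1e-3, ms: 1, s: 1e3, m: 6e4, h: 3.6e6, d: 8.64e7,
  }
  return Number(match[1]) * scale[match[2]]
}
```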


@@ -1,10 +1,10 @@
-import { literals, some, string } from "ts-matches"
+import { z } from "@start9labs/start-sdk"

 type NestedPath<A extends string, B extends string> = `/${A}/${string}/${B}`
 type NestedPaths = NestedPath<"actions", "run" | "getInput">
 // prettier-ignore
 type UnNestPaths<A> =
   A extends `${infer A}/${infer B}` ? [...UnNestPaths<A>, ... UnNestPaths<B>] :
   [A]
 export function unNestPath<A extends string>(a: A): UnNestPaths<A> {
@@ -17,14 +17,14 @@ function isNestedPath(path: string): path is NestedPaths {
     return true
   return false
 }

-export const jsonPath = some(
-  literals(
-    "/packageInit",
-    "/packageUninit",
-    "/backup/create",
-    "/backup/restore",
-  ),
-  string.refine(isNestedPath, "isNestedPath"),
-)
-export type JsonPath = typeof jsonPath._TYPE
+export const jsonPath = z.union([
+  z.enum([
+    "/packageInit",
+    "/packageUninit",
+    "/backup/create",
+    "/backup/restore",
+  ]),
+  z.string().refine(isNestedPath),
+])
+export type JsonPath = z.infer<typeof jsonPath>
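`jsonPath` accepts either one of four fixed routes or a nested `/actions/<id>/run|getInput` path. A hand-rolled sketch of that predicate (names here are illustrative, not the module's own helpers):

```typescript
// Hand-rolled sketch of the path check `jsonPath` performs:
// a fixed allow-list plus nested `/actions/<anything>/run|getInput` paths.
const fixedPaths = [
  "/packageInit",
  "/packageUninit",
  "/backup/create",
  "/backup/restore",
]

function isNestedActionPath(path: string): boolean {
  const parts = path.split("/") // leading slash yields a "" first segment
  return (
    parts.length === 4 &&
    parts[0] === "" &&
    parts[1] === "actions" &&
    (parts[3] === "run" || parts[3] === "getInput")
  )
}

const isJsonPath = (path: string) =>
  fixedPaths.includes(path) || isNestedActionPath(path)
```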


@@ -1,5 +1,4 @@
 import { RpcListener } from "./Adapters/RpcListener"
-import { SystemForEmbassy } from "./Adapters/Systems/SystemForEmbassy"
 import { AllGetDependencies } from "./Interfaces/AllGetDependencies"
 import { getSystem } from "./Adapters/Systems"
@@ -7,6 +6,18 @@ const getDependencies: AllGetDependencies = {
   system: getSystem,
 }

+process.on("unhandledRejection", (reason) => {
+  if (
+    reason instanceof Error &&
+    "muteUnhandled" in reason &&
+    reason.muteUnhandled
+  ) {
+    // mute
+  } else {
+    console.error("Unhandled promise rejection", reason)
+  }
+})
 for (let s of ["SIGTERM", "SIGINT", "SIGHUP"]) {
   process.on(s, (s) => {
     console.log(`Caught ${s}`)

@@ -16,6 +16,6 @@ case $ARCH in
esac esac
docker run --rm $USE_TTY --platform=$DOCKER_PLATFORM -eARCH --privileged -v "$(pwd):/root/start-os" start9/build-env /root/start-os/container-runtime/update-image.sh docker run --rm $USE_TTY --platform=$DOCKER_PLATFORM -eARCH --privileged -v "$(pwd):/root/start-os" start9/build-env /root/start-os/container-runtime/update-image.sh
if [ "$(ls -nd "rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then if [ "$(ls -nd "container-runtime/rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
docker run --rm $USE_TTY -v "$(pwd):/root/start-os" start9/build-env chown -R $UID:$UID /root/start-os/container-runtime docker run --rm $USE_TTY -v "$(pwd):/root/start-os" start9/build-env chown -R $UID:$UID /root/start-os/container-runtime
fi fi

core/ARCHITECTURE.md (new file, 72 lines)

@@ -0,0 +1,72 @@
# Core Architecture
The Rust backend daemon for StartOS.
## Binaries
The crate produces a single binary `startbox` that is symlinked under different names for different behavior:
- `startbox` / `startd` — Main daemon
- `start-cli` — CLI interface
- `start-container` — Runs inside LXC containers; communicates with host and manages subcontainers
- `registrybox` — Registry daemon
- `tunnelbox` — VPN/tunnel daemon
## Crate Structure
- `startos` — Core library that supports building `startbox`
- `helpers` — Utility functions used across both `startos` and `js-engine`
- `models` — Types shared across `startos`, `js-engine`, and `helpers`
## Key Modules
- `src/context/` — Context types (RpcContext, CliContext, InitContext, DiagnosticContext)
- `src/service/` — Service lifecycle management with actor pattern (`service_actor.rs`)
- `src/db/model/` — Patch-DB models (`public.rs` synced to frontend, `private.rs` backend-only)
- `src/net/` — Networking (DNS, ACME, WiFi, Tor via Arti, WireGuard)
- `src/s9pk/` — S9PK package format (merkle archive)
- `src/registry/` — Package registry management
## RPC Pattern
The API is JSON-RPC (not REST). All endpoints are RPC methods organized in a hierarchical command structure using [rpc-toolkit](https://github.com/Start9Labs/rpc-toolkit). Handlers are registered in a tree of `ParentHandler` nodes, with four handler types: `from_fn_async` (standard), `from_fn_async_local` (non-Send), `from_fn` (sync), and `from_fn_blocking` (blocking). Metadata like `.with_about()` drives middleware and documentation.
See [rpc-toolkit.md](rpc-toolkit.md) for full handler patterns and configuration.
## Patch-DB Patterns
Patch-DB provides diff-based state synchronization. Changes to `db/model/public.rs` automatically sync to the frontend.
**Key patterns:**
- `db.peek().await` — Get a read-only snapshot of the database state
- `db.mutate(|db| { ... }).await` — Apply mutations atomically, returns `MutateResult`
- `#[derive(HasModel)]` — Derive macro for types stored in the database, generates typed accessors
**Generated accessor types** (from `HasModel` derive):
- `as_field()` — Immutable reference: `&Model<T>`
- `as_field_mut()` — Mutable reference: `&mut Model<T>`
- `into_field()` — Owned value: `Model<T>`
**`Model<T>` APIs** (from `db/prelude.rs`):
- `.de()` — Deserialize to `T`
- `.ser(&value)` — Serialize from `T`
- `.mutate(|v| ...)` — Deserialize, mutate, reserialize
- For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`
See [patchdb.md](patchdb.md) for `TypedDbWatch<T>` construction, API, and usage patterns.
## i18n
See [i18n-patterns.md](i18n-patterns.md) for internationalization key conventions and the `t!()` macro.
## Rust Utilities & Patterns
See [core-rust-patterns.md](core-rust-patterns.md) for common utilities (Invoke trait, Guard pattern, mount guards, Apply trait, etc.).
## Related Documentation
- [rpc-toolkit.md](rpc-toolkit.md) — JSON-RPC handler patterns
- [patchdb.md](patchdb.md) — Patch-DB watch patterns and TypedDbWatch
- [i18n-patterns.md](i18n-patterns.md) — Internationalization conventions
- [core-rust-patterns.md](core-rust-patterns.md) — Common Rust utilities
- [s9pk-structure.md](s9pk-structure.md) — S9PK package format


@@ -2,51 +2,27 @@
 The Rust backend daemon for StartOS.

-## Binaries
-- `startbox` — Main daemon (runs as `startd`)
-- `start-cli` — CLI interface
-- `start-container` — Runs inside LXC containers; communicates with host and manages subcontainers
-- `registrybox` — Registry daemon
-- `tunnelbox` — VPN/tunnel daemon
+## Architecture
+See [ARCHITECTURE.md](ARCHITECTURE.md) for binaries, modules, Patch-DB patterns, and related documentation.

-## Key Modules
-- `src/context/` — Context types (RpcContext, CliContext, InitContext, DiagnosticContext)
-- `src/service/` — Service lifecycle management with actor pattern (`service_actor.rs`)
-- `src/db/model/` — Patch-DB models (`public.rs` synced to frontend, `private.rs` backend-only)
-- `src/net/` — Networking (DNS, ACME, WiFi, Tor via Arti, WireGuard)
-- `src/s9pk/` — S9PK package format (merkle archive)
-- `src/registry/` — Package registry management
+See [CONTRIBUTING.md](CONTRIBUTING.md) for how to add RPC endpoints, TS-exported types, and i18n keys.

-## RPC Pattern
-See `rpc-toolkit.md` for JSON-RPC handler patterns and configuration.
+## Quick Reference

-## Patch-DB Patterns
-Patch-DB provides diff-based state synchronization. Changes to `db/model/public.rs` automatically sync to the frontend.
-
-**Key patterns:**
-- `db.peek().await` — Get a read-only snapshot of the database state
-- `db.mutate(|db| { ... }).await` — Apply mutations atomically, returns `MutateResult`
-- `#[derive(HasModel)]` — Derive macro for types stored in the database, generates typed accessors
-
-**Generated accessor types** (from `HasModel` derive):
-- `as_field()` — Immutable reference: `&Model<T>`
-- `as_field_mut()` — Mutable reference: `&mut Model<T>`
-- `into_field()` — Owned value: `Model<T>`
-
-**`Model<T>` APIs** (from `db/prelude.rs`):
-- `.de()` — Deserialize to `T`
-- `.ser(&value)` — Serialize from `T`
-- `.mutate(|v| ...)` — Deserialize, mutate, reserialize
-- For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`
+```bash
+cargo check -p start-os       # Type check
+make test-core                # Run tests
+make ts-bindings              # Regenerate TS types after changing #[ts(export)] structs
+cd sdk && make baseDist dist  # Rebuild SDK after ts-bindings
+```

-## i18n
-See `i18n-patterns.md` for internationalization key conventions and the `t!()` macro.
-
-## Rust Utilities & Patterns
-See `core-rust-patterns.md` for common utilities (Invoke trait, Guard pattern, mount guards, Apply trait, etc.).
+## Operating Rules
+- Always run `cargo check -p start-os` after modifying Rust code
+- When adding RPC endpoints, follow the patterns in [rpc-toolkit.md](rpc-toolkit.md)
+- When modifying `#[ts(export)]` types, regenerate bindings and rebuild the SDK (see [ARCHITECTURE.md](../ARCHITECTURE.md#build-pipeline))
+- When adding i18n keys, add all 5 locales in `core/locales/i18n.yaml` (see [i18n-patterns.md](i18n-patterns.md))
+- When using DB watches, follow the `TypedDbWatch<T>` patterns in [patchdb.md](patchdb.md)
+- **Always use `.invoke(ErrorKind::...)` instead of `.status()` when running CLI commands** via `tokio::process::Command`. The `Invoke` trait (from `crate::util::Invoke`) captures stdout/stderr and checks exit codes properly. Using `.status()` leaks stderr directly to system logs, creating noise. For check-then-act patterns (e.g. `iptables -C`), use `.invoke(...).await.is_ok()` / `.is_err()` instead of `.status().await.map_or(false, |s| s.success())`.
+- Always use file utils in `util::io` instead of `tokio::fs` when available

core/CONTRIBUTING.md (new file, 49 lines)

@@ -0,0 +1,49 @@
# Contributing to Core
For general environment setup, cloning, and build system, see the root [CONTRIBUTING.md](../CONTRIBUTING.md).
## Prerequisites
- [Rust](https://rustup.rs) (nightly for formatting)
- [rust-analyzer](https://rust-analyzer.github.io/) recommended
- [Docker](https://docs.docker.com/get-docker/) (for cross-compilation via `rust-zig-builder` container)
## Common Commands
```bash
cargo check -p start-os # Type check
cargo test --features=test # Run tests (or: make test-core)
make format # Format with nightly rustfmt
cd core && cargo test <test_name> --features=test # Run a specific test
```
## Adding a New RPC Endpoint
1. Define a params struct with `#[derive(Deserialize, Serialize)]`
2. Choose a handler type (`from_fn_async` for most cases)
3. Write the handler function: `async fn my_handler(ctx: RpcContext, params: MyParams) -> Result<MyResponse, Error>`
4. Register it in the appropriate `ParentHandler` tree
5. If params/response should be available in TypeScript, add `#[derive(TS)]` and `#[ts(export)]`
See [rpc-toolkit.md](rpc-toolkit.md) for full handler patterns and all four handler types.
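Steps 1–4 sketched together. The endpoint name and the commented registration call are illustrative; see [rpc-toolkit.md](rpc-toolkit.md) for the exact extension methods:

```rust
use clap::Parser;
use rpc_toolkit::{HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use ts_rs::TS;

#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
#[ts(export)]
pub struct EchoParams {
    pub message: String,
}

// Handler: (Context, Params) -> Result<Response, Error>
pub async fn echo(_ctx: RpcContext, EchoParams { message }: EchoParams) -> Result<String, Error> {
    Ok(message)
}

// Registration in the parent handler tree (illustrative):
// ParentHandler::new().subcommand("echo", from_fn_async(echo))
```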
## Adding TS-Exported Types
When a Rust type needs to be available in TypeScript (for the web frontend or SDK):
1. Add `ts_rs::TS` to the derive list and `#[ts(export)]` to the struct/enum
2. Use `#[serde(rename_all = "camelCase")]` for JS-friendly field names
3. For types that don't implement TS (like `DateTime<Utc>`, `exver::Version`), use `#[ts(type = "string")]` overrides
4. For `u64` fields that should be JS `number` (not `bigint`), use `#[ts(type = "number")]`
5. Run `make ts-bindings` to regenerate — files appear in `core/bindings/` then sync to `sdk/base/lib/osBindings/`
6. Rebuild the SDK: `cd sdk && make baseDist dist`
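The steps above, sketched on a hypothetical type:

```rust
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use ts_rs::TS;

#[derive(Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct UpdateStatus {
    /// DateTime<Utc> does not implement TS — override the emitted type
    #[ts(type = "string")]
    pub checked_at: DateTime<Utc>,
    /// u64 would map to TS `bigint`; force plain `number`
    #[ts(type = "number")]
    pub download_size: u64,
}
```

After `make ts-bindings`, a binding like this would emit `checkedAt: string` and `downloadSize: number` on the TypeScript side.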
## Adding i18n Keys
1. Add the key to `core/locales/i18n.yaml` with all 5 language translations
2. Use the `t!("your.key.name")` macro in Rust code
3. Follow existing namespace conventions — match the module path where the key is used
4. Use kebab-case for multi-word segments
5. Translations are validated at compile time
See [i18n-patterns.md](i18n-patterns.md) for full conventions.
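A sketch of a new key (the key name is illustrative; `%{...}` placeholders interpolate at the `t!()` call site):

```yaml
# core/locales/i18n.yaml — all 5 locales required
net.check.port-closed:
  en_US: "Port %{port} is closed"
  de_DE: "Port %{port} ist geschlossen"
  es_ES: "El puerto %{port} está cerrado"
  fr_FR: "Le port %{port} est fermé"
  pl_PL: "Port %{port} jest zamknięty"
```

In Rust, reference it as `t!("net.check.port-closed")`; a missing locale fails the compile-time validation.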

@@ -22,9 +22,7 @@ several different names for different behavior:
- `start-sdk`: This is a CLI tool that aids in building and packaging services
  you wish to deploy to StartOS
## Documentation
- [ARCHITECTURE.md](ARCHITECTURE.md) — Backend architecture, modules, and patterns
- [CONTRIBUTING.md](CONTRIBUTING.md) — How to contribute to core

@@ -197,6 +197,13 @@ setup.transferring-data:
fr_FR: "Transfert de données"
pl_PL: "Przesyłanie danych"
setup.password-required:
en_US: "Password is required for fresh setup"
de_DE: "Passwort ist für die Ersteinrichtung erforderlich"
es_ES: "Se requiere contraseña para la configuración inicial"
fr_FR: "Le mot de passe est requis pour la première configuration"
pl_PL: "Hasło jest wymagane do nowej konfiguracji"
# system.rs
system.governor-not-available:
en_US: "Governor %{governor} not available"
@@ -994,6 +1001,27 @@ disk.mount.binding:
fr_FR: "Liaison de %{src} à %{dst}"
pl_PL: "Wiązanie %{src} do %{dst}"
hostname.empty:
en_US: "Hostname cannot be empty"
de_DE: "Der Hostname darf nicht leer sein"
es_ES: "El nombre de host no puede estar vacío"
fr_FR: "Le nom d'hôte ne peut pas être vide"
pl_PL: "Nazwa hosta nie może być pusta"
hostname.invalid-character:
en_US: "Invalid character in hostname: %{char}"
de_DE: "Ungültiges Zeichen im Hostnamen: %{char}"
es_ES: "Carácter no válido en el nombre de host: %{char}"
fr_FR: "Caractère invalide dans le nom d'hôte : %{char}"
pl_PL: "Nieprawidłowy znak w nazwie hosta: %{char}"
hostname.must-provide-name-or-hostname:
en_US: "Must provide at least one of: name, hostname"
de_DE: "Es muss mindestens eines angegeben werden: name, hostname"
es_ES: "Se debe proporcionar al menos uno de: name, hostname"
fr_FR: "Vous devez fournir au moins l'un des éléments suivants : name, hostname"
pl_PL: "Należy podać co najmniej jedno z: name, hostname"
# init.rs
init.running-preinit:
en_US: "Running preinit.sh"
@@ -1243,6 +1271,21 @@ backup.target.cifs.target-not-found-id:
fr_FR: "ID de cible de sauvegarde %{id} non trouvé"
pl_PL: "Nie znaleziono ID celu kopii zapasowej %{id}"
# service/effects/net/plugin.rs
net.plugin.manifest-missing-plugin:
en_US: "manifest does not declare the \"%{plugin}\" plugin"
de_DE: "Manifest deklariert das Plugin \"%{plugin}\" nicht"
es_ES: "el manifiesto no declara el plugin \"%{plugin}\""
fr_FR: "le manifeste ne déclare pas le plugin \"%{plugin}\""
pl_PL: "manifest nie deklaruje wtyczki \"%{plugin}\""
net.plugin.binding-not-found:
en_US: "binding not found: %{binding}"
de_DE: "Bindung nicht gefunden: %{binding}"
es_ES: "enlace no encontrado: %{binding}"
fr_FR: "liaison introuvable : %{binding}"
pl_PL: "powiązanie nie znalezione: %{binding}"
# net/ssl.rs
net.ssl.unreachable:
en_US: "unreachable"
@@ -1790,6 +1833,28 @@ registry.package.remove-mirror.unauthorized:
fr_FR: "Non autorisé"
pl_PL: "Brak autoryzacji"
# registry/package/index.rs
registry.package.index.metadata-mismatch:
en_US: "package metadata mismatch: remove the existing version first, then re-add"
de_DE: "Paketmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
es_ES: "discrepancia de metadatos del paquete: elimine la versión existente primero, luego vuelva a agregarla"
fr_FR: "discordance des métadonnées du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
pl_PL: "niezgodność metadanych pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
registry.package.index.icon-mismatch:
en_US: "package icon mismatch: remove the existing version first, then re-add"
de_DE: "Paketsymbol stimmt nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
es_ES: "discrepancia del icono del paquete: elimine la versión existente primero, luego vuelva a agregarla"
fr_FR: "discordance de l'icône du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
pl_PL: "niezgodność ikony pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
registry.package.index.dependency-metadata-mismatch:
en_US: "dependency metadata mismatch: remove the existing version first, then re-add"
de_DE: "Abhängigkeitsmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
es_ES: "discrepancia de metadatos de dependencia: elimine la versión existente primero, luego vuelva a agregarla"
fr_FR: "discordance des métadonnées de dépendance : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
pl_PL: "niezgodność metadanych zależności: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
# registry/package/get.rs
registry.package.get.version-not-found:
en_US: "Could not find a version of %{id} that satisfies %{version}"
@@ -3087,7 +3152,7 @@ help.arg.smtp-from:
fr_FR: "Adresse de l'expéditeur"
pl_PL: "Adres nadawcy e-mail"
help.arg.smtp-username:
en_US: "SMTP authentication username"
de_DE: "SMTP-Authentifizierungsbenutzername"
es_ES: "Nombre de usuario de autenticación SMTP"
@@ -3108,13 +3173,20 @@ help.arg.smtp-port:
fr_FR: "Port du serveur SMTP"
pl_PL: "Port serwera SMTP"
help.arg.smtp-host:
en_US: "SMTP server hostname"
de_DE: "SMTP-Server-Hostname"
es_ES: "Nombre de host del servidor SMTP"
fr_FR: "Nom d'hôte du serveur SMTP"
pl_PL: "Nazwa hosta serwera SMTP"
help.arg.smtp-security:
en_US: "Connection security mode (starttls or tls)"
de_DE: "Verbindungssicherheitsmodus (starttls oder tls)"
es_ES: "Modo de seguridad de conexión (starttls o tls)"
fr_FR: "Mode de sécurité de connexion (starttls ou tls)"
pl_PL: "Tryb zabezpieczeń połączenia (starttls lub tls)"
help.arg.smtp-to:
en_US: "Email recipient address"
de_DE: "E-Mail-Empfängeradresse"
@@ -3612,6 +3684,13 @@ help.arg.s9pk-file-path:
fr_FR: "Chemin vers le fichier de paquet s9pk"
pl_PL: "Ścieżka do pliku pakietu s9pk"
help.arg.s9pk-file-paths:
en_US: "Paths to s9pk package files"
de_DE: "Pfade zu s9pk-Paketdateien"
es_ES: "Rutas a los archivos de paquete s9pk"
fr_FR: "Chemins vers les fichiers de paquet s9pk"
pl_PL: "Ścieżki do plików pakietów s9pk"
help.arg.session-ids:
en_US: "Session identifiers"
de_DE: "Sitzungskennungen"
@@ -3935,6 +4014,13 @@ about.allow-gateway-infer-inbound-access-from-wan:
fr_FR: "Permettre à cette passerelle de déduire si elle a un accès entrant depuis le WAN en fonction de son adresse IPv4"
pl_PL: "Pozwól tej bramce wywnioskować, czy ma dostęp przychodzący z WAN na podstawie adresu IPv4"
about.apply-available-update:
en_US: "Apply available update"
de_DE: "Verfügbares Update anwenden"
es_ES: "Aplicar actualización disponible"
fr_FR: "Appliquer la mise à jour disponible"
pl_PL: "Zastosuj dostępną aktualizację"
about.calculate-blake3-hash-for-file:
en_US: "Calculate blake3 hash for a file"
de_DE: "Blake3-Hash für eine Datei berechnen"
@@ -3949,6 +4035,20 @@ about.cancel-install-package:
fr_FR: "Annuler l'installation d'un paquet"
pl_PL: "Anuluj instalację pakietu"
about.check-dns-configuration:
en_US: "Check DNS configuration for a gateway"
de_DE: "DNS-Konfiguration für ein Gateway prüfen"
es_ES: "Verificar la configuración DNS de un gateway"
fr_FR: "Vérifier la configuration DNS d'une passerelle"
pl_PL: "Sprawdź konfigurację DNS bramy"
about.check-for-updates:
en_US: "Check for available updates"
de_DE: "Nach verfügbaren Updates suchen"
es_ES: "Buscar actualizaciones disponibles"
fr_FR: "Vérifier les mises à jour disponibles"
pl_PL: "Sprawdź dostępne aktualizacje"
about.check-update-startos:
en_US: "Check a given registry for StartOS updates and update if available"
de_DE: "Ein bestimmtes Registry auf StartOS-Updates prüfen und bei Verfügbarkeit aktualisieren"
@@ -4887,6 +4987,13 @@ about.publish-s9pk:
fr_FR: "Publier s9pk dans le bucket S3 et indexer dans le registre"
pl_PL: "Opublikuj s9pk do bucketu S3 i zindeksuj w rejestrze"
about.select-s9pk-for-device:
en_US: "Select the best compatible s9pk for a target device"
de_DE: "Das beste kompatible s9pk für ein Zielgerät auswählen"
es_ES: "Seleccionar el s9pk más compatible para un dispositivo destino"
fr_FR: "Sélectionner le meilleur s9pk compatible pour un appareil cible"
pl_PL: "Wybierz najlepiej kompatybilny s9pk dla urządzenia docelowego"
about.rebuild-service-container:
en_US: "Rebuild service container"
de_DE: "Dienst-Container neu erstellen"
@@ -5139,6 +5246,13 @@ about.set-country:
fr_FR: "Définir le pays"
pl_PL: "Ustaw kraj"
about.set-hostname:
en_US: "Set the server hostname"
de_DE: "Den Server-Hostnamen festlegen"
es_ES: "Establecer el nombre de host del servidor"
fr_FR: "Définir le nom d'hôte du serveur"
pl_PL: "Ustaw nazwę hosta serwera"
about.set-gateway-enabled-for-binding:
en_US: "Set gateway enabled for binding"
de_DE: "Gateway für Bindung aktivieren"

core/patchdb.md Normal file
@@ -0,0 +1,105 @@
# Patch-DB Patterns
## Model<T> and HasModel
Types stored in the database derive `HasModel`, which generates typed accessor methods on `Model<T>`:
```rust
#[derive(Debug, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
pub struct ServerInfo {
pub version: Version,
pub network: NetworkInfo,
// ...
}
```
**Generated accessors** (one per field):
- `as_version()``&Model<Version>`
- `as_version_mut()``&mut Model<Version>`
- `into_version()``Model<Version>`
**`Model<T>` APIs:**
- `.de()` — Deserialize to `T`
- `.ser(&value)` — Serialize from `T`
- `.mutate(|v| ...)` — Deserialize, mutate, reserialize
- For maps: `.keys()`, `.as_idx(&key)`, `.insert()`, `.remove()`, `.contains_key()`
## Database Access
```rust
// Read-only snapshot
let snap = db.peek().await;
let version = snap.as_public().as_server_info().as_version().de()?;
// Atomic mutation
db.mutate(|db| {
db.as_public_mut().as_server_info_mut().as_version_mut().ser(&new_version)?;
Ok(())
}).await;
```
## TypedDbWatch<T>
Watch a JSON pointer path for changes and deserialize as a typed value. Requires `T: HasModel`.
### Construction
```rust
use patch_db::json_ptr::JsonPointer;
let ptr: JsonPointer = "/public/serverInfo".parse().unwrap();
let mut watch = db.watch(ptr).await.typed::<ServerInfo>();
```
### API
- `watch.peek()?.de()?` — Get current value as `T`
- `watch.changed().await?` — Wait until the watched path changes
- `watch.peek()?.as_field().de()?` — Access nested fields via `HasModel` accessors
### Usage Patterns
**Wait for a condition, then proceed:**
```rust
// Wait for DB version to match current OS version
let current = Current::default().semver();
let mut watch = db
.watch("/public/serverInfo".parse().unwrap())
.await
.typed::<ServerInfo>();
loop {
let server_info = watch.peek()?.de()?;
if server_info.version == current {
break;
}
watch.changed().await?;
}
```
**React to changes in a loop:**
```rust
// From net_controller.rs — react to host changes
let mut watch = db
.watch("/public/serverInfo/network/host".parse().unwrap())
.await
.typed::<Host>();
loop {
if let Err(e) = watch.changed().await {
tracing::error!("DB watch disconnected: {e}");
break;
}
let host = watch.peek()?.de()?;
// ... process host ...
}
```
### Real Examples
- `net_controller.rs:469` — Watch `Hosts` for package network changes
- `net_controller.rs:493` — Watch `Host` for main UI network changes
- `service_actor.rs:37` — Watch `StatusInfo` for service state transitions
- `gateway.rs:1212` — Wait for DB migrations to complete before syncing

@@ -21,6 +21,14 @@ pub async fn my_handler(ctx: RpcContext, params: MyParams) -> Result<MyResponse,
from_fn_async(my_handler)
```
If a handler takes no params, simply omit the params argument entirely (no need for `_: Empty`):
```rust
pub async fn no_params_handler(ctx: RpcContext) -> Result<MyResponse, Error> {
// ...
}
```
### `from_fn_async_local` - Non-thread-safe async handlers
For async functions that are not `Send` (cannot be safely moved between threads). Use when working with non-thread-safe types.
@@ -181,9 +189,9 @@ pub struct MyParams {
### Adding a New RPC Endpoint
1. Define params struct with `Deserialize, Serialize, Parser, TS` (skip if no params needed)
2. Choose handler type based on sync/async and thread-safety
3. Write handler function taking `(Context, Params) -> Result<Response, Error>` (omit Params if none needed)
4. Add to parent handler with appropriate extensions (display modifiers before `with_about`)
5. TypeScript types auto-generated via `make ts-bindings`

@@ -6,7 +6,7 @@ use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use crate::db::model::DatabaseModel;
use crate::hostname::{ServerHostnameInfo, generate_hostname, generate_id};
use crate::net::ssl::{gen_nistp256, make_root_cert};
use crate::prelude::*;
use crate::util::serde::Pem;
@@ -23,7 +23,7 @@ fn hash_password(password: &str) -> Result<String, Error> {
#[derive(Clone)]
pub struct AccountInfo {
    pub server_id: String,
    pub hostname: ServerHostnameInfo,
    pub password: String,
    pub root_ca_key: PKey<Private>,
    pub root_ca_cert: X509,
@@ -31,11 +31,19 @@ pub struct AccountInfo {
    pub developer_key: ed25519_dalek::SigningKey,
}
impl AccountInfo {
    pub fn new(
        password: &str,
        start_time: SystemTime,
        hostname: Option<ServerHostnameInfo>,
    ) -> Result<Self, Error> {
        let server_id = generate_id();
        let hostname = if let Some(h) = hostname {
            h
        } else {
            ServerHostnameInfo::from_hostname(generate_hostname())
        };
        let root_ca_key = gen_nistp256()?;
        let root_ca_cert = make_root_cert(&root_ca_key, &hostname.hostname, start_time)?;
        let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
            &mut ssh_key::rand_core::OsRng::default(),
        ));
@@ -54,7 +62,7 @@ impl AccountInfo {
    pub fn load(db: &DatabaseModel) -> Result<Self, Error> {
        let server_id = db.as_public().as_server_info().as_id().de()?;
        let hostname = ServerHostnameInfo::load(db.as_public().as_server_info())?;
        let password = db.as_private().as_password().de()?;
        let key_store = db.as_private().as_key_store();
        let cert_store = key_store.as_local_certs();
@@ -77,7 +85,7 @@ impl AccountInfo {
    pub fn save(&self, db: &mut DatabaseModel) -> Result<(), Error> {
        let server_info = db.as_public_mut().as_server_info_mut();
        server_info.as_id_mut().ser(&self.server_id)?;
        self.hostname.save(server_info)?;
        server_info
            .as_pubkey_mut()
            .ser(&self.ssh_key.public_key().to_openssh()?)?;
@@ -115,8 +123,8 @@ impl AccountInfo {
    pub fn hostnames(&self) -> impl IntoIterator<Item = InternedString> + Send + '_ {
        [
            (*self.hostname.hostname).clone(),
            self.hostname.hostname.local_domain_name(),
        ]
    }
}

@@ -67,6 +67,10 @@ pub struct GetActionInputParams {
    pub package_id: PackageId,
    #[arg(help = "help.arg.action-id")]
    pub action_id: ActionId,
    #[ts(type = "Record<string, unknown> | null")]
    #[serde(default)]
    #[arg(skip)]
    pub prefill: Option<Value>,
}
#[instrument(skip_all)]
@@ -75,6 +79,7 @@ pub async fn get_action_input(
    GetActionInputParams {
        package_id,
        action_id,
        prefill,
    }: GetActionInputParams,
) -> Result<Option<ActionInput>, Error> {
    ctx.services
@@ -82,7 +87,7 @@ pub async fn get_action_input(
        .await
        .as_ref()
        .or_not_found(lazy_format!("Manager for {}", package_id))?
        .get_action_input(Guid::new(), action_id, prefill.unwrap_or(Value::Null))
        .await
}
@@ -271,6 +276,7 @@ pub fn display_action_result<T: Serialize>(
}
#[derive(Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct RunActionParams {
    pub package_id: PackageId,
@@ -362,6 +368,7 @@ pub async fn run_action(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ClearTaskParams {

@@ -418,6 +418,7 @@ impl AsLogoutSessionId for KillSessionId {
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct KillParams {
@@ -435,6 +436,7 @@ pub async fn kill<C: SessionAuthContext>(
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ResetPasswordParams {

@@ -30,6 +30,7 @@ use crate::util::serde::IoFormat;
use crate::version::VersionT;
#[derive(Deserialize, Serialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct BackupParams {
@@ -270,9 +271,9 @@ async fn perform_backup(
    package_backups.insert(
        id.clone(),
        PackageBackupInfo {
            os_version: manifest.as_metadata().as_os_version().de()?,
            version: manifest.as_version().de()?,
            title: manifest.as_metadata().as_title().de()?,
            timestamp: Utc::now(),
        },
    );
@@ -337,7 +338,7 @@ async fn perform_backup(
    let timestamp = Utc::now();
    backup_guard.unencrypted_metadata.version = crate::version::Current::default().semver().into();
    backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.hostname.clone());
    backup_guard.unencrypted_metadata.timestamp = timestamp.clone();
    backup_guard.metadata.version = crate::version::Current::default().semver().into();
    backup_guard.metadata.timestamp = Some(timestamp);

@@ -2,6 +2,7 @@ use std::collections::BTreeMap;
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::PackageId;
use crate::context::CliContext;
@@ -13,19 +14,22 @@ pub mod os;
pub mod restore;
pub mod target;
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
pub struct BackupReport {
    server: ServerBackupReport,
    packages: BTreeMap<PackageId, PackageBackupReport>,
}
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
pub struct ServerBackupReport {
    attempted: bool,
    error: Option<String>,
}
#[derive(Debug, Deserialize, Serialize, TS)]
#[ts(export)]
pub struct PackageBackupReport {
    pub error: Option<String>,
}

@@ -6,7 +6,7 @@ use serde::{Deserialize, Serialize};
use ssh_key::private::Ed25519Keypair;
use crate::account::AccountInfo;
use crate::hostname::{ServerHostname, ServerHostnameInfo, generate_hostname, generate_id};
use crate::prelude::*;
use crate::util::serde::{Base32, Base64, Pem};
@@ -27,10 +27,12 @@ impl<'de> Deserialize<'de> for OsBackup {
                .map_err(serde::de::Error::custom)?,
            1 => patch_db::value::from_value::<OsBackupV1>(tagged.rest)
                .map_err(serde::de::Error::custom)?
                .project()
                .map_err(serde::de::Error::custom)?,
            2 => patch_db::value::from_value::<OsBackupV2>(tagged.rest)
                .map_err(serde::de::Error::custom)?
                .project()
                .map_err(serde::de::Error::custom)?,
            v => {
                return Err(serde::de::Error::custom(&format!(
                    "Unknown backup version {v}"
@@ -75,7 +77,7 @@ impl OsBackupV0 {
        Ok(OsBackup {
            account: AccountInfo {
                server_id: generate_id(),
                hostname: ServerHostnameInfo::from_hostname(generate_hostname()),
                password: Default::default(),
                root_ca_key: self.root_ca_key.0,
                root_ca_cert: self.root_ca_cert.0,
@@ -104,11 +106,11 @@ struct OsBackupV1 {
    ui: Value, // JSON Value
}
impl OsBackupV1 {
    fn project(self) -> Result<OsBackup, Error> {
        Ok(OsBackup {
            account: AccountInfo {
                server_id: self.server_id,
                hostname: ServerHostnameInfo::from_hostname(ServerHostname::new(self.hostname)?),
                password: Default::default(),
                root_ca_key: self.root_ca_key.0,
                root_ca_cert: self.root_ca_cert.0,
@@ -116,7 +118,7 @@ impl OsBackupV1 {
                developer_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
            },
            ui: self.ui,
        })
    }
}
@@ -134,11 +136,11 @@ struct OsBackupV2 {
    ui: Value, // JSON Value
}
impl OsBackupV2 {
    fn project(self) -> Result<OsBackup, Error> {
        Ok(OsBackup {
            account: AccountInfo {
                server_id: self.server_id,
                hostname: ServerHostnameInfo::from_hostname(ServerHostname::new(self.hostname)?),
                password: Default::default(),
                root_ca_key: self.root_ca_key.0,
                root_ca_cert: self.root_ca_cert.0,
@@ -146,12 +148,12 @@ impl OsBackupV2 {
                developer_key: self.compat_s9pk_key.0,
            },
            ui: self.ui,
        })
    }
    fn unproject(backup: &OsBackup) -> Self {
        Self {
            server_id: backup.account.server_id.clone(),
            hostname: (*backup.account.hostname.hostname).clone(),
            root_ca_key: Pem(backup.account.root_ca_key.clone()),
            root_ca_cert: Pem(backup.account.root_ca_cert.clone()),
            ssh_key: Pem(backup.account.ssh_key.clone()),

@@ -17,6 +17,7 @@ use crate::db::model::Database;
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::hostname::ServerHostnameInfo;
use crate::init::init;
use crate::prelude::*;
use crate::progress::ProgressUnits;
@@ -30,6 +31,7 @@ use crate::{PLATFORM, PackageId};
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
#[ts(export)]
pub struct RestorePackageParams {
    #[arg(help = "help.arg.package-ids")]
    pub ids: Vec<PackageId>,
@@ -84,11 +86,12 @@ pub async fn restore_packages_rpc(
pub async fn recover_full_server( pub async fn recover_full_server(
ctx: &SetupContext, ctx: &SetupContext,
disk_guid: InternedString, disk_guid: InternedString,
password: String, password: Option<String>,
recovery_source: TmpMountGuard, recovery_source: TmpMountGuard,
server_id: &str, server_id: &str,
recovery_password: &str, recovery_password: &str,
kiosk: Option<bool>, kiosk: Option<bool>,
hostname: Option<ServerHostnameInfo>,
SetupExecuteProgress { SetupExecuteProgress {
init_phases, init_phases,
restore_phase, restore_phase,
@@ -107,12 +110,18 @@ pub async fn recover_full_server(
.with_ctx(|_| (ErrorKind::Filesystem, os_backup_path.display().to_string()))?, .with_ctx(|_| (ErrorKind::Filesystem, os_backup_path.display().to_string()))?,
)?; )?;
os_backup.account.password = argon2::hash_encoded( if let Some(password) = password {
password.as_bytes(), os_backup.account.password = argon2::hash_encoded(
&rand::random::<[u8; 16]>()[..], password.as_bytes(),
&argon2::Config::rfc9106_low_mem(), &rand::random::<[u8; 16]>()[..],
) &argon2::Config::rfc9106_low_mem(),
.with_kind(ErrorKind::PasswordHashGeneration)?; )
.with_kind(ErrorKind::PasswordHashGeneration)?;
}
if let Some(h) = hostname {
os_backup.account.hostname = h;
}
let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi"); let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
sync_kiosk(kiosk).await?; sync_kiosk(kiosk).await?;
@@ -182,7 +191,7 @@ pub async fn recover_full_server(
Ok(( Ok((
SetupResult { SetupResult {
hostname: os_backup.account.hostname, hostname: os_backup.account.hostname.hostname,
root_ca: Pem(os_backup.account.root_ca_cert), root_ca: Pem(os_backup.account.root_ca_cert),
needs_restart: ctx.install_rootfs.peek(|a| a.is_some()), needs_restart: ctx.install_rootfs.peek(|a| a.is_some()),
}, },
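
The `recover_full_server` change above makes the password optional: the hash stored in the backup is kept unless the caller supplies a replacement. A minimal std-only sketch of that control flow (the `Account` struct and `hash_password` stub are hypothetical stand-ins for the real account model and `argon2::hash_encoded`):

```rust
struct Account {
    password: String, // stored argon2 hash in the real model
}

// Hypothetical stand-in for argon2::hash_encoded with a random salt.
fn hash_password(password: &str) -> String {
    format!("hashed({password})")
}

// Mirror of the new restore logic: only overwrite the stored hash when the
// caller actually supplied a new password; None keeps the backup's hash.
fn apply_restore_password(account: &mut Account, password: Option<String>) {
    if let Some(password) = password {
        account.password = hash_password(&password);
    }
}
```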

View File

@@ -36,7 +36,8 @@ impl Map for CifsTargets {
     }
 }
-#[derive(Debug, Deserialize, Serialize)]
+#[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct CifsBackupTarget {
     hostname: String,
@@ -72,9 +73,10 @@ pub fn cifs<C: Context>() -> ParentHandler<C> {
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct AddParams {
+pub struct CifsAddParams {
     #[arg(help = "help.arg.cifs-hostname")]
     pub hostname: String,
     #[arg(help = "help.arg.cifs-path")]
@@ -87,12 +89,12 @@ pub struct AddParams {
 pub async fn add(
     ctx: RpcContext,
-    AddParams {
+    CifsAddParams {
         hostname,
         path,
         username,
         password,
-    }: AddParams,
+    }: CifsAddParams,
 ) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
     let cifs = Cifs {
         hostname,
@@ -131,9 +133,10 @@ pub async fn add(
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct UpdateParams {
+pub struct CifsUpdateParams {
     #[arg(help = "help.arg.backup-target-id")]
     pub id: BackupTargetId,
     #[arg(help = "help.arg.cifs-hostname")]
@@ -148,13 +151,13 @@ pub struct UpdateParams {
 pub async fn update(
     ctx: RpcContext,
-    UpdateParams {
+    CifsUpdateParams {
         id,
         hostname,
         path,
         username,
         password,
-    }: UpdateParams,
+    }: CifsUpdateParams,
 ) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
     let id = if let BackupTargetId::Cifs { id } = id {
         id
@@ -207,14 +210,18 @@ pub async fn update(
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct RemoveParams {
+pub struct CifsRemoveParams {
     #[arg(help = "help.arg.backup-target-id")]
     pub id: BackupTargetId,
 }
-pub async fn remove(ctx: RpcContext, RemoveParams { id }: RemoveParams) -> Result<(), Error> {
+pub async fn remove(
+    ctx: RpcContext,
+    CifsRemoveParams { id }: CifsRemoveParams,
+) -> Result<(), Error> {
     let id = if let BackupTargetId::Cifs { id } = id {
         id
     } else {

View File

@@ -34,7 +34,8 @@ use crate::util::{FromStrParser, VersionString};
 pub mod cifs;
-#[derive(Debug, Deserialize, Serialize)]
+#[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(tag = "type")]
 #[serde(rename_all = "camelCase")]
 pub enum BackupTarget {
@@ -49,7 +50,7 @@ pub enum BackupTarget {
 }
 #[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, TS)]
-#[ts(type = "string")]
+#[ts(export, type = "string")]
 pub enum BackupTargetId {
     Disk { logicalname: PathBuf },
     Cifs { id: u32 },
@@ -111,6 +112,7 @@ impl Serialize for BackupTargetId {
 }
 #[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(tag = "type")]
 #[serde(rename_all = "camelCase")]
 pub enum BackupTargetFS {
@@ -210,20 +212,26 @@ pub async fn list(ctx: RpcContext) -> Result<BTreeMap<BackupTargetId, BackupTarg
     .collect())
 }
-#[derive(Clone, Debug, Default, Deserialize, Serialize)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct BackupInfo {
+    #[ts(type = "string")]
     pub version: Version,
+    #[ts(type = "string | null")]
     pub timestamp: Option<DateTime<Utc>>,
     pub package_backups: BTreeMap<PackageId, PackageBackupInfo>,
 }
-#[derive(Clone, Debug, Deserialize, Serialize)]
+#[derive(Clone, Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct PackageBackupInfo {
     pub title: InternedString,
     pub version: VersionString,
+    #[ts(type = "string")]
     pub os_version: Version,
+    #[ts(type = "string")]
     pub timestamp: DateTime<Utc>,
 }
@@ -265,6 +273,7 @@ fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) -> Re
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct InfoParams {
@@ -387,6 +396,7 @@ pub async fn mount(
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct UmountParams {

View File

@@ -70,7 +70,8 @@ async fn inner_main(
     };
     let (rpc_ctx, shutdown) = async {
-        crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.clone())).await?;
+        crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.hostname.clone()))
+            .await?;
         let mut shutdown_recv = rpc_ctx.shutdown.subscribe();
@@ -147,10 +148,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
     .build()
     .expect(&t!("bins.startd.failed-to-initialize-runtime"));
     let res = rt.block_on(async {
-        let mut server = WebServer::new(
-            Acceptor::new(WildcardListener::new(80)?),
-            refresher(),
-        );
+        let mut server = WebServer::new(Acceptor::new(WildcardListener::new(80)?), refresher());
         match inner_main(&mut server, &config).await {
             Ok(a) => {
                 server.shutdown().await;

View File

@@ -7,13 +7,13 @@ use clap::Parser;
 use futures::FutureExt;
 use rpc_toolkit::CliApp;
 use rust_i18n::t;
+use tokio::net::TcpListener;
 use tokio::signal::unix::signal;
 use tracing::instrument;
 use visit_rs::Visit;
 use crate::context::CliContext;
 use crate::context::config::ClientConfig;
-use tokio::net::TcpListener;
 use crate::net::tls::TlsListener;
 use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
 use crate::prelude::*;

View File

@@ -10,7 +10,6 @@ use std::time::Duration;
 use chrono::{TimeDelta, Utc};
 use imbl::OrdMap;
 use imbl_value::InternedString;
-use itertools::Itertools;
 use josekit::jwk::Jwk;
 use reqwest::{Client, Proxy};
 use rpc_toolkit::yajrc::RpcError;
@@ -25,7 +24,6 @@ use crate::account::AccountInfo;
 use crate::auth::Sessions;
 use crate::context::config::ServerConfig;
 use crate::db::model::Database;
-use crate::db::model::package::TaskSeverity;
 use crate::disk::OsPartitionInfo;
 use crate::disk::mount::filesystem::bind::Bind;
 use crate::disk::mount::filesystem::block_dev::BlockDev;
@@ -44,7 +42,6 @@ use crate::prelude::*;
 use crate::progress::{FullProgressTracker, PhaseProgressTrackerHandle};
 use crate::rpc_continuations::{Guid, OpenAuthedContinuations, RpcContinuations};
 use crate::service::ServiceMap;
-use crate::service::action::update_tasks;
 use crate::service::effects::callbacks::ServiceCallbacks;
 use crate::service::effects::subcontainer::NVIDIA_OVERLAY_PATH;
 use crate::shutdown::Shutdown;
@@ -53,7 +50,7 @@ use crate::util::future::NonDetachingJoinHandle;
 use crate::util::io::{TmpDir, delete_file};
 use crate::util::lshw::LshwDevice;
 use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
-use crate::{ActionId, DATA_DIR, PLATFORM, PackageId};
+use crate::{DATA_DIR, PLATFORM, PackageId};
 pub struct RpcContextSeed {
     is_closed: AtomicBool,
@@ -114,7 +111,6 @@ pub struct CleanupInitPhases {
     cleanup_sessions: PhaseProgressTrackerHandle,
     init_services: PhaseProgressTrackerHandle,
     prune_s9pks: PhaseProgressTrackerHandle,
-    check_tasks: PhaseProgressTrackerHandle,
 }
 impl CleanupInitPhases {
     pub fn new(handle: &FullProgressTracker) -> Self {
@@ -122,7 +118,6 @@ impl CleanupInitPhases {
         cleanup_sessions: handle.add_phase("Cleaning up sessions".into(), Some(1)),
         init_services: handle.add_phase("Initializing services".into(), Some(10)),
         prune_s9pks: handle.add_phase("Pruning S9PKs".into(), Some(1)),
-        check_tasks: handle.add_phase("Checking action requests".into(), Some(1)),
     }
 }
 }
@@ -165,8 +160,7 @@ impl RpcContext {
     {
         (net_ctrl, os_net_service)
     } else {
-        let net_ctrl =
-            Arc::new(NetController::init(db.clone(), &account.hostname, socks_proxy).await?);
+        let net_ctrl = Arc::new(NetController::init(db.clone(), socks_proxy).await?);
         webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
         let os_net_service = net_ctrl.os_bindings().await?;
         (net_ctrl, os_net_service)
@@ -174,7 +168,7 @@ impl RpcContext {
     init_net_ctrl.complete();
     tracing::info!("{}", t!("context.rpc.initialized-net-controller"));
-    if PLATFORM.ends_with("-nonfree") {
+    if PLATFORM.ends_with("-nvidia") {
         if let Err(e) = Command::new("nvidia-smi")
             .invoke(ErrorKind::ParseSysInfo)
             .await
@@ -412,7 +406,6 @@ impl RpcContext {
     mut cleanup_sessions,
     mut init_services,
     mut prune_s9pks,
-    mut check_tasks,
     }: CleanupInitPhases,
 ) -> Result<(), Error> {
     cleanup_sessions.start();
@@ -504,76 +497,6 @@ impl RpcContext {
     }
     prune_s9pks.complete();
-    check_tasks.start();
-    let mut action_input: OrdMap<PackageId, BTreeMap<ActionId, Value>> = OrdMap::new();
-    let tasks: BTreeSet<_> = peek
-        .as_public()
-        .as_package_data()
-        .as_entries()?
-        .into_iter()
-        .map(|(_, pde)| {
-            Ok(pde
-                .as_tasks()
-                .as_entries()?
-                .into_iter()
-                .map(|(_, r)| {
-                    let t = r.as_task();
-                    Ok::<_, Error>(if t.as_input().transpose_ref().is_some() {
-                        Some((t.as_package_id().de()?, t.as_action_id().de()?))
-                    } else {
-                        None
-                    })
-                })
-                .filter_map_ok(|a| a))
-        })
-        .flatten_ok()
-        .map(|a| a.and_then(|a| a))
-        .try_collect()?;
-    let procedure_id = Guid::new();
-    for (package_id, action_id) in tasks {
-        if let Some(service) = self.services.get(&package_id).await.as_ref() {
-            if let Some(input) = service
-                .get_action_input(procedure_id.clone(), action_id.clone())
-                .await
-                .log_err()
-                .flatten()
-                .and_then(|i| i.value)
-            {
-                action_input
-                    .entry(package_id)
-                    .or_default()
-                    .insert(action_id, input);
-            }
-        }
-    }
-    self.db
-        .mutate(|db| {
-            for (package_id, action_input) in &action_input {
-                for (action_id, input) in action_input {
-                    for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
-                        pde.as_tasks_mut().mutate(|tasks| {
-                            Ok(update_tasks(tasks, package_id, action_id, input, false))
-                        })?;
-                    }
-                }
-            }
-            for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
-                if pde
-                    .as_tasks()
-                    .de()?
-                    .into_iter()
-                    .any(|(_, t)| t.active && t.task.severity == TaskSeverity::Critical)
-                {
-                    pde.as_status_info_mut().stop()?;
-                }
-            }
-            Ok(())
-        })
-        .await
-        .result?;
-    check_tasks.complete();
     Ok(())
 }
 pub async fn call_remote<RemoteContext>(

View File

@@ -19,7 +19,7 @@ use crate::MAIN_DATA;
 use crate::context::RpcContext;
 use crate::context::config::ServerConfig;
 use crate::disk::mount::guard::{MountGuard, TmpMountGuard};
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
 use crate::net::gateway::WildcardListener;
 use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
 use crate::prelude::*;
@@ -45,7 +45,7 @@ lazy_static::lazy_static! {
 #[ts(export)]
 pub struct SetupResult {
     #[ts(type = "string")]
-    pub hostname: Hostname,
+    pub hostname: ServerHostname,
     pub root_ca: Pem<X509>,
     pub needs_restart: bool,
 }

View File

@@ -8,6 +8,7 @@ use crate::prelude::*;
 use crate::{Error, PackageId};
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ControlParams {

View File

@@ -45,7 +45,12 @@ impl Database {
     .collect(),
     ssh_privkey: Pem(account.ssh_key.clone()),
     ssh_pubkeys: SshKeys::new(),
-    available_ports: AvailablePorts::new(),
+    available_ports: {
+        let mut ports = AvailablePorts::new();
+        ports.set_ssl(80, false);
+        ports.set_ssl(443, true);
+        ports
+    },
     sessions: Sessions::new(),
     notifications: Notifications::new(),
     cifs: CifsTargets::new(),
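
The initializer above pre-registers ports 80 and 443 with their expected SSL behavior when a fresh database is created. A rough std-only sketch of the shape of that bookkeeping (this `AvailablePorts` is a hypothetical stand-in for the real StartOS type, which is only touched through `new()` and `set_ssl()` here):

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in: tracks which reserved ports expect SSL termination.
#[derive(Default)]
struct AvailablePorts {
    ssl: BTreeMap<u16, bool>, // port -> whether the vhost should terminate SSL
}

impl AvailablePorts {
    fn new() -> Self {
        Self::default()
    }
    fn set_ssl(&mut self, port: u16, ssl: bool) {
        self.ssl.insert(port, ssl);
    }
}

// Mirrors the block expression in the diff: build, configure, then move.
fn default_ports() -> AvailablePorts {
    let mut ports = AvailablePorts::new();
    ports.set_ssl(80, false); // plain HTTP
    ports.set_ssl(443, true); // HTTPS
    ports
}
```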

View File

@@ -381,9 +381,10 @@ pub struct PackageDataEntry {
     pub hosts: Hosts,
     #[ts(type = "string[]")]
     pub store_exposed_dependents: Vec<JsonPointer>,
-    #[serde(default)]
     #[ts(type = "string | null")]
     pub outbound_gateway: Option<GatewayId>,
+    #[serde(default)]
+    pub plugin: PackagePlugin,
 }
 impl AsRef<PackageDataEntry> for PackageDataEntry {
     fn as_ref(&self) -> &PackageDataEntry {
@@ -391,6 +392,21 @@ impl AsRef<PackageDataEntry> for PackageDataEntry {
     }
 }
+#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
+#[serde(rename_all = "camelCase")]
+#[model = "Model<Self>"]
+#[ts(export)]
+pub struct PackagePlugin {
+    pub url: Option<UrlPluginRegistration>,
+}
+#[derive(Debug, Clone, Deserialize, Serialize, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
+pub struct UrlPluginRegistration {
+    pub table_action: ActionId,
+}
 #[derive(Debug, Clone, Default, Deserialize, Serialize, TS)]
 #[ts(export)]
 pub struct CurrentDependencies(pub BTreeMap<PackageId, CurrentDependencyInfo>);

View File

@@ -13,6 +13,7 @@ use openssl::hash::MessageDigest;
 use patch_db::{HasModel, Value};
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
+use url::Url;
 use crate::account::AccountInfo;
 use crate::db::DbAccessByKey;
@@ -23,7 +24,7 @@ use crate::net::host::Host;
 use crate::net::host::binding::{
     AddSslOptions, BindInfo, BindOptions, Bindings, DerivedAddressInfo, NetInfo,
 };
-use crate::net::vhost::AlpnInfo;
+use crate::net::vhost::{AlpnInfo, PassthroughInfo};
 use crate::prelude::*;
 use crate::progress::FullProgress;
 use crate::system::{KeyboardOptions, SmtpValue};
@@ -58,7 +59,8 @@ impl Public {
     platform: get_platform(),
     id: account.server_id.clone(),
     version: Current::default().semver(),
-    hostname: account.hostname.no_dot_host_name(),
+    name: account.hostname.name.clone(),
+    hostname: (*account.hostname.hostname).clone(),
     last_backup: None,
     package_version_compat: Current::default().compat().clone(),
     post_init_migration_todos: BTreeMap::new(),
@@ -93,6 +95,7 @@ impl Public {
     ),
     public_domains: BTreeMap::new(),
     private_domains: BTreeMap::new(),
+    port_forwards: BTreeSet::new(),
 },
 wifi: WifiInfo {
     enabled: true,
@@ -118,6 +121,7 @@ impl Public {
 },
 dns: Default::default(),
 default_outbound: None,
+passthroughs: Vec::new(),
 },
 status_info: ServerStatus {
     backup_progress: None,
@@ -142,6 +146,7 @@ impl Public {
 zram: true,
 governor: None,
 smtp: None,
+ifconfig_url: default_ifconfig_url(),
 ram: 0,
 devices: Vec::new(),
 kiosk,
@@ -163,19 +168,21 @@ fn get_platform() -> InternedString {
     (&*PLATFORM).into()
 }
+pub fn default_ifconfig_url() -> Url {
+    "https://ifconfig.co".parse().unwrap()
+}
 #[derive(Debug, Deserialize, Serialize, HasModel, TS)]
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
 #[ts(export)]
 pub struct ServerInfo {
     #[serde(default = "get_arch")]
-    #[ts(type = "string")]
     pub arch: InternedString,
     #[serde(default = "get_platform")]
-    #[ts(type = "string")]
     pub platform: InternedString,
     pub id: String,
-    #[ts(type = "string")]
+    pub name: InternedString,
     pub hostname: InternedString,
     #[ts(type = "string")]
     pub version: Version,
@@ -199,6 +206,9 @@ pub struct ServerInfo {
     pub zram: bool,
     pub governor: Option<Governor>,
     pub smtp: Option<SmtpValue>,
+    #[serde(default = "default_ifconfig_url")]
+    #[ts(type = "string")]
+    pub ifconfig_url: Url,
     #[ts(type = "number")]
     pub ram: u64,
     pub devices: Vec<LshwDevice>,
@@ -224,6 +234,8 @@ pub struct NetworkInfo {
     #[serde(default)]
     #[ts(type = "string | null")]
     pub default_outbound: Option<GatewayId>,
+    #[serde(default)]
+    pub passthroughs: Vec<PassthroughInfo>,
 }
 #[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]

View File

@@ -45,7 +45,7 @@ impl TS for DepInfo {
     "DepInfo".into()
 }
 fn inline() -> String {
-    "{ description: string | null, optional: boolean } & MetadataSrc".into()
+    "{ description: LocaleString | null, optional: boolean } & MetadataSrc".into()
 }
 fn inline_flattened() -> String {
     Self::inline()
@@ -54,7 +54,8 @@ impl TS for DepInfo {
 where
     Self: 'static,
 {
-    v.visit::<MetadataSrc>()
+    v.visit::<MetadataSrc>();
+    v.visit::<LocaleString>();
 }
 fn output_path() -> Option<&'static std::path::Path> {
     Some(Path::new("DepInfo.ts"))

View File

@@ -19,7 +19,7 @@ use super::mount::filesystem::block_dev::BlockDev;
 use super::mount::guard::TmpMountGuard;
 use crate::disk::OsPartitionInfo;
 use crate::disk::mount::guard::GenericMountGuard;
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
 use crate::prelude::*;
 use crate::util::Invoke;
 use crate::util::serde::IoFormat;
@@ -43,22 +43,28 @@ pub struct DiskInfo {
     pub guid: Option<InternedString>,
 }
-#[derive(Clone, Debug, Deserialize, Serialize)]
+#[derive(Clone, Debug, Deserialize, Serialize, ts_rs::TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct PartitionInfo {
     pub logicalname: PathBuf,
     pub label: Option<String>,
+    #[ts(type = "number")]
     pub capacity: u64,
+    #[ts(type = "number | null")]
     pub used: Option<u64>,
     pub start_os: BTreeMap<String, StartOsRecoveryInfo>,
     pub guid: Option<InternedString>,
 }
-#[derive(Clone, Debug, Default, Deserialize, Serialize)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize, ts_rs::TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct StartOsRecoveryInfo {
-    pub hostname: Hostname,
+    pub hostname: ServerHostname,
+    #[ts(type = "string")]
     pub version: exver::Version,
+    #[ts(type = "string")]
     pub timestamp: DateTime<Utc>,
     pub password_hash: Option<String>,
     pub wrapped_key: Option<String>,

View File

@@ -3,6 +3,7 @@ use std::fmt::{Debug, Display};
 use axum::http::StatusCode;
 use axum::http::uri::InvalidUri;
 use color_eyre::eyre::eyre;
+use imbl_value::InternedString;
 use num_enum::TryFromPrimitive;
 use patch_db::Value;
 use rpc_toolkit::reqwest;
@@ -204,17 +205,12 @@ pub struct Error {
 impl Display for Error {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}: {:#}", &self.kind.as_str(), self.source)
+        write!(f, "{}: {}", &self.kind.as_str(), self.display_src())
     }
 }
 impl Debug for Error {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(
-            f,
-            "{}: {:?}",
-            &self.kind.as_str(),
-            self.debug.as_ref().unwrap_or(&self.source)
-        )
+        write!(f, "{}: {}", &self.kind.as_str(), self.display_dbg())
     }
 }
 impl Error {
@@ -235,8 +231,13 @@ impl Error {
     }
     pub fn clone_output(&self) -> Self {
         Error {
-            source: eyre!("{}", self.source),
-            debug: self.debug.as_ref().map(|e| eyre!("{e}")),
+            source: eyre!("{:#}", self.source),
+            debug: Some(
+                self.debug
+                    .as_ref()
+                    .map(|e| eyre!("{e}"))
+                    .unwrap_or_else(|| eyre!("{:?}", self.source)),
+            ),
             kind: self.kind,
             info: self.info.clone(),
             task: None,
@@ -257,6 +258,30 @@ impl Error {
         self.task.take();
         self
     }
+    pub fn display_src(&self) -> impl Display {
+        struct D<'a>(&'a Error);
+        impl<'a> Display for D<'a> {
+            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+                write!(f, "{:#}", self.0.source)
+            }
+        }
+        D(self)
+    }
+    pub fn display_dbg(&self) -> impl Display {
+        struct D<'a>(&'a Error);
+        impl<'a> Display for D<'a> {
+            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+                if let Some(debug) = &self.0.debug {
+                    write!(f, "{}", debug)
+                } else {
+                    write!(f, "{:?}", self.0.source)
+                }
+            }
+        }
+        D(self)
+    }
 }
 impl axum::response::IntoResponse for Error {
     fn into_response(self) -> axum::response::Response {
@@ -433,9 +458,11 @@ impl Debug for ErrorData {
 impl std::error::Error for ErrorData {}
 impl From<Error> for ErrorData {
     fn from(value: Error) -> Self {
+        let details = value.display_src().to_string();
+        let debug = value.display_dbg().to_string();
         Self {
-            details: value.to_string(),
-            debug: format!("{:?}", value),
+            details,
+            debug,
             info: value.info,
         }
     }
@@ -623,13 +650,10 @@ impl<T> ResultExt<T, Error> for Result<T, Error> {
 fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
     self.map_err(|e| {
         let (kind, ctx) = f(&e);
+        let ctx = InternedString::from_display(&ctx);
         let source = e.source;
-        let with_ctx = format!("{ctx}: {source}");
-        let source = source.wrap_err(with_ctx);
-        let debug = e.debug.map(|e| {
-            let with_ctx = format!("{ctx}: {e}");
-            e.wrap_err(with_ctx)
-        });
+        let source = source.wrap_err(ctx.clone());
+        let debug = e.debug.map(|e| e.wrap_err(ctx));
        Error {
            kind,
            source,
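
The new `display_src`/`display_dbg` helpers above return a borrowed adapter struct implementing `Display`, so the formatting happens lazily at write time instead of allocating an intermediate `String`. A self-contained sketch of the same pattern (`SimpleError` is a stand-in for the real `Error` type):

```rust
use std::fmt::{self, Display};

struct SimpleError {
    source: String,
    debug: Option<String>,
}

impl SimpleError {
    // Returns a Display adapter that prefers the captured debug text and
    // falls back to the source message, mirroring display_dbg above.
    fn display_dbg(&self) -> impl Display + '_ {
        struct D<'a>(&'a SimpleError);
        impl<'a> Display for D<'a> {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                match &self.0.debug {
                    Some(debug) => write!(f, "{debug}"),
                    None => write!(f, "{}", self.0.source),
                }
            }
        }
        D(self)
    }
}
```

Because the adapter borrows `&self`, it can be passed straight to `write!` or `to_string()` without cloning either field.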

View File

@@ -1,25 +1,58 @@
+use clap::Parser;
 use imbl_value::InternedString;
 use lazy_format::lazy_format;
-use rand::{Rng, rng};
+use serde::{Deserialize, Serialize};
 use tokio::process::Command;
 use tracing::instrument;
+use ts_rs::TS;
+use crate::context::RpcContext;
+use crate::db::model::public::ServerInfo;
+use crate::prelude::*;
 use crate::util::Invoke;
-use crate::{Error, ErrorKind};
-#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize)]
-pub struct Hostname(pub InternedString);
-lazy_static::lazy_static! {
-    static ref ADJECTIVES: Vec<String> = include_str!("./assets/adjectives.txt").lines().map(|x| x.to_string()).collect();
-    static ref NOUNS: Vec<String> = include_str!("./assets/nouns.txt").lines().map(|x| x.to_string()).collect();
-}
-impl AsRef<str> for Hostname {
-    fn as_ref(&self) -> &str {
+#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize, ts_rs::TS)]
+#[ts(type = "string")]
+pub struct ServerHostname(InternedString);
+impl std::ops::Deref for ServerHostname {
+    type Target = InternedString;
+    fn deref(&self) -> &Self::Target {
         &self.0
     }
 }
+impl AsRef<str> for ServerHostname {
+    fn as_ref(&self) -> &str {
+        &***self
+    }
+}
+impl ServerHostname {
+    fn validate(&self) -> Result<(), Error> {
+        if self.0.is_empty() {
+            return Err(Error::new(
+                eyre!("{}", t!("hostname.empty")),
+                ErrorKind::InvalidRequest,
+            ));
+        }
+        if let Some(c) = self
+            .0
+            .chars()
+            .find(|c| !(c.is_ascii_alphanumeric() || c == &'-') || c.is_ascii_uppercase())
+        {
+            return Err(Error::new(
+                eyre!("{}", t!("hostname.invalid-character", char = c)),
+                ErrorKind::InvalidRequest,
+            ));
+        }
+        Ok(())
+    }
+    pub fn new(hostname: InternedString) -> Result<Self, Error> {
+        let res = Self(hostname);
+        res.validate()?;
+        Ok(res)
+    }
-impl Hostname {
     pub fn lan_address(&self) -> InternedString {
         InternedString::from_display(&lazy_format!("https://{}.local", self.0))
     }
@@ -28,17 +61,135 @@ impl Hostname {
         InternedString::from_display(&lazy_format!("{}.local", self.0))
     }
-    pub fn no_dot_host_name(&self) -> InternedString {
-        self.0.clone()
+    pub fn load(server_info: &Model<ServerInfo>) -> Result<Self, Error> {
+        Ok(Self(server_info.as_hostname().de()?))
+    }
+    pub fn save(&self, server_info: &mut Model<ServerInfo>) -> Result<(), Error> {
+        server_info.as_hostname_mut().ser(&**self)
     }
 }
-pub fn generate_hostname() -> Hostname {
-    let mut rng = rng();
-    let adjective = &ADJECTIVES[rng.random_range(0..ADJECTIVES.len())];
-    let noun = &NOUNS[rng.random_range(0..NOUNS.len())];
-    Hostname(InternedString::from_display(&lazy_format!(
-        "{adjective}-{noun}"
+#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize, ts_rs::TS)]
+#[ts(type = "string")]
+pub struct ServerHostnameInfo {
+    pub name: InternedString,
+    pub hostname: ServerHostname,
+}
+lazy_static::lazy_static! {
+    static ref ADJECTIVES: Vec<String> = include_str!("./assets/adjectives.txt").lines().map(|x| x.to_string()).collect();
+    static ref NOUNS: Vec<String> = include_str!("./assets/nouns.txt").lines().map(|x| x.to_string()).collect();
+}
+impl AsRef<str> for ServerHostnameInfo {
+    fn as_ref(&self) -> &str {
+        &self.hostname
+    }
+}
+fn normalize(s: &str) -> InternedString {
+    let mut prev_was_dash = true;
+    let mut normalized = s
+        .chars()
+        .filter_map(|c| {
+            if c.is_alphanumeric() {
+                prev_was_dash = false;
+                Some(c.to_ascii_lowercase())
+            } else if (c == '-' || c.is_whitespace()) && !prev_was_dash {
+                prev_was_dash = true;
+                Some('-')
+            } else {
+                None
+            }
+        })
+        .collect::<String>();
+    while normalized.ends_with('-') {
+        normalized.pop();
+    }
+    if normalized.len() < 4 {
+        generate_hostname().0
+    } else {
+        normalized.into()
+    }
+}
+fn denormalize(s: &str) -> InternedString {
+    let mut cap = true;
+    s.chars()
+        .map(|c| {
+            if c == '-' {
+                cap = true;
+                ' '
+            } else if cap {
+                cap = false;
+                c.to_ascii_uppercase()
+            } else {
+                c
+            }
+        })
+        .collect::<String>()
+        .into()
+}
+impl ServerHostnameInfo {
+    pub fn new(
+        name: Option<InternedString>,
+        hostname: Option<InternedString>,
+    ) -> Result<Self, Error> {
+        Self::new_opt(name, hostname)
+            .map(|h| h.unwrap_or_else(|| ServerHostnameInfo::from_hostname(generate_hostname())))
+    }
+    pub fn new_opt(
+        name: Option<InternedString>,
+        hostname: Option<InternedString>,
+    ) -> Result<Option<Self>, Error> {
+        let name = name.filter(|n| !n.is_empty());
+        let hostname = hostname.filter(|h| !h.is_empty());
+        Ok(match (name, hostname) {
+            (Some(name), Some(hostname)) => Some(ServerHostnameInfo {
+                name,
+                hostname: ServerHostname::new(hostname)?,
+            }),
+            (Some(name), None) => Some(ServerHostnameInfo::from_name(name)),
+            (None, Some(hostname)) => Some(ServerHostnameInfo::from_hostname(ServerHostname::new(
+                hostname,
+            )?)),
+            (None, None) => None,
+        })
+    }
+    pub fn from_hostname(hostname: ServerHostname) -> Self {
+        Self {
+            name: denormalize(&**hostname),
+            hostname,
+        }
+    }
+    pub fn from_name(name: InternedString) -> Self {
+        Self {
+            hostname: ServerHostname(normalize(&*name)),
+            name,
+        }
+    }
+    pub fn load(server_info: &Model<ServerInfo>) -> Result<Self, Error> {
+        Ok(Self {
+            name: server_info.as_name().de()?,
+            hostname: ServerHostname::load(server_info)?,
+        })
+    }
+    pub fn save(&self, server_info: &mut Model<ServerInfo>) -> Result<(), Error> {
+        server_info.as_name_mut().ser(&self.name)?;
+        self.hostname.save(server_info)
+    }
+}
+pub fn generate_hostname() -> ServerHostname {
+    let num = rand::random::<u16>();
+    ServerHostname(InternedString::from_display(&lazy_format!(
+        "startos-{num:04x}"
     )))
 }
@@ -48,17 +199,17 @@ pub fn generate_id() -> String {
 }
 #[instrument(skip_all)]
-pub async fn get_current_hostname() -> Result<Hostname, Error> {
+pub async fn get_current_hostname() -> Result<InternedString, Error> {
     let out = Command::new("hostname")
         .invoke(ErrorKind::ParseSysInfo)
         .await?;
     let out_string = String::from_utf8(out)?;
-    Ok(Hostname(out_string.trim().into()))
+    Ok(out_string.trim().into())
 }
 #[instrument(skip_all)]
-pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
-    let hostname = &*hostname.0;
+pub async fn set_hostname(hostname: &ServerHostname) -> Result<(), Error> {
+    let hostname = &***hostname;
     Command::new("hostnamectl")
         .arg("--static")
         .arg("set-hostname")
@@ -77,7 +228,7 @@ pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
 }
 #[instrument(skip_all)]
-pub async fn sync_hostname(hostname: &Hostname) -> Result<(), Error> {
+pub async fn sync_hostname(hostname: &ServerHostname) -> Result<(), Error> {
     set_hostname(hostname).await?;
     Command::new("systemctl")
         .arg("restart")
@@ -86,3 +237,54 @@ pub async fn sync_hostname(hostname: &Hostname) -> Result<(), Error> {
         .await?;
     Ok(())
 }
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[serde(rename_all = "camelCase")]
+#[command(rename_all = "kebab-case")]
+#[ts(export)]
+pub struct SetServerHostnameParams {
+    name: Option<InternedString>,
+    hostname: Option<InternedString>,
+}
+pub async fn set_hostname_rpc(
+    ctx: RpcContext,
+    SetServerHostnameParams { name, hostname }: SetServerHostnameParams,
+) -> Result<(), Error> {
+    let name = name.filter(|n| !n.is_empty());
+    let hostname = hostname
+        .filter(|h| !h.is_empty())
+        .map(ServerHostname::new)
+        .transpose()?;
+    if name.is_none() && hostname.is_none() {
+        return Err(Error::new(
+            eyre!("{}", t!("hostname.must-provide-name-or-hostname")),
+            ErrorKind::InvalidRequest,
+        ));
+    };
+    let info = ctx
+        .db
+        .mutate(|db| {
+            let server_info = db.as_public_mut().as_server_info_mut();
+            if let Some(name) = name {
+                server_info.as_name_mut().ser(&name)?;
+            }
+            if let Some(hostname) = &hostname {
+                hostname.save(server_info)?;
+            }
+            ServerHostnameInfo::load(server_info)
+        })
+        .await
+        .result?;
+    ctx.account.mutate(|a| a.hostname = info.clone());
+    if let Some(h) = hostname {
+        sync_hostname(&h).await?;
+    }
+    Ok(())
+}
+#[test]
+fn test_generate_hostname() {
+    assert_eq!(dbg!(generate_hostname().0).len(), 12);
+}

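The `normalize`/`denormalize` pair above round-trips between a display name and a machine hostname. This standalone sketch inlines the same logic with plain `String` in place of `InternedString`, and omits the `generate_hostname()` fallback for results shorter than 4 characters:

```rust
// Sketch of the normalize/denormalize pair from hostname.rs, using plain
// String and omitting the short-name fallback to generate_hostname().
fn normalize(s: &str) -> String {
    let mut prev_was_dash = true; // leading separators are dropped
    let mut normalized = s
        .chars()
        .filter_map(|c| {
            if c.is_alphanumeric() {
                prev_was_dash = false;
                Some(c.to_ascii_lowercase())
            } else if (c == '-' || c.is_whitespace()) && !prev_was_dash {
                prev_was_dash = true; // collapse runs of separators to one dash
                Some('-')
            } else {
                None // punctuation and repeated separators vanish
            }
        })
        .collect::<String>();
    while normalized.ends_with('-') {
        normalized.pop();
    }
    normalized
}

fn denormalize(s: &str) -> String {
    let mut cap = true; // capitalize the first letter of each dash-separated word
    s.chars()
        .map(|c| {
            if c == '-' {
                cap = true;
                ' '
            } else if cap {
                cap = false;
                c.to_ascii_uppercase()
            } else {
                c
            }
        })
        .collect()
}

fn main() {
    assert_eq!(normalize("My Home Server"), "my-home-server");
    assert_eq!(denormalize("my-home-server"), "My Home Server");
    // Punctuation is stripped and separator runs collapse:
    assert_eq!(normalize("  Bob's --- Server!  "), "bobs-server");
}
```

Note the round trip `denormalize(normalize(s))` only recovers the title-cased form when the input was already a single-spaced alphanumeric phrase; anything lossy (apostrophes, casing, extra separators) stays lost.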

@@ -18,7 +18,7 @@ use crate::context::{CliContext, InitContext, RpcContext};
 use crate::db::model::Database;
 use crate::db::model::public::ServerStatus;
 use crate::developer::OS_DEVELOPER_KEY_PATH;
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
 use crate::middleware::auth::local::LocalAuthContext;
 use crate::net::gateway::WildcardListener;
 use crate::net::net_controller::{NetController, NetService};
@@ -191,15 +191,16 @@ pub async fn init(
         .arg(OS_DEVELOPER_KEY_PATH)
         .invoke(ErrorKind::Filesystem)
         .await?;
+    let hostname = ServerHostname::load(peek.as_public().as_server_info())?;
     crate::ssh::sync_keys(
-        &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
+        &hostname,
         &peek.as_private().as_ssh_privkey().de()?,
         &peek.as_private().as_ssh_pubkeys().de()?,
         SSH_DIR,
     )
     .await?;
     crate::ssh::sync_keys(
-        &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
+        &hostname,
         &peek.as_private().as_ssh_privkey().de()?,
         &Default::default(),
         "/root/.ssh",
@@ -211,12 +212,7 @@ pub async fn init(
     start_net.start();
     let net_ctrl = Arc::new(
-        NetController::init(
-            db.clone(),
-            &account.hostname,
-            cfg.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN),
-        )
-        .await?,
+        NetController::init(db.clone(), cfg.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN)).await?,
     );
     webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
     let os_net_service = net_ctrl.os_bindings().await?;


@@ -177,6 +177,7 @@ pub async fn install(
 }
 #[derive(Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct SideloadParams {
     #[ts(skip)]
@@ -185,6 +186,7 @@ pub struct SideloadParams {
 }
 #[derive(Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct SideloadResponse {
     pub upload: Guid,
@@ -284,6 +286,7 @@ pub async fn sideload(
 }
 #[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct CancelInstallParams {
@@ -521,6 +524,7 @@ pub async fn cli_install(
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct UninstallParams {


@@ -25,6 +25,9 @@ pub fn platform_to_arch(platform: &str) -> &str {
     if let Some(arch) = platform.strip_suffix("-nonfree") {
         return arch;
     }
+    if let Some(arch) = platform.strip_suffix("-nvidia") {
+        return arch;
+    }
     match platform {
         "raspberrypi" | "rockchip64" => "aarch64",
         _ => platform,
@@ -268,6 +271,18 @@ pub fn server<C: Context>() -> ParentHandler<C> {
                 .with_about("about.display-time-uptime")
                 .with_call_remote::<CliContext>(),
         )
+        .subcommand(
+            "device-info",
+            ParentHandler::<C, WithIoFormat<Empty>>::new().root_handler(
+                from_fn_async(system::device_info)
+                    .with_display_serializable()
+                    .with_custom_display_fn(|handle, result| {
+                        system::display_device_info(handle.params, result)
+                    })
+                    .with_about("about.get-device-info")
+                    .with_call_remote::<CliContext>(),
+            ),
+        )
         .subcommand(
             "experimental",
             system::experimental::<C>().with_about("about.commands-experimental"),
@@ -377,6 +392,20 @@ pub fn server<C: Context>() -> ParentHandler<C> {
             "host",
             net::host::server_host_api::<C>().with_about("about.commands-host-system-ui"),
         )
+        .subcommand(
+            "set-hostname",
+            from_fn_async(hostname::set_hostname_rpc)
+                .no_display()
+                .with_about("about.set-hostname")
+                .with_call_remote::<CliContext>(),
+        )
+        .subcommand(
+            "set-ifconfig-url",
+            from_fn_async(system::set_ifconfig_url)
+                .no_display()
+                .with_about("about.set-ifconfig-url")
+                .with_call_remote::<CliContext>(),
+        )
         .subcommand(
             "set-keyboard",
             from_fn_async(system::set_keyboard)
@@ -548,4 +577,12 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             "host",
             net::host::host_api::<C>().with_about("about.manage-network-hosts-package"),
         )
+        .subcommand(
+            "set-outbound-gateway",
+            from_fn_async(net::gateway::set_outbound_gateway)
+                .with_metadata("sync_db", Value::Bool(true))
+                .no_display()
+                .with_about("about.set-outbound-gateway-package")
+                .with_call_remote::<CliContext>(),
+        )
 }

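The `platform_to_arch` hunk above adds a `-nvidia` suffix alongside the existing `-nonfree` handling. Extracted as a standalone function (the logic is taken verbatim from the hunk), the mapping behaves like this:

```rust
// platform_to_arch, as in the hunk above: suffixed platform variants
// ("-nonfree", and the newly added "-nvidia") strip down to the base
// platform name, and known board names map to their CPU architecture.
fn platform_to_arch(platform: &str) -> &str {
    if let Some(arch) = platform.strip_suffix("-nonfree") {
        return arch;
    }
    if let Some(arch) = platform.strip_suffix("-nvidia") {
        return arch;
    }
    match platform {
        "raspberrypi" | "rockchip64" => "aarch64",
        _ => platform, // anything else is already an architecture name
    }
}

fn main() {
    assert_eq!(platform_to_arch("x86_64-nonfree"), "x86_64");
    assert_eq!(platform_to_arch("x86_64-nvidia"), "x86_64");
    assert_eq!(platform_to_arch("raspberrypi"), "aarch64");
    assert_eq!(platform_to_arch("x86_64"), "x86_64");
}
```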

@@ -24,6 +24,7 @@ use tokio::process::{Child, Command};
 use tokio_stream::wrappers::LinesStream;
 use tokio_tungstenite::tungstenite::Message;
 use tracing::instrument;
+use ts_rs::TS;
 use crate::PackageId;
 use crate::context::{CliContext, RpcContext};
@@ -109,23 +110,28 @@ async fn ws_handler(
     }
 }
-#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct LogResponse {
+    #[ts(as = "Vec<LogEntry>")]
     pub entries: Reversible<LogEntry>,
     start_cursor: Option<String>,
     end_cursor: Option<String>,
 }
-#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct LogFollowResponse {
     start_cursor: Option<String>,
     guid: Guid,
 }
-#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct LogEntry {
+    #[ts(type = "string")]
     timestamp: DateTime<Utc>,
     message: String,
     boot_id: String,
@@ -321,14 +327,17 @@ impl From<BootIdentifier> for String {
     }
 }
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export, concrete(Extra = Empty), bound = "")]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct LogsParams<Extra: FromArgMatches + Args = Empty> {
     #[command(flatten)]
     #[serde(flatten)]
+    #[ts(skip)]
     extra: Extra,
     #[arg(short = 'l', long = "limit", help = "help.arg.log-limit")]
+    #[ts(optional)]
     limit: Option<usize>,
     #[arg(
         short = 'c',
@@ -336,9 +345,11 @@ pub struct LogsParams<Extra: FromArgMatches + Args = Empty> {
         conflicts_with = "follow",
         help = "help.arg.log-cursor"
     )]
+    #[ts(optional)]
     cursor: Option<String>,
     #[arg(short = 'b', long = "boot", help = "help.arg.log-boot")]
     #[serde(default)]
+    #[ts(optional, type = "number | string")]
     boot: Option<BootIdentifier>,
     #[arg(
         short = 'B',


@@ -17,3 +17,6 @@ lxc.net.0.link = lxcbr0
 lxc.net.0.flags = up
 lxc.rootfs.options = rshared
+# Environment
+lxc.environment = LANG={lang}


@@ -174,10 +174,15 @@ impl LxcContainer {
         config: LxcConfig,
     ) -> Result<Self, Error> {
         let guid = new_guid();
+        let lang = std::env::var("LANG").unwrap_or_else(|_| "C.UTF-8".into());
         let machine_id = hex::encode(rand::random::<[u8; 16]>());
         let container_dir = Path::new(LXC_CONTAINER_DIR).join(&*guid);
         tokio::fs::create_dir_all(&container_dir).await?;
-        let config_str = format!(include_str!("./config.template"), guid = &*guid);
+        let config_str = format!(
+            include_str!("./config.template"),
+            guid = &*guid,
+            lang = &lang,
+        );
         tokio::fs::write(container_dir.join("config"), config_str).await?;
         let rootfs_dir = container_dir.join("rootfs");
         let rootfs = OverlayGuard::mount(
@@ -215,6 +220,13 @@ impl LxcContainer {
             100000,
         )
         .await?;
+        write_file_owned_atomic(
+            rootfs_dir.join("etc/default/locale"),
+            format!("LANG={lang}\n"),
+            100000,
+            100000,
+        )
+        .await?;
         Command::new("sed")
             .arg("-i")
             .arg(format!("s/LXC_NAME/{guid}/g"))


@@ -20,9 +20,6 @@ use crate::context::RpcContext;
 use crate::middleware::auth::DbContext;
 use crate::prelude::*;
 use crate::rpc_continuations::OpenAuthedContinuations;
-use crate::util::Invoke;
-use crate::util::io::{create_file_mod, read_file_to_string};
-use crate::util::serde::{BASE64, const_true};
 use crate::util::sync::SyncMutex;
 pub trait SessionAuthContext: DbContext {


@@ -27,7 +27,7 @@ use crate::db::model::public::AcmeSettings;
 use crate::db::{DbAccess, DbAccessByKey, DbAccessMut};
 use crate::error::ErrorData;
 use crate::net::ssl::should_use_cert;
-use crate::net::tls::{SingleCertResolver, TlsHandler};
+use crate::net::tls::{SingleCertResolver, TlsHandler, TlsHandlerAction};
 use crate::net::web_server::Accept;
 use crate::prelude::*;
 use crate::util::FromStrParser;
@@ -173,7 +173,7 @@ where
         &'a mut self,
         hello: &'a ClientHello<'a>,
         _: &'a <A as Accept>::Metadata,
-    ) -> Option<ServerConfig> {
+    ) -> Option<TlsHandlerAction> {
         let domain = hello.server_name()?;
         if hello
             .alpn()
@@ -207,20 +207,20 @@ where
             cfg.alpn_protocols = vec![ACME_TLS_ALPN_NAME.to_vec()];
             tracing::info!("performing ACME auth challenge");
-            return Some(cfg);
+            return Some(TlsHandlerAction::Tls(cfg));
         }
         let domains: BTreeSet<InternedString> = [domain.into()].into_iter().collect();
         let crypto_provider = self.crypto_provider.clone();
         if let Some(cert) = self.get_cert(&domains).await {
-            return Some(
+            return Some(TlsHandlerAction::Tls(
                 ServerConfig::builder_with_provider(crypto_provider)
                     .with_safe_default_protocol_versions()
                     .log_err()?
                     .with_no_client_auth()
                     .with_cert_resolver(Arc::new(SingleCertResolver(Arc::new(cert)))),
-            );
+            ));
         }
         None
@@ -461,7 +461,8 @@ impl ValueParserFactory for AcmeProvider {
     }
 }
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct InitAcmeParams {
     #[arg(long, help = "help.arg.acme-provider")]
     pub provider: AcmeProvider,
@@ -486,7 +487,8 @@ pub async fn init(
     Ok(())
 }
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct RemoveAcmeParams {
     #[arg(long, help = "help.arg.acme-provider")]
     pub provider: AcmeProvider,


@@ -11,7 +11,8 @@ use futures::{FutureExt, StreamExt, TryStreamExt};
 use hickory_server::authority::{AuthorityObject, Catalog, MessageResponseBuilder};
 use hickory_server::proto::op::{Header, ResponseCode};
 use hickory_server::proto::rr::{Name, Record, RecordType};
-use hickory_server::resolver::config::{ResolverConfig, ResolverOpts};
+use hickory_server::proto::xfer::Protocol;
+use hickory_server::resolver::config::{NameServerConfig, ResolverConfig, ResolverOpts};
 use hickory_server::server::{Request, RequestHandler, ResponseHandler, ResponseInfo};
 use hickory_server::store::forwarder::{ForwardAuthority, ForwardConfig};
 use hickory_server::{ServerFuture, resolver as hickory_resolver};
@@ -25,6 +26,7 @@ use serde::{Deserialize, Serialize};
 use tokio::net::{TcpListener, UdpSocket};
 use tokio::sync::RwLock;
 use tracing::instrument;
+use ts_rs::TS;
 use crate::context::{CliContext, RpcContext};
 use crate::db::model::Database;
@@ -93,7 +95,8 @@ pub fn dns_api<C: Context>() -> ParentHandler<C> {
     )
 }
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct QueryDnsParams {
     #[arg(help = "help.arg.fqdn")]
     pub fqdn: InternedString,
@@ -133,7 +136,8 @@ pub fn query_dns<C: Context>(
         .map_err(Error::from)
 }
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct SetStaticDnsParams {
     #[arg(help = "help.arg.dns-servers")]
     pub servers: Option<Vec<String>>,
@@ -203,6 +207,7 @@ pub async fn dump_table(
 struct ResolveMap {
     private_domains: BTreeMap<InternedString, Weak<()>>,
     services: BTreeMap<Option<PackageId>, BTreeMap<Ipv4Addr, Weak<()>>>,
+    challenges: BTreeMap<InternedString, (InternedString, Weak<()>)>,
 }
 pub struct DnsController {
@@ -237,22 +242,60 @@ impl Resolver {
         let mut prev = crate::util::serde::hash_serializable::<sha2::Sha256, _>(&(
             ResolverConfig::new(),
             ResolverOpts::default(),
+            Option::<std::collections::VecDeque<SocketAddr>>::None,
         ))
         .unwrap_or_default();
         loop {
-            if let Err(e) = async {
-                let mut stream = file_string_stream("/run/systemd/resolve/resolv.conf")
-                    .filter_map(|a| futures::future::ready(a.transpose()))
-                    .boxed();
-                while let Some(conf) = stream.try_next().await? {
-                    let (config, mut opts) =
-                        hickory_resolver::system_conf::parse_resolv_conf(conf)
-                            .with_kind(ErrorKind::ParseSysInfo)?;
-                    opts.timeout = Duration::from_secs(30);
+            let res: Result<(), Error> = async {
+                let mut file_stream =
+                    file_string_stream("/run/systemd/resolve/resolv.conf")
+                        .filter_map(|a| futures::future::ready(a.transpose()))
+                        .boxed();
+                let mut static_sub = db
+                    .subscribe(
+                        "/public/serverInfo/network/dns/staticServers"
+                            .parse()
+                            .unwrap(),
+                    )
+                    .await;
+                let mut last_config: Option<(ResolverConfig, ResolverOpts)> = None;
+                loop {
+                    let got_file = tokio::select! {
+                        res = file_stream.try_next() => {
+                            let conf = res?
+                                .ok_or_else(|| Error::new(
+                                    eyre!("resolv.conf stream ended"),
+                                    ErrorKind::Network,
+                                ))?;
+                            let (config, mut opts) =
+                                hickory_resolver::system_conf::parse_resolv_conf(conf)
+                                    .with_kind(ErrorKind::ParseSysInfo)?;
+                            opts.timeout = Duration::from_secs(30);
+                            last_config = Some((config, opts));
+                            true
+                        }
+                        _ = static_sub.recv() => false,
+                    };
+                    let Some((ref config, ref opts)) = last_config else {
+                        continue;
+                    };
+                    let static_servers: Option<std::collections::VecDeque<SocketAddr>> = db
+                        .peek()
+                        .await
+                        .as_public()
+                        .as_server_info()
+                        .as_network()
+                        .as_dns()
+                        .as_static_servers()
+                        .de()?;
                     let hash = crate::util::serde::hash_serializable::<sha2::Sha256, _>(
-                        &(&config, &opts),
+                        &(config, opts, &static_servers),
                     )?;
-                    if hash != prev {
+                    if hash == prev {
+                        continue;
+                    }
+                    if got_file {
                         db.mutate(|db| {
                             db.as_public_mut()
                                 .as_server_info_mut()
@@ -271,44 +314,52 @@ impl Resolver {
                         })
                         .await
                         .result?;
-                        let auth: Vec<Arc<dyn AuthorityObject>> = vec![Arc::new(
-                            ForwardAuthority::builder_tokio(ForwardConfig {
-                                name_servers: from_value(Value::Array(
-                                    config
-                                        .name_servers()
-                                        .into_iter()
-                                        .skip(4)
-                                        .map(to_value)
-                                        .collect::<Result<_, Error>>()?,
-                                ))?,
-                                options: Some(opts),
-                            })
-                            .build()
-                            .map_err(|e| Error::new(eyre!("{e}"), ErrorKind::Network))?,
-                        )];
-                        {
-                            let mut guard = tokio::time::timeout(
-                                Duration::from_secs(10),
-                                catalog.write(),
-                            )
-                            .await
-                            .map_err(|_| {
-                                Error::new(
-                                    eyre!("{}", t!("net.dns.timeout-updating-catalog")),
-                                    ErrorKind::Timeout,
-                                )
-                            })?;
-                            guard.upsert(Name::root().into(), auth);
-                            drop(guard);
-                        }
-                        prev = hash;
                     }
+                    let forward_servers = if let Some(servers) = &static_servers {
+                        servers
+                            .iter()
+                            .flat_map(|addr| {
+                                [
+                                    NameServerConfig::new(*addr, Protocol::Udp),
+                                    NameServerConfig::new(*addr, Protocol::Tcp),
+                                ]
+                            })
+                            .map(|n| to_value(&n))
+                            .collect::<Result<_, Error>>()?
+                    } else {
+                        config
+                            .name_servers()
+                            .into_iter()
+                            .skip(4)
+                            .map(to_value)
+                            .collect::<Result<_, Error>>()?
+                    };
+                    let auth: Vec<Arc<dyn AuthorityObject>> = vec![Arc::new(
+                        ForwardAuthority::builder_tokio(ForwardConfig {
+                            name_servers: from_value(Value::Array(forward_servers))?,
+                            options: Some(opts.clone()),
+                        })
+                        .build()
+                        .map_err(|e| Error::new(eyre!("{e}"), ErrorKind::Network))?,
+                    )];
+                    {
+                        let mut guard =
+                            tokio::time::timeout(Duration::from_secs(10), catalog.write())
+                                .await
+                                .map_err(|_| {
+                                    Error::new(
+                                        eyre!("{}", t!("net.dns.timeout-updating-catalog")),
+                                        ErrorKind::Timeout,
+                                    )
+                                })?;
                        guard.upsert(Name::root().into(), auth);
                        drop(guard);
                    }
                    prev = hash;
                 }
-                Ok::<_, Error>(())
             }
-            .await
-            {
+            .await;
+            if let Err(e) = res {
                 tracing::error!("{e}");
                 tracing::debug!("{e:?}");
                 tokio::time::sleep(Duration::from_secs(1)).await;
@@ -399,7 +450,41 @@ impl RequestHandler for Resolver {
         match async {
             let req = request.request_info()?;
             let query = req.query;
-            if let Some(ip) = self.resolve(query.name().borrow(), req.src.ip()) {
+            let name = query.name();
+            if STARTOS.zone_of(name) && query.query_type() == RecordType::TXT {
+                let name_str =
+                    InternedString::intern(name.to_lowercase().to_utf8().trim_end_matches('.'));
+                if let Some(txt_value) = self.resolve.mutate(|r| {
+                    r.challenges.retain(|_, (_, weak)| weak.strong_count() > 0);
+                    r.challenges.remove(&name_str).map(|(val, _)| val)
+                }) {
+                    let mut header = Header::response_from_request(request.header());
+                    header.set_recursion_available(true);
+                    return response_handle
+                        .send_response(
+                            MessageResponseBuilder::from_message_request(&*request).build(
+                                header,
+                                &[Record::from_rdata(
+                                    query.name().to_owned().into(),
+                                    0,
+                                    hickory_server::proto::rr::RData::TXT(
+                                        hickory_server::proto::rr::rdata::TXT::new(vec![
+                                            txt_value.to_string(),
+                                        ]),
+                                    ),
+                                )],
+                                [],
+                                [],
+                                [],
+                            ),
+                        )
+                        .await
+                        .map(Some);
+                }
+            }
+            if let Some(ip) = self.resolve(name, req.src.ip()) {
                 match query.query_type() {
                     RecordType::A => {
                         let mut header = Header::response_from_request(request.header());
@@ -615,6 +700,34 @@ impl DnsController {
         }
     }
+    pub fn add_challenge(
+        &self,
+        domain: InternedString,
+        value: InternedString,
+    ) -> Result<Arc<()>, Error> {
+        if let Some(resolve) = Weak::upgrade(&self.resolve) {
+            resolve.mutate(|writable| {
+                let entry = writable
+                    .challenges
+                    .entry(domain)
+                    .or_insert_with(|| (value.clone(), Weak::new()));
+                let rc = if let Some(rc) = Weak::upgrade(&entry.1) {
+                    rc
+                } else {
+                    let new = Arc::new(());
+                    *entry = (value, Arc::downgrade(&new));
+                    new
+                };
+                Ok(rc)
+            })
+        } else {
+            Err(Error::new(
+                eyre!("{}", t!("net.dns.server-thread-exited")),
+                crate::ErrorKind::Network,
+            ))
+        }
+    }
     pub fn gc_private_domains<'a, BK: Ord + 'a>(
         &self,
         domains: impl IntoIterator<Item = &'a BK> + 'a,

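The `add_challenge` hunk above registers a DNS TXT challenge keyed by domain, where the caller's `Arc<()>` token keeps the entry alive and dropped tokens are garbage-collected via `Weak::strong_count`. A standalone sketch of that ownership pattern (the `Challenges`, `add`, and `take_live` names are illustrative, not from the codebase, and plain `String` stands in for `InternedString`):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Weak};

// Sketch of the challenge-registration pattern: the caller holds an Arc<()>
// token while the map stores only a Weak. Once every token is dropped,
// strong_count reaches 0 and the entry is swept on the next lookup
// (the real code sweeps inside the resolver's mutate callback).
struct Challenges(BTreeMap<String, (String, Weak<()>)>);

impl Challenges {
    fn add(&mut self, domain: String, value: String) -> Arc<()> {
        let entry = self
            .0
            .entry(domain)
            .or_insert_with(|| (value.clone(), Weak::new()));
        if let Some(rc) = entry.1.upgrade() {
            rc // an earlier registration is still live; share its token
        } else {
            let new = Arc::new(());
            *entry = (value, Arc::downgrade(&new));
            new
        }
    }

    // Sweep dead entries, then serve (and consume) the live value, if any.
    fn take_live(&mut self, domain: &str) -> Option<String> {
        self.0.retain(|_, (_, weak)| weak.strong_count() > 0);
        self.0.remove(domain).map(|(val, _)| val)
    }
}

fn main() {
    let mut c = Challenges(BTreeMap::new());
    let token = c.add("_acme-challenge.example".into(), "txt-token".into());
    drop(token); // caller abandons the challenge
    assert_eq!(c.take_live("_acme-challenge.example"), None); // swept
    let _token = c.add("_acme-challenge.example".into(), "txt-token".into());
    assert_eq!(
        c.take_live("_acme-challenge.example"),
        Some("txt-token".into())
    );
}
```

The design choice here is that cleanup needs no background task: liveness rides on ordinary `Arc` drop semantics, and stale entries cost nothing until the next query touches the map.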

@@ -3,18 +3,16 @@ use std::net::{IpAddr, SocketAddrV4};
use std::sync::{Arc, Weak}; use std::sync::{Arc, Weak};
use std::time::Duration; use std::time::Duration;
use ipnet::IpNet;
use futures::channel::oneshot; use futures::channel::oneshot;
use iddqd::{IdOrdItem, IdOrdMap}; use iddqd::{IdOrdItem, IdOrdMap};
use rand::Rng;
use imbl::OrdMap; use imbl::OrdMap;
use ipnet::{IpNet, Ipv4Net};
use rand::Rng;
use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async}; use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use tokio::process::Command; use tokio::process::Command;
use tokio::sync::mpsc; use tokio::sync::mpsc;
use crate::GatewayId;
use crate::context::{CliContext, RpcContext}; use crate::context::{CliContext, RpcContext};
use crate::db::model::public::NetworkInterfaceInfo; use crate::db::model::public::NetworkInterfaceInfo;
use crate::prelude::*; use crate::prelude::*;
@@ -22,6 +20,7 @@ use crate::util::Invoke;
use crate::util::future::NonDetachingJoinHandle; use crate::util::future::NonDetachingJoinHandle;
use crate::util::serde::{HandlerExtSerde, display_serializable}; use crate::util::serde::{HandlerExtSerde, display_serializable};
use crate::util::sync::Watch; use crate::util::sync::Watch;
use crate::{GatewayId, HOST_IP};
pub const START9_BRIDGE_IFACE: &str = "lxcbr0"; pub const START9_BRIDGE_IFACE: &str = "lxcbr0";
const EPHEMERAL_PORT_START: u16 = 49152; const EPHEMERAL_PORT_START: u16 = 49152;
@@ -77,6 +76,11 @@ impl AvailablePorts {
self.0.insert(port, ssl); self.0.insert(port, ssl);
Some(port) Some(port)
} }
pub fn set_ssl(&mut self, port: u16, ssl: bool) {
self.0.insert(port, ssl);
}
/// Returns whether a given allocated port is SSL. /// Returns whether a given allocated port is SSL.
pub fn is_ssl(&self, port: u16) -> bool { pub fn is_ssl(&self, port: u16) -> bool {
self.0.get(&port).copied().unwrap_or(false) self.0.get(&port).copied().unwrap_or(false)
@@ -254,7 +258,13 @@ pub async fn add_iptables_rule(nat: bool, undo: bool, args: &[&str]) -> Result<(
if nat { if nat {
cmd.arg("-t").arg("nat"); cmd.arg("-t").arg("nat");
} }
if undo != !cmd.arg("-C").args(args).status().await?.success() { let exists = cmd
.arg("-C")
.args(args)
.invoke(ErrorKind::Network)
.await
.is_ok();
if undo != !exists {
let mut cmd = Command::new("iptables"); let mut cmd = Command::new("iptables");
if nat { if nat {
cmd.arg("-t").arg("nat"); cmd.arg("-t").arg("nat");
@@ -443,14 +453,13 @@ impl InterfaceForwardEntry {
                continue;
            }
-           let src_filter =
-               if reqs.public_gateways.contains(gw_id) {
-                   None
-               } else if reqs.private_ips.contains(&IpAddr::V4(ip)) {
-                   Some(subnet.trunc())
-               } else {
-                   continue;
-               };
+           let src_filter = if reqs.public_gateways.contains(gw_id) {
+               None
+           } else if reqs.private_ips.contains(&IpAddr::V4(ip)) {
+               Some(subnet.trunc())
+           } else {
+               continue;
+           };
            keep.insert(addr);
            let fwd_rc = port_forward
@@ -712,7 +721,14 @@ async fn forward(
        .env("dip", target.ip().to_string())
        .env("dprefix", target_prefix.to_string())
        .env("sport", source.port().to_string())
-       .env("dport", target.port().to_string());
+       .env("dport", target.port().to_string())
+       .env(
+           "bridge_subnet",
+           Ipv4Net::new(HOST_IP.into(), 24)
+               .with_kind(ErrorKind::ParseNetAddress)?
+               .trunc()
+               .to_string(),
+       );
    if let Some(subnet) = src_filter {
        cmd.env("src_subnet", subnet.to_string());
    }
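`Ipv4Net::new(HOST_IP.into(), 24).trunc()` hands the forwarding script the bridge's /24 network address (host bits zeroed). The same truncation done with std only, masking manually instead of using the `ipnet` crate's `trunc`:

```rust
use std::net::Ipv4Addr;

// Zero the host bits of `ip` under a prefix-length mask,
// mimicking ipnet's Ipv4Net::trunc for the bridge subnet.
fn truncate(ip: Ipv4Addr, prefix: u8) -> Ipv4Addr {
    let mask: u32 = if prefix == 0 {
        0
    } else {
        u32::MAX << (32 - prefix)
    };
    Ipv4Addr::from(u32::from(ip) & mask)
}
```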

File diff suppressed because it is too large

View File

@@ -10,7 +10,9 @@ use ts_rs::TS;
use crate::GatewayId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
+use crate::hostname::ServerHostname;
use crate::net::acme::AcmeProvider;
+use crate::net::gateway::{CheckDnsParams, CheckPortParams, CheckPortRes, check_dns, check_port};
use crate::net::host::{HostApiKind, all_hosts};
use crate::prelude::*;
use crate::util::serde::{HandlerExtSerde, display_serializable};
@@ -24,6 +26,7 @@ pub struct HostAddress {
}

#[derive(Debug, Clone, Deserialize, Serialize, TS)]
+#[ts(export)]
pub struct PublicDomainConfig {
    pub gateway: GatewayId,
    pub acme: Option<AcmeProvider>,
@@ -157,7 +160,9 @@ pub fn address_api<C: Context, Kind: HostApiKind>()
    )
}

-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
pub struct AddPublicDomainParams {
    #[arg(help = "help.arg.fqdn")]
    pub fqdn: InternedString,
@@ -165,6 +170,17 @@ pub struct AddPublicDomainParams {
    pub acme: Option<AcmeProvider>,
    #[arg(help = "help.arg.gateway-id")]
    pub gateway: GatewayId,
+   #[arg(help = "help.arg.internal-port")]
+   pub internal_port: u16,
+}
+
+#[derive(Debug, Clone, Deserialize, Serialize, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
+pub struct AddPublicDomainRes {
+   #[ts(type = "string | null")]
+   pub dns: Option<Ipv4Addr>,
+   pub port: CheckPortRes,
}
pub async fn add_public_domain<Kind: HostApiKind>(
@@ -173,10 +189,12 @@ pub async fn add_public_domain<Kind: HostApiKind>(
        fqdn,
        acme,
        gateway,
+       internal_port,
    }: AddPublicDomainParams,
    inheritance: Kind::Inheritance,
-) -> Result<Option<Ipv4Addr>, Error> {
-   ctx.db
+) -> Result<AddPublicDomainRes, Error> {
+   let ext_port = ctx
+       .db
        .mutate(|db| {
            if let Some(acme) = &acme {
                if !db
@@ -192,24 +210,96 @@ pub async fn add_public_domain<Kind: HostApiKind>(
            Kind::host_for(&inheritance, db)?
                .as_public_domains_mut()
-               .insert(&fqdn, &PublicDomainConfig { acme, gateway })?;
+               .insert(
+                   &fqdn,
+                   &PublicDomainConfig {
+                       acme,
+                       gateway: gateway.clone(),
+                   },
+               )?;
            handle_duplicates(db)?;
-           let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
-           let ports = db.as_private().as_available_ports().de()?;
-           Kind::host_for(&inheritance, db)?.update_addresses(&gateways, &ports)
+           let hostname = ServerHostname::load(db.as_public().as_server_info())?;
+           let gateways = db
+               .as_public()
+               .as_server_info()
+               .as_network()
+               .as_gateways()
+               .de()?;
+           let available_ports = db.as_private().as_available_ports().de()?;
+           let host = Kind::host_for(&inheritance, db)?;
+           host.update_addresses(&hostname, &gateways, &available_ports)?;
+           // Find the external port for the target binding
+           let bindings = host.as_bindings().de()?;
+           let target_bind = bindings
+               .get(&internal_port)
+               .ok_or_else(|| Error::new(eyre!("binding not found for internal port {internal_port}"), ErrorKind::NotFound))?;
+           let ext_port = target_bind
+               .addresses
+               .available
+               .iter()
+               .find(|a| a.public && a.hostname == fqdn)
+               .and_then(|a| a.port)
+               .ok_or_else(|| Error::new(eyre!("no public address found for {fqdn} on port {internal_port}"), ErrorKind::NotFound))?;
+           // Disable the domain on all other bindings
+           host.as_bindings_mut().mutate(|b| {
+               for (&port, bind) in b.iter_mut() {
+                   if port == internal_port {
+                       continue;
+                   }
+                   let has_addr = bind
+                       .addresses
+                       .available
+                       .iter()
+                       .any(|a| a.public && a.hostname == fqdn);
+                   if has_addr {
+                       let other_ext = bind
+                           .addresses
+                           .available
+                           .iter()
+                           .find(|a| a.public && a.hostname == fqdn)
+                           .and_then(|a| a.port)
+                           .unwrap_or(ext_port);
+                       bind.addresses.disabled.insert((fqdn.clone(), other_ext));
+                   }
+               }
+               Ok(())
+           })?;
+           Ok(ext_port)
        })
        .await
        .result?;
-   Kind::sync_host(&ctx, inheritance).await?;
-   tokio::task::spawn_blocking(|| {
-       crate::net::dns::query_dns(ctx, crate::net::dns::QueryDnsParams { fqdn })
-   })
-   .await
-   .with_kind(ErrorKind::Unknown)?
+   let ctx2 = ctx.clone();
+   let fqdn2 = fqdn.clone();
+   let (dns_result, port_result) = tokio::join!(
+       async {
+           tokio::task::spawn_blocking(move || {
+               crate::net::dns::query_dns(ctx2, crate::net::dns::QueryDnsParams { fqdn: fqdn2 })
+           })
+           .await
+           .with_kind(ErrorKind::Unknown)?
+       },
+       check_port(
+           ctx.clone(),
+           CheckPortParams {
+               port: ext_port,
+               gateway: gateway.clone(),
+           },
+       )
+   );
+   Ok(AddPublicDomainRes {
+       dns: dns_result?,
+       port: port_result?,
+   })
}
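`add_public_domain` now runs the DNS query and the external port probe concurrently with `tokio::join!` and returns both results in a single response, so the frontend no longer issues follow-up API calls. The same fan-out/fan-in shape can be sketched with std threads (the two check functions here are stand-ins, not the real `query_dns`/`check_port`):

```rust
use std::thread;

// Stand-in probes; the real code calls query_dns and check_port.
fn dns_check(fqdn: &str) -> bool {
    !fqdn.is_empty()
}
fn port_check(port: u16) -> bool {
    port >= 49152
}

// Run both probes concurrently and gather both results,
// mirroring the tokio::join! in add_public_domain.
fn health_check(fqdn: String, port: u16) -> (bool, bool) {
    let dns = thread::spawn(move || dns_check(&fqdn));
    let prt = thread::spawn(move || port_check(port));
    (dns.join().unwrap(), prt.join().unwrap())
}
```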
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
pub struct RemoveDomainParams {
    #[arg(help = "help.arg.fqdn")]
    pub fqdn: InternedString,
@@ -225,18 +315,24 @@ pub async fn remove_public_domain<Kind: HostApiKind>(
            Kind::host_for(&inheritance, db)?
                .as_public_domains_mut()
                .remove(&fqdn)?;
-           let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
+           let hostname = ServerHostname::load(db.as_public().as_server_info())?;
+           let gateways = db
+               .as_public()
+               .as_server_info()
+               .as_network()
+               .as_gateways()
+               .de()?;
            let ports = db.as_private().as_available_ports().de()?;
-           Kind::host_for(&inheritance, db)?.update_addresses(&gateways, &ports)
+           Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
        })
        .await
        .result?;
-   Kind::sync_host(&ctx, inheritance).await?;
    Ok(())
}
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
pub struct AddPrivateDomainParams {
    #[arg(help = "help.arg.fqdn")]
    pub fqdn: InternedString,
@@ -247,23 +343,28 @@ pub async fn add_private_domain<Kind: HostApiKind>(
    ctx: RpcContext,
    AddPrivateDomainParams { fqdn, gateway }: AddPrivateDomainParams,
    inheritance: Kind::Inheritance,
-) -> Result<(), Error> {
+) -> Result<bool, Error> {
    ctx.db
        .mutate(|db| {
            Kind::host_for(&inheritance, db)?
                .as_private_domains_mut()
                .upsert(&fqdn, || Ok(BTreeSet::new()))?
-               .mutate(|d| Ok(d.insert(gateway)))?;
+               .mutate(|d| Ok(d.insert(gateway.clone())))?;
            handle_duplicates(db)?;
-           let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
+           let hostname = ServerHostname::load(db.as_public().as_server_info())?;
+           let gateways = db
+               .as_public()
+               .as_server_info()
+               .as_network()
+               .as_gateways()
+               .de()?;
            let ports = db.as_private().as_available_ports().de()?;
-           Kind::host_for(&inheritance, db)?.update_addresses(&gateways, &ports)
+           Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
        })
        .await
        .result?;
-   Kind::sync_host(&ctx, inheritance).await?;
-   Ok(())
+   check_dns(ctx, CheckDnsParams { gateway }).await
}
pub async fn remove_private_domain<Kind: HostApiKind>(
@@ -276,13 +377,18 @@ pub async fn remove_private_domain<Kind: HostApiKind>(
            Kind::host_for(&inheritance, db)?
                .as_private_domains_mut()
                .mutate(|d| Ok(d.remove(&domain)))?;
-           let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
+           let hostname = ServerHostname::load(db.as_public().as_server_info())?;
+           let gateways = db
+               .as_public()
+               .as_server_info()
+               .as_network()
+               .as_gateways()
+               .de()?;
            let ports = db.as_private().as_available_ports().de()?;
-           Kind::host_for(&inheritance, db)?.update_addresses(&gateways, &ports)
+           Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
        })
        .await
        .result?;
-   Kind::sync_host(&ctx, inheritance).await?;
    Ok(())
}

View File

@@ -75,7 +75,7 @@ impl DerivedAddressInfo {
            } else {
                !self
                    .disabled
-                   .contains(&(h.host.clone(), h.port.unwrap_or_default())) // disablable addresses will always have a port
+                   .contains(&(h.hostname.clone(), h.port.unwrap_or_default())) // disablable addresses will always have a port
            }
        })
        .collect()
@@ -204,11 +204,7 @@ impl BindInfo {
            enabled: true,
            options,
            net: lan,
-           addresses: DerivedAddressInfo {
-               enabled: addresses.enabled,
-               disabled: addresses.disabled,
-               available: BTreeSet::new(),
-           },
+           addresses,
        })
    }

    pub fn disable(&mut self) {
@@ -350,7 +346,7 @@ pub async fn set_address_enabled<Kind: HostApiKind>(
        } else {
            // Domains and private IPs: toggle via (host, port) in `disabled` set
            let port = address.port.unwrap_or(if address.ssl { 443 } else { 80 });
-           let key = (address.host.clone(), port);
+           let key = (address.hostname.clone(), port);
            if enabled {
                bind.addresses.disabled.remove(&key);
            } else {
@@ -362,5 +358,5 @@ pub async fn set_address_enabled<Kind: HostApiKind>(
        })
        .await
        .result?;
-   Kind::sync_host(&ctx, inheritance).await
+   Ok(())
}

View File

@@ -1,5 +1,5 @@
use std::collections::{BTreeMap, BTreeSet};
-use std::future::Future;
+use std::net::{IpAddr, SocketAddrV4};
use std::panic::RefUnwindSafe;

use clap::Parser;
@@ -13,7 +13,8 @@ use ts_rs::TS;
use crate::context::RpcContext;
use crate::db::model::DatabaseModel;
-use crate::db::model::public::NetworkInterfaceInfo;
+use crate::db::model::public::{NetworkInterfaceInfo, NetworkInterfaceType};
+use crate::hostname::ServerHostname;
use crate::net::forward::AvailablePorts;
use crate::net::host::address::{HostAddress, PublicDomainConfig, address_api};
use crate::net::host::binding::{BindInfo, BindOptions, Bindings, binding};
@@ -32,6 +33,20 @@ pub struct Host {
    pub bindings: Bindings,
    pub public_domains: BTreeMap<InternedString, PublicDomainConfig>,
    pub private_domains: BTreeMap<InternedString, BTreeSet<GatewayId>>,
+   /// COMPUTED: port forwarding rules needed on gateways for public addresses to work.
+   #[serde(default)]
+   pub port_forwards: BTreeSet<PortForward>,
+}
+
+#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
+pub struct PortForward {
+   #[ts(type = "string")]
+   pub src: SocketAddrV4,
+   #[ts(type = "string")]
+   pub dst: SocketAddrV4,
+   pub gateway: GatewayId,
}
impl AsRef<Host> for Host {
@@ -66,14 +81,20 @@ impl Host {
impl Model<Host> {
    pub fn update_addresses(
        &mut self,
+       mdns: &ServerHostname,
        gateways: &OrdMap<GatewayId, NetworkInterfaceInfo>,
        available_ports: &AvailablePorts,
    ) -> Result<(), Error> {
        let this = self.destructure_mut();
+       // ips
        for (_, bind) in this.bindings.as_entries_mut()? {
            let net = bind.as_net().de()?;
            let opt = bind.as_options().de()?;
-           let mut available = BTreeSet::new();
+           // Preserve existing plugin-provided addresses across recomputation
+           let mut available = bind.as_addresses().as_available().de()?;
+           available.retain(|h| matches!(h.metadata, HostnameMetadata::Plugin { .. }));
            for (gid, g) in gateways {
                let Some(ip_info) = &g.ip_info else {
                    continue;
@@ -98,7 +119,7 @@ impl Model<Host> {
                    available.insert(HostnameInfo {
                        ssl: opt.secure.map_or(false, |s| s.ssl),
                        public: false,
-                       host: host.clone(),
+                       hostname: host.clone(),
                        port: Some(port),
                        metadata: metadata.clone(),
                    });
@@ -107,7 +128,7 @@ impl Model<Host> {
                    available.insert(HostnameInfo {
                        ssl: true,
                        public: false,
-                       host: host.clone(),
+                       hostname: host.clone(),
                        port: Some(port),
                        metadata,
                    });
@@ -127,7 +148,7 @@ impl Model<Host> {
                    available.insert(HostnameInfo {
                        ssl: opt.secure.map_or(false, |s| s.ssl),
                        public: true,
-                       host: host.clone(),
+                       hostname: host.clone(),
                        port: Some(port),
                        metadata: metadata.clone(),
                    });
@@ -136,13 +157,64 @@ impl Model<Host> {
                    available.insert(HostnameInfo {
                        ssl: true,
                        public: true,
-                       host: host.clone(),
+                       hostname: host.clone(),
                        port: Some(port),
                        metadata,
                    });
                }
            }
        }
+       // mdns
+       let mdns_host = mdns.local_domain_name();
+       let mdns_gateways: BTreeSet<GatewayId> = gateways
+           .iter()
+           .filter(|(_, g)| {
+               matches!(
+                   g.ip_info.as_ref().and_then(|i| i.device_type),
+                   Some(NetworkInterfaceType::Ethernet | NetworkInterfaceType::Wireless)
+               )
+           })
+           .map(|(id, _)| id.clone())
+           .collect();
+       if let Some(port) = net.assigned_port.filter(|_| {
+           opt.secure
+               .map_or(true, |s| !(s.ssl && opt.add_ssl.is_some()))
+       }) {
+           let mdns_gateways = if opt.secure.is_some() {
+               mdns_gateways.clone()
+           } else {
+               mdns_gateways
+                   .iter()
+                   .filter(|g| gateways.get(*g).map_or(false, |g| g.secure()))
+                   .cloned()
+                   .collect()
+           };
+           if !mdns_gateways.is_empty() {
+               available.insert(HostnameInfo {
+                   ssl: opt.secure.map_or(false, |s| s.ssl),
+                   public: false,
+                   hostname: mdns_host.clone(),
+                   port: Some(port),
+                   metadata: HostnameMetadata::Mdns {
+                       gateways: mdns_gateways,
+                   },
+               });
+           }
+       }
+       if let Some(port) = net.assigned_ssl_port {
+           available.insert(HostnameInfo {
+               ssl: true,
+               public: false,
+               hostname: mdns_host,
+               port: Some(port),
+               metadata: HostnameMetadata::Mdns {
+                   gateways: mdns_gateways,
+               },
+           });
+       }
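The mDNS block above advertises the `.local` hostname only on gateways whose interface type can actually carry multicast DNS (Ethernet or Wireless). A reduced sketch of that filter, over an assumed mirror of `NetworkInterfaceType` with only the variants used here:

```rust
use std::collections::BTreeSet;

// Assumed mirror of NetworkInterfaceType; variant set is illustrative.
#[derive(Clone, Copy)]
enum IfaceType {
    Ethernet,
    Wireless,
    Tunnel,
}

// Keep only gateway ids whose interface type can carry mDNS traffic,
// as in the mdns_gateways computation above.
fn mdns_gateways(ifaces: &[(&str, Option<IfaceType>)]) -> BTreeSet<String> {
    ifaces
        .iter()
        .filter(|&&(_, t)| matches!(t, Some(IfaceType::Ethernet | IfaceType::Wireless)))
        .map(|&(id, _)| id.to_string())
        .collect()
}
```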
+       // public domains
        for (domain, info) in this.public_domains.de()? {
            let metadata = HostnameMetadata::PublicDomain {
                gateway: info.gateway.clone(),
@@ -156,7 +228,7 @@ impl Model<Host> {
                available.insert(HostnameInfo {
                    ssl: opt.secure.map_or(false, |s| s.ssl),
                    public: true,
-                   host: domain.clone(),
+                   hostname: domain.clone(),
                    port: Some(port),
                    metadata: metadata.clone(),
                });
@@ -173,12 +245,28 @@ impl Model<Host> {
                available.insert(HostnameInfo {
                    ssl: true,
                    public: true,
-                   host: domain.clone(),
+                   hostname: domain,
                    port: Some(port),
                    metadata,
                });
+           } else if opt.secure.map_or(false, |s| s.ssl)
+               && opt.add_ssl.is_none()
+               && available_ports.is_ssl(opt.preferred_external_port)
+               && net.assigned_port != Some(opt.preferred_external_port)
+           {
+               // Service handles its own TLS and the preferred port is
+               // allocated as SSL — add an address for passthrough vhost.
+               available.insert(HostnameInfo {
+                   ssl: true,
+                   public: true,
+                   hostname: domain,
+                   port: Some(opt.preferred_external_port),
+                   metadata,
+               });
            }
        }
+       // private domains
        for (domain, domain_gateways) in this.private_domains.de()? {
            if let Some(port) = net.assigned_port.filter(|_| {
                opt.secure
@@ -196,7 +284,7 @@ impl Model<Host> {
                available.insert(HostnameInfo {
                    ssl: opt.secure.map_or(false, |s| s.ssl),
                    public: true,
-                   host: domain.clone(),
+                   hostname: domain.clone(),
                    port: Some(port),
                    metadata: HostnameMetadata::PrivateDomain { gateways },
                });
@@ -213,16 +301,70 @@ impl Model<Host> {
                available.insert(HostnameInfo {
                    ssl: true,
                    public: true,
-                   host: domain.clone(),
+                   hostname: domain,
                    port: Some(port),
                    metadata: HostnameMetadata::PrivateDomain {
                        gateways: domain_gateways,
                    },
                });
+           } else if opt.secure.map_or(false, |s| s.ssl)
+               && opt.add_ssl.is_none()
+               && available_ports.is_ssl(opt.preferred_external_port)
+               && net.assigned_port != Some(opt.preferred_external_port)
+           {
+               available.insert(HostnameInfo {
+                   ssl: true,
+                   public: true,
+                   hostname: domain,
+                   port: Some(opt.preferred_external_port),
+                   metadata: HostnameMetadata::PrivateDomain {
+                       gateways: domain_gateways,
+                   },
+               });
            }
        }
        bind.as_addresses_mut().as_available_mut().ser(&available)?;
    }
+       // compute port forwards from available public addresses
+       let bindings: Bindings = this.bindings.de()?;
+       let mut port_forwards = BTreeSet::new();
+       for bind in bindings.values() {
+           for addr in bind.addresses.enabled() {
+               if !addr.public {
+                   continue;
+               }
+               let Some(port) = addr.port else {
+                   continue;
+               };
+               let gw_id = match &addr.metadata {
+                   HostnameMetadata::Ipv4 { gateway }
+                   | HostnameMetadata::PublicDomain { gateway } => gateway,
+                   _ => continue,
+               };
+               let Some(gw_info) = gateways.get(gw_id) else {
+                   continue;
+               };
+               let Some(ip_info) = &gw_info.ip_info else {
+                   continue;
+               };
+               let Some(wan_ip) = ip_info.wan_ip else {
+                   continue;
+               };
+               for subnet in &ip_info.subnets {
+                   let IpAddr::V4(addr) = subnet.addr() else {
+                       continue;
+                   };
+                   port_forwards.insert(PortForward {
+                       src: SocketAddrV4::new(wan_ip, port),
+                       dst: SocketAddrV4::new(addr, port),
+                       gateway: gw_id.clone(),
+                   });
+               }
+           }
+       }
+       this.port_forwards.ser(&port_forwards)?;
        Ok(())
    }
}
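The loop above derives one NAT rule per (public address, gateway subnet) pair, and the `BTreeSet` collapses duplicates when the same domain appears on several addresses. A self-contained sketch of that mapping, simplified to plain tuples in place of the `PortForward` struct and the DB models:

```rust
use std::collections::BTreeSet;
use std::net::{Ipv4Addr, SocketAddrV4};

// One (src on WAN, dst on LAN, gateway id) forward per public port,
// collected into a set so duplicate entries collapse.
fn compute_forwards(
    public_ports: &[(u16, &str)],            // (external port, gateway id)
    gateways: &[(&str, Ipv4Addr, Ipv4Addr)], // (id, wan ip, lan ip)
) -> BTreeSet<(SocketAddrV4, SocketAddrV4, String)> {
    let mut forwards = BTreeSet::new();
    for &(port, gw_id) in public_ports {
        for &(id, wan, lan) in gateways {
            if id == gw_id {
                forwards.insert((
                    SocketAddrV4::new(wan, port),
                    SocketAddrV4::new(lan, port),
                    id.to_string(),
                ));
            }
        }
    }
    forwards
}
```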
@@ -325,10 +467,6 @@ pub trait HostApiKind: 'static {
        inheritance: &Self::Inheritance,
        db: &'a mut DatabaseModel,
    ) -> Result<&'a mut Model<Host>, Error>;
-   fn sync_host(
-       ctx: &RpcContext,
-       inheritance: Self::Inheritance,
-   ) -> impl Future<Output = Result<(), Error>> + Send;
}
pub struct ForPackage;
impl HostApiKind for ForPackage {
@@ -347,12 +485,6 @@ impl HostApiKind for ForPackage {
    ) -> Result<&'a mut Model<Host>, Error> {
        host_for(db, Some(package), host)
    }
-   async fn sync_host(ctx: &RpcContext, (package, host): Self::Inheritance) -> Result<(), Error> {
-       let service = ctx.services.get(&package).await;
-       let service_ref = service.as_ref().or_not_found(&package)?;
-       service_ref.sync_host(host).await?;
-       Ok(())
-   }
}
pub struct ForServer;
impl HostApiKind for ForServer {
@@ -368,9 +500,6 @@ impl HostApiKind for ForServer {
    ) -> Result<&'a mut Model<Host>, Error> {
        host_for(db, None, &HostId::default())
    }
-   async fn sync_host(ctx: &RpcContext, _: Self::Inheritance) -> Result<(), Error> {
-       ctx.os_net_service.sync_host(HostId::default()).await
-   }
}
pub fn host_api<C: Context>() -> ParentHandler<C, RequiresPackageId> {

View File

@@ -4,15 +4,16 @@ use std::sync::{Arc, Weak};
use color_eyre::eyre::eyre;
use imbl_value::InternedString;
+use nix::net::if_::if_nametoindex;
+use patch_db::json_ptr::JsonPointer;
+use tokio::process::Command;
use tokio::sync::Mutex;
use tokio::task::JoinHandle;
use tokio_rustls::rustls::ClientConfig as TlsClientConfig;
use tracing::instrument;
-use patch_db::json_ptr::JsonPointer;

use crate::db::model::Database;
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
use crate::net::dns::DnsController;
use crate::net::forward::{
    ForwardRequirements, InterfacePortForwardController, START9_BRIDGE_IFACE, add_iptables_rule,
@@ -26,6 +27,7 @@ use crate::net::socks::SocksController;
use crate::net::vhost::{AlpnInfo, DynVHostTarget, ProxyTarget, VHostController};
use crate::prelude::*;
use crate::service::effects::callbacks::ServiceCallbacks;
+use crate::util::Invoke;
use crate::util::serde::MaybeUtf8String;
use crate::util::sync::Watch;
use crate::{GatewayId, HOST_IP, HostId, OptionExt, PackageId};
@@ -38,16 +40,11 @@ pub struct NetController {
    pub(super) dns: DnsController,
    pub(super) forward: InterfacePortForwardController,
    pub(super) socks: SocksController,
-   pub(super) server_hostnames: Vec<Option<InternedString>>,
    pub(crate) callbacks: Arc<ServiceCallbacks>,
}

impl NetController {
-   pub async fn init(
-       db: TypedPatchDb<Database>,
-       hostname: &Hostname,
-       socks_listen: SocketAddr,
-   ) -> Result<Self, Error> {
+   pub async fn init(db: TypedPatchDb<Database>, socks_listen: SocketAddr) -> Result<Self, Error> {
        let net_iface = Arc::new(NetworkInterfaceController::new(db.clone()));
        let socks = SocksController::new(socks_listen)?;
        let crypto_provider = Arc::new(tokio_rustls::rustls::crypto::ring::default_provider());
@@ -79,26 +76,27 @@ impl NetController {
            ],
        )
        .await?;
+       let passthroughs = db
+           .peek()
+           .await
+           .as_public()
+           .as_server_info()
+           .as_network()
+           .as_passthroughs()
+           .de()?;
        Ok(Self {
            db: db.clone(),
-           vhost: VHostController::new(db.clone(), net_iface.clone(), crypto_provider),
+           vhost: VHostController::new(
+               db.clone(),
+               net_iface.clone(),
+               crypto_provider,
+               passthroughs,
+           ),
            tls_client_config,
            dns: DnsController::init(db, &net_iface.watcher).await?,
            forward: InterfacePortForwardController::new(net_iface.watcher.subscribe()),
            net_iface,
            socks,
-           server_hostnames: vec![
-               // LAN IP
-               None,
-               // Internal DNS
-               Some("embassy".into()),
-               Some("startos".into()),
-               // localhost
-               Some("localhost".into()),
-               Some(hostname.no_dot_host_name()),
-               // LAN mDNS
-               Some(hostname.local_domain_name()),
-           ],
            callbacks: Arc::new(ServiceCallbacks::default()),
        })
    }
@@ -180,12 +178,7 @@ impl NetServiceData {
        })
    }

-   async fn update(
-       &mut self,
-       ctrl: &NetController,
-       id: HostId,
-       host: Host,
-   ) -> Result<(), Error> {
+   async fn update(&mut self, ctrl: &NetController, id: HostId, host: Host) -> Result<(), Error> {
        let mut forwards: BTreeMap<u16, (SocketAddrV4, ForwardRequirements)> = BTreeMap::new();
        let mut vhosts: BTreeMap<(Option<InternedString>, u16), ProxyTarget> = BTreeMap::new();
        let mut private_dns: BTreeSet<InternedString> = BTreeSet::new();
@@ -236,23 +229,30 @@ impl NetServiceData {
            .flat_map(|ip_info| ip_info.subnets.iter().map(|s| s.addr()))
            .collect();

-       // Server hostname vhosts (on assigned_ssl_port) — private only
-       if !server_private_ips.is_empty() {
-           for hostname in ctrl.server_hostnames.iter().cloned() {
-               vhosts.insert(
-                   (hostname, assigned_ssl_port),
-                   ProxyTarget {
-                       public: BTreeSet::new(),
-                       private: server_private_ips.clone(),
-                       acme: None,
-                       addr,
-                       add_x_forwarded_headers: ssl.add_x_forwarded_headers,
-                       connect_ssl: connect_ssl
-                           .clone()
-                           .map(|_| ctrl.tls_client_config.clone()),
-                   },
-               );
-           }
+       // Collect public gateways from enabled public IP addresses
+       let server_public_gateways: BTreeSet<GatewayId> = enabled_addresses
+           .iter()
+           .filter(|a| a.public && a.metadata.is_ip())
+           .flat_map(|a| a.metadata.gateways())
+           .cloned()
+           .collect();
+
+       // * vhost (on assigned_ssl_port)
+       if !server_private_ips.is_empty() || !server_public_gateways.is_empty() {
+           vhosts.insert(
+               (None, assigned_ssl_port),
+               ProxyTarget {
+                   public: server_public_gateways.clone(),
+                   private: server_private_ips.clone(),
+                   acme: None,
+                   addr,
+                   add_x_forwarded_headers: ssl.add_x_forwarded_headers,
+                   connect_ssl: connect_ssl
+                       .clone()
+                       .map(|_| ctrl.tls_client_config.clone()),
+                   passthrough: false,
+               },
+           );
        }
    }
@@ -266,8 +266,10 @@ impl NetServiceData {
                | HostnameMetadata::PrivateDomain { .. } => {}
                _ => continue,
            }
-           let domain = &addr_info.host;
-           let domain_ssl_port = addr_info.port.unwrap_or(443);
+           let domain = &addr_info.hostname;
+           let Some(domain_ssl_port) = addr_info.port else {
+               continue;
+           };
            let key = (Some(domain.clone()), domain_ssl_port);
            let target = vhosts.entry(key).or_insert_with(|| ProxyTarget {
                public: BTreeSet::new(),
@@ -280,6 +282,7 @@ impl NetServiceData {
                addr,
                add_x_forwarded_headers: ssl.add_x_forwarded_headers,
                connect_ssl: connect_ssl.clone().map(|_| ctrl.tls_client_config.clone()),
+               passthrough: false,
            });
            if addr_info.public {
                for gw in addr_info.metadata.gateways() {
@@ -331,6 +334,53 @@ impl NetServiceData {
                ),
            );
        }
+       // Passthrough vhosts: if the service handles its own TLS
+       // (secure.ssl && no add_ssl) and a domain address is enabled on
+       // an SSL port different from assigned_port, add a passthrough
+       // vhost so the service's TLS endpoint is reachable on that port.
+       if bind.options.secure.map_or(false, |s| s.ssl) && bind.options.add_ssl.is_none() {
+           let assigned = bind.net.assigned_port;
+           for addr_info in &enabled_addresses {
+               if !addr_info.ssl {
+                   continue;
+               }
+               let Some(pt_port) = addr_info.port.filter(|p| assigned != Some(*p)) else {
+                   continue;
+               };
+               match &addr_info.metadata {
+                   HostnameMetadata::PublicDomain { .. }
+                   | HostnameMetadata::PrivateDomain { .. } => {}
+                   _ => continue,
+               }
+               let domain = &addr_info.hostname;
+               let key = (Some(domain.clone()), pt_port);
+               let target = vhosts.entry(key).or_insert_with(|| ProxyTarget {
+                   public: BTreeSet::new(),
+                   private: BTreeSet::new(),
+                   acme: None,
+                   addr,
+                   add_x_forwarded_headers: false,
+                   connect_ssl: Err(AlpnInfo::Reflect),
+                   passthrough: true,
+               });
+               if addr_info.public {
+                   for gw in addr_info.metadata.gateways() {
+                       target.public.insert(gw.clone());
+                   }
+               } else {
+                   for gw in addr_info.metadata.gateways() {
+                       if let Some(info) = net_ifaces.get(gw) {
+                           if let Some(ip_info) = &info.ip_info {
+                               for subnet in &ip_info.subnets {
+                                   target.private.insert(subnet.addr());
+                               }
+                           }
+                       }
+                   }
+               }
+           }
+       }
    }
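A passthrough vhost is created above only when the service terminates TLS itself (`secure.ssl` set and no `add_ssl` layer) and the address's SSL port differs from the binding's already-proxied assigned port. That predicate in isolation (parameter names abbreviated from the fields above):

```rust
// Decide whether an address gets a TLS-passthrough vhost:
// the service speaks TLS itself, the address is SSL, and its
// port is not the already-proxied assigned_port.
fn wants_passthrough(
    secure_ssl: bool,
    add_ssl: bool,
    addr_ssl: bool,
    addr_port: Option<u16>,
    assigned_port: Option<u16>,
) -> bool {
    secure_ssl && !add_ssl && addr_ssl && addr_port.map_or(false, |p| assigned_port != Some(p))
}
```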
// ── Phase 3: Reconcile ──
@@ -424,7 +474,6 @@ impl NetServiceData {
        Ok(())
    }
}
pub struct NetService {
@@ -458,36 +507,163 @@ impl NetService {
let synced = Watch::new(0u64); let synced = Watch::new(0u64);
let synced_writer = synced.clone(); let synced_writer = synced.clone();
let ip = data.ip;
let data = Arc::new(Mutex::new(data)); let data = Arc::new(Mutex::new(data));
let thread_data = data.clone(); let thread_data = data.clone();
let sync_task = tokio::spawn(async move { let sync_task = tokio::spawn(async move {
if let Some(ref id) = pkg_id { if let Some(ref id) = pkg_id {
let ptr: JsonPointer = format!("/public/packageData/{}/hosts", id) let ptr: JsonPointer = format!("/public/packageData/{}/hosts", id).parse().unwrap();
.parse()
.unwrap();
let mut watch = db.watch(ptr).await.typed::<Hosts>(); let mut watch = db.watch(ptr).await.typed::<Hosts>();
// Outbound gateway enforcement
let service_ip = ip.to_string();
// Purge any stale rules from a previous instance
loop {
if Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await
.is_err()
{
break;
}
}
let mut outbound_sub = db
.subscribe(
format!("/public/packageData/{}/outboundGateway", id)
.parse::<JsonPointer<_, _>>()
.unwrap(),
)
.await;
let ctrl_for_ip = thread_data.lock().await.net_controller().ok();
let mut ip_info_watch = ctrl_for_ip
.as_ref()
.map(|c| c.net_iface.watcher.subscribe());
if let Some(ref mut w) = ip_info_watch {
w.mark_seen();
}
drop(ctrl_for_ip);
let mut current_outbound_table: Option<u32> = None;
loop {
let (hosts_changed, outbound_changed) = tokio::select! {
res = watch.changed() => {
if let Err(e) = res {
tracing::error!("DB watch disconnected for {id}: {e}");
break;
}
(true, false)
}
_ = outbound_sub.recv() => (false, true),
_ = async {
if let Some(ref mut w) = ip_info_watch {
w.changed().await;
} else {
std::future::pending::<()>().await;
}
} => (false, true),
};
// Handle host updates
if hosts_changed {
if let Err(e) = async {
let hosts = watch.peek()?.de()?;
let mut data = thread_data.lock().await;
let ctrl = data.net_controller()?;
for (host_id, host) in hosts.0 {
data.update(&*ctrl, host_id, host).await?;
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("Failed to update network info for {id}: {e}");
tracing::debug!("{e:?}");
}
}
// Handle outbound gateway changes
if outbound_changed {
if let Err(e) = async {
// Remove old rule if any
if let Some(old_table) = current_outbound_table.take() {
let old_table_str = old_table.to_string();
let _ = Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("lookup")
.arg(&old_table_str)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await;
}
// Read current outbound gateway from DB
let outbound_gw: Option<GatewayId> = db
.peek()
.await
.as_public()
.as_package_data()
.as_idx(id)
.map(|p| p.as_outbound_gateway().de().ok())
.flatten()
.flatten();
if let Some(gw_id) = outbound_gw {
// Look up table ID for this gateway
if let Some(table_id) = if_nametoindex(gw_id.as_str())
.map(|idx| 1000 + idx)
.log_err()
{
let table_str = table_id.to_string();
Command::new("ip")
.arg("rule")
.arg("add")
.arg("from")
.arg(&service_ip)
.arg("lookup")
.arg(&table_str)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await
.log_err();
current_outbound_table = Some(table_id);
}
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("Failed to update outbound gateway for {id}: {e}");
tracing::debug!("{e:?}");
}
}
synced_writer.send_modify(|v| *v += 1);
}
// Cleanup outbound rule on task exit
if let Some(table_id) = current_outbound_table {
let table_str = table_id.to_string();
let _ = Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("lookup")
.arg(&table_str)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await;
}
} else {
let ptr: JsonPointer = "/public/serverInfo/network/host".parse().unwrap();
let mut watch = db.watch(ptr).await.typed::<Host>();
@@ -539,10 +715,11 @@ impl NetService {
.as_network()
.as_gateways()
.de()?;
let hostname = ServerHostname::load(db.as_public().as_server_info())?;
let mut ports = db.as_private().as_available_ports().de()?;
let host = host_for(db, pkg_id.as_ref(), &id)?;
host.add_binding(&mut ports, internal_port, options)?;
host.update_addresses(&hostname, &gateways, &ports)?;
db.as_private_mut().as_available_ports_mut().ser(&ports)?;
Ok(())
})
@@ -563,6 +740,7 @@ impl NetService {
.as_network()
.as_gateways()
.de()?;
let hostname = ServerHostname::load(db.as_public().as_server_info())?;
let ports = db.as_private().as_available_ports().de()?;
if let Some(ref pkg_id) = pkg_id {
for (host_id, host) in db
@@ -584,7 +762,7 @@ impl NetService {
}
Ok(())
})?;
host.update_addresses(&hostname, &gateways, &ports)?;
}
} else {
let host = db
@@ -603,7 +781,7 @@ impl NetService {
}
Ok(())
})?;
host.update_addresses(&hostname, &gateways, &ports)?;
}
Ok(())
})
@@ -611,13 +789,6 @@ impl NetService {
.result
}
pub async fn sync_host(&self, _id: HostId) -> Result<(), Error> {
let current = self.synced.peek(|v| *v);
let mut w = self.synced.clone();
w.wait_for(|v| *v > current).await;
Ok(())
}
pub async fn remove_all(mut self) -> Result<(), Error> {
if Weak::upgrade(&self.data.lock().await.controller).is_none() {
self.shutdown = true;
@@ -632,6 +803,23 @@ impl NetService {
let mut w = self.synced.clone();
w.wait_for(|v| *v > current).await;
self.sync_task.abort();
// Clean up any outbound gateway ip rules for this service
let service_ip = self.data.lock().await.ip.to_string();
loop {
if Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await
.is_err()
{
break;
}
}
self.shutdown = true;
Ok(())
}
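Aside: the source-routing added above keys each outbound gateway to policy-routing table `1000 + if_nametoindex(gateway)`. The offset is this codebase's convention, not a kernel requirement; a minimal sketch of the derivation (the bound check on a zero ifindex mirrors `if_nametoindex` returning failure for unknown interfaces):

```rust
// Sketch: derive the policy-routing table id used for a service's
// outbound gateway. A 0 ifindex means "no such interface".
fn outbound_table_id(ifindex: u32) -> Option<u32> {
    if ifindex == 0 { None } else { Some(1000 + ifindex) }
}

fn main() {
    // e.g. an interface with index 3 maps to table 1003
    assert_eq!(outbound_table_id(3), Some(1003));
    assert_eq!(outbound_table_id(0), None);
}
```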


@@ -1,12 +1,12 @@
use std::collections::BTreeSet;
use std::net::SocketAddr;
use imbl_value::InternedString;
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::prelude::*;
use crate::{ActionId, GatewayId, HostId, PackageId, ServiceInterfaceId};
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
@@ -14,7 +14,7 @@ use crate::{GatewayId, HostId, PackageId, ServiceInterfaceId};
pub struct HostnameInfo {
pub ssl: bool,
pub public: bool,
pub hostname: InternedString,
pub port: Option<u16>,
pub metadata: HostnameMetadata,
}
@@ -32,6 +32,9 @@ pub enum HostnameMetadata {
gateway: GatewayId,
scope_id: u32,
},
Mdns {
gateways: BTreeSet<GatewayId>,
},
PrivateDomain {
gateways: BTreeSet<GatewayId>,
},
@@ -39,21 +42,23 @@ pub enum HostnameMetadata {
gateway: GatewayId,
},
Plugin {
package_id: PackageId,
remove_action: Option<ActionId>,
overflow_actions: Vec<ActionId>,
#[ts(type = "unknown")]
#[serde(default)]
info: Value,
},
}
impl HostnameInfo {
pub fn to_socket_addr(&self) -> Option<SocketAddr> {
let ip = self.hostname.parse().ok()?;
Some(SocketAddr::new(ip, self.port?))
}
pub fn to_san_hostname(&self) -> InternedString {
self.hostname.clone()
}
}
@@ -67,12 +72,70 @@ impl HostnameMetadata {
Self::Ipv4 { gateway }
| Self::Ipv6 { gateway, .. }
| Self::PublicDomain { gateway } => Box::new(std::iter::once(gateway)),
Self::PrivateDomain { gateways } | Self::Mdns { gateways } => Box::new(gateways.iter()),
Self::Plugin { .. } => Box::new(std::iter::empty()),
}
}
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct PluginHostnameInfo {
pub package_id: Option<PackageId>,
pub host_id: HostId,
pub internal_port: u16,
pub ssl: bool,
pub public: bool,
#[ts(type = "string")]
pub hostname: InternedString,
pub port: Option<u16>,
#[ts(type = "unknown")]
#[serde(default)]
pub info: Value,
}
impl PluginHostnameInfo {
/// Convert to a `HostnameInfo` with `Plugin` metadata, using the given plugin package ID.
pub fn to_hostname_info(
&self,
plugin_package: &PackageId,
remove_action: Option<ActionId>,
overflow_actions: Vec<ActionId>,
) -> HostnameInfo {
HostnameInfo {
ssl: self.ssl,
public: self.public,
hostname: self.hostname.clone(),
port: self.port,
metadata: HostnameMetadata::Plugin {
package_id: plugin_package.clone(),
info: self.info.clone(),
remove_action,
overflow_actions,
},
}
}
/// Check if a `HostnameInfo` with Plugin metadata matches this `PluginHostnameInfo`
/// (comparing address fields only, not row_actions).
pub fn matches_hostname_info(&self, h: &HostnameInfo, plugin_package: &PackageId) -> bool {
match &h.metadata {
HostnameMetadata::Plugin {
package_id, info, ..
} => {
package_id == plugin_package
&& h.ssl == self.ssl
&& h.public == self.public
&& h.hostname == self.hostname
&& h.port == self.port
&& *info == self.info
}
_ => false,
}
}
}
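Aside: `matches_hostname_info` above deliberately compares address-relevant fields only, so stale plugin rows can be reconciled without churning when row actions change. A reduced, self-contained sketch of that matching rule, with plain structs standing in for the real `PluginHostnameInfo`/`HostnameInfo` types:

```rust
// Reduced model: two plugin hostname records match when every
// address field agrees; action metadata is intentionally ignored.
#[derive(Clone, PartialEq)]
struct Addr {
    ssl: bool,
    public: bool,
    hostname: String,
    port: Option<u16>,
    info: String, // stand-in for the opaque `info: Value` payload
}

fn matches(a: &Addr, b: &Addr) -> bool {
    a.ssl == b.ssl
        && a.public == b.public
        && a.hostname == b.hostname
        && a.port == b.port
        && a.info == b.info
}

fn main() {
    let a = Addr {
        ssl: true,
        public: false,
        hostname: "svc.local".into(),
        port: Some(443),
        info: "{}".into(),
    };
    let mut b = a.clone();
    assert!(matches(&a, &b));
    b.port = Some(8443); // any address field differing breaks the match
    assert!(!matches(&a, &b));
}
```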
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]


@@ -33,10 +33,10 @@ use crate::SOURCE_DATE;
use crate::account::AccountInfo;
use crate::db::model::Database;
use crate::db::{DbAccess, DbAccessMut};
use crate::hostname::ServerHostname;
use crate::init::check_time_is_synchronized;
use crate::net::gateway::GatewayInfo;
use crate::net::tls::{TlsHandler, TlsHandlerAction};
use crate::net::web_server::{Accept, ExtractVisitor, TcpMetadata, extract};
use crate::prelude::*;
use crate::util::serde::Pem;
@@ -283,7 +283,7 @@ pub fn gen_nistp256() -> Result<PKey<Private>, Error> {
#[instrument(skip_all)]
pub fn make_root_cert(
root_key: &PKey<Private>,
hostname: &ServerHostname,
start_time: SystemTime,
) -> Result<X509, Error> {
let mut builder = X509Builder::new()?;
@@ -300,7 +300,8 @@ pub fn make_root_cert(
builder.set_serial_number(&*rand_serial()?)?;
let mut subject_name_builder = X509NameBuilder::new()?;
subject_name_builder
.append_entry_by_text("CN", &format!("{} Local Root CA", hostname.as_ref()))?;
subject_name_builder.append_entry_by_text("O", "Start9")?;
subject_name_builder.append_entry_by_text("OU", "StartOS")?;
let subject_name = subject_name_builder.build();
@@ -619,7 +620,7 @@ where
&mut self,
hello: &ClientHello<'_>,
metadata: &<A as Accept>::Metadata,
) -> Option<TlsHandlerAction> {
let hostnames: BTreeSet<InternedString> = hello
.server_name()
.map(InternedString::from)
@@ -683,5 +684,6 @@ where
)
}
.log_err()
.map(TlsHandlerAction::Tls)
}
}


@@ -9,7 +9,7 @@ use async_compression::tokio::bufread::GzipEncoder;
use axum::Router;
use axum::body::Body;
use axum::extract::{self as x, Request};
use axum::response::Response;
use axum::routing::{any, get};
use base64::display::Base64Display;
use digest::Digest;
@@ -31,7 +31,7 @@ use tokio_util::io::ReaderStream;
use url::Url;
use crate::context::{DiagnosticContext, InitContext, RpcContext, SetupContext};
use crate::hostname::ServerHostname;
use crate::middleware::auth::Auth;
use crate::middleware::auth::session::ValidSessionToken;
use crate::middleware::cors::Cors;
@@ -105,8 +105,9 @@ impl UiContext for RpcContext {
get(move || {
let ctx = self.clone();
async move {
ctx.account.peek(|account| {
cert_send(&account.root_ca_cert, &account.hostname.hostname)
})
}
}),
)
@@ -419,7 +420,7 @@ pub fn bad_request() -> Response {
.unwrap()
}
fn cert_send(cert: &X509, hostname: &ServerHostname) -> Result<Response, Error> {
let pem = cert.to_pem()?;
Response::builder()
.status(StatusCode::OK)
@@ -435,7 +436,7 @@ fn cert_send(cert: &X509, hostname: &Hostname) -> Result<Response, Error> {
.header(http::header::CONTENT_LENGTH, pem.len())
.header(
http::header::CONTENT_DISPOSITION,
format!("attachment; filename={}.crt", hostname.as_ref()),
)
.body(Body::from(pem))
.with_kind(ErrorKind::Network)


@@ -1,5 +1,6 @@
use std::sync::Arc;
use std::task::{Poll, ready};
use std::time::Duration;
use futures::future::BoxFuture;
use futures::stream::FuturesUnordered;
@@ -15,6 +16,14 @@ use tokio_rustls::rustls::sign::CertifiedKey;
use tokio_rustls::rustls::{ClientConfig, RootCertStore, ServerConfig};
use visit_rs::{Visit, VisitFields};
/// Result of a TLS handler's decision about how to handle a connection.
pub enum TlsHandlerAction {
/// Complete the TLS handshake with this ServerConfig.
Tls(ServerConfig),
/// Don't complete TLS — rewind the BackTrackingIO and return the raw stream.
Passthrough,
}
use crate::net::http::handle_http_on_https;
use crate::net::web_server::{Accept, AcceptStream, MetadataVisitor};
use crate::prelude::*;
@@ -49,7 +58,7 @@ pub trait TlsHandler<'a, A: Accept> {
&'a mut self,
hello: &'a ClientHello<'a>,
metadata: &'a A::Metadata,
) -> impl Future<Output = Option<TlsHandlerAction>> + Send + 'a;
}
#[derive(Clone)]
@@ -65,7 +74,7 @@ where
&'a mut self,
hello: &'a ClientHello<'a>,
metadata: &'a <A as Accept>::Metadata,
) -> Option<TlsHandlerAction> {
if let Some(config) = self.0.get_config(hello, metadata).await {
return Some(config);
}
@@ -85,7 +94,7 @@ pub trait WrapTlsHandler<A: Accept> {
prev: ServerConfig,
hello: &'a ClientHello<'a>,
metadata: &'a <A as Accept>::Metadata,
) -> impl Future<Output = Option<TlsHandlerAction>> + Send + 'a
where
Self: 'a;
}
@@ -101,9 +110,12 @@ where
&'a mut self,
hello: &'a ClientHello<'a>,
metadata: &'a <A as Accept>::Metadata,
) -> Option<TlsHandlerAction> {
let action = self.inner.get_config(hello, metadata).await?;
match action {
TlsHandlerAction::Tls(cfg) => self.wrapper.wrap(cfg, hello, metadata).await,
other => Some(other),
}
}
}
@@ -170,7 +182,7 @@ where
let (metadata, stream) = ready!(self.accept.poll_accept(cx)?);
let mut tls_handler = self.tls_handler.clone();
let mut fut = async move {
let res = match tokio::time::timeout(Duration::from_secs(15), async {
let mut acceptor =
LazyConfigAcceptor::new(Acceptor::default(), BackTrackingIO::new(stream));
let mut mid: tokio_rustls::StartHandshake<BackTrackingIO<AcceptStream>> =
@@ -202,45 +214,75 @@ where
}
};
let hello = mid.client_hello();
let sni = hello.server_name().map(InternedString::intern);
match tls_handler.get_config(&hello, &metadata).await {
Some(TlsHandlerAction::Tls(cfg)) => {
let buffered = mid.io.stop_buffering();
mid.io
.write_all(&buffered)
.await
.with_kind(ErrorKind::Network)?;
return Ok(match mid.into_stream(Arc::new(cfg)).await {
Ok(stream) => {
let s = stream.get_ref().1;
Some((
TlsMetadata {
inner: metadata,
tls_info: TlsHandshakeInfo {
sni: s
.server_name()
.map(InternedString::intern),
alpn: s
.alpn_protocol()
.map(|a| MaybeUtf8String(a.to_vec())),
},
},
Box::pin(stream) as AcceptStream,
))
}
Err(e) => {
tracing::trace!("Error completing TLS handshake: {e}");
tracing::trace!("{e:?}");
None
}
});
}
Some(TlsHandlerAction::Passthrough) => {
let (dummy, _drop) = tokio::io::duplex(1);
let mut bt = std::mem::replace(
&mut mid.io,
BackTrackingIO::new(Box::pin(dummy) as AcceptStream),
);
drop(mid);
bt.rewind();
return Ok(Some((
TlsMetadata {
inner: metadata,
tls_info: TlsHandshakeInfo { sni, alpn: None },
},
Box::pin(bt) as AcceptStream,
)));
}
None => {}
}
Ok(None)
})
.await
{
Ok(res) => res,
Err(_) => {
tracing::trace!("TLS handshake timed out");
Ok(None)
}
};
(tls_handler, res)
}
.boxed();
match fut.poll_unpin(cx) {
Poll::Pending => {
in_progress.push(fut);
cx.waker().wake_by_ref();
Poll::Pending
}
Poll::Ready((handler, res)) => {
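Aside: the `Passthrough` arm above depends on `BackTrackingIO` recording every byte consumed while peeking the ClientHello, so the raw stream can be replayed to a backend untouched. A synchronous, std-only sketch of that buffer-then-rewind idea (the real type in this codebase is async; `BackTracking` here is a hypothetical stand-in):

```rust
use std::io::{Cursor, Read};

// Sketch: a reader that records consumed bytes and can rewind, replaying
// the recorded prefix before continuing with the underlying stream.
struct BackTracking<R: Read> {
    inner: R,
    buffer: Vec<u8>,
    replay: usize, // how far into `buffer` a rewound reader has replayed
    rewound: bool,
}

impl<R: Read> BackTracking<R> {
    fn new(inner: R) -> Self {
        Self { inner, buffer: Vec::new(), replay: 0, rewound: false }
    }
    fn rewind(&mut self) {
        self.rewound = true;
        self.replay = 0;
    }
}

impl<R: Read> Read for BackTracking<R> {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        // After a rewind, serve the recorded prefix first.
        if self.rewound && self.replay < self.buffer.len() {
            let n = buf.len().min(self.buffer.len() - self.replay);
            buf[..n].copy_from_slice(&self.buffer[self.replay..self.replay + n]);
            self.replay += n;
            return Ok(n);
        }
        let n = self.inner.read(buf)?;
        if !self.rewound {
            self.buffer.extend_from_slice(&buf[..n]); // record for a later rewind
        }
        Ok(n)
    }
}

fn main() {
    let mut io = BackTracking::new(Cursor::new(b"hello world".to_vec()));
    let mut peek = [0u8; 5];
    io.read_exact(&mut peek).unwrap(); // "hello" consumed during the peek
    io.rewind();
    let mut all = Vec::new();
    io.read_to_end(&mut all).unwrap();
    assert_eq!(all, b"hello world"); // rewound reader sees the full stream
}
```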


@@ -175,13 +175,19 @@ pub async fn remove_tunnel(
ctx.db
.mutate(|db| {
let hostname = crate::hostname::ServerHostname::load(db.as_public().as_server_info())?;
let gateways = db
.as_public()
.as_server_info()
.as_network()
.as_gateways()
.de()?;
let ports = db.as_private().as_available_ports().de()?;
for host in all_hosts(db) {
let host = host?;
host.as_public_domains_mut()
.mutate(|p| Ok(p.retain(|_, v| v.gateway != id)))?;
host.update_addresses(&hostname, &gateways, &ports)?;
}
Ok(())
@@ -193,7 +199,13 @@ pub async fn remove_tunnel(
ctx.db
.mutate(|db| {
let hostname = crate::hostname::ServerHostname::load(db.as_public().as_server_info())?;
let gateways = db
.as_public()
.as_server_info()
.as_network()
.as_gateways()
.de()?;
let ports = db.as_private().as_available_ports().de()?;
for host in all_hosts(db) {
let host = host?;
@@ -204,7 +216,7 @@ pub async fn remove_tunnel(
d.retain(|_, gateways| !gateways.is_empty());
Ok(())
})?;
host.update_addresses(&hostname, &gateways, &ports)?;
}
Ok(())
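Aside: `remove_tunnel` above removes the departing gateway from every domain's gateway set, then drops entries whose set became empty. That two-step prune in isolation, with plain collections standing in for the DB models:

```rust
use std::collections::{BTreeMap, BTreeSet};

// Sketch: remove a gateway id from every domain's set, then drop domains
// left with no gateway, mirroring the remove_tunnel cleanup above.
fn prune(domains: &mut BTreeMap<String, BTreeSet<u32>>, gateway: u32) {
    for gateways in domains.values_mut() {
        gateways.remove(&gateway);
    }
    domains.retain(|_, gateways| !gateways.is_empty());
}

fn main() {
    let mut domains = BTreeMap::new();
    domains.insert("a.example".to_string(), BTreeSet::from([1, 2]));
    domains.insert("b.example".to_string(), BTreeSet::from([2]));
    prune(&mut domains, 2);
    assert!(domains.contains_key("a.example")); // still reachable via gateway 1
    assert!(!domains.contains_key("b.example")); // emptied, so dropped
}
```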


@@ -6,12 +6,13 @@ use std::sync::{Arc, Weak};
use std::task::{Poll, ready};
use async_acme::acme::ACME_TLS_ALPN_NAME;
use clap::Parser;
use color_eyre::eyre::eyre;
use futures::FutureExt;
use futures::future::BoxFuture;
use imbl::OrdMap;
use imbl_value::{InOMap, InternedString};
use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn, from_fn_async};
use serde::{Deserialize, Serialize};
use tokio::net::{TcpListener, TcpStream};
use tokio_rustls::TlsConnector;
@@ -35,7 +36,7 @@ use crate::net::gateway::{
};
use crate::net::ssl::{CertStore, RootCaTlsHandler};
use crate::net::tls::{
ChainedHandler, TlsHandlerAction, TlsHandlerWrapper, TlsListener, TlsMetadata, WrapTlsHandler,
};
use crate::net::utils::ipv6_is_link_local;
use crate::net::web_server::{Accept, AcceptStream, ExtractVisitor, TcpMetadata, extract};
@@ -46,68 +47,228 @@ use crate::util::serde::{HandlerExtSerde, MaybeUtf8String, display_serializable}
use crate::util::sync::{SyncMutex, Watch};
use crate::{GatewayId, ResultExt};
#[derive(Debug, Clone, Deserialize, Serialize, HasModel, TS)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
#[ts(export)]
pub struct PassthroughInfo {
#[ts(type = "string")]
pub hostname: InternedString,
pub listen_port: u16,
#[ts(type = "string")]
pub backend: SocketAddr,
#[ts(type = "string[]")]
pub public_gateways: BTreeSet<GatewayId>,
#[ts(type = "string[]")]
pub private_ips: BTreeSet<IpAddr>,
}
#[derive(Debug, Clone, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
struct AddPassthroughParams {
#[arg(long)]
pub hostname: InternedString,
#[arg(long)]
pub listen_port: u16,
#[arg(long)]
pub backend: SocketAddr,
#[arg(long)]
pub public_gateway: Vec<GatewayId>,
#[arg(long)]
pub private_ip: Vec<IpAddr>,
}
#[derive(Debug, Clone, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
struct RemovePassthroughParams {
#[arg(long)]
pub hostname: InternedString,
#[arg(long)]
pub listen_port: u16,
}
pub fn vhost_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand(
"dump-table",
from_fn(dump_table)
.with_display_serializable()
.with_custom_display_fn(|HandlerArgs { params, .. }, res| {
use prettytable::*;
if let Some(format) = params.format {
display_serializable(format, res)?;
return Ok::<_, Error>(());
}
let mut table = Table::new();
table.add_row(row![bc => "FROM", "TO", "ACTIVE"]);
for (external, targets) in res {
for (host, targets) in targets {
for (idx, target) in targets.into_iter().enumerate() {
table.add_row(row![
format!(
"{}:{}",
host.as_ref().map(|s| &**s).unwrap_or("*"),
external.0
),
target,
idx == 0
]);
}
}
}
table.print_tty(false)?;
Ok(())
})
.with_call_remote::<CliContext>(),
)
.subcommand(
"add-passthrough",
from_fn_async(add_passthrough)
.no_display()
.with_call_remote::<CliContext>(),
)
.subcommand(
"remove-passthrough",
from_fn_async(remove_passthrough)
.no_display()
.with_call_remote::<CliContext>(),
)
.subcommand(
"list-passthrough",
from_fn(list_passthrough)
.with_display_serializable()
.with_call_remote::<CliContext>(),
)
}
fn dump_table(
ctx: RpcContext,
) -> Result<BTreeMap<JsonKey<u16>, BTreeMap<JsonKey<Option<InternedString>>, EqSet<String>>>, Error>
{
Ok(ctx.net_controller.vhost.dump_table())
}
async fn add_passthrough(
ctx: RpcContext,
AddPassthroughParams {
hostname,
listen_port,
backend,
public_gateway,
private_ip,
}: AddPassthroughParams,
) -> Result<(), Error> {
let public_gateways: BTreeSet<GatewayId> = public_gateway.into_iter().collect();
let private_ips: BTreeSet<IpAddr> = private_ip.into_iter().collect();
ctx.net_controller.vhost.add_passthrough(
hostname.clone(),
listen_port,
backend,
public_gateways.clone(),
private_ips.clone(),
)?;
ctx.db
.mutate(|db| {
let pts = db
.as_public_mut()
.as_server_info_mut()
.as_network_mut()
.as_passthroughs_mut();
let mut vec: Vec<PassthroughInfo> = pts.de()?;
vec.retain(|p| !(p.hostname == hostname && p.listen_port == listen_port));
vec.push(PassthroughInfo {
hostname,
listen_port,
backend,
public_gateways,
private_ips,
});
pts.ser(&vec)
})
.await
.result?;
Ok(())
}
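Aside: `add_passthrough` persists its entry with a retain-then-push upsert keyed on `(hostname, listen_port)`, so re-adding the same key replaces the old record rather than duplicating it. The pattern in isolation, with a tuple standing in for `PassthroughInfo`:

```rust
// Sketch: keyed upsert into a Vec, as used for the persisted passthrough
// list. Key is (hostname, port); the third field stands in for the backend.
fn upsert(list: &mut Vec<(String, u16, String)>, hostname: &str, port: u16, backend: &str) {
    // Drop any existing record with the same key, then append the new one.
    list.retain(|(h, p, _)| !(h == hostname && *p == port));
    list.push((hostname.to_string(), port, backend.to_string()));
}

fn main() {
    let mut list = Vec::new();
    upsert(&mut list, "a.example", 443, "10.0.0.1:8080");
    upsert(&mut list, "a.example", 443, "10.0.0.2:8080"); // same key: replaces
    upsert(&mut list, "b.example", 443, "10.0.0.3:8080"); // new key: appends
    assert_eq!(list.len(), 2);
    assert_eq!(list[0].2, "10.0.0.2:8080");
}
```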
async fn remove_passthrough(
ctx: RpcContext,
RemovePassthroughParams {
hostname,
listen_port,
}: RemovePassthroughParams,
) -> Result<(), Error> {
ctx.net_controller
.vhost
.remove_passthrough(&hostname, listen_port);
ctx.db
.mutate(|db| {
let pts = db
.as_public_mut()
.as_server_info_mut()
.as_network_mut()
.as_passthroughs_mut();
let mut vec: Vec<PassthroughInfo> = pts.de()?;
vec.retain(|p| !(p.hostname == hostname && p.listen_port == listen_port));
pts.ser(&vec)
})
.await
.result?;
Ok(())
}
fn list_passthrough(ctx: RpcContext) -> Result<Vec<PassthroughInfo>, Error> {
Ok(ctx.net_controller.vhost.list_passthrough())
}
// not allowed: <=1024, >=32768, 5355, 5432, 9050, 6010, 9051, 5353
struct PassthroughHandle {
_rc: Arc<()>,
backend: SocketAddr,
public: BTreeSet<GatewayId>,
private: BTreeSet<IpAddr>,
}
pub struct VHostController {
db: TypedPatchDb<Database>,
interfaces: Arc<NetworkInterfaceController>,
crypto_provider: Arc<CryptoProvider>,
acme_cache: AcmeTlsAlpnCache,
servers: SyncMutex<BTreeMap<u16, VHostServer<VHostBindListener>>>,
passthrough_handles: SyncMutex<BTreeMap<(InternedString, u16), PassthroughHandle>>,
}
impl VHostController {
pub fn new(
db: TypedPatchDb<Database>,
interfaces: Arc<NetworkInterfaceController>,
crypto_provider: Arc<CryptoProvider>,
passthroughs: Vec<PassthroughInfo>,
) -> Self {
let controller = Self {
db,
interfaces,
crypto_provider,
acme_cache: Arc::new(SyncMutex::new(BTreeMap::new())),
servers: SyncMutex::new(BTreeMap::new()),
passthrough_handles: SyncMutex::new(BTreeMap::new()),
};
for pt in passthroughs {
if let Err(e) = controller.add_passthrough(
pt.hostname,
pt.listen_port,
pt.backend,
pt.public_gateways,
pt.private_ips,
) {
tracing::warn!("failed to restore passthrough: {e}");
}
}
controller
}
#[instrument(skip_all)]
pub fn add(
@@ -120,20 +281,7 @@ impl VHostController {
let server = if let Some(server) = writable.remove(&external) {
server
} else {
self.create_server(external)
};
let rc = server.add(hostname, target);
writable.insert(external, server);
@@ -141,6 +289,75 @@ impl VHostController {
})
}
+    fn create_server(&self, port: u16) -> VHostServer<VHostBindListener> {
+        let bind_reqs = Watch::new(VHostBindRequirements::default());
+        let listener = VHostBindListener {
+            ip_info: self.interfaces.watcher.subscribe(),
+            port,
+            bind_reqs: bind_reqs.clone_unseen(),
+            listeners: BTreeMap::new(),
+        };
+        VHostServer::new(
+            listener,
+            bind_reqs,
+            self.db.clone(),
+            self.crypto_provider.clone(),
+            self.acme_cache.clone(),
+        )
+    }
+    pub fn add_passthrough(
+        &self,
+        hostname: InternedString,
+        port: u16,
+        backend: SocketAddr,
+        public: BTreeSet<GatewayId>,
+        private: BTreeSet<IpAddr>,
+    ) -> Result<(), Error> {
+        let target = ProxyTarget {
+            public: public.clone(),
+            private: private.clone(),
+            acme: None,
+            addr: backend,
+            add_x_forwarded_headers: false,
+            connect_ssl: Err(AlpnInfo::Reflect),
+            passthrough: true,
+        };
+        let rc = self.add(Some(hostname.clone()), port, DynVHostTarget::new(target))?;
+        self.passthrough_handles.mutate(|h| {
+            h.insert(
+                (hostname, port),
+                PassthroughHandle {
+                    _rc: rc,
+                    backend,
+                    public,
+                    private,
+                },
+            );
+        });
+        Ok(())
+    }
+    pub fn remove_passthrough(&self, hostname: &InternedString, port: u16) {
+        self.passthrough_handles
+            .mutate(|h| h.remove(&(hostname.clone(), port)));
+        self.gc(Some(hostname.clone()), port);
+    }
+    pub fn list_passthrough(&self) -> Vec<PassthroughInfo> {
+        self.passthrough_handles.peek(|h| {
+            h.iter()
+                .map(|((hostname, port), handle)| PassthroughInfo {
+                    hostname: hostname.clone(),
+                    listen_port: *port,
+                    backend: handle.backend,
+                    public_gateways: handle.public.clone(),
+                    private_ips: handle.private.clone(),
+                })
+                .collect()
+        })
+    }
     pub fn dump_table(
         &self,
     ) -> BTreeMap<JsonKey<u16>, BTreeMap<JsonKey<Option<InternedString>>, EqSet<String>>> {
@@ -278,8 +495,7 @@ impl Accept for VHostBindListener {
         cx: &mut std::task::Context<'_>,
     ) -> Poll<Result<(Self::Metadata, AcceptStream), Error>> {
         // Update listeners when ip_info or bind_reqs change
-        while self.ip_info.poll_changed(cx).is_ready()
-            || self.bind_reqs.poll_changed(cx).is_ready()
+        while self.ip_info.poll_changed(cx).is_ready() || self.bind_reqs.poll_changed(cx).is_ready()
         {
             let reqs = self.bind_reqs.read_and_mark_seen();
             let listeners = &mut self.listeners;
@@ -331,6 +547,9 @@ pub trait VHostTarget<A: Accept>: std::fmt::Debug + Eq {
     fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>) {
         (BTreeSet::new(), BTreeSet::new())
     }
+    fn is_passthrough(&self) -> bool {
+        false
+    }
     fn preprocess<'a>(
         &'a self,
         prev: ServerConfig,
@@ -350,6 +569,7 @@ pub trait DynVHostTargetT<A: Accept>: std::fmt::Debug + Any {
     fn filter(&self, metadata: &<A as Accept>::Metadata) -> bool;
     fn acme(&self) -> Option<&AcmeProvider>;
     fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>);
+    fn is_passthrough(&self) -> bool;
     fn preprocess<'a>(
         &'a self,
         prev: ServerConfig,
@@ -374,6 +594,9 @@ impl<A: Accept, T: VHostTarget<A> + 'static> DynVHostTargetT<A> for T {
     fn acme(&self) -> Option<&AcmeProvider> {
         VHostTarget::acme(self)
     }
+    fn is_passthrough(&self) -> bool {
+        VHostTarget::is_passthrough(self)
+    }
     fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>) {
         VHostTarget::bind_requirements(self)
     }
@@ -460,6 +683,7 @@ pub struct ProxyTarget {
     pub addr: SocketAddr,
     pub add_x_forwarded_headers: bool,
     pub connect_ssl: Result<Arc<ClientConfig>, AlpnInfo>, // Ok: yes, connect using ssl, pass through alpn; Err: connect tcp, use provided strategy for alpn
+    pub passthrough: bool,
 }
 impl PartialEq for ProxyTarget {
     fn eq(&self, other: &Self) -> bool {
@@ -467,6 +691,7 @@ impl PartialEq for ProxyTarget {
         && self.private == other.private
             && self.acme == other.acme
             && self.addr == other.addr
+            && self.passthrough == other.passthrough
             && self.connect_ssl.as_ref().map(Arc::as_ptr)
                 == other.connect_ssl.as_ref().map(Arc::as_ptr)
     }
@@ -481,6 +706,7 @@ impl fmt::Debug for ProxyTarget {
             .field("addr", &self.addr)
             .field("add_x_forwarded_headers", &self.add_x_forwarded_headers)
             .field("connect_ssl", &self.connect_ssl.as_ref().map(|_| ()))
+            .field("passthrough", &self.passthrough)
             .finish()
     }
 }
@@ -506,10 +732,8 @@ where
         };
         let src = tcp.peer_addr.ip();
-        // Public if: source is a gateway/router IP (NAT'd internet),
-        // or source is outside all known subnets (direct internet)
-        let is_public = ip_info.lan_ip.contains(&src)
-            || !ip_info.subnets.iter().any(|s| s.contains(&src));
+        // Public: source is outside all known subnets (direct internet)
+        let is_public = !ip_info.subnets.iter().any(|s| s.contains(&src));
         if is_public {
             self.public.contains(&gw.id)
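The simplified check in this hunk classifies a connection as public exactly when the source address falls outside every known local subnet. A minimal standalone sketch of that logic (hypothetical helper names; the real code uses `ip_info.subnets` with proper subnet types):

```rust
use std::net::Ipv4Addr;

// Does `addr` fall inside the `prefix_len`-bit subnet containing `net`?
fn subnet_contains(net: Ipv4Addr, prefix_len: u32, addr: Ipv4Addr) -> bool {
    // Mask both addresses down to the network prefix and compare.
    let mask = if prefix_len == 0 { 0 } else { u32::MAX << (32 - prefix_len) };
    (u32::from(net) & mask) == (u32::from(addr) & mask)
}

// "Public" iff the source matches none of the known local subnets.
fn is_public(subnets: &[(Ipv4Addr, u32)], src: Ipv4Addr) -> bool {
    !subnets.iter().any(|&(net, len)| subnet_contains(net, len, src))
}

fn main() {
    let subnets = [(Ipv4Addr::new(192, 168, 1, 0), 24), (Ipv4Addr::new(10, 0, 0, 0), 8)];
    assert!(!is_public(&subnets, Ipv4Addr::new(192, 168, 1, 50)));
    assert!(is_public(&subnets, Ipv4Addr::new(8, 8, 8, 8)));
    println!("ok");
}
```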
@@ -527,6 +751,9 @@ where
     fn bind_requirements(&self) -> (BTreeSet<GatewayId>, BTreeSet<IpAddr>) {
         (self.public.clone(), self.private.clone())
     }
+    fn is_passthrough(&self) -> bool {
+        self.passthrough
+    }
     async fn preprocess<'a>(
         &'a self,
         mut prev: ServerConfig,
@@ -680,7 +907,7 @@ where
         prev: ServerConfig,
         hello: &'a ClientHello<'a>,
         metadata: &'a <A as Accept>::Metadata,
-    ) -> Option<ServerConfig>
+    ) -> Option<TlsHandlerAction>
     where
         Self: 'a,
     {
@@ -690,11 +917,12 @@ where
             .flatten()
             .any(|a| a == ACME_TLS_ALPN_NAME)
         {
-            return Some(prev);
+            return Some(TlsHandlerAction::Tls(prev));
         }
         let (target, rc) = self.0.peek(|m| {
             m.get(&hello.server_name().map(InternedString::from))
+                .or_else(|| m.get(&None))
                 .into_iter()
                 .flatten()
                 .filter(|(_, rc)| rc.strong_count() > 0)
@@ -702,11 +930,16 @@ where
                 .map(|(t, rc)| (t.clone(), rc.clone()))
         })?;
+        let is_pt = target.0.is_passthrough();
         let (prev, store) = target.into_preprocessed(rc, prev, hello, metadata).await?;
         self.1 = Some(store);
-        Some(prev)
+        if is_pt {
+            Some(TlsHandlerAction::Passthrough)
+        } else {
+            Some(TlsHandlerAction::Tls(prev))
+        }
     }
 }


@@ -85,6 +85,7 @@ pub fn wifi<C: Context>() -> ParentHandler<C> {
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct SetWifiEnabledParams {
@@ -150,16 +151,20 @@ pub fn country<C: Context>() -> ParentHandler<C> {
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct AddParams {
+pub struct WifiAddParams {
     #[arg(help = "help.arg.wifi-ssid")]
     ssid: String,
     #[arg(help = "help.arg.wifi-password")]
     password: String,
 }
 #[instrument(skip_all)]
-pub async fn add(ctx: RpcContext, AddParams { ssid, password }: AddParams) -> Result<(), Error> {
+pub async fn add(
+    ctx: RpcContext,
+    WifiAddParams { ssid, password }: WifiAddParams,
+) -> Result<(), Error> {
     let wifi_manager = ctx.wifi_manager.clone();
     if !ssid.is_ascii() {
         return Err(Error::new(
@@ -229,15 +234,19 @@ pub async fn add(ctx: RpcContext, AddParams { ssid, password }: AddParams) -> Re
     Ok(())
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct SsidParams {
+pub struct WifiSsidParams {
     #[arg(help = "help.arg.wifi-ssid")]
     ssid: String,
 }
 #[instrument(skip_all)]
-pub async fn connect(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result<(), Error> {
+pub async fn connect(
+    ctx: RpcContext,
+    WifiSsidParams { ssid }: WifiSsidParams,
+) -> Result<(), Error> {
     let wifi_manager = ctx.wifi_manager.clone();
     if !ssid.is_ascii() {
         return Err(Error::new(
@@ -311,7 +320,7 @@ pub async fn connect(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result
 }
 #[instrument(skip_all)]
-pub async fn remove(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result<(), Error> {
+pub async fn remove(ctx: RpcContext, WifiSsidParams { ssid }: WifiSsidParams) -> Result<(), Error> {
     let wifi_manager = ctx.wifi_manager.clone();
     if !ssid.is_ascii() {
         return Err(Error::new(
@@ -359,11 +368,13 @@ pub async fn remove(ctx: RpcContext, SsidParams { ssid }: SsidParams) -> Result<
         .result?;
     Ok(())
 }
-#[derive(serde::Serialize, serde::Deserialize)]
+#[derive(serde::Serialize, serde::Deserialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct WifiListInfo {
     ssids: HashMap<Ssid, SignalStrength>,
     connected: Option<Ssid>,
+    #[ts(type = "string | null")]
     country: Option<CountryCode>,
     ethernet: bool,
     available_wifi: Vec<WifiListOut>,
@@ -374,7 +385,8 @@ pub struct WifiListInfoLow {
     strength: SignalStrength,
     security: Vec<String>,
 }
-#[derive(serde::Serialize, serde::Deserialize)]
+#[derive(serde::Serialize, serde::Deserialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct WifiListOut {
     ssid: Ssid,
@@ -560,6 +572,7 @@ pub async fn get_available(ctx: RpcContext, _: Empty) -> Result<Vec<WifiListOut>
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct SetCountryParams {
@@ -605,7 +618,7 @@ pub struct NetworkId(String);
 /// Ssid are the names of the wifis, usually human readable.
 #[derive(
-    Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, serde::Serialize, serde::Deserialize,
+    Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, serde::Serialize, serde::Deserialize, TS,
 )]
 pub struct Ssid(String);
@@ -622,6 +635,7 @@ pub struct Ssid(String);
     Hash,
     serde::Serialize,
     serde::Deserialize,
+    TS,
 )]
 pub struct SignalStrength(u8);


@@ -75,6 +75,7 @@ pub fn notification<C: Context>() -> ParentHandler<C> {
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ListNotificationParams {
@@ -140,6 +141,7 @@ pub async fn list(
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ModifyNotificationParams {
@@ -175,6 +177,7 @@ pub async fn remove(
 }
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ModifyNotificationBeforeParams {
@@ -326,6 +329,7 @@ pub async fn create(
 }
 #[derive(Debug, Clone, PartialEq, Eq, Hash, serde::Serialize, serde::Deserialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub enum NotificationLevel {
     Success,
@@ -396,26 +400,31 @@ impl Map for Notifications {
     }
 }
-#[derive(Debug, Serialize, Deserialize, HasModel)]
+#[derive(Debug, Serialize, Deserialize, HasModel, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
 pub struct Notification {
     pub package_id: Option<PackageId>,
+    #[ts(type = "string")]
     pub created_at: DateTime<Utc>,
     pub code: u32,
     pub level: NotificationLevel,
     pub title: String,
     pub message: String,
+    #[ts(type = "any")]
     pub data: Value,
     #[serde(default = "const_true")]
     pub seen: bool,
 }
-#[derive(Debug, Serialize, Deserialize)]
+#[derive(Debug, Serialize, Deserialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct NotificationWithId {
     id: u32,
     #[serde(flatten)]
+    #[ts(flatten)]
     notification: Notification,
 }


@@ -27,6 +27,63 @@ use crate::util::serde::IoFormat;
 mod gpt;
 mod mbr;
+/// Get the EFI BootCurrent entry number (the entry firmware used to boot).
+/// Returns None on non-EFI systems or if BootCurrent is not set.
+async fn get_efi_boot_current() -> Result<Option<String>, Error> {
+    let efi_output = String::from_utf8(
+        Command::new("efibootmgr")
+            .invoke(ErrorKind::Grub)
+            .await?,
+    )
+    .map_err(|e| Error::new(eyre!("efibootmgr output not valid UTF-8: {e}"), ErrorKind::Grub))?;
+    Ok(efi_output
+        .lines()
+        .find(|line| line.starts_with("BootCurrent:"))
+        .and_then(|line| line.strip_prefix("BootCurrent:"))
+        .map(|s| s.trim().to_string()))
+}
+/// Promote a specific boot entry to first in the EFI boot order.
+async fn promote_efi_entry(entry: &str) -> Result<(), Error> {
+    let efi_output = String::from_utf8(
+        Command::new("efibootmgr")
+            .invoke(ErrorKind::Grub)
+            .await?,
+    )
+    .map_err(|e| Error::new(eyre!("efibootmgr output not valid UTF-8: {e}"), ErrorKind::Grub))?;
+    let current_order = efi_output
+        .lines()
+        .find(|line| line.starts_with("BootOrder:"))
+        .and_then(|line| line.strip_prefix("BootOrder:"))
+        .map(|s| s.trim())
+        .unwrap_or("");
+    if current_order.is_empty() || current_order.starts_with(entry) {
+        return Ok(());
+    }
+    let other_entries: Vec<&str> = current_order
+        .split(',')
+        .filter(|e| e.trim() != entry)
+        .collect();
+    let new_order = if other_entries.is_empty() {
+        entry.to_string()
+    } else {
+        format!("{},{}", entry, other_entries.join(","))
+    };
+    Command::new("efibootmgr")
+        .arg("-o")
+        .arg(&new_order)
+        .invoke(ErrorKind::Grub)
+        .await?;
+    Ok(())
+}
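The BootOrder rewrite in `promote_efi_entry` is pure string manipulation around the `efibootmgr` calls. A self-contained sketch of just that step (same logic, extracted here for illustration):

```rust
/// Move `entry` to the front of a comma-separated EFI BootOrder string,
/// keeping the relative order of the remaining entries.
fn promote(order: &str, entry: &str) -> String {
    // Nothing to do if there is no order or the entry is already first.
    if order.is_empty() || order.starts_with(entry) {
        return order.to_string();
    }
    let others: Vec<&str> = order.split(',').filter(|e| e.trim() != entry).collect();
    if others.is_empty() {
        entry.to_string()
    } else {
        format!("{},{}", entry, others.join(","))
    }
}

fn main() {
    assert_eq!(promote("0001,0002,0003", "0002"), "0002,0001,0003");
    assert_eq!(promote("0002,0001", "0002"), "0002,0001");
    assert_eq!(promote("", "0002"), "");
    println!("ok");
}
```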
 /// Probe a squashfs image to determine its target architecture
 async fn probe_squashfs_arch(squashfs_path: &Path) -> Result<InternedString, Error> {
     let output = String::from_utf8(
@@ -428,6 +485,21 @@ pub async fn install_os(
     });
     let use_efi = tokio::fs::metadata("/sys/firmware/efi").await.is_ok();
+    // Save the boot entry we booted from (the USB installer) before grub-install
+    // overwrites the boot order.
+    let boot_current = if use_efi {
+        match get_efi_boot_current().await {
+            Ok(entry) => entry,
+            Err(e) => {
+                tracing::warn!("Failed to get EFI BootCurrent: {e}");
+                None
+            }
+        }
+    } else {
+        None
+    };
     let InstallOsResult { part_info, rootfs } = install_os_to(
         "/run/live/medium/live/filesystem.squashfs",
         &disk.logicalname,
@@ -439,6 +511,20 @@ pub async fn install_os(
     )
     .await?;
+    // grub-install prepends its new entry to the EFI boot order, overriding the
+    // USB-first priority. Promote the USB entry (identified by BootCurrent from
+    // when we booted the installer) back to first, and persist the entry number
+    // so the upgrade script can do the same.
+    if let Some(ref entry) = boot_current {
+        if let Err(e) = promote_efi_entry(entry).await {
+            tracing::warn!("Failed to restore EFI boot order: {e}");
+        }
+        let efi_entry_path = rootfs.path().join("config/efi-installer-entry");
+        if let Err(e) = tokio::fs::write(&efi_entry_path, entry).await {
+            tracing::warn!("Failed to save EFI installer entry number: {e}");
+        }
+    }
     ctx.config
         .mutate(|c| c.os_partitions = Some(part_info.clone()));


@@ -579,14 +579,12 @@ fn check_matching_info_short() {
     use crate::s9pk::manifest::{Alerts, Description};
     use crate::util::DataUrl;
-    let lang_map = |s: &str| {
-        LocaleString::LanguageMap([("en".into(), s.into())].into_iter().collect())
-    };
+    let lang_map =
+        |s: &str| LocaleString::LanguageMap([("en".into(), s.into())].into_iter().collect());
     let info = PackageVersionInfo {
         metadata: PackageMetadata {
             title: "Test Package".into(),
-            icon: DataUrl::from_vec("image/png", vec![]),
             description: Description {
                 short: lang_map("A short description"),
                 long: lang_map("A longer description of the test package"),
@@ -594,18 +592,19 @@ fn check_matching_info_short() {
             release_notes: lang_map("Initial release"),
             git_hash: None,
             license: "MIT".into(),
-            wrapper_repo: "https://github.com/example/wrapper".parse().unwrap(),
+            package_repo: "https://github.com/example/wrapper".parse().unwrap(),
             upstream_repo: "https://github.com/example/upstream".parse().unwrap(),
-            support_site: "https://example.com/support".parse().unwrap(),
-            marketing_site: "https://example.com".parse().unwrap(),
+            marketing_url: Some("https://example.com".parse().unwrap()),
             donation_url: None,
-            docs_url: None,
+            docs_urls: Vec::new(),
             alerts: Alerts::default(),
-            dependency_metadata: BTreeMap::new(),
             os_version: exver::Version::new([0, 3, 6], []),
             sdk_version: None,
             hardware_acceleration: false,
+            plugins: BTreeSet::new(),
         },
+        icon: DataUrl::from_vec("image/png", vec![]),
+        dependency_metadata: BTreeMap::new(),
         source_version: None,
         s9pks: Vec::new(),
     };


@@ -17,8 +17,11 @@ use crate::registry::device_info::DeviceInfo;
 use crate::rpc_continuations::Guid;
 use crate::s9pk::S9pk;
 use crate::s9pk::git_hash::GitHash;
-use crate::s9pk::manifest::{Alerts, Description, HardwareRequirements, LocaleString};
+use crate::s9pk::manifest::{
+    Alerts, Description, HardwareRequirements, LocaleString, current_version,
+};
 use crate::s9pk::merkle_archive::source::FileSource;
+use crate::service::effects::plugin::PluginId;
 use crate::sign::commitment::merkle_archive::MerkleArchiveCommitment;
 use crate::sign::{AnySignature, AnyVerifyingKey};
 use crate::util::{DataUrl, VersionString};
@@ -69,75 +72,44 @@ impl DependencyMetadata {
     }
 }
-#[derive(Debug, Deserialize, Serialize, HasModel, TS, PartialEq)]
+fn placeholder_url() -> Url {
+    "https://example.com".parse().unwrap()
+}
+#[derive(Clone, Debug, Deserialize, Serialize, HasModel, TS, PartialEq)]
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
 pub struct PackageMetadata {
     #[ts(type = "string")]
     pub title: InternedString,
-    pub icon: DataUrl<'static>,
     pub description: Description,
     pub release_notes: LocaleString,
     pub git_hash: Option<GitHash>,
     #[ts(type = "string")]
     pub license: InternedString,
     #[ts(type = "string")]
-    pub wrapper_repo: Url,
+    #[serde(default = "placeholder_url")] // TODO: remove
+    pub package_repo: Url,
     #[ts(type = "string")]
     pub upstream_repo: Url,
     #[ts(type = "string")]
-    pub support_site: Url,
-    #[ts(type = "string")]
-    pub marketing_site: Url,
+    pub marketing_url: Option<Url>,
     #[ts(type = "string | null")]
     pub donation_url: Option<Url>,
-    #[ts(type = "string | null")]
-    pub docs_url: Option<Url>,
+    #[serde(default)]
+    #[ts(type = "string[]")]
+    pub docs_urls: Vec<Url>,
+    #[serde(default)]
     pub alerts: Alerts,
-    pub dependency_metadata: BTreeMap<PackageId, DependencyMetadata>,
+    #[serde(default = "current_version")]
     #[ts(type = "string")]
     pub os_version: Version,
     #[ts(type = "string | null")]
     pub sdk_version: Option<Version>,
     #[serde(default)]
     pub hardware_acceleration: bool,
-}
-impl PackageMetadata {
-    pub async fn load<S: FileSource + Clone>(s9pk: &S9pk<S>) -> Result<Self, Error> {
-        let manifest = s9pk.as_manifest();
-        let mut dependency_metadata = BTreeMap::new();
-        for (id, info) in &manifest.dependencies.0 {
-            let metadata = s9pk.dependency_metadata(id).await?;
-            dependency_metadata.insert(
-                id.clone(),
-                DependencyMetadata {
-                    title: metadata.map(|m| m.title),
-                    icon: s9pk.dependency_icon_data_url(id).await?,
-                    description: info.description.clone(),
-                    optional: info.optional,
-                },
-            );
-        }
-        Ok(Self {
-            title: manifest.title.clone(),
-            icon: s9pk.icon_data_url().await?,
-            description: manifest.description.clone(),
-            release_notes: manifest.release_notes.clone(),
-            git_hash: manifest.git_hash.clone(),
-            license: manifest.license.clone(),
-            wrapper_repo: manifest.wrapper_repo.clone(),
-            upstream_repo: manifest.upstream_repo.clone(),
-            support_site: manifest.support_site.clone(),
-            marketing_site: manifest.marketing_site.clone(),
-            donation_url: manifest.donation_url.clone(),
-            docs_url: manifest.docs_url.clone(),
-            alerts: manifest.alerts.clone(),
-            dependency_metadata,
-            os_version: manifest.os_version.clone(),
-            sdk_version: manifest.sdk_version.clone(),
-            hardware_acceleration: manifest.hardware_acceleration.clone(),
-        })
-    }
+    #[serde(default)]
+    pub plugins: BTreeSet<PluginId>,
 }
 #[derive(Debug, Deserialize, Serialize, HasModel, TS)]
@@ -147,6 +119,8 @@ impl PackageMetadata {
 pub struct PackageVersionInfo {
     #[serde(flatten)]
     pub metadata: PackageMetadata,
+    pub icon: DataUrl<'static>,
+    pub dependency_metadata: BTreeMap<PackageId, DependencyMetadata>,
     #[ts(type = "string | null")]
     pub source_version: Option<VersionRange>,
     pub s9pks: Vec<(HardwareRequirements, RegistryAsset<MerkleArchiveCommitment>)>,
@@ -156,11 +130,28 @@ impl PackageVersionInfo {
         s9pk: &S9pk<S>,
         urls: Vec<Url>,
     ) -> Result<Self, Error> {
+        let manifest = s9pk.as_manifest();
+        let icon = s9pk.icon_data_url().await?;
+        let mut dependency_metadata = BTreeMap::new();
+        for (id, info) in &manifest.dependencies.0 {
+            let dep_meta = s9pk.dependency_metadata(id).await?;
+            dependency_metadata.insert(
+                id.clone(),
+                DependencyMetadata {
+                    title: dep_meta.map(|m| m.title),
+                    icon: s9pk.dependency_icon_data_url(id).await?,
+                    description: info.description.clone(),
+                    optional: info.optional,
+                },
+            );
+        }
         Ok(Self {
-            metadata: PackageMetadata::load(s9pk).await?,
+            metadata: manifest.metadata.clone(),
+            icon,
+            dependency_metadata,
             source_version: None, // TODO
             s9pks: vec![(
-                s9pk.as_manifest().hardware_requirements.clone(),
+                manifest.hardware_requirements.clone(),
                 RegistryAsset {
                     published_at: Utc::now(),
                     urls,
@@ -176,6 +167,27 @@ impl PackageVersionInfo {
         })
     }
     pub fn merge_with(&mut self, other: Self, replace_urls: bool) -> Result<(), Error> {
+        if self.metadata != other.metadata {
+            return Err(Error::new(
+                color_eyre::eyre::eyre!("{}", t!("registry.package.index.metadata-mismatch")),
+                ErrorKind::InvalidRequest,
+            ));
+        }
+        if self.icon != other.icon {
+            return Err(Error::new(
+                color_eyre::eyre::eyre!("{}", t!("registry.package.index.icon-mismatch")),
+                ErrorKind::InvalidRequest,
+            ));
+        }
+        if self.dependency_metadata != other.dependency_metadata {
+            return Err(Error::new(
+                color_eyre::eyre::eyre!(
+                    "{}",
+                    t!("registry.package.index.dependency-metadata-mismatch")
+                ),
+                ErrorKind::InvalidRequest,
+            ));
+        }
         for (hw_req, asset) in other.s9pks {
             if let Some((_, matching)) = self
                 .s9pks
@@ -221,10 +233,9 @@ impl PackageVersionInfo {
         ]);
         table.add_row(row![br -> "GIT HASH", self.metadata.git_hash.as_deref().unwrap_or("N/A")]);
         table.add_row(row![br -> "LICENSE", &self.metadata.license]);
-        table.add_row(row![br -> "PACKAGE REPO", &self.metadata.wrapper_repo.to_string()]);
+        table.add_row(row![br -> "PACKAGE REPO", &self.metadata.package_repo.to_string()]);
         table.add_row(row![br -> "SERVICE REPO", &self.metadata.upstream_repo.to_string()]);
-        table.add_row(row![br -> "WEBSITE", &self.metadata.marketing_site.to_string()]);
-        table.add_row(row![br -> "SUPPORT", &self.metadata.support_site.to_string()]);
+        table.add_row(row![br -> "WEBSITE", self.metadata.marketing_url.as_ref().map_or("N/A".to_owned(), |u| u.to_string())]);
         table
     }
@@ -244,30 +255,7 @@ impl Model<PackageVersionInfo> {
     }
     if let Some(hw) = &device_info.hardware {
         self.as_s9pks_mut().mutate(|s9pks| {
-            s9pks.retain(|(hw_req, _)| {
-                if let Some(arch) = &hw_req.arch {
-                    if !arch.contains(&hw.arch) {
-                        return false;
-                    }
-                }
-                if let Some(ram) = hw_req.ram {
-                    if hw.ram < ram {
-                        return false;
-                    }
-                }
-                if let Some(dev) = &hw.devices {
-                    for device_filter in &hw_req.device {
-                        if !dev
-                            .iter()
-                            .filter(|d| d.class() == &*device_filter.class)
-                            .any(|d| device_filter.matches(d))
-                        {
-                            return false;
-                        }
-                    }
-                }
-                true
-            });
+            s9pks.retain(|(hw_req, _)| hw_req.is_compatible(hw));
             if hw.devices.is_some() {
                 s9pks.sort_by_key(|(req, _)| req.specificity_desc());
             } else {
@@ -287,19 +275,17 @@ impl Model<PackageVersionInfo> {
     }
     if let Some(locale) = device_info.os.language.as_deref() {
-        let metadata = self.as_metadata_mut();
-        metadata
+        self.as_metadata_mut()
             .as_alerts_mut()
             .mutate(|a| Ok(a.localize_for(locale)))?;
-        metadata
-            .as_dependency_metadata_mut()
+        self.as_dependency_metadata_mut()
             .as_entries_mut()?
             .into_iter()
             .try_for_each(|(_, d)| d.mutate(|d| Ok(d.localize_for(locale))))?;
-        metadata
+        self.as_metadata_mut()
             .as_description_mut()
             .mutate(|d| Ok(d.localize_for(locale)))?;
-        metadata
+        self.as_metadata_mut()
             .as_release_notes_mut()
             .mutate(|r| Ok(r.localize_for(locale)))?;
     }


@@ -58,6 +58,9 @@ pub struct AddPackageSignerParams {
     #[arg(long, help = "help.arg.version-range")]
     #[ts(type = "string | null")]
     pub versions: Option<VersionRange>,
+    #[arg(long, help = "help.arg.merge")]
+    #[ts(optional)]
+    pub merge: Option<bool>,
 }
 pub async fn add_package_signer(
@@ -66,6 +69,7 @@ pub async fn add_package_signer(
         id,
         signer,
         versions,
+        merge,
     }: AddPackageSignerParams,
 ) -> Result<(), Error> {
     ctx.db
@@ -76,13 +80,22 @@ pub async fn add_package_signer(
                 "unknown signer {signer}"
             );
+            let versions = versions.unwrap_or_default();
             db.as_index_mut()
                 .as_package_mut()
                 .as_packages_mut()
                 .as_idx_mut(&id)
                 .or_not_found(&id)?
                 .as_authorized_mut()
-                .insert(&signer, &versions.unwrap_or_default())?;
+                .upsert(&signer, || Ok(VersionRange::None))?
+                .mutate(|existing| {
+                    *existing = if merge.unwrap_or(false) {
+                        VersionRange::or(existing.clone(), versions)
+                    } else {
+                        versions
+                    };
+                    Ok(())
+                })?;
             Ok(())
         })
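The signer change above replaces a blind `insert` with upsert-then-mutate, so an existing authorization can either be replaced (the old behavior) or widened when `merge` is set. The same pattern with plain std types (a hypothetical stand-in where sets of version strings play the role of `VersionRange`, since `VersionRange::or` is a union):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical stand-in: signer name -> set of authorized versions.
fn add_signer(
    auth: &mut BTreeMap<String, BTreeSet<String>>,
    signer: &str,
    versions: BTreeSet<String>,
    merge: bool,
) {
    // Upsert: start from an empty authorization if the signer is new.
    let existing = auth.entry(signer.to_string()).or_default();
    if merge {
        existing.extend(versions); // union, like VersionRange::or
    } else {
        *existing = versions; // replace, the pre-change behavior
    }
}

fn main() {
    let mut auth = BTreeMap::new();
    add_signer(&mut auth, "alice", BTreeSet::from(["1.0".to_string()]), false);
    add_signer(&mut auth, "alice", BTreeSet::from(["2.0".to_string()]), true);
    assert_eq!(auth["alice"].len(), 2); // merged: both versions authorized
    add_signer(&mut auth, "alice", BTreeSet::from(["3.0".to_string()]), false);
    assert_eq!(auth["alice"].len(), 1); // replaced
    println!("ok");
}
```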


@@ -3,16 +3,17 @@ use std::path::PathBuf;
 use std::sync::Arc;
 use clap::Parser;
-use rpc_toolkit::{Empty, HandlerExt, ParentHandler, from_fn_async};
+use rpc_toolkit::{Empty, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tokio::process::Command;
 use ts_rs::TS;
 use url::Url;
 use crate::ImageId;
-use crate::context::CliContext;
+use crate::context::{CliContext, RpcContext};
 use crate::prelude::*;
-use crate::s9pk::manifest::Manifest;
+use crate::registry::device_info::DeviceInfo;
+use crate::s9pk::manifest::{HardwareRequirements, Manifest};
 use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile;
 use crate::s9pk::v2::SIG_CONTEXT;
 use crate::s9pk::v2::pack::ImageConfig;
@@ -70,6 +71,15 @@ pub fn s9pk() -> ParentHandler<CliContext> {
.no_display() .no_display()
.with_about("about.publish-s9pk"), .with_about("about.publish-s9pk"),
) )
.subcommand(
"select",
from_fn_async(select)
.with_custom_display_fn(|_, path: PathBuf| {
println!("{}", path.display());
Ok(())
})
.with_about("about.select-s9pk-for-device"),
)
} }
#[derive(Deserialize, Serialize, Parser)] #[derive(Deserialize, Serialize, Parser)]
@@ -323,3 +333,97 @@ async fn publish(ctx: CliContext, S9pkPath { s9pk: s9pk_path }: S9pkPath) -> Res
.await?; .await?;
crate::registry::package::add::cli_add_package_impl(ctx, s9pk, vec![s3url], false).await crate::registry::package::add::cli_add_package_impl(ctx, s9pk, vec![s3url], false).await
} }
#[derive(Deserialize, Serialize, Parser)]
struct SelectParams {
#[arg(help = "help.arg.s9pk-file-paths")]
s9pks: Vec<PathBuf>,
}
async fn select(
HandlerArgs {
context,
params: SelectParams { s9pks },
..
}: HandlerArgs<CliContext, SelectParams>,
) -> Result<PathBuf, Error> {
// Resolve file list: use provided paths or scan cwd for *.s9pk
let paths = if s9pks.is_empty() {
let mut found = Vec::new();
let mut entries = tokio::fs::read_dir(".").await?;
while let Some(entry) = entries.next_entry().await? {
let path = entry.path();
if path.extension().and_then(|e| e.to_str()) == Some("s9pk") {
found.push(path);
}
}
if found.is_empty() {
return Err(Error::new(
eyre!("no .s9pk files found in current directory"),
ErrorKind::NotFound,
));
}
found
} else {
s9pks
};
// Fetch DeviceInfo from the target server
let device_info: DeviceInfo = from_value(
context
.call_remote::<RpcContext>("server.device-info", imbl_value::json!({}))
.await?,
)?;
// Filter and rank s9pk files by compatibility
let mut compatible: Vec<(PathBuf, HardwareRequirements)> = Vec::new();
for path in &paths {
let s9pk = match super::S9pk::open(path, None).await {
Ok(s9pk) => s9pk,
Err(e) => {
tracing::warn!("skipping {}: {e}", path.display());
continue;
}
};
let manifest = s9pk.as_manifest();
// OS version check: package's required OS version must be in server's compat range
if !manifest
.metadata
.os_version
.satisfies(&device_info.os.compat)
{
continue;
}
let hw_req = &manifest.hardware_requirements;
if let Some(hw) = &device_info.hardware {
if !hw_req.is_compatible(hw) {
continue;
}
}
compatible.push((path.clone(), hw_req.clone()));
}
if compatible.is_empty() {
return Err(Error::new(
eyre!(
"no compatible s9pk found for device (arch: {}, os: {})",
device_info
.hardware
.as_ref()
.map(|h| h.arch.to_string())
.unwrap_or_else(|| "unknown".into()),
device_info.os.version,
),
ErrorKind::NotFound,
));
}
// Sort by specificity (most specific first)
compatible.sort_by_key(|(_, req)| req.specificity_desc());
Ok(compatible.into_iter().next().unwrap().0)
}
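When no paths are passed, the fallback branch of `select` scans the working directory for files with an `.s9pk` extension. A synchronous, std-only sketch of that scan — the directory name and setup files are illustrative, and the real code uses `tokio::fs` instead:

```rust
use std::{fs, path::PathBuf};

// Synchronous sketch of the fallback in `s9pk select`: collect every
// file in `dir` whose extension is exactly "s9pk".
fn find_s9pks(dir: &str) -> std::io::Result<Vec<PathBuf>> {
    let mut found = Vec::new();
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.extension().and_then(|e| e.to_str()) == Some("s9pk") {
            found.push(path);
        }
    }
    Ok(found)
}

fn main() -> std::io::Result<()> {
    // Set up a throwaway directory with one matching and one non-matching file.
    let tmp = std::env::temp_dir().join("s9pk-select-demo");
    fs::create_dir_all(&tmp)?;
    fs::write(tmp.join("hello-world.s9pk"), b"")?;
    fs::write(tmp.join("README.md"), b"")?;
    let found = find_s9pks(tmp.to_str().unwrap())?;
    assert_eq!(found.len(), 1); // only the .s9pk file is picked up
    println!("{}", found[0].display());
    Ok(())
}
```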


@@ -9,6 +9,7 @@ use tokio::process::Command;
 use crate::dependencies::{DepInfo, Dependencies};
 use crate::prelude::*;
+use crate::registry::package::index::PackageMetadata;
 use crate::s9pk::manifest::{DeviceFilter, LocaleString, Manifest};
 use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
 use crate::s9pk::merkle_archive::source::TmpSource;
@@ -195,20 +196,30 @@ impl TryFrom<ManifestV1> for Manifest {
         }
         Ok(Self {
             id: value.id,
-            title: format!("{} (Legacy)", value.title).into(),
             version: version.into(),
             satisfies: BTreeSet::new(),
-            release_notes: LocaleString::Translated(value.release_notes),
             can_migrate_from: VersionRange::any(),
             can_migrate_to: VersionRange::none(),
-            license: value.license.into(),
-            wrapper_repo: value.wrapper_repo,
-            upstream_repo: value.upstream_repo,
-            support_site: value.support_site.unwrap_or_else(|| default_url.clone()),
-            marketing_site: value.marketing_site.unwrap_or_else(|| default_url.clone()),
-            donation_url: value.donation_url,
-            docs_url: None,
-            description: value.description,
+            metadata: PackageMetadata {
+                title: format!("{} (Legacy)", value.title).into(),
+                release_notes: LocaleString::Translated(value.release_notes),
+                license: value.license.into(),
+                package_repo: value.wrapper_repo,
+                upstream_repo: value.upstream_repo,
+                marketing_url: Some(value.marketing_site.unwrap_or_else(|| default_url.clone())),
+                donation_url: value.donation_url,
+                docs_urls: Vec::new(),
+                description: value.description,
+                alerts: value.alerts,
+                git_hash: value.git_hash,
+                os_version: value.eos_version,
+                sdk_version: None,
+                hardware_acceleration: match value.main {
+                    PackageProcedure::Docker(d) => d.gpu_acceleration,
+                    PackageProcedure::Script(_) => false,
+                },
+                plugins: BTreeSet::new(),
+            },
             images: BTreeMap::new(),
             volumes: value
                 .volumes
@@ -217,7 +228,6 @@ impl TryFrom<ManifestV1> for Manifest {
                 .map(|(id, _)| id.clone())
                 .chain([VolumeId::from_str("embassy").unwrap()])
                 .collect(),
-            alerts: value.alerts,
             dependencies: Dependencies(
                 value
                     .dependencies
@@ -252,13 +262,6 @@ impl TryFrom<ManifestV1> for Manifest {
                 })
                 .collect(),
             },
-            git_hash: value.git_hash,
-            os_version: value.eos_version,
-            sdk_version: None,
-            hardware_acceleration: match value.main {
-                PackageProcedure::Docker(d) => d.gpu_acceleration,
-                PackageProcedure::Script(_) => false,
-            },
         })
     }
 }
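This refactor groups the Manifest's descriptive fields into a nested `PackageMetadata`, and (per the manifest.rs change in this diff) the field is marked `#[serde(flatten)]`, so the serialized form stays flat and existing manifests keep parsing. A hand-rolled sketch of what flattening achieves — no serde here, and the struct fields are trimmed stand-ins, not the real startos types:

```rust
// Trimmed stand-ins: the real structs carry many more fields.
struct PackageMetadata {
    title: String,
    license: String,
}
struct Manifest {
    id: String,
    metadata: PackageMetadata, // flattened on the wire
}

// Hand-rolled serializer illustrating #[serde(flatten)]: the nested
// metadata fields are emitted alongside `id`, not under a "metadata" key,
// so the JSON shape is the same as before the refactor.
fn to_json(m: &Manifest) -> String {
    format!(
        r#"{{"id":"{}","title":"{}","license":"{}"}}"#,
        m.id, m.metadata.title, m.metadata.license
    )
}

fn main() {
    let m = Manifest {
        id: "hello-world".into(),
        metadata: PackageMetadata {
            title: "Hello World".into(),
            license: "MIT".into(),
        },
    };
    let json = to_json(&m);
    assert!(json.contains(r#""title":"Hello World""#));
    assert!(!json.contains("metadata")); // no nesting visible in the output
    println!("{json}");
}
```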


@@ -7,12 +7,11 @@ use exver::{Version, VersionRange};
 use imbl_value::{InOMap, InternedString};
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
-use url::Url;
 pub use crate::PackageId;
 use crate::dependencies::Dependencies;
 use crate::prelude::*;
-use crate::s9pk::git_hash::GitHash;
+use crate::registry::package::index::PackageMetadata;
 use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
 use crate::s9pk::merkle_archive::expected::{Expected, Filter};
 use crate::s9pk::v2::pack::ImageConfig;
@@ -22,7 +21,7 @@ use crate::util::{FromStrParser, VersionString, mime};
 use crate::version::{Current, VersionT};
 use crate::{ImageId, VolumeId};
-fn current_version() -> Version {
+pub(crate) fn current_version() -> Version {
     Current::default().semver()
 }
@@ -32,46 +31,20 @@ fn current_version() -> Version {
 #[ts(export)]
 pub struct Manifest {
     pub id: PackageId,
-    #[ts(type = "string")]
-    pub title: InternedString,
     pub version: VersionString,
     pub satisfies: BTreeSet<VersionString>,
-    pub release_notes: LocaleString,
     #[ts(type = "string")]
     pub can_migrate_to: VersionRange,
     #[ts(type = "string")]
     pub can_migrate_from: VersionRange,
-    #[ts(type = "string")]
-    pub license: InternedString, // type of license
-    #[ts(type = "string")]
-    pub wrapper_repo: Url,
-    #[ts(type = "string")]
-    pub upstream_repo: Url,
-    #[ts(type = "string")]
-    pub support_site: Url,
-    #[ts(type = "string")]
-    pub marketing_site: Url,
-    #[ts(type = "string | null")]
-    pub donation_url: Option<Url>,
-    #[ts(type = "string | null")]
-    pub docs_url: Option<Url>,
-    pub description: Description,
+    #[serde(flatten)]
+    pub metadata: PackageMetadata,
     pub images: BTreeMap<ImageId, ImageConfig>,
     pub volumes: BTreeSet<VolumeId>,
     #[serde(default)]
-    pub alerts: Alerts,
-    #[serde(default)]
     pub dependencies: Dependencies,
     #[serde(default)]
     pub hardware_requirements: HardwareRequirements,
-    #[serde(default)]
-    pub hardware_acceleration: bool,
-    pub git_hash: Option<GitHash>,
-    #[serde(default = "current_version")]
-    #[ts(type = "string")]
-    pub os_version: Version,
-    #[ts(type = "string | null")]
-    pub sdk_version: Option<Version>,
 }
 impl Manifest {
     pub fn validate_for<'a, T: Clone>(
@@ -181,6 +154,32 @@ pub struct HardwareRequirements {
     pub arch: Option<BTreeSet<InternedString>>,
 }
 impl HardwareRequirements {
+    /// Returns true if this s9pk's hardware requirements are satisfied by the given hardware.
+    pub fn is_compatible(&self, hw: &crate::registry::device_info::HardwareInfo) -> bool {
+        if let Some(arch) = &self.arch {
+            if !arch.contains(&hw.arch) {
+                return false;
+            }
+        }
+        if let Some(ram) = self.ram {
+            if hw.ram < ram {
+                return false;
+            }
+        }
+        if let Some(devices) = &hw.devices {
+            for device_filter in &self.device {
+                if !devices
+                    .iter()
+                    .filter(|d| d.class() == &*device_filter.class)
+                    .any(|d| device_filter.matches(d))
+                {
+                    return false;
+                }
+            }
+        }
+        true
+    }
     /// returns a value that can be used as a sort key to get most specific requirements first
     pub fn specificity_desc(&self) -> (u32, u32, u64) {
         (
@@ -240,7 +239,7 @@ impl LocaleString {
     pub fn localize(&mut self) {
         self.localize_for(&*rust_i18n::locale());
     }
-    pub fn localized(mut self) -> String {
+    pub fn localized(self) -> String {
         self.localized_for(&*rust_i18n::locale())
     }
 }
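`is_compatible` treats each hardware requirement as opt-in: a `None` constraint imposes nothing, while any constraint that is present must hold. A trimmed stand-in sketch of the arch and RAM checks — the types here are simplified hypotheticals, not the real startos structs, and the device-filter check is omitted:

```rust
// Simplified stand-ins for the real HardwareInfo / HardwareRequirements.
struct HardwareInfo {
    arch: String,
    ram: u64, // bytes
}
struct HardwareRequirements {
    arch: Option<Vec<String>>, // allowed architectures, None = any
    ram: Option<u64>,          // minimum RAM in bytes, None = any
}

impl HardwareRequirements {
    // Same shape as the method in the diff: bail out on the first
    // unsatisfied constraint, otherwise report compatible.
    fn is_compatible(&self, hw: &HardwareInfo) -> bool {
        if let Some(arch) = &self.arch {
            if !arch.iter().any(|a| a == &hw.arch) {
                return false;
            }
        }
        if let Some(ram) = self.ram {
            if hw.ram < ram {
                return false;
            }
        }
        true
    }
}

fn main() {
    let hw = HardwareInfo { arch: "x86_64".into(), ram: 8 << 30 };
    // Matching arch, enough RAM: compatible.
    let req = HardwareRequirements { arch: Some(vec!["x86_64".into()]), ram: Some(4 << 30) };
    assert!(req.is_compatible(&hw));
    // Wrong arch: incompatible even with no RAM constraint.
    let req = HardwareRequirements { arch: Some(vec!["aarch64".into()]), ram: None };
    assert!(!req.is_compatible(&hw));
    println!("ok");
}
```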

Some files were not shown because too many files have changed in this diff.