feat: support preferred external ports besides 443 (#3117)
* docs: update preferred external port design in TODO
* docs: add user-controlled public/private and port forward mapping to design
* docs: overhaul interfaces page design with view/manage split and per-address controls
* docs: move address enable/disable to overflow menu, add SSL indicator, defer UI placement decisions
* chore: remove tor from startos core

  Tor is being moved from a built-in OS feature to a service. This removes the Arti-based Tor client, onion address management, hidden service creation, and all related code from the core backend, frontend, and SDK.

  - Delete core/src/net/tor/ module (~2060 lines)
  - Remove OnionAddress, TorSecretKey, TorController from all consumers
  - Remove HostnameInfo::Onion and HostAddress::Onion variants
  - Remove onion CRUD RPC endpoints and tor subcommand
  - Remove tor key handling from account and backup/restore
  - Remove ~12 tor-related Cargo dependencies (arti-client, torut, etc.)
  - Remove tor UI components, API methods, mock data, and routes
  - Remove OnionHostname and tor patterns/regexes from SDK
  - Add v0_4_0_alpha_20 database migration to strip onion data
  - Bump version to 0.4.0-alpha.20
* chore: flatten HostnameInfo from enum to struct

  HostnameInfo only had one variant (Ip) after removing Tor. Flatten it into a plain struct with fields gateway, public, hostname. Remove all kind === 'ip' type guards and narrowing across SDK, frontend, and container runtime. Update DB migration to strip the kind field.
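The HostnameInfo flattening above can be sketched in a few lines. This is an illustrative stand-in, not the real types: the actual structs live in core's networking code and carry more context, and the field names here are taken only from the commit message.

```rust
// Sketch of the refactor: a one-variant enum left over after the Tor removal
// is flattened into a plain struct, so no `kind` discriminant remains to guard on.
enum OldHostnameInfo {
    // The only variant left once Onion was removed.
    Ip {
        gateway: String,
        public: bool,
        hostname: String,
    },
}

// Flattened form: same data, no discriminant.
struct HostnameInfo {
    gateway: String,
    public: bool,
    hostname: String,
}

fn flatten(old: OldHostnameInfo) -> HostnameInfo {
    match old {
        OldHostnameInfo::Ip { gateway, public, hostname } => {
            HostnameInfo { gateway, public, hostname }
        }
    }
}

fn main() {
    let flat = flatten(OldHostnameInfo::Ip {
        gateway: "eth0".into(),
        public: true,
        hostname: "server.local".into(),
    });
    assert_eq!(flat.gateway, "eth0");
    assert!(flat.public);
}
```

On the TypeScript side the equivalent change is removing `kind === 'ip'` narrowing, since the flattened shape needs no type guard.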
* chore: format RPCSpec.md markdown table
* docs: update TODO.md with DerivedAddressInfo design, remove completed tor task
* feat: implement preferred port allocation and per-address enable/disable

  - Add AvailablePorts::try_alloc() with SSL tracking (BTreeMap<u16, bool>)
  - Add DerivedAddressInfo on BindInfo with private_disabled/public_enabled/possible sets
  - Add Bindings wrapper with Map impl for patchdb indexed access
  - Flatten HostAddress from single-variant enum to struct
  - Replace set-gateway-enabled RPC with set-address-enabled
  - Remove hostname_info from Host; computed addresses now in BindInfo.addresses.possible
  - Compute possible addresses inline in NetServiceData::update()
  - Update DB migration, SDK types, frontend, and container-runtime
* feat: replace InterfaceFilter with ForwardRequirements, add WildcardListener, complete alpha.20 bump

  - Replace DynInterfaceFilter with ForwardRequirements for per-IP forward precision, with source-subnet iptables filtering for private forwards
  - Add WildcardListener (binds [::]:port) to replace the per-gateway NetworkInterfaceListener/SelfContainedNetworkInterfaceListener/UpgradableListener infrastructure
  - Update forward-port script with src_subnet and excluded_src env vars
  - Remove unused filter types and listener infrastructure from gateway.rs
  - Add availablePorts migration (IdPool -> BTreeMap<u16, bool>) to alpha.20
  - Complete version bump to 0.4.0-alpha.20 in SDK and web
* outbound gateway support (#3120)
* Multiple (#3111)

  - fix alerts i18n, fix status display, remove usb media, hide shutdown for install complete
  - trigger change detection for localize pipe and round out implementing localize pipe for consistency, even though not needed
  - Fix PackageInfoShort to handle LocaleString on releaseNotes (#3112)
  - fix: filter by target_version in get_matching_models and pass otherVersions from install
  - chore: add exver documentation for ai agents
  - frontend plus some be types

  Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
* feat: replace SourceFilter with IpNet, add policy routing, remove MASQUERADE
* build ts types and fix i18n
* fix license display in marketplace
* wip refactor
* chore: update ts bindings for preferred port design
* feat: refactor NetService to watch DB and reconcile network state

  - NetService sync task now uses PatchDB DbWatch instead of being called directly after DB mutations
  - Read gateways from DB instead of network interface context when updating host addresses
  - gateway sync updates all host addresses in the DB
  - Add Watch<u64> channel for callers to wait on sync completion
  - Fix ts-rs codegen bug with #[ts(skip)] on flattened Plugin field
  - Update SDK getServiceInterface.ts for new HostnameInfo shape
  - Remove unnecessary HTTPS redirect in static_server.rs
  - Fix tunnel/api.rs to filter for WAN IPv4 address
* re-arrange (#3123)
* new service interface page
* feat: add mdns hostname metadata variant and fix vhost routing

  - Add HostnameMetadata::Mdns variant to distinguish mDNS from private domains
  - Mark mDNS addresses as private (public: false) since mDNS is local-only
  - Fall back to null SNI entry when hostname not found in vhost mapping
  - Simplify public detection in ProxyTarget filter
  - Pass hostname to update_addresses for mDNS domain name generation
* looking good
* feat: add port_forwards field to Host for tracking gateway forwarding rules
* update bindings for API types, add ARCHITECTURE (#3124)

  - update binding for API types, add ARCHITECTURE
  - translations
* fix: add CONNMARK restore-mark to mangle OUTPUT chain

  The CONNMARK --restore-mark rule was only in PREROUTING, which handles forwarded packets. Locally-bound listeners (e.g. vhost) generate replies through the OUTPUT chain, where the fwmark was never restored. This caused response packets to route via the default table instead of back through the originating interface.
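The preferred-port allocation this PR introduces (AvailablePorts::try_alloc tracking SSL per port in a BTreeMap<u16, bool>) can be sketched roughly as follows. The type and method names come from the commit message, but the fallback behavior and signature are assumptions; the real implementation lives in core and integrates with patch-db.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: each allocated port maps to whether the binding
// terminates SSL, mirroring the BTreeMap<u16, bool> named in the commit.
struct AvailablePorts {
    allocated: BTreeMap<u16, bool>,
}

impl AvailablePorts {
    fn new() -> Self {
        Self { allocated: BTreeMap::new() }
    }

    /// Reserve `preferred` if free, otherwise fall back to the first free port
    /// in `fallback`. Returns None when everything is taken.
    fn try_alloc(
        &mut self,
        preferred: u16,
        ssl: bool,
        fallback: std::ops::RangeInclusive<u16>,
    ) -> Option<u16> {
        for port in std::iter::once(preferred).chain(fallback) {
            if !self.allocated.contains_key(&port) {
                self.allocated.insert(port, ssl);
                return Some(port);
            }
        }
        None
    }
}

fn main() {
    let mut ports = AvailablePorts::new();
    // The preferred external port is granted when free...
    assert_eq!(ports.try_alloc(443, true, 5000..=5001), Some(443));
    // ...and the next request falls back into the range.
    assert_eq!(ports.try_alloc(443, true, 5000..=5001), Some(5000));
}
```

A BTreeMap (rather than the old IdPool) keeps ports ordered, so the lowest free fallback port is found deterministically.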
* chore: reserialize db on equal version, update bindings and docs

  - Run de/ser roundtrip in pre_init even when db version matches, ensuring all #[serde(default)] fields are populated before any typed access
  - Add patchdb.md documentation for TypedDbWatch patterns
  - Update TS bindings for CheckPortParams, CheckPortRes, ifconfigUrl
  - Update CLAUDE.md docs with patchdb and component-level references
* fix: include public gateways for IP-based addresses in vhost targets

  The server hostname vhost construction only collected private IPs, always setting public to empty. Public IP addresses (Ipv4/Ipv6 metadata with public=true) were never added to the vhost target's public gateway set, causing the vhost filter to reject public traffic for IP-based addresses.
* fix: add TLS handshake timeout and fix accept loop deadlock

  Two issues in TlsListener::poll_accept:

  1. No timeout on TLS handshakes: LazyConfigAcceptor waits indefinitely for ClientHello. Attackers that complete the TCP handshake but never send TLS data create zombie futures in `in_progress` that never complete. Fix: wrap the entire handshake in tokio::time::timeout(15s).
  2. Missing waker on the new-connection pending path: when a TCP connection is accepted and the TLS handshake is pending, poll_accept returned Pending without calling wake_by_ref(). Since the TcpListener returned Ready (not Pending), no waker was registered for it. With edge-triggered epoll and no other wakeup source, the task sleeps forever and remaining connections in the kernel accept queue are never drained. Fix: add cx.waker().wake_by_ref() so the task immediately re-polls and continues draining the accept queue.
* fix: switch BackgroundJobRunner from Vec to FuturesUnordered

  BackgroundJobRunner stored active jobs in a Vec<BoxFuture> and polled ALL of them on every wakeup — O(n) per poll. Since this runs in the same tokio::select! as the WebServer accept loop, polling overhead from active connections directly delayed acceptance of new connections. FuturesUnordered only polls woken futures — O(woken) instead of O(n).
* chore: update bindings and use typed params for outbound gateway API
* feat: per-service and default outbound gateway routing

  Add set-outbound-gateway RPC for packages and set-default-outbound RPC for the server, with policy routing enforcement via ip rules. Fix connmark restore to skip packets with existing fwmarks, add bridge subnet routes to per-interface tables, and fix squashfs path in update-image-local.sh.
* refactor: manifest wraps PackageMetadata, move dependency_metadata to PackageVersionInfo

  Manifest now embeds PackageMetadata via #[serde(flatten)] instead of duplicating ~14 fields. icon and dependency_metadata moved from PackageMetadata to PackageVersionInfo since they are registry-enrichment data loaded from the S9PK archive. merge_with now returns errors on metadata/icon/dependency_metadata mismatches instead of silently ignoring them.
* fix: replace .status() with .invoke() for iptables/ip commands

  Using .status() leaks stderr directly to system logs, causing noisy iptables error messages. Switch all networking CLI invocations to use .invoke(), which captures stderr properly. For check-then-act patterns (iptables -C), use .invoke().await.is_err() instead of .status().await.map_or(false, |s| s.success()).
* feat: add check-dns gateway endpoint and fix per-interface routing tables

  Add a `check-dns` RPC endpoint that verifies whether a gateway's DNS is properly configured for private domain resolution. Uses a three-tier check: direct match (DNS == server IP), TXT challenge probe (DNS on LAN), or failure (DNS off-subnet).

  Fix per-interface routing tables to clone all non-default routes from the main table instead of only the interface's own subnets. This preserves LAN reachability when the priority-75 catch-all overrides default routing. Filter out status-only flags (linkdown, dead) that are invalid for `ip route add`.
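The .status() → .invoke() change above can be approximated with std alone: the key difference is whether the child's stderr is inherited (and leaks into the system log) or buffered by the parent. `.invoke()` is the project's own helper; this stand-in only demonstrates the check-then-act shape with `std::process::Command`.

```rust
use std::process::Command;

// Check-then-act sketch: `iptables -C <rule>` exits non-zero when a rule is
// absent, which is an *expected* failure. Using .output() buffers stdout and
// stderr rather than inheriting them (as .status() does), so the expected
// error text never reaches the system log.
fn check_succeeds(cmd: &str, args: &[&str]) -> bool {
    Command::new(cmd)
        .args(args)
        .output() // captures stdout/stderr instead of leaking them
        .map(|out| out.status.success())
        .unwrap_or(false)
}

fn main() {
    // `true` / `false` stand in for `iptables -C ...` so the example runs anywhere.
    assert!(check_succeeds("true", &[]));
    assert!(!check_succeeds("false", &[]));
}
```

In the real code the rule would then be added only when the check fails, e.g. run `iptables -A ...` when the `-C` probe errors.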
* refactor: rename manifest metadata fields and improve error display

  Rename wrapperRepo→packageRepo, marketingSite→marketingUrl, docsUrl→docsUrls (array), remove supportSite. Add display_src/display_dbg helpers to Error. Fix DepInfo description type to LocaleString. Update web UI, SDK bindings, tests, and fixtures to match. Clean up cli_attach error handling and remove dead commented code.
* chore: bump sdk version to 0.4.0-beta.49
* chore: add createTask decoupling TODO
* chore: add TODO to clear service error state on install/update
* round out dns check, dns server check, port forward check, and gateway port forwards
* chore: add TODOs for URL plugins, NAT hairpinning, and start-tunnel OTA updates
* version instead of os query param
* interface row clickable again, but now with a chevron!
* feat: implement URL plugins with table/row actions and prefill support

  - Add URL plugin effects (register, export_url, clear_urls) in core
  - Add PluginHostnameInfo, HostnameMetadata::Plugin, and plugin registration types
  - Implement plugin URL table in web UI with tableAction button and rowAction overflow menus
  - Thread urlPluginMetadata (packageId, hostId, interfaceId, internalPort) as prefill to actions
  - Add prefill support to PackageActionData so metadata passes through form dialogs
  - Add i18n translations for plugin error messages
  - Clean up plugin URLs on package uninstall
* feat: split row_actions into remove_action and overflow_actions for URL plugins
* touch up URL plugins table
* show table even when no addresses
* feat: NAT hairpinning, DNS static servers, clear service error on install

  - Add POSTROUTING MASQUERADE rules for container and host hairpin NAT
  - Allow bridge subnet containers to reach private forwards via LAN IPs
  - Pass bridge_subnet env var from forward.rs to forward-port script
  - Use DB-configured static DNS servers in resolver with DB watcher
  - Fall back to resolv.conf servers when no static servers configured
  - Clear service error state when install/update completes successfully
  - Remove completed TODO items
* feat: builder-style InputSpec API, prefill plumbing, and port forward fix

  - Add addKey() and add() builder methods to InputSpec with InputSpecTools
  - Move OuterType to last generic param on Value, List, and all dynamic methods
  - Plumb prefill through getActionInput end-to-end (core → container-runtime → SDK)
  - Filter port_forwards to enabled addresses only
  - Bump SDK to 0.4.0-beta.50
* fix: propagate host locale into LXC containers and write locale.conf
* chore: remove completed URL plugins TODO
* feat: OTA updates for start-tunnel via apt repository (untested)

  - Add apt repo publish script (build/apt/publish-deb.sh) for S3-hosted repo
  - Add apt source config and GPG key placeholder (apt/)
  - Add tunnel.update.check and tunnel.update.apply RPC endpoints
  - Wire up update API in tunnel frontend (api service + mock)
  - Uses systemd-run --scope to survive service restart during update
* fix: publish script dpkg-name, s3cfg fallback, and --reinstall for apply
* chore: replace OTA updates TODO with UI TODO for MattDHill
* feat: add getOutboundGateway effect and simplify VersionGraph init/uninit

  Add getOutboundGateway effect across core, container-runtime, and SDK to let services query their effective outbound gateway with callback support. Remove preInstall/uninstall hooks from VersionGraph as they are no longer needed.
* frontend start-tunnel updates
* chore: remove completed TODO
* feat: tor hidden service key migration
* chore: migrate from ts-matches to zod across all TypeScript packages
* feat(core): allow setting server hostname
* send prefill for tasks and hide operations to hidden fields
* fix(core): preserve plugin URLs across binding updates

  BindInfo::update was replacing addresses with a new DerivedAddressInfo that cleared the available set, wiping plugin-exported URLs whenever bind() was called. Also simplify update_addresses plugin preservation to use retain in place rather than collecting into a separate set.
* minor cleanup from patch-db audit
* clean up prefill flow
* frontend support for setting and changing hostname
* feat(core): refactor hostname to ServerHostnameInfo with name/hostname pair

  - Rename Hostname to ServerHostnameInfo, add name + hostname fields
  - Add set_hostname_rpc for changing hostname at runtime
  - Migrate alpha_20: generate serverInfo.name from hostname, delete ui.name
  - Extract gateway.rs helpers to fix rustfmt nesting depth issue
  - Add i18n key for hostname validation error
  - Update SDK bindings
* add comments to everything potentially consumer facing (#3127)

  - add comments to everything potentially consumer facing
  - rework smtp

  Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
* implement server name
* setup changes
* clean up copy around addresses table
* feat: add zod-deep-partial, partialValidator on InputSpec, and z.deepPartial re-export
* fix: header color in zoom (#3128)
* fix: merge version ranges when adding existing package signer (#3125)

  - fix: merge version ranges when adding existing package signer

    Previously, add_package_signer unconditionally inserted the new version range, overwriting any existing authorization for that signer. Now it OR-merges the new range with the existing one, so running signer add multiple times accumulates permissions rather than replacing them.
  - add --merge flag to registry package signer add

    Default behavior remains overwrite. When --merge is passed, the new version range is OR-merged with the existing one, allowing admins to accumulate permissions incrementally.
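The OR-merge behavior for signer version ranges can be sketched with a toy model. The real VersionRange comes from exver and supports full range algebra; the variants below are hypothetical stand-ins that only show the merge shape, where a second `signer add` accumulates rather than overwrites.

```rust
// Toy stand-in for exver's VersionRange: an exact-version leaf plus an Or node.
#[derive(Clone, Debug, PartialEq)]
enum VersionRange {
    Exactly(u32),
    Or(Box<VersionRange>, Box<VersionRange>),
}

/// Upsert semantics: OR-merge with any existing range so repeated adds
/// accumulate permissions instead of replacing them.
fn upsert_signer_range(existing: Option<VersionRange>, added: VersionRange) -> VersionRange {
    match existing {
        Some(old) => VersionRange::Or(Box::new(old), Box::new(added)),
        None => added,
    }
}

/// Does `v` satisfy the range?
fn satisfies(range: &VersionRange, v: u32) -> bool {
    match range {
        VersionRange::Exactly(x) => *x == v,
        VersionRange::Or(a, b) => satisfies(a, v) || satisfies(b, v),
    }
}

fn main() {
    let first = upsert_signer_range(None, VersionRange::Exactly(1));
    let merged = upsert_signer_range(Some(first), VersionRange::Exactly(2));
    // Both grants survive the second add.
    assert!(satisfies(&merged, 1) && satisfies(&merged, 2));
}
```

With the later `--merge` flag, the overwrite path would simply skip the Or and return `added` directly.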
  - add missing attribute to TS type
  - make merge optional
  - upsert instead of insert
  - VersionRange::None on upsert
  - fix: header color in zoom

  Co-authored-by: Dominion5254 <musashidisciple@proton.me>
* update snake and add about this server to system general
* chore: bump sdk to beta.53, wrap z.deepPartial with passthrough
* reset instead of reset defaults
* action failure show dialog
* chore: bump sdk to beta.54, add device-info RPC, improve SDK abort handling and InputSpec filtering

  - Bump SDK version to 0.4.0-beta.54
  - Add `server.device-info` RPC endpoint and `s9pk select` CLI command
  - Extract `HardwareRequirements::is_compatible()` method, reuse in registry filtering
  - Add `AbortedError` class with `muteUnhandled` flag, replace generic abort errors
  - Handle unhandled promise rejections in container-runtime with mute support
  - Improve `InputSpec.filter()` with `keepByDefault` param and boolean filter values
  - Accept readonly tuples in `CommandType` and `splitCommand`
  - Remove `sync_host` calls from host API handlers (binding/address changes)
  - Filter mDNS hostnames by secure gateway availability
  - Derive mDNS enabled state from LAN IPs in web UI
  - Add "Open UI" action to address table, disable mDNS toggle
  - Hide debug details in service error component
  - Update rpc-toolkit docs for no-params handlers
* fix: add --no-nvram to efi grub-install to preserve built-in boot order
* update snake
* disable actions when in error state
* chore: split out nvidia variant
* misc bugfixes
* create manage-release script (untested)
* fix: preserve z namespace types for sdk consumers
* sdk version bump
* new checkPort types
* multiple bugs and better port forward ux
* fix link
* chore: todos and formatting
* fix build

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Dominion5254 <musashidisciple@proton.me>
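A recurring fix in this PR is preserving plugin-exported URLs while recomputing a host's addresses, done with `retain` in place rather than collecting survivors into a separate set. A minimal std-only sketch (all names hypothetical; the real logic lives in BindInfo/update_addresses):

```rust
// Hypothetical address model: computed addresses come and go with the set of
// possible bindings, while plugin-exported URLs must always survive an update.
#[derive(Debug, PartialEq)]
enum Source {
    Computed,
    Plugin,
}

#[derive(Debug, PartialEq)]
struct Address {
    url: String,
    source: Source,
}

/// Recompute `addresses` from `possible`, filtering in place with `retain`
/// instead of rebuilding the collection from scratch.
fn update_addresses(addresses: &mut Vec<Address>, possible: &[String]) {
    // Drop computed entries that are no longer possible; plugin URLs survive.
    addresses.retain(|a| a.source == Source::Plugin || possible.contains(&a.url));
    // Add newly possible computed addresses that are not present yet.
    for url in possible {
        if !addresses.iter().any(|a| &a.url == url) {
            addresses.push(Address { url: url.clone(), source: Source::Computed });
        }
    }
}

fn main() {
    let mut addrs = vec![
        Address { url: "plugin://x".into(), source: Source::Plugin },
        Address { url: "http://old".into(), source: Source::Computed },
    ];
    update_addresses(&mut addrs, &["http://new".to_string()]);
    assert!(addrs.iter().any(|a| a.url == "plugin://x")); // plugin URL kept
    assert!(!addrs.iter().any(|a| a.url == "http://old")); // stale computed dropped
}
```

The in-place `retain` avoids the bug the commit describes: a fresh DerivedAddressInfo that starts from an empty set silently wipes anything a plugin exported.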
@@ -1,5 +1 @@
-{
-  "attribution": {
-    "commit": ""
-  }
-}
+{}
.github/workflows/startos-iso.yaml

@@ -25,10 +25,13 @@ on:
         - ALL
         - x86_64
         - x86_64-nonfree
+        - x86_64-nvidia
         - aarch64
         - aarch64-nonfree
+        - aarch64-nvidia
         # - raspberrypi
         - riscv64
+        - riscv64-nonfree
     deploy:
       type: choice
       description: Deploy
@@ -65,10 +68,13 @@ jobs:
           fromJson('{
             "x86_64": ["x86_64"],
             "x86_64-nonfree": ["x86_64"],
+            "x86_64-nvidia": ["x86_64"],
             "aarch64": ["aarch64"],
             "aarch64-nonfree": ["aarch64"],
+            "aarch64-nvidia": ["aarch64"],
             "raspberrypi": ["aarch64"],
             "riscv64": ["riscv64"],
+            "riscv64-nonfree": ["riscv64"],
             "ALL": ["x86_64", "aarch64", "riscv64"]
           }')[github.event.inputs.platform || 'ALL']
         }}
@@ -125,7 +131,7 @@ jobs:
         format(
           '[
             ["{0}"],
-            ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64"]
+            ["x86_64", "x86_64-nonfree", "x86_64-nvidia", "aarch64", "aarch64-nonfree", "aarch64-nvidia", "riscv64", "riscv64-nonfree"]
           ]',
           github.event.inputs.platform || 'ALL'
         )
@@ -139,18 +145,24 @@ jobs:
           fromJson('{
             "x86_64": "ubuntu-latest",
             "x86_64-nonfree": "ubuntu-latest",
+            "x86_64-nvidia": "ubuntu-latest",
             "aarch64": "ubuntu-24.04-arm",
             "aarch64-nonfree": "ubuntu-24.04-arm",
+            "aarch64-nvidia": "ubuntu-24.04-arm",
             "raspberrypi": "ubuntu-24.04-arm",
             "riscv64": "ubuntu-24.04-arm",
+            "riscv64-nonfree": "ubuntu-24.04-arm",
           }')[matrix.platform],
           fromJson('{
             "x86_64": "buildjet-8vcpu-ubuntu-2204",
             "x86_64-nonfree": "buildjet-8vcpu-ubuntu-2204",
+            "x86_64-nvidia": "buildjet-8vcpu-ubuntu-2204",
             "aarch64": "buildjet-8vcpu-ubuntu-2204-arm",
             "aarch64-nonfree": "buildjet-8vcpu-ubuntu-2204-arm",
+            "aarch64-nvidia": "buildjet-8vcpu-ubuntu-2204-arm",
             "raspberrypi": "buildjet-8vcpu-ubuntu-2204-arm",
             "riscv64": "buildjet-8vcpu-ubuntu-2204",
+            "riscv64-nonfree": "buildjet-8vcpu-ubuntu-2204",
           }')[matrix.platform]
         )
       )[github.event.inputs.runner == 'fast']
@@ -161,10 +173,13 @@ jobs:
         fromJson('{
           "x86_64": "x86_64",
           "x86_64-nonfree": "x86_64",
+          "x86_64-nvidia": "x86_64",
          "aarch64": "aarch64",
           "aarch64-nonfree": "aarch64",
+          "aarch64-nvidia": "aarch64",
           "raspberrypi": "aarch64",
           "riscv64": "riscv64",
+          "riscv64-nonfree": "riscv64",
         }')[matrix.platform]
       }}
     steps:
.gitignore

@@ -21,4 +21,4 @@ secrets.db
 /build/lib/firmware
 tmp
 web/.i18n-checked
-agents/USER.md
+docs/USER.md
ARCHITECTURE.md (new file, 101 lines)

# Architecture

StartOS is an open-source Linux distribution for running personal servers. It manages discovery, installation, network configuration, backups, and health monitoring of self-hosted services.

## Tech Stack

- Backend: Rust (async/Tokio, Axum web framework)
- Frontend: Angular 20 + TypeScript + TaigaUI
- Container runtime: Node.js/TypeScript with LXC
- Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
- API: JSON-RPC via rpc-toolkit (see `core/rpc-toolkit.md`)
- Auth: Password + session cookie, public/private key signatures, local authcookie (see `core/src/middleware/auth/`)

## Project Structure

```bash
/
├── assets/            # Screenshots for README
├── build/             # Auxiliary files and scripts for deployed images
├── container-runtime/ # Node.js program managing package containers
├── core/              # Rust backend: API, daemon (startd), CLI (start-cli)
├── debian/            # Debian package maintainer scripts
├── image-recipe/      # Scripts for building StartOS images
├── patch-db/          # (submodule) Diff-based data store for frontend sync
├── sdk/               # TypeScript SDK for building StartOS packages
└── web/               # Web UIs (Angular)
```

## Components

- **`core/`** — Rust backend daemon. Produces a single binary `startbox` that is symlinked as `startd` (main daemon), `start-cli` (CLI), `start-container` (runs inside LXC containers), `registrybox` (package registry), and `tunnelbox` (VPN/tunnel). Handles all backend logic: RPC API, service lifecycle, networking (DNS, ACME, WiFi, Tor, WireGuard), backups, and database state management. See [core/ARCHITECTURE.md](core/ARCHITECTURE.md).

- **`web/`** — Angular 20 + TypeScript workspace using Taiga UI. Contains three applications (admin UI, setup wizard, VPN management) and two shared libraries (common components/services, marketplace). Communicates with the backend exclusively via JSON-RPC. See [web/ARCHITECTURE.md](web/ARCHITECTURE.md).

- **`container-runtime/`** — Node.js runtime that runs inside each service's LXC container. Loads the service's JavaScript from its S9PK package and manages subcontainers. Communicates with the host daemon via JSON-RPC over Unix socket. See [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md).

- **`sdk/`** — TypeScript SDK for packaging services for StartOS (`@start9labs/start-sdk`). Split into `base/` (core types, ABI definitions, effects interface, consumed by web as `@start9labs/start-sdk-base`) and `package/` (full SDK for service developers, consumed by container-runtime as `@start9labs/start-sdk`).

- **`patch-db/`** — Git submodule providing diff-based state synchronization. Uses CBOR encoding. Backend mutations produce diffs that are pushed to the frontend via WebSocket, enabling reactive UI updates without polling. See [patch-db repo](https://github.com/Start9Labs/patch-db).

## Build Pipeline

Components have a strict dependency chain. Changes flow in one direction:

```
Rust (core/)
  → cargo test exports ts-rs types to core/bindings/
  → rsync copies to sdk/base/lib/osBindings/
  → SDK build produces baseDist/ and dist/
    → web/ consumes baseDist/ (via @start9labs/start-sdk-base)
    → container-runtime/ consumes dist/ (via @start9labs/start-sdk)
```

Key make targets along this chain:

| Step | Command | What it does |
|---|---|---|
| 1 | `cargo check -p start-os` | Verify Rust compiles |
| 2 | `make ts-bindings` | Export ts-rs types → rsync to SDK |
| 3 | `cd sdk && make baseDist dist` | Build SDK packages |
| 4 | `cd web && npm run check` | Type-check Angular projects |
| 5 | `cd container-runtime && npm run check` | Type-check runtime |

**Important**: Editing `sdk/base/lib/osBindings/*.ts` alone is NOT sufficient — you must rebuild the SDK bundle (step 3) before web/container-runtime can see the changes.

## Cross-Layer Verification

When making changes across multiple layers (Rust, SDK, web, container-runtime), verify in this order:

1. **Rust**: `cargo check -p start-os` — verifies core compiles
2. **TS bindings**: `make ts-bindings` — regenerates TypeScript types from Rust `#[ts(export)]` structs
   - Runs `./core/build/build-ts.sh` to export ts-rs types to `core/bindings/`
   - Syncs `core/bindings/` → `sdk/base/lib/osBindings/` via rsync
   - If you manually edit files in `sdk/base/lib/osBindings/`, you must still rebuild the SDK (step 3)
3. **SDK bundle**: `cd sdk && make baseDist dist` — compiles SDK source into packages
   - `baseDist/` is consumed by `/web` (via `@start9labs/start-sdk-base`)
   - `dist/` is consumed by `/container-runtime` (via `@start9labs/start-sdk`)
   - Web and container-runtime reference the **built** SDK, not source files
4. **Web type check**: `cd web && npm run check` — type-checks all Angular projects
5. **Container runtime type check**: `cd container-runtime && npm run check` — type-checks the runtime

## Data Flow: Backend to Frontend

StartOS uses Patch-DB for reactive state synchronization:

1. The backend mutates state via `db.mutate()`, producing CBOR diffs
2. Diffs are pushed to the frontend over a persistent WebSocket connection
3. The frontend applies diffs to its local state copy and notifies observers
4. Components watch specific database paths via `PatchDB.watch$()`, receiving updates reactively

This means the UI is always eventually consistent with the backend — after any mutating API call, the frontend waits for the corresponding PatchDB diff before resolving, so the UI reflects the result immediately.

## Further Reading

- [core/ARCHITECTURE.md](core/ARCHITECTURE.md) — Rust backend architecture
- [web/ARCHITECTURE.md](web/ARCHITECTURE.md) — Angular frontend architecture
- [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md) — Container runtime details
- [core/rpc-toolkit.md](core/rpc-toolkit.md) — JSON-RPC handler patterns
- [core/s9pk-structure.md](core/s9pk-structure.md) — S9PK package format
- [docs/exver.md](docs/exver.md) — Extended versioning format
- [docs/VERSION_BUMP.md](docs/VERSION_BUMP.md) — Version bumping guide
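The Data Flow section above can be illustrated with a tiny diff-and-apply model. This is not patch-db's real implementation (which encodes diffs as CBOR and ships them over WebSocket); it is a simplified sketch of the same idea: mutations produce patches, and the frontend's copy converges by applying them.

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the synced state tree.
type State = BTreeMap<String, String>;

// Simplified stand-in for a patch-db diff entry.
#[derive(Debug, PartialEq)]
enum Patch {
    Set(String, String),
    Remove(String),
}

/// Compute the patches that turn `old` into `new`.
fn diff(old: &State, new: &State) -> Vec<Patch> {
    let mut patches = Vec::new();
    for (k, v) in new {
        if old.get(k) != Some(v) {
            patches.push(Patch::Set(k.clone(), v.clone()));
        }
    }
    for k in old.keys() {
        if !new.contains_key(k) {
            patches.push(Patch::Remove(k.clone()));
        }
    }
    patches
}

/// Apply patches to a frontend's local copy.
fn apply(state: &mut State, patches: &[Patch]) {
    for p in patches {
        match p {
            Patch::Set(k, v) => {
                state.insert(k.clone(), v.clone());
            }
            Patch::Remove(k) => {
                state.remove(k);
            }
        }
    }
}

fn main() {
    let mut frontend: State = BTreeMap::from([("status".into(), "stopped".into())]);
    let backend: State = BTreeMap::from([("status".into(), "running".into())]);
    // Backend mutation produced a diff; the frontend converges by applying it.
    let patches = diff(&frontend, &backend);
    apply(&mut frontend, &patches);
    assert_eq!(frontend, backend);
}
```

Only the changed paths travel over the wire, which is why components watching unrelated paths via `PatchDB.watch$()` are not re-notified.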
CLAUDE.md

@@ -2,142 +2,55 @@
 
 This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
 
-## Project Overview
+## Architecture
 
-StartOS is an open-source Linux distribution for running personal servers. It manages discovery, installation, network configuration, backups, and health monitoring of self-hosted services.
+See [ARCHITECTURE.md](ARCHITECTURE.md) for the full system architecture, component map, build pipeline, and cross-layer verification order.
 
-**Tech Stack:**
-- Backend: Rust (async/Tokio, Axum web framework)
-- Frontend: Angular 20 + TypeScript + TaigaUI
-- Container runtime: Node.js/TypeScript with LXC
-- Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
-- API: JSON-RPC via rpc-toolkit (see `agents/rpc-toolkit.md`)
-- Auth: Password + session cookie, public/private key signatures, local authcookie (see `core/src/middleware/auth/`)
+Each major component has its own `CLAUDE.md` with detailed guidance: `core/`, `web/`, `container-runtime/`, `sdk/`.
 
 ## Build & Development
 
 See [CONTRIBUTING.md](CONTRIBUTING.md) for:
 
 - Environment setup and requirements
 - Build commands and make targets
 - Testing and formatting commands
 - Environment variables
 
 **Quick reference:**
 
 ```bash
 . ./devmode.sh                           # Enable dev mode
 make update-startbox REMOTE=start9@<ip>  # Fastest iteration (binary + UI)
 make test-core                           # Run Rust tests
 ```
 
-## Architecture
+## Operating Rules
 
-### Core (`/core`)
-The Rust backend daemon. Main binaries:
-- `startbox` - Main daemon (runs as `startd`)
-- `start-cli` - CLI interface
+- Always verify cross-layer changes using the order described in [ARCHITECTURE.md](ARCHITECTURE.md#cross-layer-verification)
+- Check component-level CLAUDE.md files for component-specific conventions. ALWAYS read it before operating on that component.
+- Follow existing patterns before inventing new ones
+- Always use `make` recipes when they exist for testing builds rather than manually invoking build commands
-- `start-container` - Runs inside LXC containers; communicates with host and manages subcontainers
-- `registrybox` - Registry daemon
-- `tunnelbox` - VPN/tunnel daemon
-
-**Key modules:**
-- `src/context/` - Context types (RpcContext, CliContext, InitContext, DiagnosticContext)
-- `src/service/` - Service lifecycle management with actor pattern (`service_actor.rs`)
-- `src/db/model/` - Patch-DB models (`public.rs` synced to frontend, `private.rs` backend-only)
-- `src/net/` - Networking (DNS, ACME, WiFi, Tor via Arti, WireGuard)
-- `src/s9pk/` - S9PK package format (merkle archive)
-- `src/registry/` - Package registry management
-
-**RPC Pattern:** See `agents/rpc-toolkit.md`
-
-### Web (`/web`)
-Angular projects sharing common code:
-- `projects/ui/` - Main admin interface
-- `projects/setup-wizard/` - Initial setup
-- `projects/start-tunnel/` - VPN management UI
-- `projects/shared/` - Common library (API clients, components)
-- `projects/marketplace/` - Service discovery
-
-**Development:**
-```bash
-cd web
-npm ci
-npm run start:ui   # Dev server with mocks
-npm run build:ui   # Production build
-npm run check      # Type check all projects
-```
-
-### Container Runtime (`/container-runtime`)
-Node.js runtime that manages service containers via RPC. See `RPCSpec.md` for protocol.
-
-**Container Architecture:**
-```
-LXC Container (uniform base for all services)
-└── systemd
-    └── container-runtime.service
-        └── Loads /usr/lib/startos/package/index.js (from s9pk javascript.squashfs)
-            └── Package JS launches subcontainers (from images in s9pk)
-```
-
-The container runtime communicates with the host via JSON-RPC over Unix socket. Package JavaScript must export functions conforming to the `ABI` type defined in `sdk/base/lib/types.ts`.
-
-**`/media/startos/` directory (mounted by host into container):**
-
-| Path | Description |
-|------|-------------|
-| `volumes/<name>/` | Package data volumes (id-mapped, persistent) |
-| `assets/` | Read-only assets from s9pk `assets.squashfs` |
-| `images/<name>/` | Container images (squashfs, used for subcontainers) |
|
|
||||||
| `images/<name>.env` | Environment variables for image |
|
|
||||||
| `images/<name>.json` | Image metadata |
|
|
||||||
| `backup/` | Backup mount point (mounted during backup operations) |
|
|
||||||
| `rpc/service.sock` | RPC socket (container runtime listens here) |
|
|
||||||
| `rpc/host.sock` | Host RPC socket (for effects callbacks to host) |
|
|
||||||
|
|
||||||
**S9PK Structure:** See `agents/s9pk-structure.md`
|
|
||||||
|
|
||||||
### SDK (`/sdk`)
|
|
||||||
TypeScript SDK for packaging services (`@start9labs/start-sdk`).
|
|
||||||
|
|
||||||
- `base/` - Core types, ABI definitions, effects interface (`@start9labs/start-sdk-base`)
|
|
||||||
- `package/` - Full SDK for package developers, re-exports base
|
|
||||||
|
|
||||||
### Patch-DB (`/patch-db`)
|
|
||||||
Git submodule providing diff-based state synchronization. Changes to `db/model/public.rs` automatically sync to the frontend.
|
|
||||||
|
|
||||||
**Key patterns:**
|
|
||||||
- `db.peek().await` - Get a read-only snapshot of the database state
|
|
||||||
- `db.mutate(|db| { ... }).await` - Apply mutations atomically, returns `MutateResult`
|
|
||||||
- `#[derive(HasModel)]` - Derive macro for types stored in the database, generates typed accessors
|
|
||||||
|
|
||||||
**Generated accessor types** (from `HasModel` derive):
|
|
||||||
- `as_field()` - Immutable reference: `&Model<T>`
|
|
||||||
- `as_field_mut()` - Mutable reference: `&mut Model<T>`
|
|
||||||
- `into_field()` - Owned value: `Model<T>`
|
|
||||||
|
|
||||||
**`Model<T>` APIs** (from `db/prelude.rs`):
|
|
||||||
- `.de()` - Deserialize to `T`
|
|
||||||
- `.ser(&value)` - Serialize from `T`
|
|
||||||
- `.mutate(|v| ...)` - Deserialize, mutate, reserialize
|
|
||||||
- For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`
|
|
||||||
|
|
||||||
## Supplementary Documentation
|
## Supplementary Documentation
|
||||||
|
|
||||||
The `agents/` directory contains detailed documentation for AI assistants:
|
The `docs/` directory contains cross-cutting documentation for AI assistants:
|
||||||
|
|
||||||
- `TODO.md` - Pending tasks for AI agents (check this first, remove items when completed)
|
- `TODO.md` - Pending tasks for AI agents (check this first, remove items when completed)
|
||||||
- `USER.md` - Current user identifier (gitignored, see below)
|
- `USER.md` - Current user identifier (gitignored, see below)
|
||||||
- `rpc-toolkit.md` - JSON-RPC patterns and handler configuration
|
- `exver.md` - Extended versioning format (used across core, sdk, and web)
|
||||||
- `core-rust-patterns.md` - Common utilities and patterns for Rust code in `/core` (guard pattern, mount guards, etc.)
|
- `VERSION_BUMP.md` - Guide for bumping the StartOS version across the codebase
|
||||||
- `s9pk-structure.md` - S9PK package format structure
|
|
||||||
- `i18n-patterns.md` - Internationalization key conventions and usage in `/core`
|
Component-specific docs live alongside their code (e.g., `core/rpc-toolkit.md`, `core/i18n-patterns.md`).
|
||||||
|
|
||||||
### Session Startup
|
### Session Startup
|
||||||
|
|
||||||
On startup:
|
On startup:
|
||||||
|
|
||||||
1. **Check for `agents/USER.md`** - If it doesn't exist, prompt the user for their name/identifier and create it. This file is gitignored since it varies per developer.
|
1. **Check for `docs/USER.md`** - If it doesn't exist, prompt the user for their name/identifier and create it. This file is gitignored since it varies per developer.
|
||||||
|
|
||||||
|
2. **Check `docs/TODO.md` for relevant tasks** - Show TODOs that either:
|
||||||
|
|
||||||
2. **Check `agents/TODO.md` for relevant tasks** - Show TODOs that either:
|
|
||||||
- Have no `@username` tag (relevant to everyone)
|
- Have no `@username` tag (relevant to everyone)
|
||||||
- Are tagged with the current user's identifier
|
- Are tagged with the current user's identifier
|
||||||
|
|
||||||
|
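The TODO filter described in step 2 can be sketched in plain shell. This is an illustration, not a script the repo ships; the sample lines and the user name `alice` are made up, and the real files live at `docs/TODO.md` and `docs/USER.md`:

```shell
# Hypothetical sketch of the startup TODO filter: keep lines with no @tag
# at all, plus lines tagged for the current user (inlined here for
# illustration; in practice the user name comes from docs/USER.md).
user="alice"
printf -- '- fix the build\n- @alice review bindings\n- @bob update docs\n' \
  | grep -E -e '^[^@]*$' -e "@$user"
# prints the untagged line and the @alice line
```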

CONTRIBUTING.md (100 changed lines)
@@ -1,37 +1,45 @@
 # Contributing to StartOS

-This guide is for contributing to StartOS. If you are interested in packaging a service for StartOS, visit the [service packaging guide](https://docs.start9.com/latest/packaging-guide/). If you are interested in promoting, providing technical support, creating tutorials, or helping in other ways, please visit the [Start9 website](https://start9.com/contribute).
+This guide is for contributing to StartOS. If you are interested in packaging a service for StartOS, visit the [service packaging guide](https://github.com/Start9Labs/ai-service-packaging). If you are interested in promoting, providing technical support, creating tutorials, or helping in other ways, please visit the [Start9 website](https://start9.com/contribute).

 ## Collaboration

-- [Matrix](https://matrix.to/#/#community-dev:matrix.start9labs.com)
-- [Telegram](https://t.me/start9_labs/47471)
+- [Matrix](https://matrix.to/#/#dev-startos:matrix.start9labs.com)

-## Project Structure
-
-```bash
-/
-├── assets/            # Screenshots for README
-├── build/             # Auxiliary files and scripts for deployed images
-├── container-runtime/ # Node.js program managing package containers
-├── core/              # Rust backend: API, daemon (startd), CLI (start-cli)
-├── debian/            # Debian package maintainer scripts
-├── image-recipe/      # Scripts for building StartOS images
-├── patch-db/          # (submodule) Diff-based data store for frontend sync
-├── sdk/               # TypeScript SDK for building StartOS packages
-└── web/               # Web UIs (Angular)
-```
-
-See component READMEs for details:
-- [`core`](core/README.md)
-- [`web`](web/README.md)
-- [`build`](build/README.md)
-- [`patch-db`](https://github.com/Start9Labs/patch-db)
+For project structure and system architecture, see [ARCHITECTURE.md](ARCHITECTURE.md).

 ## Environment Setup

-```sh
-git clone https://github.com/Start9Labs/start-os.git --recurse-submodules
-cd start-os
-```
+### Installing Dependencies (Debian/Ubuntu)
+
+> Debian/Ubuntu is the only officially supported build environment.
+> macOS has limited build capabilities and Windows requires [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install).
+
+```sh
+sudo apt update
+sudo apt install -y ca-certificates curl gpg build-essential
+curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+echo "deb [arch=$(dpkg-architecture -q DEB_HOST_ARCH) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list
+sudo apt update
+sudo apt install -y sed grep gawk jq gzip brotli containerd.io docker-ce docker-ce-cli docker-compose-plugin qemu-user-static binfmt-support squashfs-tools git debspawn rsync b3sum
+sudo mkdir -p /etc/debspawn/
+echo "AllowUnsafePermissions=true" | sudo tee /etc/debspawn/global.toml
+sudo usermod -aG docker $USER
+sudo su $USER
+docker run --privileged --rm tonistiigi/binfmt --install all
+docker buildx create --use
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh  # proceed with default installation
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
+source ~/.bashrc
+nvm install 24
+nvm use 24
+nvm alias default 24  # prevents your machine from reverting to another version
+```
+
+### Cloning the Repository
+
+```sh
+git clone --recursive https://github.com/Start9Labs/start-os.git --branch next/major
 cd start-os
 ```
@@ -64,18 +72,20 @@ This project uses [GNU Make](https://www.gnu.org/software/make/) to build its co
 ### Environment Variables

 | Variable | Description |
-|----------|-------------|
+| -------------------- | --------------------------------------------------------------------------------------------------- |
 | `PLATFORM` | Target platform: `x86_64`, `x86_64-nonfree`, `aarch64`, `aarch64-nonfree`, `riscv64`, `raspberrypi` |
 | `ENVIRONMENT` | Hyphen-separated feature flags (see below) |
 | `PROFILE` | Build profile: `release` (default) or `dev` |
 | `GIT_BRANCH_AS_HASH` | Set to `1` to use git branch name as version hash (avoids rebuilds) |

 **ENVIRONMENT flags:**

 - `dev` - Enables password SSH before setup, skips frontend compression
 - `unstable` - Enables assertions and debugging with performance penalty
 - `console` - Enables tokio-console for async debugging

 **Platform notes:**

 - `-nonfree` variants include proprietary firmware and drivers
 - `raspberrypi` includes non-free components by necessity
 - Platform is remembered between builds if not specified
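Since `ENVIRONMENT` is documented as hyphen-separated, a value like `ENVIRONMENT=dev-unstable make iso` enables both flags. A minimal sketch of how such a value splits and can be validated; the helper function and its flag list are ours for illustration, not part of the repo's Makefiles:

```shell
# Hypothetical validator: split a hyphen-separated ENVIRONMENT value and
# reject flags the docs don't list (dev, unstable, console).
known_flags="dev unstable console"

check_environment() {
  for flag in $(echo "$1" | tr '-' ' '); do
    case " $known_flags " in
      *" $flag "*) ;;
      *) echo "unknown ENVIRONMENT flag: $flag" >&2; return 1 ;;
    esac
  done
}

check_environment dev-unstable && echo ok  # prints: ok
```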
@@ -85,7 +95,7 @@ This project uses [GNU Make](https://www.gnu.org/software/make/) to build its co
 #### Building

 | Target | Description |
-|--------|-------------|
+| ------------- | ---------------------------------------------- |
 | `iso` | Create full `.iso` image (not for raspberrypi) |
 | `img` | Create full `.img` image (raspberrypi only) |
 | `deb` | Build Debian package |
@@ -99,7 +109,7 @@ This project uses [GNU Make](https://www.gnu.org/software/make/) to build its co
 For devices on the same network:

 | Target | Description |
-|--------|-------------|
+| ------------------------------------ | ----------------------------------------------- |
 | `update-startbox REMOTE=start9@<ip>` | Deploy binary + UI only (fastest) |
 | `update-deb REMOTE=start9@<ip>` | Deploy full Debian package |
 | `update REMOTE=start9@<ip>` | OTA-style update |
@@ -109,15 +119,41 @@
 For devices on different networks (uses [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)):

 | Target | Description |
-|--------|-------------|
+| ------------------- | -------------------- |
 | `wormhole` | Send startbox binary |
 | `wormhole-deb` | Send Debian package |
 | `wormhole-squashfs` | Send squashfs image |
+
+### Creating a VM
+
+Install virt-manager:
+
+```sh
+sudo apt update
+sudo apt install -y virt-manager
+sudo usermod -aG libvirt $USER
+sudo su $USER
+virt-manager
+```
+
+Follow the screenshot walkthrough in [`assets/create-vm/`](assets/create-vm/) to create a new virtual machine. Key steps:
+
+1. Create a new virtual machine
+2. Browse for the ISO — create a storage pool pointing to your `results/` directory
+3. Select "Generic or unknown OS"
+4. Set memory and CPUs
+5. Create a disk and name the VM
+
+Build an ISO first:
+
+```sh
+PLATFORM=$(uname -m) ENVIRONMENT=dev make iso
+```
 #### Other

 | Target | Description |
-|--------|-------------|
+| ------------------------ | ------------------------------------------- |
 | `format` | Run code formatting (Rust nightly required) |
 | `test` | Run all automated tests |
 | `test-core` | Run Rust tests |
@@ -156,15 +192,18 @@ Run the formatters before committing. Configuration is handled by `rustfmt.toml`
 ### Documentation & Comments

 **Rust:**

 - Add doc comments (`///`) to public APIs, structs, and non-obvious functions
 - Use `//` comments sparingly for complex logic that isn't self-evident
 - Prefer self-documenting code (clear naming, small functions) over comments

 **TypeScript:**

 - Document exported functions and complex types with JSDoc
 - Keep comments focused on "why" rather than "what"

 **General:**

 - Don't add comments that just restate the code
 - Update or remove comments when code changes
 - TODOs should include context: `// TODO(username): reason`
@@ -182,6 +221,7 @@ Use [Conventional Commits](https://www.conventionalcommits.org/):
 ```

 **Types:**

 - `feat` - New feature
 - `fix` - Bug fix
 - `docs` - Documentation only
@@ -191,10 +231,10 @@ Use [Conventional Commits](https://www.conventionalcommits.org/):
 - `chore` - Build process, dependencies, etc.

 **Examples:**

 ```
 feat(web): add dark mode toggle
 fix(core): resolve race condition in service startup
 docs: update CONTRIBUTING.md with style guidelines
 refactor(sdk): simplify package validation logic
 ```
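The Conventional Commits shape shown in the examples can be checked mechanically. A sketch, not a hook the repo ships: the middle of the type list is elided by the diff hunk, so `style` and `test` here are assumptions, while `refactor` is grounded in the `refactor(sdk)` example above:

```shell
# Hypothetical commit-subject check for the "type(scope): subject" shape.
# The type list is partly assumed (style, test); feat/fix/docs/refactor/chore
# appear in the guide itself.
valid_subject() {
  echo "$1" | grep -Eq '^(feat|fix|docs|style|refactor|test|chore)(\([a-z0-9-]+\))?: .+'
}

valid_subject 'feat(web): add dark mode toggle' && echo ok  # prints: ok
valid_subject 'update stuff' || echo 'not conventional'     # prints: not conventional
```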

DEVELOPMENT.md (134 changed lines)
@@ -1,134 +0,0 @@
File deleted. Its contents (Debian/Ubuntu dependency installation, repository cloning, ISO build, VM creation with virt-manager and the `assets/create-vm/` screenshots, and the `update-startbox` / `update-deb` / `update-squashfs` and magic-wormhole deploy workflows) were merged into CONTRIBUTING.md above.
Makefile (15 changed lines)
@@ -7,7 +7,7 @@ GIT_HASH_FILE := $(shell ./build/env/check-git-hash.sh)
 VERSION_FILE := $(shell ./build/env/check-version.sh)
 BASENAME := $(shell PROJECT=startos ./build/env/basename.sh)
 PLATFORM := $(shell if [ -f $(PLATFORM_FILE) ]; then cat $(PLATFORM_FILE); else echo unknown; fi)
-ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
+ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; elif [ "$(PLATFORM)" = "rockchip64" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g; s/-nvidia$$//g'; fi)
 RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi)
 REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./build/env/basename.sh)
 TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./build/env/basename.sh)
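The new `ARCH` line maps a `PLATFORM` value to a target architecture: `raspberrypi` and `rockchip64` build for `aarch64`, and the `-nonfree` / `-nvidia` suffixes are stripped otherwise. The same mapping as a plain-shell sketch (the function name is ours, not part of the Makefile):

```shell
# Sketch of the Makefile's ARCH derivation (hypothetical helper name).
platform_to_arch() {
  case "$1" in
    raspberrypi|rockchip64) echo aarch64 ;;
    *) echo "$1" | sed 's/-nonfree$//; s/-nvidia$//' ;;
  esac
}

platform_to_arch raspberrypi     # prints: aarch64
platform_to_arch x86_64-nonfree  # prints: x86_64
```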
@@ -139,6 +139,11 @@ install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
 	$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
 	$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
+	$(call mkdir,$(DESTDIR)/etc/apt/sources.list.d)
+	$(call cp,apt/start9.list,$(DESTDIR)/etc/apt/sources.list.d/start9.list)
+	$(call mkdir,$(DESTDIR)/usr/share/keyrings)
+	$(call cp,apt/start9.gpg,$(DESTDIR)/usr/share/keyrings/start9.gpg)

 core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
 	ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-tunnelbox.sh
@@ -236,9 +241,9 @@ update-startbox: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
 update-deb: results/$(BASENAME).deb # better than update, but only available from debian
 	@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
 	$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
-	$(call mkdir,/media/startos/next/tmp/startos-deb)
-	$(call cp,results/$(BASENAME).deb,/media/startos/next/tmp/startos-deb/$(BASENAME).deb)
-	$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y --reinstall /tmp/startos-deb/$(BASENAME).deb"')
+	$(call mkdir,/media/startos/next/var/tmp/startos-deb)
+	$(call cp,results/$(BASENAME).deb,/media/startos/next/var/tmp/startos-deb/$(BASENAME).deb)
+	$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y --reinstall /var/tmp/startos-deb/$(BASENAME).deb"')

 update-squashfs: results/$(BASENAME).squashfs
 	@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -278,7 +283,7 @@ core/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
 	rm -rf core/bindings
 	./core/build/build-ts.sh
 	ls core/bindings/*.ts | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/bindings/index.ts
-	npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/bindings/*.ts
+	npm --prefix sdk/base exec -- prettier --config=./sdk/base/package.json -w './core/bindings/**/*.ts'
 	touch core/bindings/index.ts

 sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
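The `core/bindings/index.ts` recipe builds the barrel file by rewriting each bindings path into a re-export line with `sed`. A standalone illustration of that substitution (`RpcSpec.ts` is just an example file name, not necessarily a real binding):

```shell
# The sed expression from the Makefile recipe, applied to one example path.
echo 'core/bindings/RpcSpec.ts' \
  | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g'
# prints: export { RpcSpec } from "./RpcSpec";
```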
README.md (86 changed lines)
@@ -13,70 +13,58 @@
   <a href="https://twitter.com/start9labs">
     <img alt="X (formerly Twitter) Follow" src="https://img.shields.io/twitter/follow/start9labs">
   </a>
-  <a href="https://matrix.to/#/#community:matrix.start9labs.com">
-    <img alt="Static Badge" src="https://img.shields.io/badge/community-matrix-yellow?logo=matrix">
-  </a>
-  <a href="https://t.me/start9_labs">
-    <img alt="Static Badge" src="https://img.shields.io/badge/community-telegram-blue?logo=telegram">
-  </a>
   <a href="https://docs.start9.com">
     <img alt="Static Badge" src="https://img.shields.io/badge/docs-orange?label=%F0%9F%91%A4%20support">
   </a>
-  <a href="https://matrix.to/#/#community-dev:matrix.start9labs.com">
+  <a href="https://matrix.to/#/#dev-startos:matrix.start9labs.com">
     <img alt="Static Badge" src="https://img.shields.io/badge/developer-matrix-darkcyan?logo=matrix">
   </a>
   <a href="https://start9.com">
     <img alt="Website" src="https://img.shields.io/website?up_message=online&down_message=offline&url=https%3A%2F%2Fstart9.com&logo=website&label=%F0%9F%8C%90%20website">
   </a>
 </div>
-<br />
-<div align="center">
-  <h3>
-    Welcome to the era of Sovereign Computing
-  </h3>
-  <p>
-    StartOS is an open source Linux distribution optimized for running a personal server. It facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services.
-  </p>
-</div>
-<br />
-<p align="center">
-  <img src="assets/StartOS.png" alt="StartOS" width="85%">
-</p>
-<br />

-## Running StartOS
+## What is StartOS?

-> [!WARNING]
-> StartOS is in beta. It lacks features. It doesn't always work perfectly. Start9 servers are not plug and play. Using them properly requires some effort and patience. Please do not use StartOS or purchase a server if you are unable or unwilling to follow instructions and learn new concepts.
+StartOS is an open-source Linux distribution for running a personal server. It handles discovery, installation, network configuration, data backup, dependency management, and health monitoring of self-hosted services.

-### 💰 Buy a Start9 server
-
-This is the most convenient option. Simply [buy a server](https://store.start9.com) from Start9 and plug it in.
-
-### 👷 Build your own server
-
-This option is easier than you might imagine, and there are 4 reasons why you might prefer it:
-
-1. You already have hardware
-1. You want to save on shipping costs
-1. You prefer not to divulge your physical address
-1. You just like building things
-
-To pursue this option, follow one of our [DIY guides](https://start9.com/latest/diy).
+**Tech stack:** Rust backend (Tokio/Axum), Angular frontend, Node.js container runtime with LXC, and a custom diff-based database ([Patch-DB](https://github.com/Start9Labs/patch-db)) for reactive state synchronization.

-## ❤️ Contributing
+Services run in isolated LXC containers, packaged as [S9PKs](https://github.com/Start9Labs/start-os/blob/master/core/s9pk-structure.md) — a signed, merkle-archived format that supports partial downloads and cryptographic verification.

-There are multiple ways to contribute: work directly on StartOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://start9.com/contribute/).
+## What can you do with it?

-To report security issues, please email our security team - security@start9.com.
+StartOS lets you self-host services that would otherwise depend on third-party cloud providers — giving you full ownership of your data and infrastructure.

-## 🌎 Marketplace
+Browse available services on the [Start9 Marketplace](https://marketplace.start9.com/), including:

-There are dozens of services available for StartOS, and new ones are being added all the time. Check out the full list of available services [here](https://marketplace.start9.com/marketplace). To read more about the Marketplace ecosystem, check out this [blog post](https://blog.start9.com/start9-marketplace-strategy/)
-
-## 🖥️ User Interface Screenshots
+- **Bitcoin & Lightning** — Run a full Bitcoin node, Lightning node, BTCPay Server, and other payment infrastructure
+- **Communication** — Self-host Matrix, SimpleX, or other messaging platforms
+- **Cloud Storage** — Run Nextcloud, Vaultwarden, and other productivity tools
|
||||||
|
|
||||||
<p align="center">
|
Services are added by the community. If a service you want isn't available, you can [package it yourself](https://github.com/Start9Labs/ai-service-packaging/).
|
||||||
<img src="assets/registry.png" alt="StartOS Marketplace" width="49%">
|
|
||||||
<img src="assets/community.png" alt="StartOS Community Registry" width="49%">
|
## Getting StartOS
|
||||||
<img src="assets/c-lightning.png" alt="StartOS NextCloud Service" width="49%">
|
|
||||||
<img src="assets/btcpay.png" alt="StartOS BTCPay Service" width="49%">
|
### Buy a Start9 server
|
||||||
<img src="assets/nextcloud.png" alt="StartOS System Settings" width="49%">
|
|
||||||
<img src="assets/system.png" alt="StartOS System Settings" width="49%">
|
The easiest path. [Buy a server](https://store.start9.com) from Start9 and plug it in.
|
||||||
<img src="assets/welcome.png" alt="StartOS System Settings" width="49%">
|
|
||||||
<img src="assets/logs.png" alt="StartOS System Settings" width="49%">
|
### Build your own
|
||||||
</p>
|
|
||||||
|
Follow the [install guide](https://docs.start9.com/start-os/installing.html) to install StartOS on your own hardware. . Reasons to go this route:
|
||||||
|
|
||||||
|
1. You already have compatible hardware
|
||||||
|
2. You want to save on shipping costs
|
||||||
|
3. You prefer not to share your physical address
|
||||||
|
4. You enjoy building things
|
||||||
|
|
||||||
|
### Build from source
|
||||||
|
|
||||||
|
See [CONTRIBUTING.md](CONTRIBUTING.md) for environment setup, build instructions, and development workflow.
|
||||||
|
|
||||||
|
## Contributing
|
||||||
|
|
||||||
|
There are multiple ways to contribute: work directly on StartOS, package a service for the marketplace, or help with documentation and guides. See [CONTRIBUTING.md](CONTRIBUTING.md) or visit [start9.com/contribute](https://start9.com/contribute/).
|
||||||
|
|
||||||
|
To report security issues, email [security@start9.com](mailto:security@start9.com).
|
||||||
|
|||||||
TODO.md (deleted file):
@@ -1,9 +0,0 @@
-# AI Agent TODOs
-
-Pending tasks for AI agents. Remove items when completed.
-
-## Unreviewed CLAUDE.md Sections
-
-- [ ] Architecture - Web (`/web`) - @MattDHill
apt/start9.gpg (new binary file)

apt/start9.list (new file, 1 line):
@@ -0,0 +1 @@
+deb [arch=amd64,arm64,riscv64 signed-by=/usr/share/keyrings/start9.gpg] https://start9-debs.nyc3.cdn.digitaloceanspaces.com stable main
assets/*.png (deleted binary files: the former README screenshots, including assets/logs.png)
build/apt/publish-deb.sh (new executable file, 138 lines):

#!/bin/bash
#
# Publish .deb files to an S3-hosted apt repository.
#
# Usage: publish-deb.sh <deb-file-or-directory> [<deb-file-or-directory> ...]
#
# Environment variables:
#   GPG_PRIVATE_KEY - Armored GPG private key (imported if set)
#   GPG_KEY_ID      - GPG key ID for signing
#   S3_ACCESS_KEY   - S3 access key
#   S3_SECRET_KEY   - S3 secret key
#   S3_ENDPOINT     - S3 endpoint (default: https://nyc3.digitaloceanspaces.com)
#   S3_BUCKET       - S3 bucket name (default: start9-debs)
#   SUITE           - Apt suite name (default: stable)
#   COMPONENT       - Apt component name (default: main)

set -e

if [ $# -eq 0 ]; then
  echo "Usage: $0 <deb-file-or-directory> [...]" >&2
  exit 1
fi

BUCKET="${S3_BUCKET:-start9-debs}"
ENDPOINT="${S3_ENDPOINT:-https://nyc3.digitaloceanspaces.com}"
SUITE="${SUITE:-stable}"
COMPONENT="${COMPONENT:-main}"
REPO_DIR="$(mktemp -d)"

cleanup() {
  rm -rf "$REPO_DIR"
}
trap cleanup EXIT

# Import GPG key if provided
if [ -n "$GPG_PRIVATE_KEY" ]; then
  echo "$GPG_PRIVATE_KEY" | gpg --batch --import 2>/dev/null
fi

# Configure s3cmd
if [ -n "$S3_ACCESS_KEY" ] && [ -n "$S3_SECRET_KEY" ]; then
  S3CMD_CONFIG="$(mktemp)"
  cat > "$S3CMD_CONFIG" <<EOF
[default]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
host_base = $(echo "$ENDPOINT" | sed 's|https://||')
host_bucket = %(bucket)s.$(echo "$ENDPOINT" | sed 's|https://||')
use_https = True
EOF
  s3() {
    s3cmd -c "$S3CMD_CONFIG" "$@"
  }
else
  # Fall back to default ~/.s3cfg
  S3CMD_CONFIG=""
  s3() {
    s3cmd "$@"
  }
fi

# Sync existing repo from S3
echo "Syncing existing repo from s3://${BUCKET}/ ..."
s3 sync --no-mime-magic "s3://${BUCKET}/" "$REPO_DIR/" 2>/dev/null || true

# Collect all .deb files from arguments
DEB_FILES=()
for arg in "$@"; do
  if [ -d "$arg" ]; then
    while IFS= read -r -d '' f; do
      DEB_FILES+=("$f")
    done < <(find "$arg" -name '*.deb' -print0)
  elif [ -f "$arg" ]; then
    DEB_FILES+=("$arg")
  else
    echo "Warning: $arg is not a file or directory, skipping" >&2
  fi
done

if [ ${#DEB_FILES[@]} -eq 0 ]; then
  echo "No .deb files found" >&2
  exit 1
fi

# Copy each deb to the pool, renaming to standard format
for deb in "${DEB_FILES[@]}"; do
  PKG_NAME="$(dpkg-deb --field "$deb" Package)"
  POOL_DIR="$REPO_DIR/pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}"
  mkdir -p "$POOL_DIR"
  cp "$deb" "$POOL_DIR/"
  dpkg-name -o "$POOL_DIR/$(basename "$deb")" 2>/dev/null || true
  echo "Added: $(basename "$deb") -> pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}/"
done

# Generate Packages indices for each architecture
for arch in amd64 arm64 riscv64; do
  BINARY_DIR="$REPO_DIR/dists/${SUITE}/${COMPONENT}/binary-${arch}"
  mkdir -p "$BINARY_DIR"
  (
    cd "$REPO_DIR"
    dpkg-scanpackages --arch "$arch" pool/ > "$BINARY_DIR/Packages"
    gzip -k -f "$BINARY_DIR/Packages"
  )
  echo "Generated Packages index for ${arch}"
done

# Generate Release file
(
  cd "$REPO_DIR/dists/${SUITE}"
  apt-ftparchive release \
    -o "APT::FTPArchive::Release::Origin=Start9" \
    -o "APT::FTPArchive::Release::Label=Start9" \
    -o "APT::FTPArchive::Release::Suite=${SUITE}" \
    -o "APT::FTPArchive::Release::Codename=${SUITE}" \
    -o "APT::FTPArchive::Release::Architectures=amd64 arm64 riscv64" \
    -o "APT::FTPArchive::Release::Components=${COMPONENT}" \
    . > Release
)
echo "Generated Release file"

# Sign if GPG key is available
if [ -n "$GPG_KEY_ID" ]; then
  (
    cd "$REPO_DIR/dists/${SUITE}"
    gpg --default-key "$GPG_KEY_ID" --batch --yes --detach-sign -o Release.gpg Release
    gpg --default-key "$GPG_KEY_ID" --batch --yes --clearsign -o InRelease Release
  )
  echo "Signed Release file with key ${GPG_KEY_ID}"
else
  echo "Warning: GPG_KEY_ID not set, Release file is unsigned" >&2
fi

# Upload to S3
echo "Uploading to s3://${BUCKET}/ ..."
s3 sync --acl-public --no-mime-magic "$REPO_DIR/" "s3://${BUCKET}/"

[ -n "$S3CMD_CONFIG" ] && rm -f "$S3CMD_CONFIG"
echo "Done."
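The pool layout in the publish script shards packages by the first letter of the package name. A minimal sketch of that path computation, runnable in isolation (the package name `startos` here is a made-up example, not taken from the script's inputs):

```shell
# Hypothetical values; the real script derives PKG_NAME via `dpkg-deb --field`.
PKG_NAME="startos"
COMPONENT="main"
# Same sharding expression as the script: first letter of the package name.
POOL_PATH="pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}"
echo "$POOL_PATH"   # pool/main/s/startos
```

This mirrors Debian's standard `pool/<component>/<initial>/<package>/` layout, which keeps any single directory from growing unboundedly.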
@@ -55,6 +55,7 @@ socat
 sqlite3
 squashfs-tools
 squashfs-tools-ng
+ssl-cert
 sudo
 systemd
 systemd-resolved
build/dpkg-deps/dev.depends (new file, 1 line):
+ nmap
@@ -12,6 +12,10 @@ fi
 if [[ "$PLATFORM" =~ -nonfree$ ]]; then
     FEATURES+=("nonfree")
 fi
+if [[ "$PLATFORM" =~ -nvidia$ ]]; then
+    FEATURES+=("nonfree")
+    FEATURES+=("nvidia")
+fi
 
 feature_file_checker='
 /^#/ { next }
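The effect of the new `-nvidia` platform suffix can be sketched in isolation (the `PLATFORM` value below is a made-up example; the real value comes from the build environment):

```shell
# Made-up platform string; mirrors the suffix checks added in the hunk above.
PLATFORM="x86_64-nvidia"
FEATURES=()
if [[ "$PLATFORM" =~ -nonfree$ ]]; then
  FEATURES+=("nonfree")
fi
if [[ "$PLATFORM" =~ -nvidia$ ]]; then
  # An -nvidia platform implies the nonfree feature set plus the nvidia feature.
  FEATURES+=("nonfree")
  FEATURES+=("nvidia")
fi
echo "${FEATURES[@]}"   # nonfree nvidia
```

Note the suffixes are mutually exclusive on a given platform string, so `nonfree` is never appended twice.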
@@ -5,6 +5,3 @@
 + firmware-libertas
 + firmware-misc-nonfree
 + firmware-realtek
-+ nvidia-container-toolkit
-# + nvidia-driver
-# + nvidia-kernel-dkms

build/dpkg-deps/nvidia.depends (new file, 1 line):
+ nvidia-container-toolkit
@@ -34,14 +34,14 @@ fi
 IMAGE_BASENAME=startos-${VERSION_FULL}_${IB_TARGET_PLATFORM}
 
 BOOTLOADERS=grub-efi
-if [ "$IB_TARGET_PLATFORM" = "x86_64" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nonfree" ]; then
+if [ "$IB_TARGET_PLATFORM" = "x86_64" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nvidia" ]; then
     IB_TARGET_ARCH=amd64
     QEMU_ARCH=x86_64
     BOOTLOADERS=grub-efi,syslinux
-elif [ "$IB_TARGET_PLATFORM" = "aarch64" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "raspberrypi" ] || [ "$IB_TARGET_PLATFORM" = "rockchip64" ]; then
+elif [ "$IB_TARGET_PLATFORM" = "aarch64" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nvidia" ] || [ "$IB_TARGET_PLATFORM" = "raspberrypi" ] || [ "$IB_TARGET_PLATFORM" = "rockchip64" ]; then
     IB_TARGET_ARCH=arm64
     QEMU_ARCH=aarch64
-elif [ "$IB_TARGET_PLATFORM" = "riscv64" ]; then
+elif [ "$IB_TARGET_PLATFORM" = "riscv64" ] || [ "$IB_TARGET_PLATFORM" = "riscv64-nonfree" ]; then
     IB_TARGET_ARCH=riscv64
     QEMU_ARCH=riscv64
 else
@@ -60,9 +60,13 @@ mkdir -p $prep_results_dir
 cd $prep_results_dir
 
 NON_FREE=
-if [[ "${IB_TARGET_PLATFORM}" =~ -nonfree$ ]] || [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
+if [[ "${IB_TARGET_PLATFORM}" =~ -nonfree$ ]] || [[ "${IB_TARGET_PLATFORM}" =~ -nvidia$ ]] || [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
     NON_FREE=1
 fi
+NVIDIA=
+if [[ "${IB_TARGET_PLATFORM}" =~ -nvidia$ ]]; then
+    NVIDIA=1
+fi
 IMAGE_TYPE=iso
 if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ] || [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
     IMAGE_TYPE=img
@@ -177,7 +181,7 @@ if [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
     echo "deb https://apt.armbian.com/ ${IB_SUITE} main" > config/archives/armbian.list
 fi
 
-if [ "$NON_FREE" = 1 ]; then
+if [ "$NVIDIA" = 1 ]; then
     curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o config/archives/nvidia-container-toolkit.key
     curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
         | sed 's#deb https://#deb [signed-by=/etc/apt/trusted.gpg.d/nvidia-container-toolkit.key.gpg] https://#g' \
@@ -205,11 +209,11 @@ cat > config/hooks/normal/9000-install-startos.hook.chroot << EOF
 
 set -e
 
-if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
+if [ "${NVIDIA}" = "1" ]; then
     # install a specific NVIDIA driver version
 
     # ---------------- configuration ----------------
-    NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.119.02}"
+    NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.126.09}"
 
     BASE_URL="https://download.nvidia.com/XFree86/Linux-${QEMU_ARCH}"
@@ -259,12 +263,15 @@ if [ "${NVIDIA}" = "1" ]; then
 
     echo "[nvidia-hook] Running NVIDIA installer for kernel \${KVER}" >&2
 
-    sh "\${RUN_PATH}" \
+    if ! sh "\${RUN_PATH}" \
         --silent \
         --kernel-name="\${KVER}" \
         --no-x-check \
         --no-nouveau-check \
-        --no-runlevel-check
+        --no-runlevel-check; then
+        cat /var/log/nvidia-installer.log
+        exit 1
+    fi
 
     # Rebuild module metadata
     echo "[nvidia-hook] Running depmod for \${KVER}" >&2
@@ -5,7 +5,7 @@ if [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$dprefix" ] || [ -z "$sport" ] || [ -
     exit 1
 fi
 
-NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport" | sha256sum | head -c 15)"
+NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport ${src_subnet:-any}" | sha256sum | head -c 15)"
 
 for kind in INPUT FORWARD ACCEPT; do
     if ! iptables -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
@@ -13,7 +13,7 @@ for kind in INPUT FORWARD ACCEPT; do
         iptables -A $kind -j "${NAME}_${kind}"
     fi
 done
-for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
+for kind in PREROUTING OUTPUT POSTROUTING; do
     if ! iptables -t nat -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
         iptables -t nat -N "${NAME}_${kind}" 2> /dev/null
         iptables -t nat -A $kind -j "${NAME}_${kind}"
@@ -26,7 +26,7 @@ trap 'err=1' ERR
 for kind in INPUT FORWARD ACCEPT; do
     iptables -F "${NAME}_${kind}" 2> /dev/null
 done
-for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
+for kind in PREROUTING OUTPUT POSTROUTING; do
     iptables -t nat -F "${NAME}_${kind}" 2> /dev/null
 done
 if [ "$UNDO" = 1 ]; then
@@ -36,20 +36,37 @@ if [ "$UNDO" = 1 ]; then
 fi
 
 # DNAT: rewrite destination for incoming packets (external traffic)
-iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
-iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+# When src_subnet is set, only forward traffic from that subnet (private forwards)
+if [ -n "$src_subnet" ]; then
+    iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+    iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+    # Also allow containers on the bridge subnet to reach this forward
+    if [ -n "$bridge_subnet" ]; then
+        iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+        iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+    fi
+else
+    iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+    iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+fi
 
 # DNAT: rewrite destination for locally-originated packets (hairpin from host itself)
 iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
 iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
 
-# MASQUERADE: rewrite source for all forwarded traffic to the destination
-# This ensures responses are routed back through the host regardless of source IP
-iptables -t nat -A ${NAME}_POSTROUTING -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
-iptables -t nat -A ${NAME}_POSTROUTING -d "$dip" -p udp --dport "$dport" -j MASQUERADE
-
 # Allow new connections to be forwarded to the destination
 iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
 iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
 
+# NAT hairpin: masquerade traffic from the bridge subnet or host to the DNAT
+# target, so replies route back through the host for proper NAT reversal.
+# Container-to-container hairpin (source is on the bridge subnet)
+if [ -n "$bridge_subnet" ]; then
+    iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
+    iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
+fi
+# Host-to-container hairpin (host connects to its own gateway IP, source is sip)
+iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
+iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
 
 exit $err
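The chain-name derivation now hashes the source subnet as well, so a public forward and a private forward on the same port get distinct iptables chains. A self-contained sketch of just that derivation (all addresses below are made-up example values):

```shell
# Made-up forward parameters; only the NAME derivation is exercised here,
# no iptables rules are touched.
sip="192.168.1.10"; sport="443"
dip="10.0.0.2"; dprefix="24"; dport="8443"
src_subnet=""   # empty => public forward; hashes as "any" via ${src_subnet:-any}
NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport ${src_subnet:-any}" | sha256sum | head -c 15)"
echo "$NAME"   # "F" followed by the first 15 hex chars of the digest
```

Because the subnet participates in the hash, re-running the script with a different `src_subnet` manages a separate, independently flushable set of chains.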
@@ -62,7 +62,7 @@ fi
 chroot /media/startos/next bash -e << "EOF"
 
 if [ -f /boot/grub/grub.cfg ]; then
-    grub-install /dev/$(eval $(lsblk -o MOUNTPOINT,PKNAME -P | grep 'MOUNTPOINT="/media/startos/root"') && echo $PKNAME)
+    grub-install --no-nvram /dev/$(eval $(lsblk -o MOUNTPOINT,PKNAME -P | grep 'MOUNTPOINT="/media/startos/root"') && echo $PKNAME)
     update-grub
 fi
build/manage-release.sh (new executable file, 332 lines):

#!/bin/bash

set -e

REPO="Start9Labs/start-os"
REGISTRY="https://alpha-registry-x.start9.com"
S3_BUCKET="s3://startos-images"
S3_CDN="https://startos-images.nyc3.cdn.digitaloceanspaces.com"
START9_GPG_KEY="2D63C217"

ARCHES="aarch64 aarch64-nonfree aarch64-nvidia riscv64 riscv64-nonfree x86_64 x86_64-nonfree x86_64-nvidia"
CLI_ARCHES="aarch64 riscv64 x86_64"

require_version() {
  if [ -z "$VERSION" ]; then
    >&2 echo '$VERSION required'
    exit 2
  fi
}

release_dir() {
  echo "$HOME/Downloads/v$VERSION"
}

ensure_release_dir() {
  local dir
  dir=$(release_dir)
  if [ "$CLEAN" = "1" ]; then
    rm -rf "$dir"
  fi
  mkdir -p "$dir"
  cd "$dir"
}

enter_release_dir() {
  local dir
  dir=$(release_dir)
  if [ ! -d "$dir" ]; then
    >&2 echo "Release directory $dir does not exist. Run 'download' or 'pull' first."
    exit 1
  fi
  cd "$dir"
}

cli_target_for() {
  local arch=$1 os=$2
  local pair="${arch}-${os}"
  if [ "$pair" = "riscv64-linux" ]; then
    echo "riscv64gc-unknown-linux-musl"
  elif [ "$pair" = "riscv64-macos" ]; then
    return 1
  elif [ "$os" = "linux" ]; then
    echo "${arch}-unknown-linux-musl"
  elif [ "$os" = "macos" ]; then
    echo "${arch}-apple-darwin"
  fi
}

release_files() {
  for file in *.iso *.squashfs *.deb; do
    [ -f "$file" ] && echo "$file"
  done
  for file in start-cli_*; do
    [[ "$file" == *.asc ]] && continue
    [ -f "$file" ] && echo "$file"
  done
}

resolve_gh_user() {
  GH_USER=${GH_USER:-$(gh api user -q .login 2>/dev/null || true)}
  GH_GPG_KEY=$(git config user.signingkey 2>/dev/null || true)
}

# --- Subcommands ---

cmd_download() {
  require_version
  ensure_release_dir

  if [ -n "$RUN_ID" ]; then
    for arch in $ARCHES; do
      while ! gh run download -R $REPO "$RUN_ID" -n "$arch.squashfs" -D "$(pwd)"; do sleep 1; done
    done
    for arch in $ARCHES; do
      while ! gh run download -R $REPO "$RUN_ID" -n "$arch.iso" -D "$(pwd)"; do sleep 1; done
    done
  fi

  if [ -n "$ST_RUN_ID" ]; then
    for arch in $CLI_ARCHES; do
      while ! gh run download -R $REPO "$ST_RUN_ID" -n "start-tunnel_$arch.deb" -D "$(pwd)"; do sleep 1; done
    done
  fi

  if [ -n "$CLI_RUN_ID" ]; then
    for arch in $CLI_ARCHES; do
      for os in linux macos; do
        local target
        target=$(cli_target_for "$arch" "$os") || continue
        while ! gh run download -R $REPO "$CLI_RUN_ID" -n "start-cli_$target" -D "$(pwd)"; do sleep 1; done
        mv start-cli "start-cli_${arch}-${os}"
      done
    done
  fi
}

cmd_pull() {
  require_version
  ensure_release_dir

  echo "Downloading release assets from tag v$VERSION..."

  # Download debs and CLI binaries from the GH release
  for file in $(gh release view -R $REPO "v$VERSION" --json assets -q '.assets[].name' | grep -E '\.(deb)$|^start-cli_'); do
    gh release download -R $REPO "v$VERSION" -p "$file" -D "$(pwd)" --clobber
  done

  # Download ISOs and squashfs from S3 CDN
  for arch in $ARCHES; do
    for ext in squashfs iso; do
      # Get the actual filename from the GH release asset list or body
      local filename
      filename=$(gh release view -R $REPO "v$VERSION" --json assets -q ".assets[].name" | grep "_${arch}\\.${ext}$" || true)
      if [ -z "$filename" ]; then
        filename=$(gh release view -R $REPO "v$VERSION" --json body -q .body | grep -oP "[^ ]*_${arch}\\.${ext}" | head -1 || true)
      fi
      if [ -n "$filename" ]; then
        echo "Downloading $filename from S3..."
        curl -fSL -o "$filename" "$S3_CDN/v$VERSION/$filename"
      fi
    done
  done
}

cmd_register() {
  require_version
  enter_release_dir
  start-cli --registry=$REGISTRY registry os version add "$VERSION" "v$VERSION" '' ">=0.3.5 <=$VERSION"
}

cmd_upload() {
  require_version
  enter_release_dir

  for file in $(release_files); do
    gh release upload -R $REPO "v$VERSION" "$file"
  done
  for file in *.iso *.squashfs; do
    s3cmd put -P "$file" "$S3_BUCKET/v$VERSION/$file"
  done
}

cmd_index() {
  require_version
  enter_release_dir

  for arch in $ARCHES; do
    for file in *_"$arch".squashfs *_"$arch".iso; do
      start-cli --registry=$REGISTRY registry os asset add --platform="$arch" --version="$VERSION" "$file" "$S3_CDN/v$VERSION/$file"
    done
  done
}

cmd_sign() {
  require_version
  enter_release_dir
  resolve_gh_user

  for file in $(release_files); do
    gpg -u $START9_GPG_KEY --detach-sign --armor -o "${file}.start9.asc" "$file"
    if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
      gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "${file}.${GH_USER}.asc" "$file"
    fi
  done

  gpg --export -a $START9_GPG_KEY > start9.key.asc
  if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
    gpg --export -a "$GH_GPG_KEY" > "${GH_USER}.key.asc"
  else
    >&2 echo 'Warning: could not determine GitHub user or GPG signing key, skipping personal signature'
  fi
  tar -czvf signatures.tar.gz *.asc

  gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
}

cmd_cosign() {
  require_version
  enter_release_dir
  resolve_gh_user

  if [ -z "$GH_USER" ] || [ -z "$GH_GPG_KEY" ]; then
    >&2 echo 'Error: could not determine GitHub user or GPG signing key'
    >&2 echo "Set GH_USER and/or configure git user.signingkey"
    exit 1
  fi

  echo "Downloading existing signatures..."
  gh release download -R $REPO "v$VERSION" -p "signatures.tar.gz" -D "$(pwd)" --clobber
  tar -xzf signatures.tar.gz

  echo "Adding personal signatures as $GH_USER..."
  for file in $(release_files); do
    gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "${file}.${GH_USER}.asc" "$file"
  done

  gpg --export -a "$GH_GPG_KEY" > "${GH_USER}.key.asc"

  echo "Re-packing signatures..."
  tar -czvf signatures.tar.gz *.asc

  gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
  echo "Done. Personal signatures for $GH_USER added to v$VERSION."
}

cmd_notes() {
  require_version
  enter_release_dir

  cat << EOF
# ISO Downloads

- [x86_64/AMD64]($S3_CDN/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64 + NVIDIA]($S3_CDN/v$VERSION/$(ls *_x86_64-nvidia.iso))
- [x86_64/AMD64-slim (FOSS-only)]($S3_CDN/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64]($S3_CDN/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64 + NVIDIA]($S3_CDN/v$VERSION/$(ls *_aarch64-nvidia.iso))
- [aarch64/ARM64-slim (FOSS-Only)]($S3_CDN/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)]($S3_CDN/v$VERSION/$(ls *_riscv64-nonfree.iso))
- [RISCV64 (RVA23)-slim (FOSS-only)]($S3_CDN/v$VERSION/$(ls *_riscv64.iso) "Without proprietary software or drivers")

EOF
  cat << 'EOF'
# StartOS Checksums

## SHA-256
```
EOF
  sha256sum *.iso *.squashfs
  cat << 'EOF'
```

## BLAKE-3
```
EOF
  b3sum *.iso *.squashfs
  cat << 'EOF'
```

# Start-Tunnel Checksums

## SHA-256
```
EOF
  sha256sum start-tunnel*.deb
  cat << 'EOF'
```

## BLAKE-3
```
EOF
  b3sum start-tunnel*.deb
  cat << 'EOF'
```

# start-cli Checksums

## SHA-256
```
EOF
  release_files | grep '^start-cli_' | xargs sha256sum
  cat << 'EOF'
```

## BLAKE-3
```
EOF
  release_files | grep '^start-cli_' | xargs b3sum
  cat << 'EOF'
```
EOF
}

cmd_full_release() {
  cmd_download
  cmd_register
  cmd_upload
  cmd_index
  cmd_sign
  cmd_notes
}

usage() {
  cat << 'EOF'
Usage: manage-release.sh <subcommand>

Subcommands:
  download      Download artifacts from GitHub Actions runs
                Requires: RUN_ID, ST_RUN_ID, CLI_RUN_ID (any combination)
  pull          Download an existing release from the GH tag and S3
  register      Register the version in the Start9 registry
  upload        Upload artifacts to GitHub Releases and S3
  index         Add assets to the registry index
  sign          Sign all artifacts with Start9 org key (+ personal key if available)
                and upload signatures.tar.gz
  cosign        Add personal GPG signature to an existing release's signatures
                (requires 'pull' first so you can verify assets before signing)
  notes         Print release notes with download links and checksums
  full-release  Run: download → register → upload → index → sign → notes

Environment variables:
  VERSION (required)  Release version
  RUN_ID              GitHub Actions run ID for OS images (download subcommand)
  ST_RUN_ID           GitHub Actions run ID for start-tunnel (download subcommand)
|
||||||
|
CLI_RUN_ID GitHub Actions run ID for start-cli (download subcommand)
|
||||||
|
GH_USER Override GitHub username (default: autodetected via gh cli)
|
||||||
|
CLEAN Set to 1 to wipe and recreate the release directory
|
||||||
|
EOF
|
||||||
|
}
|
||||||
|
|
||||||
|
case "${1:-}" in
|
||||||
|
download) cmd_download ;;
|
||||||
|
pull) cmd_pull ;;
|
||||||
|
register) cmd_register ;;
|
||||||
|
upload) cmd_upload ;;
|
||||||
|
index) cmd_index ;;
|
||||||
|
sign) cmd_sign ;;
|
||||||
|
cosign) cmd_cosign ;;
|
||||||
|
notes) cmd_notes ;;
|
||||||
|
full-release) cmd_full_release ;;
|
||||||
|
*) usage; exit 1 ;;
|
||||||
|
esac
|
||||||
@@ -1,142 +0,0 @@
#!/bin/bash

if [ -z "$VERSION" ]; then
  >&2 echo '$VERSION required'
  exit 2
fi

set -e

if [ "$SKIP_DL" != "1" ]; then
  if [ "$SKIP_CLEAN" != "1" ]; then
    rm -rf ~/Downloads/v$VERSION
    mkdir ~/Downloads/v$VERSION
    cd ~/Downloads/v$VERSION
  fi

  if [ -n "$RUN_ID" ]; then
    for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
      while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.squashfs -D $(pwd); do sleep 1; done
    done
    for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
      while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.iso -D $(pwd); do sleep 1; done
    done
  fi

  if [ -n "$ST_RUN_ID" ]; then
    for arch in aarch64 riscv64 x86_64; do
      while ! gh run download -R Start9Labs/start-os $ST_RUN_ID -n start-tunnel_$arch.deb -D $(pwd); do sleep 1; done
    done
  fi

  if [ -n "$CLI_RUN_ID" ]; then
    for arch in aarch64 riscv64 x86_64; do
      for os in linux macos; do
        pair=${arch}-${os}
        if [ "${pair}" = "riscv64-linux" ]; then
          target=riscv64gc-unknown-linux-musl
        elif [ "${pair}" = "riscv64-macos" ]; then
          continue
        elif [ "${os}" = "linux" ]; then
          target="${arch}-unknown-linux-musl"
        elif [ "${os}" = "macos" ]; then
          target="${arch}-apple-darwin"
        fi
        while ! gh run download -R Start9Labs/start-os $CLI_RUN_ID -n start-cli_$target -D $(pwd); do sleep 1; done
        mv start-cli "start-cli_${pair}"
      done
    done
  fi
else
  cd ~/Downloads/v$VERSION
fi

start-cli --registry=https://alpha-registry-x.start9.com registry os version add $VERSION "v$VERSION" '' ">=0.3.5 <=$VERSION"

if [ "$SKIP_UL" = "2" ]; then
  exit 2
elif [ "$SKIP_UL" != "1" ]; then
  for file in *.deb start-cli_*; do
    gh release upload -R Start9Labs/start-os v$VERSION $file
  done
  for file in *.iso *.squashfs; do
    s3cmd put -P $file s3://startos-images/v$VERSION/$file
  done
fi

if [ "$SKIP_INDEX" != "1" ]; then
  for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
    for file in *_$arch.squashfs *_$arch.iso; do
      start-cli --registry=https://alpha-registry-x.start9.com registry os asset add --platform=$arch --version=$VERSION $file https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$file
    done
  done
fi

for file in *.iso *.squashfs *.deb start-cli_*; do
  gpg -u 7CFFDA41CA66056A --detach-sign --armor -o "${file}.asc" "$file"
done

gpg --export -a 7CFFDA41CA66056A > dr-bonez.key.asc
tar -czvf signatures.tar.gz *.asc

gh release upload -R Start9Labs/start-os v$VERSION signatures.tar.gz

cat << EOF
# ISO Downloads

- [x86_64/AMD64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64-slim (FOSS-only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64-slim (FOSS-Only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_riscv64.iso))

EOF
cat << 'EOF'
# StartOS Checksums

## SHA-256
```
EOF
sha256sum *.iso *.squashfs
cat << 'EOF'
```

## BLAKE-3
```
EOF
b3sum *.iso *.squashfs
cat << 'EOF'
```

# Start-Tunnel Checksums

## SHA-256
```
EOF
sha256sum start-tunnel*.deb
cat << 'EOF'
```

## BLAKE-3
```
EOF
b3sum start-tunnel*.deb
cat << 'EOF'
```

# start-cli Checksums

## SHA-256
```
EOF
sha256sum start-cli_*
cat << 'EOF'
```

## BLAKE-3
```
EOF
b3sum start-cli_*
cat << 'EOF'
```
EOF
container-runtime/CLAUDE.md (new file)
@@ -0,0 +1,32 @@
# Container Runtime — Node.js Service Manager

Node.js runtime that manages service containers via JSON-RPC. See `RPCSpec.md` in this directory for the full RPC protocol.

## Architecture

```
LXC Container (uniform base for all services)
└── systemd
    └── container-runtime.service
        └── Loads /usr/lib/startos/package/index.js (from s9pk javascript.squashfs)
            └── Package JS launches subcontainers (from images in s9pk)
```

The container runtime communicates with the host via JSON-RPC over Unix socket. Package JavaScript must export functions conforming to the `ABI` type defined in `sdk/base/lib/types.ts`.

## `/media/startos/` Directory (mounted by host into container)

| Path                 | Description                                           |
| -------------------- | ----------------------------------------------------- |
| `volumes/<name>/`    | Package data volumes (id-mapped, persistent)          |
| `assets/`            | Read-only assets from s9pk `assets.squashfs`          |
| `images/<name>/`     | Container images (squashfs, used for subcontainers)   |
| `images/<name>.env`  | Environment variables for image                       |
| `images/<name>.json` | Image metadata                                        |
| `backup/`            | Backup mount point (mounted during backup operations) |
| `rpc/service.sock`   | RPC socket (container runtime listens here)           |
| `rpc/host.sock`      | Host RPC socket (for effects callbacks to host)       |

## S9PK Structure

See `../core/s9pk-structure.md` for the S9PK package format.
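As a rough illustration of that JSON-RPC transport: an `execute` request sent to `rpc/service.sock` would carry the fields the runtime's `runType` schema expects (`method`, `params.id`, `params.procedure`, `params.input`, `params.timeout`). The concrete values and the one-JSON-document-per-message framing below are assumptions for illustration, not taken from the spec.

```typescript
// Hypothetical sketch of an "execute" request for the container runtime.
// Field names mirror the runType schema in RpcListener; the procedure path,
// ids, and newline framing are illustrative assumptions.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "execute",
  params: {
    id: "event-1", // correlation id for this procedure run (made up)
    procedure: "/backup/create", // routed per the procedure table in RPCSpec.md
    input: null,
    timeout: null,
  },
}

// A client would write one serialized JSON document per message to the socket.
const wire = JSON.stringify(request) + "\n"
console.log(JSON.parse(wire).method) // → "execute"
```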
@@ -140,7 +140,7 @@ Evaluate a script in the runtime context. Used for debugging.
 The `execute` and `sandbox` methods route to procedures based on the `procedure` path:

 | Procedure                  | Description                  |
-|-----------|-------------|
+| -------------------------- | ---------------------------- |
 | `/backup/create`           | Create a backup              |
 | `/actions/{name}/getInput` | Get input spec for an action |
 | `/actions/{name}/run`      | Run an action with input     |
container-runtime/__mocks__/mime.js (new file)
@@ -0,0 +1,30 @@
// Mock for ESM-only mime package — Jest's module loader doesn't support require(esm)
const types = {
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".jpeg": "image/jpeg",
  ".gif": "image/gif",
  ".svg": "image/svg+xml",
  ".webp": "image/webp",
  ".ico": "image/x-icon",
  ".json": "application/json",
  ".js": "application/javascript",
  ".html": "text/html",
  ".css": "text/css",
  ".txt": "text/plain",
  ".md": "text/markdown",
}

module.exports = {
  default: {
    getType(path) {
      const ext = "." + path.split(".").pop()
      return types[ext] || null
    },
    getExtension(type) {
      const entry = Object.entries(types).find(([, v]) => v === type)
      return entry ? entry[0].slice(1) : null
    },
  },
  __esModule: true,
}
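The mock's two lookups are simple enough to exercise directly. Here is a reduced, self-contained copy of the same logic (trimmed to two table entries for brevity) showing the extension ↔ MIME-type round trip:

```typescript
// Reduced copy of the mock's lookup logic, for illustration only
// (two entries instead of the full table above).
const types: Record<string, string> = {
  ".png": "image/png",
  ".json": "application/json",
}

// Path extension -> MIME type, or null when unknown.
const getType = (path: string): string | null =>
  types["." + path.split(".").pop()] ?? null

// MIME type -> bare extension (no dot), or null when unknown.
const getExtension = (type: string): string | null => {
  const entry = Object.entries(types).find(([, v]) => v === type)
  return entry ? entry[0].slice(1) : null
}

console.log(getType("icon.png")) // → "image/png"
console.log(getExtension("application/json")) // → "json"
console.log(getType("archive.tar.gz")) // → null (".gz" not in the table)
```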
@@ -5,4 +5,7 @@ module.exports = {
   testEnvironment: "node",
   rootDir: "./src/",
   modulePathIgnorePatterns: ["./dist/"],
+  moduleNameMapper: {
+    "^mime$": "<rootDir>/../__mocks__/mime.js",
+  },
 }
container-runtime/package-lock.json (generated)
@@ -19,7 +19,6 @@
     "lodash.merge": "^4.6.2",
     "mime": "^4.0.7",
     "node-fetch": "^3.1.0",
-    "ts-matches": "^6.3.2",
     "tslib": "^2.5.3",
     "typescript": "^5.1.3",
     "yaml": "^2.3.1"
@@ -38,7 +37,7 @@
     },
     "../sdk/dist": {
       "name": "@start9labs/start-sdk",
-      "version": "0.4.0-beta.48",
+      "version": "0.4.0-beta.55",
       "license": "MIT",
       "dependencies": {
         "@iarna/toml": "^3.0.0",
@@ -49,8 +48,9 @@
         "ini": "^5.0.0",
         "isomorphic-fetch": "^3.0.0",
         "mime": "^4.0.7",
-        "ts-matches": "^6.3.2",
-        "yaml": "^2.7.1"
+        "yaml": "^2.7.1",
+        "zod": "^4.3.6",
+        "zod-deep-partial": "^1.2.0"
       },
       "devDependencies": {
         "@types/jest": "^29.4.0",
@@ -6494,12 +6494,6 @@
       }
     }
   },
-  "node_modules/ts-matches": {
-    "version": "6.3.2",
-    "resolved": "https://registry.npmjs.org/ts-matches/-/ts-matches-6.3.2.tgz",
-    "integrity": "sha512-UhSgJymF8cLd4y0vV29qlKVCkQpUtekAaujXbQVc729FezS8HwqzepqvtjzQ3HboatIqN/Idor85O2RMwT7lIQ==",
-    "license": "MIT"
-  },
   "node_modules/tslib": {
     "version": "2.8.1",
     "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",

@@ -28,7 +28,6 @@
     "lodash.merge": "^4.6.2",
     "mime": "^4.0.7",
     "node-fetch": "^3.1.0",
-    "ts-matches": "^6.3.2",
     "tslib": "^2.5.3",
     "typescript": "^5.1.3",
     "yaml": "^2.3.1"
@@ -3,33 +3,39 @@ import {
   types as T,
   utils,
   VersionRange,
+  z,
 } from "@start9labs/start-sdk"
 import * as net from "net"
-import { object, string, number, literals, some, unknown } from "ts-matches"
 import { Effects } from "../Models/Effects"

 import { CallbackHolder } from "../Models/CallbackHolder"
 import { asError } from "@start9labs/start-sdk/base/lib/util"
-const matchRpcError = object({
-  error: object({
-    code: number,
-    message: string,
-    data: some(
-      string,
-      object({
-        details: string,
-        debug: string.nullable().optional(),
-      }),
-    )
+const matchRpcError = z.object({
+  error: z.object({
+    code: z.number(),
+    message: z.string(),
+    data: z
+      .union([
+        z.string(),
+        z.object({
+          details: z.string(),
+          debug: z.string().nullable().optional(),
+        }),
+      ])
       .nullable()
       .optional(),
   }),
 })
-const testRpcError = matchRpcError.test
-const testRpcResult = object({
-  result: unknown,
-}).test
-type RpcError = typeof matchRpcError._TYPE
+function testRpcError(v: unknown): v is RpcError {
+  return matchRpcError.safeParse(v).success
+}
+const matchRpcResult = z.object({
+  result: z.unknown(),
+})
+function testRpcResult(v: unknown): v is z.infer<typeof matchRpcResult> {
+  return matchRpcResult.safeParse(v).success
+}
+type RpcError = z.infer<typeof matchRpcError>

 const SOCKET_PATH = "/media/startos/rpc/host.sock"
 let hostSystemId = 0
@@ -71,7 +77,7 @@ const rpcRoundFor =
         "Error in host RPC:",
         utils.asError({ method, params, error: res.error }),
       )
-      if (string.test(res.error.data)) {
+      if (typeof res.error.data === "string") {
         message += ": " + res.error.data
         console.error(`Details: ${res.error.data}`)
       } else {
@@ -253,6 +259,14 @@ export function makeEffects(context: EffectContext): Effects {
         callback: context.callbacks?.addCallback(options.callback) || null,
       }) as ReturnType<T.Effects["getSystemSmtp"]>
     },
+    getOutboundGateway(
+      ...[options]: Parameters<T.Effects["getOutboundGateway"]>
+    ) {
+      return rpcRound("get-outbound-gateway", {
+        ...options,
+        callback: context.callbacks?.addCallback(options.callback) || null,
+      }) as ReturnType<T.Effects["getOutboundGateway"]>
+    },
     listServiceInterfaces(
       ...[options]: Parameters<T.Effects["listServiceInterfaces"]>
     ) {
@@ -316,6 +330,31 @@ export function makeEffects(context: EffectContext): Effects {
        T.Effects["setDataVersion"]
      >
    },
+    plugin: {
+      url: {
+        register(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["register"]>
+        ) {
+          return rpcRound("plugin.url.register", options) as ReturnType<
+            T.Effects["plugin"]["url"]["register"]
+          >
+        },
+        exportUrl(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["exportUrl"]>
+        ) {
+          return rpcRound("plugin.url.export-url", options) as ReturnType<
+            T.Effects["plugin"]["url"]["exportUrl"]
+          >
+        },
+        clearUrls(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["clearUrls"]>
+        ) {
+          return rpcRound("plugin.url.clear-urls", options) as ReturnType<
+            T.Effects["plugin"]["url"]["clearUrls"]
+          >
+        },
+      },
+    },
   }
   if (context.callbacks?.onLeaveContext)
     self.onLeaveContext(() => {
@@ -1,25 +1,13 @@
 // @ts-check

 import * as net from "net"
-import {
-  object,
-  some,
-  string,
-  literal,
-  array,
-  number,
-  matches,
-  any,
-  shape,
-  anyOf,
-  literals,
-} from "ts-matches"
 import {
   ExtendedVersion,
   types as T,
   utils,
   VersionRange,
+  z,
 } from "@start9labs/start-sdk"
 import * as fs from "fs"

@@ -29,89 +17,92 @@ import { jsonPath, unNestPath } from "../Models/JsonPath"
 import { System } from "../Interfaces/System"
 import { makeEffects } from "./EffectCreator"
 type MaybePromise<T> = T | Promise<T>
-export const matchRpcResult = anyOf(
-  object({ result: any }),
-  object({
-    error: object({
-      code: number,
-      message: string,
-      data: object({
-        details: string.optional(),
-        debug: any.optional(),
-      })
+export const matchRpcResult = z.union([
+  z.object({ result: z.any() }),
+  z.object({
+    error: z.object({
+      code: z.number(),
+      message: z.string(),
+      data: z
+        .object({
+          details: z.string().optional(),
+          debug: z.any().optional(),
+        })
         .nullable()
         .optional(),
     }),
   }),
-)
+])

-export type RpcResult = typeof matchRpcResult._TYPE
+export type RpcResult = z.infer<typeof matchRpcResult>
 type SocketResponse = ({ jsonrpc: "2.0"; id: IdType } & RpcResult) | null

 const SOCKET_PARENT = "/media/startos/rpc"
 const SOCKET_PATH = "/media/startos/rpc/service.sock"
 const jsonrpc = "2.0" as const

-const isResult = object({ result: any }).test
+const isResultSchema = z.object({ result: z.any() })
+const isResult = (v: unknown): v is z.infer<typeof isResultSchema> =>
+  isResultSchema.safeParse(v).success

-const idType = some(string, number, literal(null))
+const idType = z.union([z.string(), z.number(), z.literal(null)])
 type IdType = null | string | number | undefined
-const runType = object({
+const runType = z.object({
   id: idType.optional(),
-  method: literal("execute"),
-  params: object({
-    id: string,
-    procedure: string,
-    input: any,
-    timeout: number.nullable().optional(),
+  method: z.literal("execute"),
+  params: z.object({
+    id: z.string(),
+    procedure: z.string(),
+    input: z.any(),
+    timeout: z.number().nullable().optional(),
   }),
 })
-const sandboxRunType = object({
+const sandboxRunType = z.object({
   id: idType.optional(),
-  method: literal("sandbox"),
-  params: object({
-    id: string,
-    procedure: string,
-    input: any,
-    timeout: number.nullable().optional(),
+  method: z.literal("sandbox"),
+  params: z.object({
+    id: z.string(),
+    procedure: z.string(),
+    input: z.any(),
+    timeout: z.number().nullable().optional(),
   }),
 })
-const callbackType = object({
-  method: literal("callback"),
-  params: object({
-    id: number,
-    args: array,
+const callbackType = z.object({
+  method: z.literal("callback"),
+  params: z.object({
+    id: z.number(),
+    args: z.array(z.unknown()),
   }),
 })
-const initType = object({
+const initType = z.object({
   id: idType.optional(),
-  method: literal("init"),
-  params: object({
-    id: string,
-    kind: literals("install", "update", "restore").nullable(),
+  method: z.literal("init"),
+  params: z.object({
+    id: z.string(),
+    kind: z.enum(["install", "update", "restore"]).nullable(),
   }),
 })
-const startType = object({
+const startType = z.object({
   id: idType.optional(),
-  method: literal("start"),
+  method: z.literal("start"),
 })
-const stopType = object({
+const stopType = z.object({
   id: idType.optional(),
-  method: literal("stop"),
+  method: z.literal("stop"),
 })
-const exitType = object({
+const exitType = z.object({
   id: idType.optional(),
-  method: literal("exit"),
-  params: object({
-    id: string,
-    target: string.nullable(),
+  method: z.literal("exit"),
+  params: z.object({
+    id: z.string(),
+    target: z.string().nullable(),
   }),
 })
-const evalType = object({
+const evalType = z.object({
   id: idType.optional(),
-  method: literal("eval"),
-  params: object({
-    script: string,
+  method: z.literal("eval"),
+  params: z.object({
+    script: z.string(),
   }),
 })

@@ -144,7 +135,9 @@ const handleRpc = (id: IdType, result: Promise<RpcResult>) =>
     },
   }))

-const hasId = object({ id: idType }).test
+const hasIdSchema = z.object({ id: idType })
+const hasId = (v: unknown): v is z.infer<typeof hasIdSchema> =>
+  hasIdSchema.safeParse(v).success
 export class RpcListener {
   shouldExit = false
   unixSocketServer = net.createServer(async (server) => {})
@@ -246,40 +239,52 @@ export class RpcListener {
   }

   private dealWithInput(input: unknown): MaybePromise<SocketResponse> {
-    return matches(input)
-      .when(runType, async ({ id, params }) => {
-        const system = this.system
-        const procedure = jsonPath.unsafeCast(params.procedure)
-        const { input, timeout, id: eventId } = params
-        const result = this.getResult(
-          procedure,
-          system,
-          eventId,
-          timeout,
-          input,
-        )
+    const parsed = z.object({ method: z.string() }).safeParse(input)
+    if (!parsed.success) {
+      console.warn(
+        `Couldn't parse the following input ${JSON.stringify(input)}`,
+      )
+      return {
+        jsonrpc,
+        id: (input as any)?.id,
+        error: {
+          code: -32602,
+          message: "invalid params",
+          data: {
+            details: JSON.stringify(input),
+          },
+        },
+      }
+    }

-        return handleRpc(id, result)
-      })
-      .when(sandboxRunType, async ({ id, params }) => {
-        const system = this.system
-        const procedure = jsonPath.unsafeCast(params.procedure)
-        const { input, timeout, id: eventId } = params
-        const result = this.getResult(
-          procedure,
-          system,
-          eventId,
-          timeout,
-          input,
-        )
+    switch (parsed.data.method) {
+      case "execute": {
+        const { id, params } = runType.parse(input)
+        const system = this.system
+        const procedure = jsonPath.parse(params.procedure)
+        const { input: inp, timeout, id: eventId } = params
+        const result = this.getResult(procedure, system, eventId, timeout, inp)

-        return handleRpc(id, result)
-      })
-      .when(callbackType, async ({ params: { id, args } }) => {
+        return handleRpc(id, result)
+      }
+      case "sandbox": {
+        const { id, params } = sandboxRunType.parse(input)
+        const system = this.system
+        const procedure = jsonPath.parse(params.procedure)
+        const { input: inp, timeout, id: eventId } = params
+        const result = this.getResult(procedure, system, eventId, timeout, inp)
+
+        return handleRpc(id, result)
+      }
+      case "callback": {
+        const {
+          params: { id, args },
+        } = callbackType.parse(input)
         this.callCallback(id, args)
         return null
-      })
-      .when(startType, async ({ id }) => {
+      }
+      case "start": {
+        const { id } = startType.parse(input)
         const callbacks =
           this.callbacks?.getChild("main") || this.callbacks?.child("main")
         const effects = makeEffects({
@@ -290,8 +295,9 @@ export class RpcListener {
           id,
           this.system.start(effects).then((result) => ({ result })),
         )
-      })
-      .when(stopType, async ({ id }) => {
+      }
+      case "stop": {
+        const { id } = stopType.parse(input)
         return handleRpc(
           id,
           this.system.stop().then((result) => {
@@ -300,8 +306,9 @@ export class RpcListener {
             return { result }
           }),
         )
-      })
-      .when(exitType, async ({ id, params }) => {
+      }
+      case "exit": {
+        const { id, params } = exitType.parse(input)
         return handleRpc(
           id,
           (async () => {
@@ -323,8 +330,9 @@ export class RpcListener {
             }
           })().then((result) => ({ result })),
         )
-      })
-      .when(initType, async ({ id, params }) => {
+      }
+      case "init": {
+        const { id, params } = initType.parse(input)
         return handleRpc(
           id,
           (async () => {
@@ -349,8 +357,9 @@ export class RpcListener {
             }
           })().then((result) => ({ result })),
         )
-      })
-      .when(evalType, async ({ id, params }) => {
+      }
+      case "eval": {
+        const { id, params } = evalType.parse(input)
         return handleRpc(
           id,
           (async () => {
@@ -375,41 +384,28 @@ export class RpcListener {
             }
           })(),
         )
-      })
-      .when(
-        shape({ id: idType.optional(), method: string }),
-        ({ id, method }) => ({
-          jsonrpc,
-          id,
-          error: {
-            code: -32601,
-            message: `Method not found`,
-            data: {
-              details: method,
-            },
-          },
-        }),
-      )
-      .defaultToLazy(() => {
-        console.warn(
-          `Couldn't parse the following input ${JSON.stringify(input)}`,
-        )
-        return {
-          jsonrpc,
-          id: (input as any)?.id,
-          error: {
-            code: -32602,
-            message: "invalid params",
-            data: {
-              details: JSON.stringify(input),
-            },
-          },
-        }
-      })
+      }
+      default: {
+        const { id, method } = z
+          .object({ id: idType.optional(), method: z.string() })
+          .passthrough()
+          .parse(input)
+        return {
+          jsonrpc,
+          id,
+          error: {
+            code: -32601,
+            message: "Method not found",
+            data: {
+              details: method,
+            },
+          },
+        }
+      }
+    }
   }
   private getResult(
-    procedure: typeof jsonPath._TYPE,
+    procedure: z.infer<typeof jsonPath>,
     system: System,
     eventId: string,
     timeout: number | null | undefined,
@@ -437,6 +433,7 @@ export class RpcListener {
         return system.getActionInput(
           effects,
           procedures[2],
+          input?.prefill ?? null,
           timeout || null,
         )
       case procedures[1] === "actions" && procedures[3] === "run":
@@ -448,26 +445,18 @@ export class RpcListener {
         )
       }
     }
-    })().then(ensureResultTypeShape, (error) =>
-      matches(error)
-        .when(
-          object({
-            error: string,
-            code: number.defaultTo(0),
-          }),
-          (error) => ({
-            error: {
-              code: error.code,
-              message: error.error,
-            },
-          }),
-        )
-        .defaultToLazy(() => ({
-          error: {
-            code: 0,
-            message: String(error),
-          },
-        })),
-    )
+    })().then(ensureResultTypeShape, (error) => {
+      const errorSchema = z.object({
+        error: z.string(),
+        code: z.number().default(0),
+      })
+      const parsed = errorSchema.safeParse(error)
+      if (parsed.success) {
+        return {
+          error: { code: parsed.data.code, message: parsed.data.error },
+        }
+      }
+      return { error: { code: 0, message: String(error) } }
+    })
   }
 }
@@ -2,7 +2,7 @@ import * as fs from "fs/promises"
 import * as cp from "child_process"
 import { SubContainer, types as T } from "@start9labs/start-sdk"
 import { promisify } from "util"
-import { DockerProcedure, VolumeId } from "../../../Models/DockerProcedure"
+import { DockerProcedure } from "../../../Models/DockerProcedure"
 import { Volume } from "./matchVolume"
 import {
 CommandOptions,
@@ -28,7 +28,7 @@ export class DockerProcedureContainer extends Drop {
 effects: T.Effects,
 packageId: string,
 data: DockerProcedure,
-volumes: { [id: VolumeId]: Volume },
+volumes: { [id: string]: Volume },
 name: string,
 options: { subcontainer?: SubContainer<SDKManifest> } = {},
 ) {
@@ -47,7 +47,7 @@ export class DockerProcedureContainer extends Drop {
 effects: T.Effects,
 packageId: string,
 data: DockerProcedure,
-volumes: { [id: VolumeId]: Volume },
+volumes: { [id: string]: Volume },
 name: string,
 ) {
 const subcontainer = await SubContainerOwned.of(
@@ -64,7 +64,7 @@ export class DockerProcedureContainer extends Drop {
 ? `${subcontainer.rootfs}${mounts[mount]}`
 : `${subcontainer.rootfs}/${mounts[mount]}`
 await fs.mkdir(path, { recursive: true })
-const volumeMount = volumes[mount]
+const volumeMount: Volume = volumes[mount]
 if (volumeMount.type === "data") {
 await subcontainer.mount(
 Mounts.of().mountVolume({
@@ -82,18 +82,15 @@ export class DockerProcedureContainer extends Drop {
 }),
 )
 } else if (volumeMount.type === "certificate") {
+const hostInfo = await effects.getHostInfo({
+hostId: volumeMount["interface-id"],
+})
 const hostnames = [
 `${packageId}.embassy`,
 ...new Set(
-Object.values(
+Object.values(hostInfo?.bindings || {})
-(
+.flatMap((b) => b.addresses.available)
-await effects.getHostInfo({
+.map((h) => h.hostname),
-hostId: volumeMount["interface-id"],
-})
-)?.hostnameInfo || {},
-)
-.flatMap((h) => h)
-.flatMap((h) => (h.kind === "onion" ? [h.hostname.value] : [])),
 ).values(),
 ]
 const certChain = await effects.getSslCertificate({
@@ -15,26 +15,11 @@ import { System } from "../../../Interfaces/System"
 import { matchManifest, Manifest } from "./matchManifest"
 import * as childProcess from "node:child_process"
 import { DockerProcedureContainer } from "./DockerProcedureContainer"
+import { DockerProcedure } from "../../../Models/DockerProcedure"
 import { promisify } from "node:util"
 import * as U from "./oldEmbassyTypes"
 import { MainLoop } from "./MainLoop"
-import {
+import { z } from "@start9labs/start-sdk"
-matches,
-boolean,
-dictionary,
-literal,
-literals,
-object,
-string,
-unknown,
-any,
-tuple,
-number,
-anyOf,
-deferred,
-Parser,
-array,
-} from "ts-matches"
 import { AddSslOptions } from "@start9labs/start-sdk/base/lib/osBindings"
 import {
 BindOptionsByProtocol,
@@ -57,6 +42,15 @@ function todo(): never {
 throw new Error("Not implemented")
 }
 
+/**
+ * Local type for procedure values from the manifest.
+ * The manifest's zod schemas use ZodTypeAny casts that produce `unknown` in zod v4.
+ * This type restores the expected shape for type-safe property access.
+ */
+type Procedure =
+| (DockerProcedure & { type: "docker" })
+| { type: "script"; args: unknown[] | null }
+
 const MANIFEST_LOCATION = "/usr/lib/startos/package/embassyManifest.json"
 export const EMBASSY_JS_LOCATION = "/usr/lib/startos/package/embassy.js"
 
@@ -65,26 +59,24 @@ const configFile = FileHelper.json(
 base: new Volume("embassy"),
 subpath: "config.json",
 },
-matches.any,
+z.any(),
 )
 const dependsOnFile = FileHelper.json(
 {
 base: new Volume("embassy"),
 subpath: "dependsOn.json",
 },
-dictionary([string, array(string)]),
+z.record(z.string(), z.array(z.string())),
 )
 
-const matchResult = object({
+const matchResult = z.object({
-result: any,
+result: z.any(),
 })
-const matchError = object({
+const matchError = z.object({
-error: string,
+error: z.string(),
 })
-const matchErrorCode = object<{
+const matchErrorCode = z.object({
-"error-code": [number, string] | readonly [number, string]
+"error-code": z.tuple([z.number(), z.string()]),
-}>({
-"error-code": tuple(number, string),
 })
 
 const assertNever = (
@@ -96,29 +88,34 @@ const assertNever = (
 /**
 Should be changing the type for specific properties, and this is mostly a transformation for the old return types to the newer one.
 */
+function isMatchResult(a: unknown): a is z.infer<typeof matchResult> {
+return matchResult.safeParse(a).success
+}
+function isMatchError(a: unknown): a is z.infer<typeof matchError> {
+return matchError.safeParse(a).success
+}
+function isMatchErrorCode(a: unknown): a is z.infer<typeof matchErrorCode> {
+return matchErrorCode.safeParse(a).success
+}
 const fromReturnType = <A>(a: U.ResultType<A>): A => {
-if (matchResult.test(a)) {
+if (isMatchResult(a)) {
 return a.result
 }
-if (matchError.test(a)) {
+if (isMatchError(a)) {
 console.info({ passedErrorStack: new Error().stack, error: a.error })
 throw { error: a.error }
 }
-if (matchErrorCode.test(a)) {
+if (isMatchErrorCode(a)) {
 const [code, message] = a["error-code"]
 throw { error: message, code }
 }
-return assertNever(a)
+return assertNever(a as never)
 }
 
-const matchSetResult = object({
+const matchSetResult = z.object({
-"depends-on": dictionary([string, array(string)])
+"depends-on": z.record(z.string(), z.array(z.string())).nullable().optional(),
-.nullable()
+dependsOn: z.record(z.string(), z.array(z.string())).nullable().optional(),
-.optional(),
+signal: z.enum([
-dependsOn: dictionary([string, array(string)])
-.nullable()
-.optional(),
-signal: literals(
 "SIGTERM",
 "SIGHUP",
 "SIGINT",
@@ -151,7 +148,7 @@ const matchSetResult = object({
 "SIGPWR",
 "SIGSYS",
 "SIGINFO",
-),
+]),
 })
 
 type OldGetConfigRes = {
@@ -233,33 +230,29 @@ const asProperty = (x: PackagePropertiesV2): PropertiesReturn =>
 Object.fromEntries(
 Object.entries(x).map(([key, value]) => [key, asProperty_(value)]),
 )
-const [matchPackageProperties, setMatchPackageProperties] =
+const matchPackagePropertyObject: z.ZodType<PackagePropertyObject> = z.object({
-deferred<PackagePropertiesV2>()
+value: z.lazy(() => matchPackageProperties),
-const matchPackagePropertyObject: Parser<unknown, PackagePropertyObject> =
+type: z.literal("object"),
-object({
+description: z.string(),
-value: matchPackageProperties,
+})
-type: literal("object"),
-description: string,
-})
 
-const matchPackagePropertyString: Parser<unknown, PackagePropertyString> =
+const matchPackagePropertyString: z.ZodType<PackagePropertyString> = z.object({
-object({
+type: z.literal("string"),
-type: literal("string"),
+description: z.string().nullable().optional(),
-description: string.nullable().optional(),
+value: z.string(),
-value: string,
+copyable: z.boolean().nullable().optional(),
-copyable: boolean.nullable().optional(),
+qr: z.boolean().nullable().optional(),
-qr: boolean.nullable().optional(),
+masked: z.boolean().nullable().optional(),
-masked: boolean.nullable().optional(),
+})
-})
+const matchPackageProperties: z.ZodType<PackagePropertiesV2> = z.lazy(() =>
-setMatchPackageProperties(
+z.record(
-dictionary([
+z.string(),
-string,
+z.union([matchPackagePropertyObject, matchPackagePropertyString]),
-anyOf(matchPackagePropertyObject, matchPackagePropertyString),
+),
-]),
 )
 
-const matchProperties = object({
+const matchProperties = z.object({
-version: literal(2),
+version: z.literal(2),
 data: matchPackageProperties,
 })
 
@@ -303,7 +296,7 @@ export class SystemForEmbassy implements System {
 })
 const manifestData = await fs.readFile(manifestLocation, "utf-8")
 return new SystemForEmbassy(
-matchManifest.unsafeCast(JSON.parse(manifestData)),
+matchManifest.parse(JSON.parse(manifestData)),
 moduleCode,
 )
 }
@@ -389,7 +382,9 @@ export class SystemForEmbassy implements System {
 delete this.currentRunning
 if (currentRunning) {
 await currentRunning.clean({
-timeout: fromDuration(this.manifest.main["sigterm-timeout"] || "30s"),
+timeout: fromDuration(
+(this.manifest.main["sigterm-timeout"] as any) || "30s",
+),
 })
 }
 }
@@ -510,6 +505,7 @@ export class SystemForEmbassy implements System {
 async getActionInput(
 effects: Effects,
 actionId: string,
+_prefill: Record<string, unknown> | null,
 timeoutMs: number | null,
 ): Promise<T.ActionInput | null> {
 if (actionId === "config") {
@@ -622,7 +618,7 @@ export class SystemForEmbassy implements System {
 effects: Effects,
 timeoutMs: number | null,
 ): Promise<void> {
-const backup = this.manifest.backup.create
+const backup = this.manifest.backup.create as Procedure
 if (backup.type === "docker") {
 const commands = [backup.entrypoint, ...backup.args]
 const container = await DockerProcedureContainer.of(
@@ -655,7 +651,7 @@ export class SystemForEmbassy implements System {
 encoding: "utf-8",
 })
 .catch((_) => null)
-const restoreBackup = this.manifest.backup.restore
+const restoreBackup = this.manifest.backup.restore as Procedure
 if (restoreBackup.type === "docker") {
 const commands = [restoreBackup.entrypoint, ...restoreBackup.args]
 const container = await DockerProcedureContainer.of(
@@ -688,7 +684,7 @@ export class SystemForEmbassy implements System {
 effects: Effects,
 timeoutMs: number | null,
 ): Promise<OldGetConfigRes> {
-const config = this.manifest.config?.get
+const config = this.manifest.config?.get as Procedure | undefined
 if (!config) return { spec: {} }
 if (config.type === "docker") {
 const commands = [config.entrypoint, ...config.args]
@@ -730,7 +726,7 @@ export class SystemForEmbassy implements System {
 )
 await updateConfig(effects, this.manifest, spec, newConfig)
 await configFile.write(effects, newConfig)
-const setConfigValue = this.manifest.config?.set
+const setConfigValue = this.manifest.config?.set as Procedure | undefined
 if (!setConfigValue) return
 if (setConfigValue.type === "docker") {
 const commands = [
@@ -745,7 +741,7 @@ export class SystemForEmbassy implements System {
 this.manifest.volumes,
 `Set Config - ${commands.join(" ")}`,
 )
-const answer = matchSetResult.unsafeCast(
+const answer = matchSetResult.parse(
 JSON.parse(
 (await container.execFail(commands, timeoutMs)).stdout.toString(),
 ),
@@ -758,7 +754,7 @@ export class SystemForEmbassy implements System {
 const method = moduleCode.setConfig
 if (!method) throw new Error("Expecting that the method setConfig exists")
 
-const answer = matchSetResult.unsafeCast(
+const answer = matchSetResult.parse(
 await method(
 polyfillEffects(effects, this.manifest),
 newConfig as U.Config,
@@ -787,7 +783,11 @@ export class SystemForEmbassy implements System {
 const requiredDeps = {
 ...Object.fromEntries(
 Object.entries(this.manifest.dependencies ?? {})
-.filter(([k, v]) => v?.requirement.type === "required")
+.filter(
+([k, v]) =>
+(v?.requirement as { type: string } | undefined)?.type ===
+"required",
+)
 .map((x) => [x[0], []]) || [],
 ),
 }
@@ -855,7 +855,7 @@ export class SystemForEmbassy implements System {
 }
 
 if (migration) {
-const [_, procedure] = migration
+const [_, procedure] = migration as readonly [unknown, Procedure]
 if (procedure.type === "docker") {
 const commands = [procedure.entrypoint, ...procedure.args]
 const container = await DockerProcedureContainer.of(
@@ -893,7 +893,10 @@ export class SystemForEmbassy implements System {
 effects: Effects,
 timeoutMs: number | null,
 ): Promise<PropertiesReturn> {
-const setConfigValue = this.manifest.properties
+const setConfigValue = this.manifest.properties as
+| Procedure
+| null
+| undefined
 if (!setConfigValue) throw new Error("There is no properties")
 if (setConfigValue.type === "docker") {
 const commands = [setConfigValue.entrypoint, ...setConfigValue.args]
@@ -904,7 +907,7 @@ export class SystemForEmbassy implements System {
 this.manifest.volumes,
 `Properties - ${commands.join(" ")}`,
 )
-const properties = matchProperties.unsafeCast(
+const properties = matchProperties.parse(
 JSON.parse(
 (await container.execFail(commands, timeoutMs)).stdout.toString(),
 ),
@@ -915,7 +918,7 @@ export class SystemForEmbassy implements System {
 const method = moduleCode.properties
 if (!method)
 throw new Error("Expecting that the method properties exists")
-const properties = matchProperties.unsafeCast(
+const properties = matchProperties.parse(
 await method(polyfillEffects(effects, this.manifest)).then(
 fromReturnType,
 ),
@@ -930,7 +933,8 @@ export class SystemForEmbassy implements System {
 formData: unknown,
 timeoutMs: number | null,
 ): Promise<T.ActionResult> {
-const actionProcedure = this.manifest.actions?.[actionId]?.implementation
+const actionProcedure = this.manifest.actions?.[actionId]
+?.implementation as Procedure | undefined
 const toActionResult = ({
 message,
 value,
@@ -997,7 +1001,9 @@ export class SystemForEmbassy implements System {
 oldConfig: unknown,
 timeoutMs: number | null,
 ): Promise<object> {
-const actionProcedure = this.manifest.dependencies?.[id]?.config?.check
+const actionProcedure = this.manifest.dependencies?.[id]?.config?.check as
+| Procedure
+| undefined
 if (!actionProcedure) return { message: "Action not found", value: null }
 if (actionProcedure.type === "docker") {
 const commands = [
@@ -1089,40 +1095,50 @@ export class SystemForEmbassy implements System {
 }
 }
 
-const matchPointer = object({
+const matchPointer = z.object({
-type: literal("pointer"),
+type: z.literal("pointer"),
 })
 
-const matchPointerPackage = object({
+const matchPointerPackage = z.object({
-subtype: literal("package"),
+subtype: z.literal("package"),
-target: literals("tor-key", "tor-address", "lan-address"),
+target: z.enum(["tor-key", "tor-address", "lan-address"]),
-"package-id": string,
+"package-id": z.string(),
-interface: string,
+interface: z.string(),
 })
-const matchPointerConfig = object({
+const matchPointerConfig = z.object({
-subtype: literal("package"),
+subtype: z.literal("package"),
-target: literals("config"),
+target: z.enum(["config"]),
-"package-id": string,
+"package-id": z.string(),
-selector: string,
+selector: z.string(),
-multi: boolean,
+multi: z.boolean(),
 })
-const matchSpec = object({
+const matchSpec = z.object({
-spec: object,
+spec: z.record(z.string(), z.unknown()),
 })
-const matchVariants = object({ variants: dictionary([string, unknown]) })
+const matchVariants = z.object({ variants: z.record(z.string(), z.unknown()) })
+function isMatchPointer(v: unknown): v is z.infer<typeof matchPointer> {
+return matchPointer.safeParse(v).success
+}
+function isMatchSpec(v: unknown): v is z.infer<typeof matchSpec> {
+return matchSpec.safeParse(v).success
+}
+function isMatchVariants(v: unknown): v is z.infer<typeof matchVariants> {
+return matchVariants.safeParse(v).success
+}
 function cleanSpecOfPointers<T>(mutSpec: T): T {
-if (!object.test(mutSpec)) return mutSpec
+if (typeof mutSpec !== "object" || mutSpec === null) return mutSpec
 for (const key in mutSpec) {
 const value = mutSpec[key]
-if (matchSpec.test(value)) value.spec = cleanSpecOfPointers(value.spec)
+if (isMatchSpec(value))
-if (matchVariants.test(value))
+value.spec = cleanSpecOfPointers(value.spec) as Record<string, unknown>
+if (isMatchVariants(value))
 value.variants = Object.fromEntries(
 Object.entries(value.variants).map(([key, value]) => [
 key,
 cleanSpecOfPointers(value),
 ]),
 )
-if (!matchPointer.test(value)) continue
+if (!isMatchPointer(value)) continue
 delete mutSpec[key]
 // // if (value.target === )
 }
@@ -1244,12 +1260,8 @@ async function updateConfig(
 ? ""
 : catchFn(
 () =>
-(specValue.target === "lan-address"
+filled.addressInfo!.filter({ kind: "mdns" })!.hostnames[0]
-? filled.addressInfo!.filter({ kind: "mdns" }) ||
+.hostname,
-filled.addressInfo!.onion
-: filled.addressInfo!.onion ||
-filled.addressInfo!.filter({ kind: "mdns" })
-).hostnames[0].hostname.value,
 ) || ""
 mutConfigValue[key] = url
 }
@@ -1272,7 +1284,7 @@ function extractServiceInterfaceId(manifest: Manifest, specInterface: string) {
 }
 async function convertToNewConfig(value: OldGetConfigRes) {
 try {
-const valueSpec: OldConfigSpec = matchOldConfigSpec.unsafeCast(value.spec)
+const valueSpec: OldConfigSpec = matchOldConfigSpec.parse(value.spec)
 const spec = transformConfigSpec(valueSpec)
 if (!value.config) return { spec, config: null }
 const config = transformOldConfigToNew(valueSpec, value.config) ?? null
@@ -4,9 +4,9 @@ import synapseManifest from "./__fixtures__/synapseManifest"
 
 describe("matchManifest", () => {
 test("gittea", () => {
-matchManifest.unsafeCast(giteaManifest)
+matchManifest.parse(giteaManifest)
 })
 test("synapse", () => {
-matchManifest.unsafeCast(synapseManifest)
+matchManifest.parse(synapseManifest)
 })
 })
@@ -1,116 +1,111 @@
-import {
+import { z } from "@start9labs/start-sdk"
-object,
-literal,
-string,
-array,
-boolean,
-dictionary,
-literals,
-number,
-unknown,
-some,
-every,
-} from "ts-matches"
 import { matchVolume } from "./matchVolume"
 import { matchDockerProcedure } from "../../../Models/DockerProcedure"
 
-const matchJsProcedure = object({
+const matchJsProcedure = z.object({
-type: literal("script"),
+type: z.literal("script"),
-args: array(unknown).nullable().optional().defaultTo([]),
+args: z.array(z.unknown()).nullable().optional().default([]),
 })
 
-const matchProcedure = some(matchDockerProcedure, matchJsProcedure)
+const matchProcedure = z.union([matchDockerProcedure, matchJsProcedure])
-export type Procedure = typeof matchProcedure._TYPE
+export type Procedure = z.infer<typeof matchProcedure>
 
-const matchAction = object({
+const matchAction = z.object({
-name: string,
+name: z.string(),
-description: string,
+description: z.string(),
-warning: string.nullable().optional(),
+warning: z.string().nullable().optional(),
 implementation: matchProcedure,
-"allowed-statuses": array(literals("running", "stopped")),
+"allowed-statuses": z.array(z.enum(["running", "stopped"])),
-"input-spec": unknown.nullable().optional(),
+"input-spec": z.unknown().nullable().optional(),
 })
-export const matchManifest = object({
+export const matchManifest = z.object({
-id: string,
+id: z.string(),
-title: string,
+title: z.string(),
-version: string,
+version: z.string(),
 main: matchDockerProcedure,
-assets: object({
+assets: z
-assets: string.nullable().optional(),
+.object({
-scripts: string.nullable().optional(),
+assets: z.string().nullable().optional(),
+scripts: z.string().nullable().optional(),
 })
 .nullable()
 .optional(),
-"health-checks": dictionary([
+"health-checks": z.record(
-string,
+z.string(),
-every(
+z.intersection(
 matchProcedure,
-object({
+z.object({
-name: string,
+name: z.string(),
-["success-message"]: string.nullable().optional(),
+"success-message": z.string().nullable().optional(),
 }),
 ),
-]),
+),
-config: object({
+config: z
+.object({
 get: matchProcedure,
 set: matchProcedure,
 })
 .nullable()
 .optional(),
 properties: matchProcedure.nullable().optional(),
-volumes: dictionary([string, matchVolume]),
+volumes: z.record(z.string(), matchVolume),
-interfaces: dictionary([
+interfaces: z.record(
-string,
+z.string(),
-object({
+z.object({
-name: string,
+name: z.string(),
-description: string,
+description: z.string(),
-"tor-config": object({
+"tor-config": z
-"port-mapping": dictionary([string, string]),
+.object({
+"port-mapping": z.record(z.string(), z.string()),
 })
 .nullable()
 .optional(),
-"lan-config": dictionary([
+"lan-config": z
-string,
+.record(
-object({
+z.string(),
-ssl: boolean,
+z.object({
-internal: number,
+ssl: z.boolean(),
+internal: z.number(),
 }),
-])
+)
 .nullable()
 .optional(),
-ui: boolean,
+ui: z.boolean(),
-protocols: array(string),
+protocols: z.array(z.string()),
 }),
-]),
+),
-backup: object({
+backup: z.object({
 create: matchProcedure,
 restore: matchProcedure,
 }),
-migrations: object({
+migrations: z
-to: dictionary([string, matchProcedure]),
+.object({
-from: dictionary([string, matchProcedure]),
+to: z.record(z.string(), matchProcedure),
+from: z.record(z.string(), matchProcedure),
 })
 .nullable()
 .optional(),
-dependencies: dictionary([
+dependencies: z.record(
-string,
+z.string(),
-object({
+z
-version: string,
+.object({
-requirement: some(
+version: z.string(),
-object({
+requirement: z.union([
-type: literal("opt-in"),
+z.object({
-how: string,
+type: z.literal("opt-in"),
+how: z.string(),
 }),
-object({
+z.object({
-type: literal("opt-out"),
+type: z.literal("opt-out"),
-how: string,
+how: z.string(),
 }),
-object({
+z.object({
-type: literal("required"),
+type: z.literal("required"),
 }),
-),
+]),
-description: string.nullable().optional(),
+description: z.string().nullable().optional(),
-config: object({
+config: z
+.object({
 check: matchProcedure,
 "auto-configure": matchProcedure,
 })
@@ -119,8 +114,8 @@ export const matchManifest = object({
|
|||||||
})
|
})
|
||||||
.nullable()
|
.nullable()
|
||||||
.optional(),
|
.optional(),
|
||||||
]),
|
),
|
||||||
|
|
||||||
actions: dictionary([string, matchAction]),
|
actions: z.record(z.string(), matchAction),
|
||||||
})
|
})
|
||||||
export type Manifest = typeof matchManifest._TYPE
|
export type Manifest = z.infer<typeof matchManifest>
|
||||||

@@ -1,32 +1,32 @@
-import { object, literal, string, boolean, some } from "ts-matches"
+import { z } from "@start9labs/start-sdk"

-const matchDataVolume = object({
-  type: literal("data"),
-  readonly: boolean.optional(),
+const matchDataVolume = z.object({
+  type: z.literal("data"),
+  readonly: z.boolean().optional(),
 })
-const matchAssetVolume = object({
-  type: literal("assets"),
+const matchAssetVolume = z.object({
+  type: z.literal("assets"),
 })
-const matchPointerVolume = object({
-  type: literal("pointer"),
-  "package-id": string,
-  "volume-id": string,
-  path: string,
-  readonly: boolean,
+const matchPointerVolume = z.object({
+  type: z.literal("pointer"),
+  "package-id": z.string(),
+  "volume-id": z.string(),
+  path: z.string(),
+  readonly: z.boolean(),
 })
-const matchCertificateVolume = object({
-  type: literal("certificate"),
-  "interface-id": string,
+const matchCertificateVolume = z.object({
+  type: z.literal("certificate"),
+  "interface-id": z.string(),
 })
-const matchBackupVolume = object({
-  type: literal("backup"),
-  readonly: boolean,
+const matchBackupVolume = z.object({
+  type: z.literal("backup"),
+  readonly: z.boolean(),
 })
-export const matchVolume = some(
+export const matchVolume = z.union([
   matchDataVolume,
   matchAssetVolume,
   matchPointerVolume,
   matchCertificateVolume,
   matchBackupVolume,
-)
+])
-export type Volume = typeof matchVolume._TYPE
+export type Volume = z.infer<typeof matchVolume>
@@ -12,43 +12,43 @@ import nostrConfig2 from "./__fixtures__/nostrConfig2"

 describe("transformConfigSpec", () => {
   test("matchOldConfigSpec(embassyPages.homepage.variants[web-page])", () => {
-    matchOldConfigSpec.unsafeCast(
+    matchOldConfigSpec.parse(
       fixtureEmbassyPagesConfig.homepage.variants["web-page"],
     )
   })
   test("matchOldConfigSpec(embassyPages)", () => {
-    matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
+    matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
   })
   test("transformConfigSpec(embassyPages)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
+    const spec = matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })

   test("matchOldConfigSpec(RTL.nodes)", () => {
-    matchOldValueSpecList.unsafeCast(fixtureRTLConfig.nodes)
+    matchOldValueSpecList.parse(fixtureRTLConfig.nodes)
   })
   test("matchOldConfigSpec(RTL)", () => {
-    matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
+    matchOldConfigSpec.parse(fixtureRTLConfig)
   })
   test("transformConfigSpec(RTL)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
+    const spec = matchOldConfigSpec.parse(fixtureRTLConfig)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })

   test("transformConfigSpec(searNXG)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(searNXG)
+    const spec = matchOldConfigSpec.parse(searNXG)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(bitcoind)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(bitcoind)
+    const spec = matchOldConfigSpec.parse(bitcoind)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(nostr)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(nostr)
+    const spec = matchOldConfigSpec.parse(nostr)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(nostr2)", () => {
-    const spec = matchOldConfigSpec.unsafeCast(nostrConfig2)
+    const spec = matchOldConfigSpec.parse(nostrConfig2)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
 })
@@ -1,19 +1,4 @@
-import { IST } from "@start9labs/start-sdk"
-import {
-  dictionary,
-  object,
-  anyOf,
-  string,
-  literals,
-  array,
-  number,
-  boolean,
-  Parser,
-  deferred,
-  every,
-  nill,
-  literal,
-} from "ts-matches"
+import { IST, z } from "@start9labs/start-sdk"

 export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
   return Object.entries(oldSpec).reduce((inputSpec, [key, oldVal]) => {
@@ -82,7 +67,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
         name: oldVal.name,
         description: oldVal.description || null,
         warning: oldVal.warning || null,
-        spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(oldVal.spec)),
+        spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec)),
       }
     } else if (oldVal.type === "string") {
       newVal = {
@@ -121,7 +106,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
             ...obj,
             [id]: {
               name: oldVal.tag["variant-names"][id] || id,
-              spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(spec)),
+              spec: transformConfigSpec(matchOldConfigSpec.parse(spec)),
             },
           }),
           {} as Record<string, { name: string; spec: IST.InputSpec }>,
@@ -153,7 +138,7 @@ export function transformOldConfigToNew(

     if (isObject(val)) {
       newVal = transformOldConfigToNew(
-        matchOldConfigSpec.unsafeCast(val.spec),
+        matchOldConfigSpec.parse(val.spec),
         config[key],
       )
     }
@@ -172,7 +157,7 @@ export function transformOldConfigToNew(
       newVal = {
         selection,
         value: transformOldConfigToNew(
-          matchOldConfigSpec.unsafeCast(val.variants[selection]),
+          matchOldConfigSpec.parse(val.variants[selection]),
           config[key],
         ),
       }
@@ -183,10 +168,7 @@ export function transformOldConfigToNew(

     if (isObjectList(val)) {
       newVal = (config[key] as object[]).map((obj) =>
-        transformOldConfigToNew(
-          matchOldConfigSpec.unsafeCast(val.spec.spec),
-          obj,
-        ),
+        transformOldConfigToNew(matchOldConfigSpec.parse(val.spec.spec), obj),
       )
     } else if (isUnionList(val)) return obj
   }
@@ -212,7 +194,7 @@ export function transformNewConfigToOld(

     if (isObject(val)) {
       newVal = transformNewConfigToOld(
-        matchOldConfigSpec.unsafeCast(val.spec),
+        matchOldConfigSpec.parse(val.spec),
         config[key],
       )
     }
@@ -221,7 +203,7 @@ export function transformNewConfigToOld(
       newVal = {
         [val.tag.id]: config[key].selection,
         ...transformNewConfigToOld(
-          matchOldConfigSpec.unsafeCast(val.variants[config[key].selection]),
+          matchOldConfigSpec.parse(val.variants[config[key].selection]),
          config[key].value,
         ),
       }
@@ -230,10 +212,7 @@ export function transformNewConfigToOld(
     if (isList(val)) {
       if (isObjectList(val)) {
         newVal = (config[key] as object[]).map((obj) =>
-          transformNewConfigToOld(
-            matchOldConfigSpec.unsafeCast(val.spec.spec),
-            obj,
-          ),
+          transformNewConfigToOld(matchOldConfigSpec.parse(val.spec.spec), obj),
         )
       } else if (isUnionList(val)) return obj
     }
@@ -337,9 +316,7 @@ function getListSpec(
     default: oldVal.default as Record<string, unknown>[],
     spec: {
       type: "object",
-      spec: transformConfigSpec(
-        matchOldConfigSpec.unsafeCast(oldVal.spec.spec),
-      ),
+      spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec.spec)),
       uniqueBy: oldVal.spec["unique-by"] || null,
       displayAs: oldVal.spec["display-as"] || null,
     },
@@ -393,211 +370,281 @@ function isUnionList(
 }

 export type OldConfigSpec = Record<string, OldValueSpec>
-const [_matchOldConfigSpec, setMatchOldConfigSpec] = deferred<unknown>()
-export const matchOldConfigSpec = _matchOldConfigSpec as Parser<
-  unknown,
-  OldConfigSpec
->
-export const matchOldDefaultString = anyOf(
-  string,
-  object({ charset: string, len: number }),
+export const matchOldConfigSpec: z.ZodType<OldConfigSpec> = z.lazy(() =>
+  z.record(z.string(), matchOldValueSpec),
 )
-type OldDefaultString = typeof matchOldDefaultString._TYPE
+export const matchOldDefaultString = z.union([
+  z.string(),
+  z.object({ charset: z.string(), len: z.number() }),
+])
+type OldDefaultString = z.infer<typeof matchOldDefaultString>

-export const matchOldValueSpecString = object({
-  type: literals("string"),
-  name: string,
-  masked: boolean.nullable().optional(),
-  copyable: boolean.nullable().optional(),
-  nullable: boolean.nullable().optional(),
-  placeholder: string.nullable().optional(),
-  pattern: string.nullable().optional(),
-  "pattern-description": string.nullable().optional(),
+export const matchOldValueSpecString = z.object({
+  type: z.enum(["string"]),
+  name: z.string(),
+  masked: z.boolean().nullable().optional(),
+  copyable: z.boolean().nullable().optional(),
+  nullable: z.boolean().nullable().optional(),
+  placeholder: z.string().nullable().optional(),
+  pattern: z.string().nullable().optional(),
+  "pattern-description": z.string().nullable().optional(),
   default: matchOldDefaultString.nullable().optional(),
-  textarea: boolean.nullable().optional(),
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+  textarea: z.boolean().nullable().optional(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })

-export const matchOldValueSpecNumber = object({
-  type: literals("number"),
-  nullable: boolean,
-  name: string,
-  range: string,
-  integral: boolean,
-  default: number.nullable().optional(),
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
-  units: string.nullable().optional(),
-  placeholder: anyOf(number, string).nullable().optional(),
+export const matchOldValueSpecNumber = z.object({
+  type: z.enum(["number"]),
+  nullable: z.boolean(),
+  name: z.string(),
+  range: z.string(),
+  integral: z.boolean(),
+  default: z.number().nullable().optional(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
+  units: z.string().nullable().optional(),
+  placeholder: z.union([z.number(), z.string()]).nullable().optional(),
 })
-type OldValueSpecNumber = typeof matchOldValueSpecNumber._TYPE
+type OldValueSpecNumber = z.infer<typeof matchOldValueSpecNumber>

-export const matchOldValueSpecBoolean = object({
-  type: literals("boolean"),
-  default: boolean,
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+export const matchOldValueSpecBoolean = z.object({
+  type: z.enum(["boolean"]),
+  default: z.boolean(),
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })
-type OldValueSpecBoolean = typeof matchOldValueSpecBoolean._TYPE
+type OldValueSpecBoolean = z.infer<typeof matchOldValueSpecBoolean>

-const matchOldValueSpecObject = object({
-  type: literals("object"),
-  spec: _matchOldConfigSpec,
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+type OldValueSpecObject = {
+  type: "object"
+  spec: OldConfigSpec
+  name: string
+  description?: string | null
+  warning?: string | null
+}
+const matchOldValueSpecObject: z.ZodType<OldValueSpecObject> = z.object({
+  type: z.enum(["object"]),
+  spec: z.lazy(() => matchOldConfigSpec),
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })
-type OldValueSpecObject = typeof matchOldValueSpecObject._TYPE

-const matchOldValueSpecEnum = object({
-  values: array(string),
-  "value-names": dictionary([string, string]),
-  type: literals("enum"),
-  default: string,
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+const matchOldValueSpecEnum = z.object({
+  values: z.array(z.string()),
+  "value-names": z.record(z.string(), z.string()),
+  type: z.enum(["enum"]),
+  default: z.string(),
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })
-type OldValueSpecEnum = typeof matchOldValueSpecEnum._TYPE
+type OldValueSpecEnum = z.infer<typeof matchOldValueSpecEnum>

-const matchOldUnionTagSpec = object({
-  id: string, // The name of the field containing one of the union variants
-  "variant-names": dictionary([string, string]), // The name of each variant
-  name: string,
-  description: string.nullable().optional(),
-  warning: string.nullable().optional(),
+const matchOldUnionTagSpec = z.object({
+  id: z.string(), // The name of the field containing one of the union variants
+  "variant-names": z.record(z.string(), z.string()), // The name of each variant
+  name: z.string(),
+  description: z.string().nullable().optional(),
+  warning: z.string().nullable().optional(),
 })
-const matchOldValueSpecUnion = object({
-  type: literals("union"),
+type OldValueSpecUnion = {
+  type: "union"
+  tag: z.infer<typeof matchOldUnionTagSpec>
+  variants: Record<string, OldConfigSpec>
+  default: string
+}
+const matchOldValueSpecUnion: z.ZodType<OldValueSpecUnion> = z.object({
+  type: z.enum(["union"]),
   tag: matchOldUnionTagSpec,
-  variants: dictionary([string, _matchOldConfigSpec]),
-  default: string,
+  variants: z.record(
+    z.string(),
+    z.lazy(() => matchOldConfigSpec),
+  ),
+  default: z.string(),
 })
-type OldValueSpecUnion = typeof matchOldValueSpecUnion._TYPE

-const [matchOldUniqueBy, setOldUniqueBy] = deferred<OldUniqueBy>()
 type OldUniqueBy =
   | null
   | string
   | { any: OldUniqueBy[] }
   | { all: OldUniqueBy[] }

-setOldUniqueBy(
-  anyOf(
-    nill,
-    string,
-    object({ any: array(matchOldUniqueBy) }),
-    object({ all: array(matchOldUniqueBy) }),
-  ),
+const matchOldUniqueBy: z.ZodType<OldUniqueBy> = z.lazy(() =>
+  z.union([
+    z.null(),
+    z.string(),
+    z.object({ any: z.array(matchOldUniqueBy) }),
+    z.object({ all: z.array(matchOldUniqueBy) }),
+  ]),
 )

-const matchOldListValueSpecObject = object({
-  spec: _matchOldConfigSpec, // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
+type OldListValueSpecObject = {
+  spec: OldConfigSpec
+  "unique-by"?: OldUniqueBy | null
+  "display-as"?: string | null
+}
+const matchOldListValueSpecObject: z.ZodType<OldListValueSpecObject> = z.object(
+  {
+    spec: z.lazy(() => matchOldConfigSpec), // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
   "unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
-  "display-as": string.nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
-})
-const matchOldListValueSpecUnion = object({
+    "display-as": z.string().nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
+  },
+)
+type OldListValueSpecUnion = {
+  "unique-by"?: OldUniqueBy | null
+  "display-as"?: string | null
+  tag: z.infer<typeof matchOldUnionTagSpec>
+  variants: Record<string, OldConfigSpec>
+}
+const matchOldListValueSpecUnion: z.ZodType<OldListValueSpecUnion> = z.object({
   "unique-by": matchOldUniqueBy.nullable().optional(),
-  "display-as": string.nullable().optional(),
+  "display-as": z.string().nullable().optional(),
   tag: matchOldUnionTagSpec,
-  variants: dictionary([string, _matchOldConfigSpec]),
+  variants: z.record(
+    z.string(),
+    z.lazy(() => matchOldConfigSpec),
+  ),
 })
-const matchOldListValueSpecString = object({
-  masked: boolean.nullable().optional(),
-  copyable: boolean.nullable().optional(),
-  pattern: string.nullable().optional(),
-  "pattern-description": string.nullable().optional(),
-  placeholder: string.nullable().optional(),
+const matchOldListValueSpecString = z.object({
+  masked: z.boolean().nullable().optional(),
+  copyable: z.boolean().nullable().optional(),
+  pattern: z.string().nullable().optional(),
+  "pattern-description": z.string().nullable().optional(),
+  placeholder: z.string().nullable().optional(),
 })

-const matchOldListValueSpecEnum = object({
-  values: array(string),
-  "value-names": dictionary([string, string]),
+const matchOldListValueSpecEnum = z.object({
+  values: z.array(z.string()),
+  "value-names": z.record(z.string(), z.string()),
 })
-const matchOldListValueSpecNumber = object({
-  range: string,
-  integral: boolean,
-  units: string.nullable().optional(),
-  placeholder: anyOf(number, string).nullable().optional(),
+const matchOldListValueSpecNumber = z.object({
+  range: z.string(),
+  integral: z.boolean(),
+  units: z.string().nullable().optional(),
+  placeholder: z.union([z.number(), z.string()]).nullable().optional(),
 })

+type OldValueSpecListBase = {
+  type: "list"
+  range: string
+  default: string[] | number[] | OldDefaultString[] | Record<string, unknown>[]
+  name: string
+  description?: string | null
+  warning?: string | null
+}
+
+type OldValueSpecList = OldValueSpecListBase &
+  (
+    | { subtype: "string"; spec: z.infer<typeof matchOldListValueSpecString> }
+    | { subtype: "enum"; spec: z.infer<typeof matchOldListValueSpecEnum> }
+    | { subtype: "object"; spec: OldListValueSpecObject }
+    | { subtype: "number"; spec: z.infer<typeof matchOldListValueSpecNumber> }
+    | { subtype: "union"; spec: OldListValueSpecUnion }
+  )
+
 // represents a spec for a list
-export const matchOldValueSpecList = every(
-  object({
-    type: literals("list"),
-    range: string, // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
-    default: anyOf(
-      array(string),
-      array(number),
-      array(matchOldDefaultString),
-      array(object),
-    ),
-    name: string,
-    description: string.nullable().optional(),
-    warning: string.nullable().optional(),
+export const matchOldValueSpecList: z.ZodType<OldValueSpecList> =
+  z.intersection(
+    z.object({
+      type: z.enum(["list"]),
+      range: z.string(), // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
+      default: z.union([
+        z.array(z.string()),
+        z.array(z.number()),
+        z.array(matchOldDefaultString),
+        z.array(z.object({}).passthrough()),
+      ]),
+      name: z.string(),
+      description: z.string().nullable().optional(),
+      warning: z.string().nullable().optional(),
    }),
-  anyOf(
-    object({
-      subtype: literals("string"),
+    z.union([
+      z.object({
+        subtype: z.enum(["string"]),
        spec: matchOldListValueSpecString,
      }),
-    object({
-      subtype: literals("enum"),
+      z.object({
+        subtype: z.enum(["enum"]),
        spec: matchOldListValueSpecEnum,
      }),
-    object({
-      subtype: literals("object"),
+      z.object({
+        subtype: z.enum(["object"]),
        spec: matchOldListValueSpecObject,
      }),
-    object({
-      subtype: literals("number"),
+      z.object({
+        subtype: z.enum(["number"]),
        spec: matchOldListValueSpecNumber,
      }),
-    object({
-      subtype: literals("union"),
+      z.object({
+        subtype: z.enum(["union"]),
        spec: matchOldListValueSpecUnion,
      }),
-  ),
-)
-type OldValueSpecList = typeof matchOldValueSpecList._TYPE
+    ]),
+  ) as unknown as z.ZodType<OldValueSpecList>

-const matchOldValueSpecPointer = every(
-  object({
-    type: literal("pointer"),
-  }),
-  anyOf(
-    object({
-      subtype: literal("package"),
-      target: literals("tor-key", "tor-address", "lan-address"),
-      "package-id": string,
-      interface: string,
-    }),
-    object({
-      subtype: literal("package"),
-      target: literals("config"),
-      "package-id": string,
-      selector: string,
-      multi: boolean,
-    }),
-  ),
+type OldValueSpecPointer = {
+  type: "pointer"
+} & (
+  | {
+      subtype: "package"
+      target: "tor-key" | "tor-address" | "lan-address"
+      "package-id": string
+      interface: string
+    }
+  | {
+      subtype: "package"
+      target: "config"
+      "package-id": string
+      selector: string
+      multi: boolean
+    }
 )
-type OldValueSpecPointer = typeof matchOldValueSpecPointer._TYPE
+const matchOldValueSpecPointer: z.ZodType<OldValueSpecPointer> = z.intersection(
+  z.object({
+    type: z.literal("pointer"),
+  }),
+  z.union([
+    z.object({
+      subtype: z.literal("package"),
+      target: z.enum(["tor-key", "tor-address", "lan-address"]),
+      "package-id": z.string(),
+      interface: z.string(),
+    }),
+    z.object({
+      subtype: z.literal("package"),
+      target: z.enum(["config"]),
+      "package-id": z.string(),
+      selector: z.string(),
+      multi: z.boolean(),
+    }),
+  ]),
+) as unknown as z.ZodType<OldValueSpecPointer>

-export const matchOldValueSpec = anyOf(
+type OldValueSpecString = z.infer<typeof matchOldValueSpecString>

+type OldValueSpec =
+  | OldValueSpecString
+  | OldValueSpecNumber
+  | OldValueSpecBoolean
+  | OldValueSpecObject
+  | OldValueSpecEnum
+  | OldValueSpecList
+  | OldValueSpecUnion
+  | OldValueSpecPointer

+export const matchOldValueSpec: z.ZodType<OldValueSpec> = z.union([
   matchOldValueSpecString,
   matchOldValueSpecNumber,
   matchOldValueSpecBoolean,
-  matchOldValueSpecObject,
+  matchOldValueSpecObject as z.ZodType<OldValueSpecObject>,
   matchOldValueSpecEnum,
-  matchOldValueSpecList,
-  matchOldValueSpecUnion,
-  matchOldValueSpecPointer,
-)
-type OldValueSpec = typeof matchOldValueSpec._TYPE
+  matchOldValueSpecList as z.ZodType<OldValueSpecList>,
+  matchOldValueSpecUnion as z.ZodType<OldValueSpecUnion>,
+  matchOldValueSpecPointer as z.ZodType<OldValueSpecPointer>,
+])

-setMatchOldConfigSpec(dictionary([string, matchOldValueSpec]))

 export class Range {
   min?: number
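The core move in the config-spec file above is replacing ts-matches' `deferred()`/`setMatchOldConfigSpec()` two-step with `z.lazy()`, which breaks the definition cycle between `matchOldConfigSpec` and `matchOldValueSpec` by deferring evaluation behind a thunk that is only forced on first use. The same pattern can be sketched in plain TypeScript; the toy `Schema` type and helper names below are illustrative, not the zod or ts-matches API:

```typescript
// A toy schema: just a predicate. Building record(valueSpec) eagerly would
// fail here, because the two definitions below reference each other.
type Schema = { check: (value: unknown) => boolean }

const str: Schema = { check: (v) => typeof v === "string" }

// Matches an object whose values all satisfy the given schema.
function record(value: Schema): Schema {
  return {
    check: (v) =>
      typeof v === "object" &&
      v !== null &&
      Object.values(v).every((x) => value.check(x)),
  }
}

// Matches a value accepted by any of the given schemas.
function union(options: Schema[]): Schema {
  return { check: (v) => options.some((s) => s.check(v)) }
}

// lazy() breaks the cycle: the thunk runs on first check() call, after
// both consts below have been initialized. This is what z.lazy() does
// for the recursive matchOldConfigSpec in the diff.
function lazy(thunk: () => Schema): Schema {
  return { check: (v) => thunk().check(v) }
}

// configSpec -> valueSpec -> configSpec, tied off with lazy():
const configSpec: Schema = lazy(() => record(valueSpec))
const valueSpec: Schema = union([str, lazy(() => configSpec)])
```

With this wiring, `configSpec.check({ a: "x", b: { c: "y" } })` recurses through the nested object, while a non-string, non-object leaf is rejected.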
@@ -47,11 +47,12 @@ export class SystemForStartOs implements System {
   getActionInput(
     effects: Effects,
     id: string,
+    prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null> {
     const action = this.abi.actions.get(id)
     if (!action) throw new Error(`Action ${id} not found`)
-    return action.getInput({ effects })
+    return action.getInput({ effects, prefill })
   }
   runAction(
     effects: Effects,
@@ -33,6 +33,7 @@ export type System = {
   getActionInput(
     effects: Effects,
     actionId: string,
+    prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null>
@@ -1,41 +1,19 @@
-import {
-  object,
-  literal,
-  string,
-  boolean,
-  array,
-  dictionary,
-  literals,
-  number,
-  Parser,
-  some,
-} from "ts-matches"
+import { z } from "@start9labs/start-sdk"
 import { matchDuration } from "./Duration"
 
-const VolumeId = string
-const Path = string
-
-export type VolumeId = string
-export type Path = string
-export const matchDockerProcedure = object({
-  type: literal("docker"),
-  image: string,
-  system: boolean.optional(),
-  entrypoint: string,
-  args: array(string).defaultTo([]),
-  mounts: dictionary([VolumeId, Path]).optional(),
-  "io-format": literals(
-    "json",
-    "json-pretty",
-    "yaml",
-    "cbor",
-    "toml",
-    "toml-pretty",
-  )
+export const matchDockerProcedure = z.object({
+  type: z.literal("docker"),
+  image: z.string(),
+  system: z.boolean().optional(),
+  entrypoint: z.string(),
+  args: z.array(z.string()).default([]),
+  mounts: z.record(z.string(), z.string()).optional(),
+  "io-format": z
+    .enum(["json", "json-pretty", "yaml", "cbor", "toml", "toml-pretty"])
     .nullable()
     .optional(),
-  "sigterm-timeout": some(number, matchDuration).onMismatch(30),
-  inject: boolean.defaultTo(false),
+  "sigterm-timeout": z.union([z.number(), matchDuration]).catch(30),
+  inject: z.boolean().default(false),
 })
 
-export type DockerProcedure = typeof matchDockerProcedure._TYPE
+export type DockerProcedure = z.infer<typeof matchDockerProcedure>
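The `"sigterm-timeout"` field above accepts either a number or a duration string and falls back to `30` when the value matches neither shape — ts-matches' `.onMismatch(30)` maps to zod's `.catch(30)`. A dependency-free sketch of that fallback semantics (`parseSigtermTimeout` is an illustrative name, not part of the codebase; the regex is copied from `Duration.ts` shown below):

```typescript
// Sketch of the mismatch-fallback semantics behind `.catch(30)`:
// accept a finite number or a duration string like "45s",
// otherwise return the default of 30.
const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|d)$/

function parseSigtermTimeout(value: unknown): number | string {
  if (typeof value === "number" && Number.isFinite(value)) return value
  if (typeof value === "string" && durationRegex.test(value)) return value
  return 30 // default applied on mismatch
}

console.log(parseSigtermTimeout(15)) // 15
console.log(parseSigtermTimeout("45s")) // "45s"
console.log(parseSigtermTimeout("oops")) // 30
```

Unlike `.default(...)`, which only fills in a missing value, `.catch(...)` also swallows values of the wrong shape — matching the old `.onMismatch` behavior.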
@@ -1,11 +1,11 @@
-import { string } from "ts-matches"
+import { z } from "@start9labs/start-sdk"
 
 export type TimeUnit = "d" | "h" | "s" | "ms" | "m" | "µs" | "ns"
 export type Duration = `${number}${TimeUnit}`
 
 const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|d)$/
 
-export const matchDuration = string.refine(isDuration)
+export const matchDuration = z.string().refine(isDuration)
 export function isDuration(value: string): value is Duration {
   return durationRegex.test(value)
 }
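The `Duration` guard above is fully self-contained, so its behavior can be checked standalone. Note one quirk visible in the source: `TimeUnit` includes `"h"`, but the regex alternatives (`ns|µs|ms|s|m|d`) do not, so hour strings are rejected at runtime:

```typescript
// Standalone copy of the Duration guard from the diff above.
type TimeUnit = "d" | "h" | "s" | "ms" | "m" | "µs" | "ns"
type Duration = `${number}${TimeUnit}`

const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|d)$/

function isDuration(value: string): value is Duration {
  return durationRegex.test(value)
}

console.log(isDuration("30s")) // true
console.log(isDuration("1.5m")) // true
console.log(isDuration("10x")) // false
console.log(isDuration("1h")) // false: "h" is in TimeUnit but missing from the regex
```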
@@ -1,4 +1,4 @@
-import { literals, some, string } from "ts-matches"
+import { z } from "@start9labs/start-sdk"
 
 type NestedPath<A extends string, B extends string> = `/${A}/${string}/${B}`
 type NestedPaths = NestedPath<"actions", "run" | "getInput">
@@ -17,14 +17,14 @@ function isNestedPath(path: string): path is NestedPaths {
     return true
   return false
 }
-export const jsonPath = some(
-  literals(
+export const jsonPath = z.union([
+  z.enum([
     "/packageInit",
     "/packageUninit",
     "/backup/create",
     "/backup/restore",
-  ),
-  string.refine(isNestedPath, "isNestedPath"),
-)
+  ]),
+  z.string().refine(isNestedPath),
+])
 
-export type JsonPath = typeof jsonPath._TYPE
+export type JsonPath = z.infer<typeof jsonPath>
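The hunk above only shows the tail of `isNestedPath`; its body is outside this excerpt. Reading the types, `NestedPath<"actions", "run" | "getInput">` describes paths shaped like `/actions/<id>/run` or `/actions/<id>/getInput`. A hypothetical sketch of such a check — `isNestedPathSketch` and its regex are illustrative assumptions, not the real implementation:

```typescript
// Illustrative guess at the nested-path predicate implied by the types.
// The template-literal type allows any string in the middle segment;
// this sketch is stricter and forbids further slashes there.
type NestedPath<A extends string, B extends string> = `/${A}/${string}/${B}`
type NestedPaths = NestedPath<"actions", "run" | "getInput">

function isNestedPathSketch(path: string): path is NestedPaths {
  return /^\/actions\/[^/]+\/(run|getInput)$/.test(path)
}

console.log(isNestedPathSketch("/actions/backup/run")) // true
console.log(isNestedPathSketch("/packageInit")) // false
```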
@@ -1,5 +1,4 @@
 import { RpcListener } from "./Adapters/RpcListener"
-import { SystemForEmbassy } from "./Adapters/Systems/SystemForEmbassy"
 import { AllGetDependencies } from "./Interfaces/AllGetDependencies"
 import { getSystem } from "./Adapters/Systems"
 
@@ -7,6 +6,18 @@ const getDependencies: AllGetDependencies = {
   system: getSystem,
 }
 
+process.on("unhandledRejection", (reason) => {
+  if (
+    reason instanceof Error &&
+    "muteUnhandled" in reason &&
+    reason.muteUnhandled
+  ) {
+    // mute
+  } else {
+    console.error("Unhandled promise rejection", reason)
+  }
+})
+
 for (let s of ["SIGTERM", "SIGINT", "SIGHUP"]) {
   process.on(s, (s) => {
     console.log(`Caught ${s}`)
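The handler added above swallows rejections whose `Error` carries a truthy `muteUnhandled` flag and logs everything else. Extracting the condition into a named predicate (`shouldMute` is an illustration of the logic, not how the code is factored) makes it easy to exercise:

```typescript
// Standalone sketch of the rejection filter from the diff above.
function shouldMute(reason: unknown): boolean {
  return (
    reason instanceof Error &&
    "muteUnhandled" in reason &&
    !!(reason as Error & { muteUnhandled?: unknown }).muteUnhandled
  )
}

const muted = Object.assign(new Error("expected"), { muteUnhandled: true })
console.log(shouldMute(muted)) // true
console.log(shouldMute(new Error("boom"))) // false
console.log(shouldMute("not an error")) // false
```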
@@ -16,6 +16,6 @@ case $ARCH in
 esac
 
 docker run --rm $USE_TTY --platform=$DOCKER_PLATFORM -eARCH --privileged -v "$(pwd):/root/start-os" start9/build-env /root/start-os/container-runtime/update-image.sh
-if [ "$(ls -nd "rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
+if [ "$(ls -nd "container-runtime/rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
   docker run --rm $USE_TTY -v "$(pwd):/root/start-os" start9/build-env chown -R $UID:$UID /root/start-os/container-runtime
 fi
core/ARCHITECTURE.md (new file, 72 lines)
@@ -0,0 +1,72 @@
+# Core Architecture
+
+The Rust backend daemon for StartOS.
+
+## Binaries
+
+The crate produces a single binary `startbox` that is symlinked under different names for different behavior:
+
+- `startbox` / `startd` — Main daemon
+- `start-cli` — CLI interface
+- `start-container` — Runs inside LXC containers; communicates with host and manages subcontainers
+- `registrybox` — Registry daemon
+- `tunnelbox` — VPN/tunnel daemon
+
+## Crate Structure
+
+- `startos` — Core library that supports building `startbox`
+- `helpers` — Utility functions used across both `startos` and `js-engine`
+- `models` — Types shared across `startos`, `js-engine`, and `helpers`
+
+## Key Modules
+
+- `src/context/` — Context types (RpcContext, CliContext, InitContext, DiagnosticContext)
+- `src/service/` — Service lifecycle management with actor pattern (`service_actor.rs`)
+- `src/db/model/` — Patch-DB models (`public.rs` synced to frontend, `private.rs` backend-only)
+- `src/net/` — Networking (DNS, ACME, WiFi, Tor via Arti, WireGuard)
+- `src/s9pk/` — S9PK package format (merkle archive)
+- `src/registry/` — Package registry management
+
+## RPC Pattern
+
+The API is JSON-RPC (not REST). All endpoints are RPC methods organized in a hierarchical command structure using [rpc-toolkit](https://github.com/Start9Labs/rpc-toolkit). Handlers are registered in a tree of `ParentHandler` nodes, with four handler types: `from_fn_async` (standard), `from_fn_async_local` (non-Send), `from_fn` (sync), and `from_fn_blocking` (blocking). Metadata like `.with_about()` drives middleware and documentation.
+
+See [rpc-toolkit.md](rpc-toolkit.md) for full handler patterns and configuration.
+
+## Patch-DB Patterns
+
+Patch-DB provides diff-based state synchronization. Changes to `db/model/public.rs` automatically sync to the frontend.
+
+**Key patterns:**
+- `db.peek().await` — Get a read-only snapshot of the database state
+- `db.mutate(|db| { ... }).await` — Apply mutations atomically, returns `MutateResult`
+- `#[derive(HasModel)]` — Derive macro for types stored in the database, generates typed accessors
+
+**Generated accessor types** (from `HasModel` derive):
+- `as_field()` — Immutable reference: `&Model<T>`
+- `as_field_mut()` — Mutable reference: `&mut Model<T>`
+- `into_field()` — Owned value: `Model<T>`
+
+**`Model<T>` APIs** (from `db/prelude.rs`):
+- `.de()` — Deserialize to `T`
+- `.ser(&value)` — Serialize from `T`
+- `.mutate(|v| ...)` — Deserialize, mutate, reserialize
+- For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`
+
+See [patchdb.md](patchdb.md) for `TypedDbWatch<T>` construction, API, and usage patterns.
+
+## i18n
+
+See [i18n-patterns.md](i18n-patterns.md) for internationalization key conventions and the `t!()` macro.
+
+## Rust Utilities & Patterns
+
+See [core-rust-patterns.md](core-rust-patterns.md) for common utilities (Invoke trait, Guard pattern, mount guards, Apply trait, etc.).
+
+## Related Documentation
+
+- [rpc-toolkit.md](rpc-toolkit.md) — JSON-RPC handler patterns
+- [patchdb.md](patchdb.md) — Patch-DB watch patterns and TypedDbWatch
+- [i18n-patterns.md](i18n-patterns.md) — Internationalization conventions
+- [core-rust-patterns.md](core-rust-patterns.md) — Common Rust utilities
+- [s9pk-structure.md](s9pk-structure.md) — S9PK package format
core/CLAUDE.md (new file, 27 lines)
@@ -0,0 +1,27 @@
+# Core — Rust Backend
+
+The Rust backend daemon for StartOS.
+
+## Architecture
+
+See [ARCHITECTURE.md](ARCHITECTURE.md) for binaries, modules, Patch-DB patterns, and related documentation.
+
+See [CONTRIBUTING.md](CONTRIBUTING.md) for how to add RPC endpoints, TS-exported types, and i18n keys.
+
+## Quick Reference
+
+```bash
+cargo check -p start-os # Type check
+make test-core # Run tests
+make ts-bindings # Regenerate TS types after changing #[ts(export)] structs
+cd sdk && make baseDist dist # Rebuild SDK after ts-bindings
+```
+
+## Operating Rules
+
+- Always run `cargo check -p start-os` after modifying Rust code
+- When adding RPC endpoints, follow the patterns in [rpc-toolkit.md](rpc-toolkit.md)
+- When modifying `#[ts(export)]` types, regenerate bindings and rebuild the SDK (see [ARCHITECTURE.md](../ARCHITECTURE.md#build-pipeline))
+- When adding i18n keys, add all 5 locales in `core/locales/i18n.yaml` (see [i18n-patterns.md](i18n-patterns.md))
+- When using DB watches, follow the `TypedDbWatch<T>` patterns in [patchdb.md](patchdb.md)
+- **Always use `.invoke(ErrorKind::...)` instead of `.status()` when running CLI commands** via `tokio::process::Command`. The `Invoke` trait (from `crate::util::Invoke`) captures stdout/stderr and checks exit codes properly. Using `.status()` leaks stderr directly to system logs, creating noise. For check-then-act patterns (e.g. `iptables -C`), use `.invoke(...).await.is_ok()` / `.is_err()` instead of `.status().await.map_or(false, |s| s.success())`.
core/CONTRIBUTING.md (new file, 49 lines)
@@ -0,0 +1,49 @@
+# Contributing to Core
+
+For general environment setup, cloning, and build system, see the root [CONTRIBUTING.md](../CONTRIBUTING.md).
+
+## Prerequisites
+
+- [Rust](https://rustup.rs) (nightly for formatting)
+- [rust-analyzer](https://rust-analyzer.github.io/) recommended
+- [Docker](https://docs.docker.com/get-docker/) (for cross-compilation via `rust-zig-builder` container)
+
+## Common Commands
+
+```bash
+cargo check -p start-os # Type check
+cargo test --features=test # Run tests (or: make test-core)
+make format # Format with nightly rustfmt
+cd core && cargo test <test_name> --features=test # Run a specific test
+```
+
+## Adding a New RPC Endpoint
+
+1. Define a params struct with `#[derive(Deserialize, Serialize)]`
+2. Choose a handler type (`from_fn_async` for most cases)
+3. Write the handler function: `async fn my_handler(ctx: RpcContext, params: MyParams) -> Result<MyResponse, Error>`
+4. Register it in the appropriate `ParentHandler` tree
+5. If params/response should be available in TypeScript, add `#[derive(TS)]` and `#[ts(export)]`
+
+See [rpc-toolkit.md](rpc-toolkit.md) for full handler patterns and all four handler types.
+
+## Adding TS-Exported Types
+
+When a Rust type needs to be available in TypeScript (for the web frontend or SDK):
+
+1. Add `ts_rs::TS` to the derive list and `#[ts(export)]` to the struct/enum
+2. Use `#[serde(rename_all = "camelCase")]` for JS-friendly field names
+3. For types that don't implement TS (like `DateTime<Utc>`, `exver::Version`), use `#[ts(type = "string")]` overrides
+4. For `u64` fields that should be JS `number` (not `bigint`), use `#[ts(type = "number")]`
+5. Run `make ts-bindings` to regenerate — files appear in `core/bindings/` then sync to `sdk/base/lib/osBindings/`
+6. Rebuild the SDK: `cd sdk && make baseDist dist`
+
+## Adding i18n Keys
+
+1. Add the key to `core/locales/i18n.yaml` with all 5 language translations
+2. Use the `t!("your.key.name")` macro in Rust code
+3. Follow existing namespace conventions — match the module path where the key is used
+4. Use kebab-case for multi-word segments
+5. Translations are validated at compile time
+
+See [i18n-patterns.md](i18n-patterns.md) for full conventions.
core/Cargo.lock (generated — 3387 changed lines, diff collapsed)

@@ -15,7 +15,7 @@ license = "MIT"
 name = "start-os"
 readme = "README.md"
 repository = "https://github.com/Start9Labs/start-os"
-version = "0.4.0-alpha.19" # VERSION_BUMP
+version = "0.4.0-alpha.20" # VERSION_BUMP
 
 [lib]
 name = "startos"
@@ -42,17 +42,6 @@ name = "tunnelbox"
 path = "src/main/tunnelbox.rs"
 
 [features]
-arti = [
-  "arti-client",
-  "safelog",
-  "tor-cell",
-  "tor-hscrypto",
-  "tor-hsservice",
-  "tor-keymgr",
-  "tor-llcrypto",
-  "tor-proto",
-  "tor-rtcompat",
-]
 beta = []
 console = ["console-subscriber", "tokio/tracing"]
 default = []
@@ -62,16 +51,6 @@ unstable = ["backtrace-on-stack-overflow"]
 
 [dependencies]
 aes = { version = "0.7.5", features = ["ctr"] }
-arti-client = { version = "0.33", features = [
-  "compression",
-  "ephemeral-keystore",
-  "experimental-api",
-  "onion-service-client",
-  "onion-service-service",
-  "rustls",
-  "static",
-  "tokio",
-], default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
 async-acme = { version = "0.6.0", git = "https://github.com/dr-bonez/async-acme.git", features = [
   "use_rustls",
   "use_tokio",
@@ -100,7 +79,6 @@ console-subscriber = { version = "0.5.0", optional = true }
 const_format = "0.2.34"
 cookie = "0.18.0"
 cookie_store = "0.22.0"
-curve25519-dalek = "4.1.3"
 der = { version = "0.7.9", features = ["derive", "pem"] }
 digest = "0.10.7"
 divrem = "1.0.0"
@@ -216,7 +194,6 @@ rpassword = "7.2.0"
 rust-argon2 = "3.0.0"
 rust-i18n = "3.1.5"
 rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
-safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
 semver = { version = "1.0.20", features = ["serde"] }
 serde = { version = "1.0", features = ["derive", "rc"] }
 serde_cbor = { package = "ciborium", version = "0.2.1" }
@@ -244,23 +221,6 @@ tokio-stream = { version = "0.1.14", features = ["io-util", "net", "sync"] }
 tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
 tokio-tungstenite = { version = "0.26.2", features = ["native-tls", "url"] }
 tokio-util = { version = "0.7.9", features = ["io"] }
-tor-cell = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
-tor-hscrypto = { version = "0.33", features = [
-  "full",
-], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
-tor-hsservice = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
-tor-keymgr = { version = "0.33", features = [
-  "ephemeral-keystore",
-], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
-tor-llcrypto = { version = "0.33", features = [
-  "full",
-], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
-tor-proto = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
-tor-rtcompat = { version = "0.33", features = [
-  "rustls",
-  "tokio",
-], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
-torut = "0.2.1"
 tower-service = "0.3.3"
 tracing = "0.1.39"
 tracing-error = "0.2.0"
@@ -22,9 +22,7 @@ several different names for different behavior:
 - `start-sdk`: This is a CLI tool that aids in building and packaging services
   you wish to deploy to StartOS
 
-## Questions
+## Documentation
 
-If you have questions about how various pieces of the backend system work. Open
-an issue and tag the following people
-
-- dr-bonez
+- [ARCHITECTURE.md](ARCHITECTURE.md) — Backend architecture, modules, and patterns
+- [CONTRIBUTING.md](CONTRIBUTING.md) — How to contribute to core
@@ -7,11 +7,11 @@ source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
 
-PROFILE=${PROFILE:-release}
+PROFILE=${PROFILE:-debug}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
 else
-  if [ "$PROFILE" != "debug"]; then
+  if [ "$PROFILE" != "debug" ]; then
     >&2 echo "Unknown profile $PROFILE: falling back to debug..."
     PROFILE=debug
   fi
@@ -38,7 +38,7 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
 fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_'
+rust-zig-builder cargo test --manifest-path=./core/Cargo.toml --lib $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_'
 if [ "$(ls -nd "core/bindings" | awk '{ print $3 }')" != "$UID" ]; then
   rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/bindings && chown -R $UID:$UID /usr/local/cargo"
 fi
@@ -197,6 +197,13 @@ setup.transferring-data:
   fr_FR: "Transfert de données"
   pl_PL: "Przesyłanie danych"
 
+setup.password-required:
+  en_US: "Password is required for fresh setup"
+  de_DE: "Passwort ist für die Ersteinrichtung erforderlich"
+  es_ES: "Se requiere contraseña para la configuración inicial"
+  fr_FR: "Le mot de passe est requis pour la première configuration"
+  pl_PL: "Hasło jest wymagane do nowej konfiguracji"
+
 # system.rs
 system.governor-not-available:
   en_US: "Governor %{governor} not available"
@@ -994,6 +1001,27 @@ disk.mount.binding:
   fr_FR: "Liaison de %{src} à %{dst}"
   pl_PL: "Wiązanie %{src} do %{dst}"
 
+hostname.empty:
+  en_US: "Hostname cannot be empty"
+  de_DE: "Der Hostname darf nicht leer sein"
+  es_ES: "El nombre de host no puede estar vacío"
+  fr_FR: "Le nom d'hôte ne peut pas être vide"
+  pl_PL: "Nazwa hosta nie może być pusta"
+
+hostname.invalid-character:
+  en_US: "Invalid character in hostname: %{char}"
+  de_DE: "Ungültiges Zeichen im Hostnamen: %{char}"
+  es_ES: "Carácter no válido en el nombre de host: %{char}"
+  fr_FR: "Caractère invalide dans le nom d'hôte : %{char}"
+  pl_PL: "Nieprawidłowy znak w nazwie hosta: %{char}"
+
+hostname.must-provide-name-or-hostname:
+  en_US: "Must provide at least one of: name, hostname"
+  de_DE: "Es muss mindestens eines angegeben werden: name, hostname"
+  es_ES: "Se debe proporcionar al menos uno de: name, hostname"
+  fr_FR: "Vous devez fournir au moins l'un des éléments suivants : name, hostname"
+  pl_PL: "Należy podać co najmniej jedno z: name, hostname"
+
 # init.rs
 init.running-preinit:
   en_US: "Running preinit.sh"
@@ -1243,6 +1271,21 @@ backup.target.cifs.target-not-found-id:
   fr_FR: "ID de cible de sauvegarde %{id} non trouvé"
   pl_PL: "Nie znaleziono ID celu kopii zapasowej %{id}"
 
+# service/effects/net/plugin.rs
+net.plugin.manifest-missing-plugin:
+  en_US: "manifest does not declare the \"%{plugin}\" plugin"
+  de_DE: "Manifest deklariert das Plugin \"%{plugin}\" nicht"
+  es_ES: "el manifiesto no declara el plugin \"%{plugin}\""
+  fr_FR: "le manifeste ne déclare pas le plugin \"%{plugin}\""
+  pl_PL: "manifest nie deklaruje wtyczki \"%{plugin}\""
+
+net.plugin.binding-not-found:
+  en_US: "binding not found: %{binding}"
+  de_DE: "Bindung nicht gefunden: %{binding}"
+  es_ES: "enlace no encontrado: %{binding}"
+  fr_FR: "liaison introuvable : %{binding}"
+  pl_PL: "powiązanie nie znalezione: %{binding}"
+
 # net/ssl.rs
 net.ssl.unreachable:
   en_US: "unreachable"
@@ -1790,6 +1833,28 @@ registry.package.remove-mirror.unauthorized:
   fr_FR: "Non autorisé"
   pl_PL: "Brak autoryzacji"
 
+# registry/package/index.rs
+registry.package.index.metadata-mismatch:
+  en_US: "package metadata mismatch: remove the existing version first, then re-add"
+  de_DE: "Paketmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
+  es_ES: "discrepancia de metadatos del paquete: elimine la versión existente primero, luego vuelva a agregarla"
+  fr_FR: "discordance des métadonnées du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
+  pl_PL: "niezgodność metadanych pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
+
+registry.package.index.icon-mismatch:
+  en_US: "package icon mismatch: remove the existing version first, then re-add"
+  de_DE: "Paketsymbol stimmt nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
+  es_ES: "discrepancia del icono del paquete: elimine la versión existente primero, luego vuelva a agregarla"
+  fr_FR: "discordance de l'icône du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
+  pl_PL: "niezgodność ikony pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
+
+registry.package.index.dependency-metadata-mismatch:
+  en_US: "dependency metadata mismatch: remove the existing version first, then re-add"
+  de_DE: "Abhängigkeitsmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
+  es_ES: "discrepancia de metadatos de dependencia: elimine la versión existente primero, luego vuelva a agregarla"
+  fr_FR: "discordance des métadonnées de dépendance : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
+  pl_PL: "niezgodność metadanych zależności: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
+
 # registry/package/get.rs
 registry.package.get.version-not-found:
   en_US: "Could not find a version of %{id} that satisfies %{version}"
@@ -3087,7 +3152,7 @@ help.arg.smtp-from:
   fr_FR: "Adresse de l'expéditeur"
   pl_PL: "Adres nadawcy e-mail"
 
-help.arg.smtp-login:
+help.arg.smtp-username:
   en_US: "SMTP authentication username"
   de_DE: "SMTP-Authentifizierungsbenutzername"
   es_ES: "Nombre de usuario de autenticación SMTP"
@@ -3108,13 +3173,20 @@ help.arg.smtp-port:
   fr_FR: "Port du serveur SMTP"
   pl_PL: "Port serwera SMTP"
 
-help.arg.smtp-server:
+help.arg.smtp-host:
   en_US: "SMTP server hostname"
   de_DE: "SMTP-Server-Hostname"
   es_ES: "Nombre de host del servidor SMTP"
   fr_FR: "Nom d'hôte du serveur SMTP"
   pl_PL: "Nazwa hosta serwera SMTP"
 
+help.arg.smtp-security:
+  en_US: "Connection security mode (starttls or tls)"
+  de_DE: "Verbindungssicherheitsmodus (starttls oder tls)"
+  es_ES: "Modo de seguridad de conexión (starttls o tls)"
+  fr_FR: "Mode de sécurité de connexion (starttls ou tls)"
+  pl_PL: "Tryb zabezpieczeń połączenia (starttls lub tls)"
+
 help.arg.smtp-to:
   en_US: "Email recipient address"
   de_DE: "E-Mail-Empfängeradresse"
@@ -3612,6 +3684,13 @@ help.arg.s9pk-file-path:
   fr_FR: "Chemin vers le fichier de paquet s9pk"
   pl_PL: "Ścieżka do pliku pakietu s9pk"
 
+help.arg.s9pk-file-paths:
+  en_US: "Paths to s9pk package files"
+  de_DE: "Pfade zu s9pk-Paketdateien"
+  es_ES: "Rutas a los archivos de paquete s9pk"
+  fr_FR: "Chemins vers les fichiers de paquet s9pk"
+  pl_PL: "Ścieżki do plików pakietów s9pk"
+
 help.arg.session-ids:
   en_US: "Session identifiers"
   de_DE: "Sitzungskennungen"
@@ -3935,6 +4014,13 @@ about.allow-gateway-infer-inbound-access-from-wan:
   fr_FR: "Permettre à cette passerelle de déduire si elle a un accès entrant depuis le WAN en fonction de son adresse IPv4"
   pl_PL: "Pozwól tej bramce wywnioskować, czy ma dostęp przychodzący z WAN na podstawie adresu IPv4"
 
+about.apply-available-update:
+  en_US: "Apply available update"
+  de_DE: "Verfügbares Update anwenden"
+  es_ES: "Aplicar actualización disponible"
+  fr_FR: "Appliquer la mise à jour disponible"
+  pl_PL: "Zastosuj dostępną aktualizację"
+
 about.calculate-blake3-hash-for-file:
   en_US: "Calculate blake3 hash for a file"
   de_DE: "Blake3-Hash für eine Datei berechnen"
@@ -3949,6 +4035,20 @@ about.cancel-install-package:
   fr_FR: "Annuler l'installation d'un paquet"
   pl_PL: "Anuluj instalację pakietu"
 
+about.check-dns-configuration:
+  en_US: "Check DNS configuration for a gateway"
+  de_DE: "DNS-Konfiguration für ein Gateway prüfen"
+  es_ES: "Verificar la configuración DNS de un gateway"
+  fr_FR: "Vérifier la configuration DNS d'une passerelle"
+  pl_PL: "Sprawdź konfigurację DNS bramy"
+
+about.check-for-updates:
+  en_US: "Check for available updates"
+  de_DE: "Nach verfügbaren Updates suchen"
+  es_ES: "Buscar actualizaciones disponibles"
+  fr_FR: "Vérifier les mises à jour disponibles"
+  pl_PL: "Sprawdź dostępne aktualizacje"
+
 about.check-update-startos:
   en_US: "Check a given registry for StartOS updates and update if available"
   de_DE: "Ein bestimmtes Registry auf StartOS-Updates prüfen und bei Verfügbarkeit aktualisieren"
@@ -4887,6 +4987,13 @@ about.publish-s9pk:
   fr_FR: "Publier s9pk dans le bucket S3 et indexer dans le registre"
   pl_PL: "Opublikuj s9pk do bucketu S3 i zindeksuj w rejestrze"
 
+about.select-s9pk-for-device:
+  en_US: "Select the best compatible s9pk for a target device"
+  de_DE: "Das beste kompatible s9pk für ein Zielgerät auswählen"
+  es_ES: "Seleccionar el s9pk más compatible para un dispositivo destino"
+  fr_FR: "Sélectionner le meilleur s9pk compatible pour un appareil cible"
+  pl_PL: "Wybierz najlepiej kompatybilny s9pk dla urządzenia docelowego"
+
 about.rebuild-service-container:
   en_US: "Rebuild service container"
   de_DE: "Dienst-Container neu erstellen"
@@ -5139,6 +5246,13 @@ about.set-country:
   fr_FR: "Définir le pays"
   pl_PL: "Ustaw kraj"
 
+about.set-hostname:
+  en_US: "Set the server hostname"
+  de_DE: "Den Server-Hostnamen festlegen"
+  es_ES: "Establecer el nombre de host del servidor"
+  fr_FR: "Définir le nom d'hôte du serveur"
+  pl_PL: "Ustaw nazwę hosta serwera"
+
 about.set-gateway-enabled-for-binding:
   en_US: "Set gateway enabled for binding"
   de_DE: "Gateway für Bindung aktivieren"
core/patchdb.md (new file, 105 lines)
@@ -0,0 +1,105 @@
+# Patch-DB Patterns
+
+## Model<T> and HasModel
+
+Types stored in the database derive `HasModel`, which generates typed accessor methods on `Model<T>`:
+
+```rust
+#[derive(Debug, Deserialize, Serialize, HasModel)]
+#[serde(rename_all = "camelCase")]
+#[model = "Model<Self>"]
+pub struct ServerInfo {
+    pub version: Version,
+    pub network: NetworkInfo,
+    // ...
+}
+```
+
+**Generated accessors** (one per field):
+- `as_version()` — `&Model<Version>`
+- `as_version_mut()` — `&mut Model<Version>`
+- `into_version()` — `Model<Version>`
+
+**`Model<T>` APIs:**
+- `.de()` — Deserialize to `T`
+- `.ser(&value)` — Serialize from `T`
+- `.mutate(|v| ...)` — Deserialize, mutate, reserialize
+- For maps: `.keys()`, `.as_idx(&key)`, `.insert()`, `.remove()`, `.contains_key()`
+
+## Database Access
+
+```rust
+// Read-only snapshot
+let snap = db.peek().await;
+let version = snap.as_public().as_server_info().as_version().de()?;
+
+// Atomic mutation
+db.mutate(|db| {
+    db.as_public_mut().as_server_info_mut().as_version_mut().ser(&new_version)?;
+    Ok(())
+}).await;
+```
+
+## TypedDbWatch<T>
+
+Watch a JSON pointer path for changes and deserialize as a typed value. Requires `T: HasModel`.
+
+### Construction
+
+```rust
+use patch_db::json_ptr::JsonPointer;
+
+let ptr: JsonPointer = "/public/serverInfo".parse().unwrap();
+let mut watch = db.watch(ptr).await.typed::<ServerInfo>();
+```
+
+### API
+
+- `watch.peek()?.de()?` — Get current value as `T`
+- `watch.changed().await?` — Wait until the watched path changes
+- `watch.peek()?.as_field().de()?` — Access nested fields via `HasModel` accessors
+
+### Usage Patterns
+
+**Wait for a condition, then proceed:**
+
+```rust
+// Wait for DB version to match current OS version
+let current = Current::default().semver();
+let mut watch = db
+    .watch("/public/serverInfo".parse().unwrap())
+    .await
+    .typed::<ServerInfo>();
+loop {
+    let server_info = watch.peek()?.de()?;
+    if server_info.version == current {
+        break;
+    }
+    watch.changed().await?;
+}
+```
+
+**React to changes in a loop:**
+
+```rust
+// From net_controller.rs — react to host changes
+let mut watch = db
+    .watch("/public/serverInfo/network/host".parse().unwrap())
+    .await
+    .typed::<Host>();
+loop {
+    if let Err(e) = watch.changed().await {
+        tracing::error!("DB watch disconnected: {e}");
+        break;
+    }
+    let host = watch.peek()?.de()?;
+    // ... process host ...
+}
+```
+
+### Real Examples
+
+- `net_controller.rs:469` — Watch `Hosts` for package network changes
+- `net_controller.rs:493` — Watch `Host` for main UI network changes
+- `service_actor.rs:37` — Watch `StatusInfo` for service state transitions
+- `gateway.rs:1212` — Wait for DB migrations to complete before syncing
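The `peek`/`changed` contract documented above can be sketched without patch-db itself. The stand-in below is hypothetical (the `Watch`/`Writer` names and a synchronous `std::sync::Condvar` are illustration only, not the real async `TypedDbWatch` API): a version-counted cell where `peek` records the version it observed and `changed` blocks until a writer bumps past it.

```rust
use std::sync::{Arc, Condvar, Mutex};

// Hypothetical stand-in for a typed DB watch: the shared state is the
// current value plus a monotonically increasing version counter.
pub struct Watch<T> {
    shared: Arc<(Mutex<(T, u64)>, Condvar)>,
    seen: u64, // last version this watcher observed
}

pub struct Writer<T> {
    shared: Arc<(Mutex<(T, u64)>, Condvar)>,
}

impl<T: Clone> Watch<T> {
    pub fn new(value: T) -> (Self, Writer<T>) {
        let shared = Arc::new((Mutex::new((value, 0)), Condvar::new()));
        (
            Watch { shared: shared.clone(), seen: 0 },
            Writer { shared },
        )
    }

    // Read the current value, remembering the version we saw so that a
    // write landing between peek() and changed() is never missed.
    pub fn peek(&mut self) -> T {
        let guard = self.shared.0.lock().unwrap();
        self.seen = guard.1;
        guard.0.clone()
    }

    // Block until the version advances past the last one we observed.
    pub fn changed(&mut self) {
        let mut guard = self.shared.0.lock().unwrap();
        while guard.1 == self.seen {
            guard = self.shared.1.wait(guard).unwrap();
        }
        self.seen = guard.1;
    }
}

impl<T> Writer<T> {
    pub fn set(&self, value: T) {
        let mut guard = self.shared.0.lock().unwrap();
        guard.0 = value;
        guard.1 += 1; // bump version so waiting watchers wake up
        self.shared.1.notify_all();
    }
}
```

The `loop { peek; if done, break; changed() }` shape mirrors the "wait for a condition" pattern above; recording the observed version inside `peek` is what prevents a lost wakeup between the check and the wait.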
@@ -21,6 +21,14 @@ pub async fn my_handler(ctx: RpcContext, params: MyParams) -> Result<MyResponse,
 from_fn_async(my_handler)
 ```
+
+If a handler takes no params, simply omit the params argument entirely (no need for `_: Empty`):
+
+```rust
+pub async fn no_params_handler(ctx: RpcContext) -> Result<MyResponse, Error> {
+    // ...
+}
+```

 ### `from_fn_async_local` - Non-thread-safe async handlers
 For async functions that are not `Send` (cannot be safely moved between threads). Use when working with non-thread-safe types.

@@ -181,9 +189,9 @@ pub struct MyParams {

 ### Adding a New RPC Endpoint

-1. Define params struct with `Deserialize, Serialize, Parser, TS`
+1. Define params struct with `Deserialize, Serialize, Parser, TS` (skip if no params needed)
 2. Choose handler type based on sync/async and thread-safety
-3. Write handler function taking `(Context, Params) -> Result<Response, Error>`
+3. Write handler function taking `(Context, Params) -> Result<Response, Error>` (omit Params if none needed)
 4. Add to parent handler with appropriate extensions (display modifiers before `with_about`)
 5. TypeScript types auto-generated via `make ts-bindings`
@@ -6,9 +6,8 @@ use openssl::pkey::{PKey, Private};
 use openssl::x509::X509;

 use crate::db::model::DatabaseModel;
-use crate::hostname::{Hostname, generate_hostname, generate_id};
+use crate::hostname::{ServerHostnameInfo, generate_hostname, generate_id};
 use crate::net::ssl::{gen_nistp256, make_root_cert};
-use crate::net::tor::TorSecretKey;
 use crate::prelude::*;
 use crate::util::serde::Pem;

@@ -24,21 +23,27 @@ fn hash_password(password: &str) -> Result<String, Error> {
 #[derive(Clone)]
 pub struct AccountInfo {
     pub server_id: String,
-    pub hostname: Hostname,
+    pub hostname: ServerHostnameInfo,
     pub password: String,
-    pub tor_keys: Vec<TorSecretKey>,
     pub root_ca_key: PKey<Private>,
     pub root_ca_cert: X509,
     pub ssh_key: ssh_key::PrivateKey,
     pub developer_key: ed25519_dalek::SigningKey,
 }
 impl AccountInfo {
-    pub fn new(password: &str, start_time: SystemTime) -> Result<Self, Error> {
+    pub fn new(
+        password: &str,
+        start_time: SystemTime,
+        hostname: Option<ServerHostnameInfo>,
+    ) -> Result<Self, Error> {
         let server_id = generate_id();
-        let hostname = generate_hostname();
-        let tor_key = vec![TorSecretKey::generate()];
+        let hostname = if let Some(h) = hostname {
+            h
+        } else {
+            ServerHostnameInfo::from_hostname(generate_hostname())
+        };
         let root_ca_key = gen_nistp256()?;
-        let root_ca_cert = make_root_cert(&root_ca_key, &hostname, start_time)?;
+        let root_ca_cert = make_root_cert(&root_ca_key, &hostname.hostname, start_time)?;
         let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
             &mut ssh_key::rand_core::OsRng::default(),
         ));
@@ -48,7 +53,6 @@ impl AccountInfo {
             server_id,
             hostname,
             password: hash_password(password)?,
-            tor_keys: tor_key,
             root_ca_key,
             root_ca_cert,
             ssh_key,
@@ -58,20 +62,9 @@ impl AccountInfo {

     pub fn load(db: &DatabaseModel) -> Result<Self, Error> {
         let server_id = db.as_public().as_server_info().as_id().de()?;
-        let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
+        let hostname = ServerHostnameInfo::load(db.as_public().as_server_info())?;
         let password = db.as_private().as_password().de()?;
         let key_store = db.as_private().as_key_store();
-        let tor_addrs = db
-            .as_public()
-            .as_server_info()
-            .as_network()
-            .as_host()
-            .as_onions()
-            .de()?;
-        let tor_keys = tor_addrs
-            .into_iter()
-            .map(|tor_addr| key_store.as_onion().get_key(&tor_addr))
-            .collect::<Result<_, _>>()?;
         let cert_store = key_store.as_local_certs();
         let root_ca_key = cert_store.as_root_key().de()?.0;
         let root_ca_cert = cert_store.as_root_cert().de()?.0;
@@ -82,7 +75,6 @@ impl AccountInfo {
             server_id,
             hostname,
             password,
-            tor_keys,
             root_ca_key,
             root_ca_cert,
             ssh_key,
@@ -93,21 +85,10 @@ impl AccountInfo {
     pub fn save(&self, db: &mut DatabaseModel) -> Result<(), Error> {
         let server_info = db.as_public_mut().as_server_info_mut();
         server_info.as_id_mut().ser(&self.server_id)?;
-        server_info.as_hostname_mut().ser(&self.hostname.0)?;
+        self.hostname.save(server_info)?;
         server_info
             .as_pubkey_mut()
             .ser(&self.ssh_key.public_key().to_openssh()?)?;
-        server_info
-            .as_network_mut()
-            .as_host_mut()
-            .as_onions_mut()
-            .ser(
-                &self
-                    .tor_keys
-                    .iter()
-                    .map(|tor_key| tor_key.onion_address())
-                    .collect(),
-            )?;
         server_info.as_password_hash_mut().ser(&self.password)?;
         db.as_private_mut().as_password_mut().ser(&self.password)?;
         db.as_private_mut()
@@ -117,9 +98,6 @@ impl AccountInfo {
             .as_developer_key_mut()
             .ser(Pem::new_ref(&self.developer_key))?;
         let key_store = db.as_private_mut().as_key_store_mut();
-        for tor_key in &self.tor_keys {
-            key_store.as_onion_mut().insert_key(tor_key)?;
-        }
         let cert_store = key_store.as_local_certs_mut();
         if cert_store.as_root_cert().de()?.0 != self.root_ca_cert {
             cert_store
@@ -145,14 +123,8 @@ impl AccountInfo {

     pub fn hostnames(&self) -> impl IntoIterator<Item = InternedString> + Send + '_ {
         [
-            self.hostname.no_dot_host_name(),
-            self.hostname.local_domain_name(),
+            (*self.hostname.hostname).clone(),
+            self.hostname.hostname.local_domain_name(),
         ]
-        .into_iter()
-        .chain(
-            self.tor_keys
-                .iter()
-                .map(|k| InternedString::from_display(&k.onion_address())),
-        )
     }
 }
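The new `AccountInfo::new` signature above makes the hostname caller-supplied, falling back to a generated one when absent. A minimal sketch of that pattern (function and parameter names are hypothetical, not the real `ServerHostnameInfo` API):

```rust
// Hypothetical simplification: a caller-supplied hostname wins; otherwise
// fall back to a generated one (the closure stands in for
// ServerHostnameInfo::from_hostname(generate_hostname())).
fn resolve_hostname(supplied: Option<String>, generate: impl FnOnce() -> String) -> String {
    supplied.unwrap_or_else(generate)
}
```

Using `unwrap_or_else` keeps the generator lazy, so nothing is generated when the caller already provided a hostname.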
@@ -67,6 +67,10 @@ pub struct GetActionInputParams {
     pub package_id: PackageId,
     #[arg(help = "help.arg.action-id")]
     pub action_id: ActionId,
+    #[ts(type = "Record<string, unknown> | null")]
+    #[serde(default)]
+    #[arg(skip)]
+    pub prefill: Option<Value>,
 }

 #[instrument(skip_all)]
@@ -75,6 +79,7 @@ pub async fn get_action_input(
     GetActionInputParams {
         package_id,
         action_id,
+        prefill,
     }: GetActionInputParams,
 ) -> Result<Option<ActionInput>, Error> {
     ctx.services
@@ -82,7 +87,7 @@ pub async fn get_action_input(
         .await
         .as_ref()
         .or_not_found(lazy_format!("Manager for {}", package_id))?
-        .get_action_input(Guid::new(), action_id)
+        .get_action_input(Guid::new(), action_id, prefill.unwrap_or(Value::Null))
         .await
 }

@@ -271,6 +276,7 @@ pub fn display_action_result<T: Serialize>(
 }

 #[derive(Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct RunActionParams {
     pub package_id: PackageId,
@@ -362,6 +368,7 @@ pub async fn run_action(
 }

 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ClearTaskParams {
@@ -418,6 +418,7 @@ impl AsLogoutSessionId for KillSessionId {
 }

 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct KillParams {
@@ -435,6 +436,7 @@ pub async fn kill<C: SessionAuthContext>(
 }

 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ResetPasswordParams {
@@ -30,6 +30,7 @@ use crate::util::serde::IoFormat;
 use crate::version::VersionT;

 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct BackupParams {
@@ -270,9 +271,9 @@ async fn perform_backup(
         package_backups.insert(
             id.clone(),
             PackageBackupInfo {
-                os_version: manifest.as_os_version().de()?,
+                os_version: manifest.as_metadata().as_os_version().de()?,
                 version: manifest.as_version().de()?,
-                title: manifest.as_title().de()?,
+                title: manifest.as_metadata().as_title().de()?,
                 timestamp: Utc::now(),
             },
         );
@@ -337,7 +338,7 @@ async fn perform_backup(
     let timestamp = Utc::now();

     backup_guard.unencrypted_metadata.version = crate::version::Current::default().semver().into();
-    backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.clone());
+    backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.hostname.clone());
     backup_guard.unencrypted_metadata.timestamp = timestamp.clone();
     backup_guard.metadata.version = crate::version::Current::default().semver().into();
     backup_guard.metadata.timestamp = Some(timestamp);
@@ -2,6 +2,7 @@ use std::collections::BTreeMap;

 use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
+use ts_rs::TS;

 use crate::PackageId;
 use crate::context::CliContext;
@@ -13,19 +14,22 @@ pub mod os;
 pub mod restore;
 pub mod target;

-#[derive(Debug, Deserialize, Serialize)]
+#[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 pub struct BackupReport {
     server: ServerBackupReport,
     packages: BTreeMap<PackageId, PackageBackupReport>,
 }

-#[derive(Debug, Deserialize, Serialize)]
+#[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 pub struct ServerBackupReport {
     attempted: bool,
     error: Option<String>,
 }

-#[derive(Debug, Deserialize, Serialize)]
+#[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 pub struct PackageBackupReport {
     pub error: Option<String>,
 }
@@ -6,10 +6,8 @@ use serde::{Deserialize, Serialize};
 use ssh_key::private::Ed25519Keypair;

 use crate::account::AccountInfo;
-use crate::hostname::{Hostname, generate_hostname, generate_id};
+use crate::hostname::{ServerHostname, ServerHostnameInfo, generate_hostname, generate_id};
-use crate::net::tor::TorSecretKey;
 use crate::prelude::*;
-use crate::util::crypto::ed25519_expand_key;
 use crate::util::serde::{Base32, Base64, Pem};

 pub struct OsBackup {
@@ -29,10 +27,12 @@ impl<'de> Deserialize<'de> for OsBackup {
                 .map_err(serde::de::Error::custom)?,
             1 => patch_db::value::from_value::<OsBackupV1>(tagged.rest)
                 .map_err(serde::de::Error::custom)?
-                .project(),
+                .project()
+                .map_err(serde::de::Error::custom)?,
             2 => patch_db::value::from_value::<OsBackupV2>(tagged.rest)
                 .map_err(serde::de::Error::custom)?
-                .project(),
+                .project()
+                .map_err(serde::de::Error::custom)?,
             v => {
                 return Err(serde::de::Error::custom(&format!(
                     "Unknown backup version {v}"
@@ -77,7 +77,7 @@ impl OsBackupV0 {
         Ok(OsBackup {
             account: AccountInfo {
                 server_id: generate_id(),
-                hostname: generate_hostname(),
+                hostname: ServerHostnameInfo::from_hostname(generate_hostname()),
                 password: Default::default(),
                 root_ca_key: self.root_ca_key.0,
                 root_ca_cert: self.root_ca_cert.0,
@@ -85,10 +85,6 @@ impl OsBackupV0 {
                     &mut ssh_key::rand_core::OsRng::default(),
                     ssh_key::Algorithm::Ed25519,
                 )?,
-                tor_keys: TorSecretKey::from_bytes(self.tor_key.0)
-                    .ok()
-                    .into_iter()
-                    .collect(),
                 developer_key: ed25519_dalek::SigningKey::generate(
                     &mut ssh_key::rand_core::OsRng::default(),
                 ),
@@ -110,23 +106,19 @@ struct OsBackupV1 {
     ui: Value, // JSON Value
 }
 impl OsBackupV1 {
-    fn project(self) -> OsBackup {
-        OsBackup {
+    fn project(self) -> Result<OsBackup, Error> {
+        Ok(OsBackup {
             account: AccountInfo {
                 server_id: self.server_id,
-                hostname: Hostname(self.hostname),
+                hostname: ServerHostnameInfo::from_hostname(ServerHostname::new(self.hostname)?),
                 password: Default::default(),
                 root_ca_key: self.root_ca_key.0,
                 root_ca_cert: self.root_ca_cert.0,
                 ssh_key: ssh_key::PrivateKey::from(Ed25519Keypair::from_seed(&self.net_key.0)),
-                tor_keys: TorSecretKey::from_bytes(ed25519_expand_key(&self.net_key.0))
-                    .ok()
-                    .into_iter()
-                    .collect(),
                 developer_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
             },
             ui: self.ui,
-        }
+        })
     }
 }

@@ -140,34 +132,31 @@ struct OsBackupV2 {
     root_ca_key: Pem<PKey<Private>>, // PEM Encoded OpenSSL Key
     root_ca_cert: Pem<X509>, // PEM Encoded OpenSSL X509 Certificate
     ssh_key: Pem<ssh_key::PrivateKey>, // PEM Encoded OpenSSH Key
-    tor_keys: Vec<TorSecretKey>, // Base64 Encoded Ed25519 Expanded Secret Key
     compat_s9pk_key: Pem<ed25519_dalek::SigningKey>, // PEM Encoded ED25519 Key
     ui: Value, // JSON Value
 }
 impl OsBackupV2 {
-    fn project(self) -> OsBackup {
-        OsBackup {
+    fn project(self) -> Result<OsBackup, Error> {
+        Ok(OsBackup {
             account: AccountInfo {
                 server_id: self.server_id,
-                hostname: Hostname(self.hostname),
+                hostname: ServerHostnameInfo::from_hostname(ServerHostname::new(self.hostname)?),
                 password: Default::default(),
                 root_ca_key: self.root_ca_key.0,
                 root_ca_cert: self.root_ca_cert.0,
                 ssh_key: self.ssh_key.0,
-                tor_keys: self.tor_keys,
                 developer_key: self.compat_s9pk_key.0,
             },
             ui: self.ui,
-        }
+        })
     }
     fn unproject(backup: &OsBackup) -> Self {
         Self {
             server_id: backup.account.server_id.clone(),
-            hostname: backup.account.hostname.0.clone(),
+            hostname: (*backup.account.hostname.hostname).clone(),
             root_ca_key: Pem(backup.account.root_ca_key.clone()),
             root_ca_cert: Pem(backup.account.root_ca_cert.clone()),
             ssh_key: Pem(backup.account.ssh_key.clone()),
-            tor_keys: backup.account.tor_keys.clone(),
             compat_s9pk_key: Pem(backup.account.developer_key.clone()),
             ui: backup.ui.clone(),
         }
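The hunks above make each backup version's `project()` fallible, so hostname validation errors surface through deserialization instead of being silently accepted. A toy sketch of the same shape (types and the `String` error are hypothetical stand-ins, not the real `OsBackup`/`Error` types):

```rust
// Current in-memory form.
struct OsBackup {
    hostname: String,
}

// An older on-disk form that must be projected forward.
struct OsBackupV1 {
    hostname: String,
}

impl OsBackupV1 {
    // Projection is fallible: constructing the validated hostname type
    // can reject bad data rather than passing it through unchecked.
    fn project(self) -> Result<OsBackup, String> {
        if self.hostname.is_empty() {
            return Err("invalid hostname".to_string());
        }
        Ok(OsBackup { hostname: self.hostname })
    }
}
```

In the real diff, the deserializer then maps the projection error through `serde::de::Error::custom`, so a corrupt backup fails to load with a descriptive message.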
@@ -17,6 +17,7 @@ use crate::db::model::Database;
 use crate::disk::mount::backup::BackupMountGuard;
 use crate::disk::mount::filesystem::ReadWrite;
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
+use crate::hostname::ServerHostnameInfo;
 use crate::init::init;
 use crate::prelude::*;
 use crate::progress::ProgressUnits;
@@ -30,6 +31,7 @@ use crate::{PLATFORM, PackageId};
 #[derive(Deserialize, Serialize, Parser, TS)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
+#[ts(export)]
 pub struct RestorePackageParams {
     #[arg(help = "help.arg.package-ids")]
     pub ids: Vec<PackageId>,
@@ -84,11 +86,12 @@ pub async fn restore_packages_rpc(
 pub async fn recover_full_server(
     ctx: &SetupContext,
     disk_guid: InternedString,
-    password: String,
+    password: Option<String>,
     recovery_source: TmpMountGuard,
     server_id: &str,
     recovery_password: &str,
     kiosk: Option<bool>,
+    hostname: Option<ServerHostnameInfo>,
     SetupExecuteProgress {
         init_phases,
         restore_phase,
@@ -107,12 +110,18 @@ pub async fn recover_full_server(
             .with_ctx(|_| (ErrorKind::Filesystem, os_backup_path.display().to_string()))?,
     )?;

+    if let Some(password) = password {
         os_backup.account.password = argon2::hash_encoded(
             password.as_bytes(),
             &rand::random::<[u8; 16]>()[..],
             &argon2::Config::rfc9106_low_mem(),
         )
         .with_kind(ErrorKind::PasswordHashGeneration)?;
+    }
+
+    if let Some(h) = hostname {
+        os_backup.account.hostname = h;
+    }
+
     let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
     sync_kiosk(kiosk).await?;
@@ -182,7 +191,7 @@ pub async fn recover_full_server(

     Ok((
         SetupResult {
-            hostname: os_backup.account.hostname,
+            hostname: os_backup.account.hostname.hostname,
             root_ca: Pem(os_backup.account.root_ca_cert),
             needs_restart: ctx.install_rootfs.peek(|a| a.is_some()),
         },
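After this change, `recover_full_server` treats both the password and the hostname as optional overrides: an absent value keeps whatever the backup contained. A reduced sketch of that behavior (the `Account` struct and `rehash` closure are hypothetical; `rehash` stands in for the argon2 hashing step):

```rust
struct Account {
    password: String,
    hostname: String,
}

// Apply caller-supplied overrides; when None, the values restored
// from the backup are left untouched.
fn apply_overrides(
    account: &mut Account,
    password: Option<&str>,
    hostname: Option<String>,
    rehash: impl Fn(&str) -> String,
) {
    if let Some(p) = password {
        account.password = rehash(p);
    }
    if let Some(h) = hostname {
        account.hostname = h;
    }
}
```

Note that only a *new* password is rehashed; when no password is supplied, the hash stored in the backup survives the restore as-is.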
@@ -36,7 +36,8 @@ impl Map for CifsTargets {
     }
 }
 
-#[derive(Debug, Deserialize, Serialize)]
+#[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct CifsBackupTarget {
     hostname: String,
@@ -72,9 +73,10 @@ pub fn cifs<C: Context>() -> ParentHandler<C> {
 }
 
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct AddParams {
+pub struct CifsAddParams {
     #[arg(help = "help.arg.cifs-hostname")]
     pub hostname: String,
     #[arg(help = "help.arg.cifs-path")]
@@ -87,12 +89,12 @@ pub struct AddParams {
 
 pub async fn add(
     ctx: RpcContext,
-    AddParams {
+    CifsAddParams {
         hostname,
         path,
         username,
         password,
-    }: AddParams,
+    }: CifsAddParams,
 ) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
     let cifs = Cifs {
         hostname,
@@ -131,9 +133,10 @@ pub async fn add(
 }
 
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct UpdateParams {
+pub struct CifsUpdateParams {
     #[arg(help = "help.arg.backup-target-id")]
     pub id: BackupTargetId,
     #[arg(help = "help.arg.cifs-hostname")]
@@ -148,13 +151,13 @@ pub struct UpdateParams {
 
 pub async fn update(
     ctx: RpcContext,
-    UpdateParams {
+    CifsUpdateParams {
         id,
         hostname,
         path,
         username,
         password,
-    }: UpdateParams,
+    }: CifsUpdateParams,
 ) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
     let id = if let BackupTargetId::Cifs { id } = id {
         id
@@ -207,14 +210,18 @@ pub async fn update(
 }
 
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
-pub struct RemoveParams {
+pub struct CifsRemoveParams {
     #[arg(help = "help.arg.backup-target-id")]
     pub id: BackupTargetId,
 }
 
-pub async fn remove(ctx: RpcContext, RemoveParams { id }: RemoveParams) -> Result<(), Error> {
+pub async fn remove(
+    ctx: RpcContext,
+    CifsRemoveParams { id }: CifsRemoveParams,
+) -> Result<(), Error> {
     let id = if let BackupTargetId::Cifs { id } = id {
         id
     } else {
@@ -34,7 +34,8 @@ use crate::util::{FromStrParser, VersionString};
 
 pub mod cifs;
 
-#[derive(Debug, Deserialize, Serialize)]
+#[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(tag = "type")]
 #[serde(rename_all = "camelCase")]
 pub enum BackupTarget {
@@ -49,7 +50,7 @@ pub enum BackupTarget {
 }
 
 #[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, TS)]
-#[ts(type = "string")]
+#[ts(export, type = "string")]
 pub enum BackupTargetId {
     Disk { logicalname: PathBuf },
     Cifs { id: u32 },
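`BackupTargetId` is exported to TypeScript as a plain `string` (note `#[ts(type = "string")]` and the manual `Serialize` impl referenced in the next hunk), so the Rust side must round-trip the enum through a string form. A std-only sketch of the `Display`/`FromStr` pair such an impl typically rides on; the `cifs-<id>` spelling here is an assumption for illustration, not the codebase's actual encoding:

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum BackupTargetId {
    Cifs { id: u32 },
}

impl fmt::Display for BackupTargetId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Hypothetical string encoding for the example.
            BackupTargetId::Cifs { id } => write!(f, "cifs-{id}"),
        }
    }
}

impl FromStr for BackupTargetId {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        s.strip_prefix("cifs-")
            .and_then(|id| id.parse().ok())
            .map(|id| BackupTargetId::Cifs { id })
            .ok_or_else(|| format!("invalid backup target id: {s}"))
    }
}

fn main() {
    let id = BackupTargetId::Cifs { id: 3 };
    assert_eq!(id.to_string(), "cifs-3");
    assert_eq!("cifs-3".parse::<BackupTargetId>().unwrap(), id);
}
```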
@@ -111,6 +112,7 @@ impl Serialize for BackupTargetId {
 }
 
 #[derive(Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(tag = "type")]
 #[serde(rename_all = "camelCase")]
 pub enum BackupTargetFS {
@@ -210,20 +212,26 @@ pub async fn list(ctx: RpcContext) -> Result<BTreeMap<BackupTargetId, BackupTarg
         .collect())
 }
 
-#[derive(Clone, Debug, Default, Deserialize, Serialize)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct BackupInfo {
+    #[ts(type = "string")]
     pub version: Version,
+    #[ts(type = "string | null")]
     pub timestamp: Option<DateTime<Utc>>,
     pub package_backups: BTreeMap<PackageId, PackageBackupInfo>,
 }
 
-#[derive(Clone, Debug, Deserialize, Serialize)]
+#[derive(Clone, Debug, Deserialize, Serialize, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct PackageBackupInfo {
     pub title: InternedString,
     pub version: VersionString,
+    #[ts(type = "string")]
     pub os_version: Version,
+    #[ts(type = "string")]
     pub timestamp: DateTime<Utc>,
 }
 
@@ -265,6 +273,7 @@ fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) -> Re
 }
 
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct InfoParams {
@@ -387,6 +396,7 @@ pub async fn mount(
 }
 
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct UmountParams {
@@ -9,7 +9,7 @@ use crate::disk::fsck::RepairStrategy;
 use crate::disk::main::DEFAULT_PASSWORD;
 use crate::firmware::{check_for_firmware_update, update_firmware};
 use crate::init::{InitPhases, STANDBY_MODE_PATH};
-use crate::net::gateway::UpgradableListener;
+use crate::net::gateway::WildcardListener;
 use crate::net::web_server::WebServer;
 use crate::prelude::*;
 use crate::progress::FullProgressTracker;
@@ -19,7 +19,7 @@ use crate::{DATA_DIR, PLATFORM};
 
 #[instrument(skip_all)]
 async fn setup_or_init(
-    server: &mut WebServer<UpgradableListener>,
+    server: &mut WebServer<WildcardListener>,
     config: &ServerConfig,
 ) -> Result<Result<(RpcContext, FullProgressTracker), Shutdown>, Error> {
     if let Some(firmware) = check_for_firmware_update()
@@ -204,7 +204,7 @@ async fn setup_or_init(
 
 #[instrument(skip_all)]
 pub async fn main(
-    server: &mut WebServer<UpgradableListener>,
+    server: &mut WebServer<WildcardListener>,
     config: &ServerConfig,
 ) -> Result<Result<(RpcContext, FullProgressTracker), Shutdown>, Error> {
     if &*PLATFORM == "raspberrypi" && tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() {
@@ -12,7 +12,7 @@ use tracing::instrument;
 use crate::context::config::ServerConfig;
 use crate::context::rpc::InitRpcContextPhases;
 use crate::context::{DiagnosticContext, InitContext, RpcContext};
-use crate::net::gateway::{BindTcp, SelfContainedNetworkInterfaceListener, UpgradableListener};
+use crate::net::gateway::WildcardListener;
 use crate::net::static_server::refresher;
 use crate::net::web_server::{Acceptor, WebServer};
 use crate::prelude::*;
@@ -23,7 +23,7 @@ use crate::util::logger::LOGGER;
 
 #[instrument(skip_all)]
 async fn inner_main(
-    server: &mut WebServer<UpgradableListener>,
+    server: &mut WebServer<WildcardListener>,
     config: &ServerConfig,
 ) -> Result<Option<Shutdown>, Error> {
     let rpc_ctx = if !tokio::fs::metadata("/run/startos/initialized")
@@ -70,7 +70,8 @@ async fn inner_main(
     };
 
     let (rpc_ctx, shutdown) = async {
-        crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.clone())).await?;
+        crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.hostname.clone()))
+            .await?;
 
         let mut shutdown_recv = rpc_ctx.shutdown.subscribe();
 
@@ -147,10 +148,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
         .build()
         .expect(&t!("bins.startd.failed-to-initialize-runtime"));
     let res = rt.block_on(async {
-        let mut server = WebServer::new(
-            Acceptor::bind_upgradable(SelfContainedNetworkInterfaceListener::bind(BindTcp, 80)),
-            refresher(),
-        );
+        let mut server = WebServer::new(Acceptor::new(WildcardListener::new(80)?), refresher());
         match inner_main(&mut server, &config).await {
             Ok(a) => {
                 server.shutdown().await;
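Per the PR head, `WildcardListener` binds `[::]:port` once instead of maintaining a listener per gateway interface. A std-only sketch of the underlying bind (the `WildcardListener` internals are not shown in this diff, so this is an assumption about the mechanism, not the real implementation):

```rust
use std::net::{Ipv6Addr, SocketAddr, TcpListener};

// Bind the IPv6 unspecified address; with IPV6_V6ONLY off (the Linux
// default) this single socket also accepts IPv4 connections.
fn bind_wildcard(port: u16) -> std::io::Result<TcpListener> {
    TcpListener::bind(SocketAddr::from((Ipv6Addr::UNSPECIFIED, port)))
}

fn main() {
    // Port 0 asks the OS for any free port, so the example runs unprivileged.
    let listener = bind_wildcard(0).unwrap();
    assert_ne!(listener.local_addr().unwrap().port(), 0);
}
```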
@@ -7,13 +7,13 @@ use clap::Parser;
 use futures::FutureExt;
 use rpc_toolkit::CliApp;
 use rust_i18n::t;
+use tokio::net::TcpListener;
 use tokio::signal::unix::signal;
 use tracing::instrument;
 use visit_rs::Visit;
 
 use crate::context::CliContext;
 use crate::context::config::ClientConfig;
-use crate::net::gateway::{Bind, BindTcp};
 use crate::net::tls::TlsListener;
 use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
 use crate::prelude::*;
@@ -57,7 +57,12 @@ async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
         if !a.contains_key(&key) {
             match (|| {
                 Ok::<_, Error>(TlsListener::new(
-                    BindTcp.bind(addr)?,
+                    TcpListener::from_std(
+                        mio::net::TcpListener::bind(addr)
+                            .with_kind(ErrorKind::Network)?
+                            .into(),
+                    )
+                    .with_kind(ErrorKind::Network)?,
                     TunnelCertHandler {
                         db: https_db.clone(),
                         crypto_provider: Arc::new(tokio_rustls::rustls::crypto::ring::default_provider()),
@@ -10,7 +10,6 @@ use std::time::Duration;
 use chrono::{TimeDelta, Utc};
 use imbl::OrdMap;
 use imbl_value::InternedString;
-use itertools::Itertools;
 use josekit::jwk::Jwk;
 use reqwest::{Client, Proxy};
 use rpc_toolkit::yajrc::RpcError;
@@ -25,7 +24,6 @@ use crate::account::AccountInfo;
 use crate::auth::Sessions;
 use crate::context::config::ServerConfig;
 use crate::db::model::Database;
-use crate::db::model::package::TaskSeverity;
 use crate::disk::OsPartitionInfo;
 use crate::disk::mount::filesystem::bind::Bind;
 use crate::disk::mount::filesystem::block_dev::BlockDev;
@@ -34,7 +32,7 @@ use crate::disk::mount::guard::MountGuard;
 use crate::init::{InitResult, check_time_is_synchronized};
 use crate::install::PKG_ARCHIVE_DIR;
 use crate::lxc::LxcManager;
-use crate::net::gateway::UpgradableListener;
+use crate::net::gateway::WildcardListener;
 use crate::net::net_controller::{NetController, NetService};
 use crate::net::socks::DEFAULT_SOCKS_LISTEN;
 use crate::net::utils::{find_eth_iface, find_wifi_iface};
@@ -44,7 +42,6 @@ use crate::prelude::*;
 use crate::progress::{FullProgressTracker, PhaseProgressTrackerHandle};
 use crate::rpc_continuations::{Guid, OpenAuthedContinuations, RpcContinuations};
 use crate::service::ServiceMap;
-use crate::service::action::update_tasks;
 use crate::service::effects::callbacks::ServiceCallbacks;
 use crate::service::effects::subcontainer::NVIDIA_OVERLAY_PATH;
 use crate::shutdown::Shutdown;
@@ -53,7 +50,7 @@ use crate::util::future::NonDetachingJoinHandle;
 use crate::util::io::{TmpDir, delete_file};
 use crate::util::lshw::LshwDevice;
 use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
-use crate::{ActionId, DATA_DIR, PLATFORM, PackageId};
+use crate::{DATA_DIR, PLATFORM, PackageId};
 
 pub struct RpcContextSeed {
     is_closed: AtomicBool,
@@ -114,7 +111,6 @@ pub struct CleanupInitPhases {
     cleanup_sessions: PhaseProgressTrackerHandle,
     init_services: PhaseProgressTrackerHandle,
     prune_s9pks: PhaseProgressTrackerHandle,
-    check_tasks: PhaseProgressTrackerHandle,
 }
 impl CleanupInitPhases {
     pub fn new(handle: &FullProgressTracker) -> Self {
@@ -122,7 +118,6 @@ impl CleanupInitPhases {
             cleanup_sessions: handle.add_phase("Cleaning up sessions".into(), Some(1)),
             init_services: handle.add_phase("Initializing services".into(), Some(10)),
             prune_s9pks: handle.add_phase("Pruning S9PKs".into(), Some(1)),
-            check_tasks: handle.add_phase("Checking action requests".into(), Some(1)),
         }
     }
 }
@@ -132,7 +127,7 @@ pub struct RpcContext(Arc<RpcContextSeed>);
 impl RpcContext {
     #[instrument(skip_all)]
     pub async fn init(
-        webserver: &WebServerAcceptorSetter<UpgradableListener>,
+        webserver: &WebServerAcceptorSetter<WildcardListener>,
         config: &ServerConfig,
         disk_guid: InternedString,
         init_result: Option<InitResult>,
@@ -165,16 +160,15 @@ impl RpcContext {
         {
             (net_ctrl, os_net_service)
         } else {
-            let net_ctrl =
-                Arc::new(NetController::init(db.clone(), &account.hostname, socks_proxy).await?);
-            webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
+            let net_ctrl = Arc::new(NetController::init(db.clone(), socks_proxy).await?);
+            webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
             let os_net_service = net_ctrl.os_bindings().await?;
             (net_ctrl, os_net_service)
         };
         init_net_ctrl.complete();
         tracing::info!("{}", t!("context.rpc.initialized-net-controller"));
 
-        if PLATFORM.ends_with("-nonfree") {
+        if PLATFORM.ends_with("-nvidia") {
             if let Err(e) = Command::new("nvidia-smi")
                 .invoke(ErrorKind::ParseSysInfo)
                 .await
@@ -412,7 +406,6 @@ impl RpcContext {
             mut cleanup_sessions,
             mut init_services,
             mut prune_s9pks,
-            mut check_tasks,
         }: CleanupInitPhases,
     ) -> Result<(), Error> {
         cleanup_sessions.start();
@@ -504,76 +497,6 @@ impl RpcContext {
         }
         prune_s9pks.complete();
 
-        check_tasks.start();
-        let mut action_input: OrdMap<PackageId, BTreeMap<ActionId, Value>> = OrdMap::new();
-        let tasks: BTreeSet<_> = peek
-            .as_public()
-            .as_package_data()
-            .as_entries()?
-            .into_iter()
-            .map(|(_, pde)| {
-                Ok(pde
-                    .as_tasks()
-                    .as_entries()?
-                    .into_iter()
-                    .map(|(_, r)| {
-                        let t = r.as_task();
-                        Ok::<_, Error>(if t.as_input().transpose_ref().is_some() {
-                            Some((t.as_package_id().de()?, t.as_action_id().de()?))
-                        } else {
-                            None
-                        })
-                    })
-                    .filter_map_ok(|a| a))
-            })
-            .flatten_ok()
-            .map(|a| a.and_then(|a| a))
-            .try_collect()?;
-        let procedure_id = Guid::new();
-        for (package_id, action_id) in tasks {
-            if let Some(service) = self.services.get(&package_id).await.as_ref() {
-                if let Some(input) = service
-                    .get_action_input(procedure_id.clone(), action_id.clone())
-                    .await
-                    .log_err()
-                    .flatten()
-                    .and_then(|i| i.value)
-                {
-                    action_input
-                        .entry(package_id)
-                        .or_default()
-                        .insert(action_id, input);
-                }
-            }
-        }
-
-        self.db
-            .mutate(|db| {
-                for (package_id, action_input) in &action_input {
-                    for (action_id, input) in action_input {
-                        for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
-                            pde.as_tasks_mut().mutate(|tasks| {
-                                Ok(update_tasks(tasks, package_id, action_id, input, false))
-                            })?;
-                        }
-                    }
-                }
-                for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
-                    if pde
-                        .as_tasks()
-                        .de()?
-                        .into_iter()
-                        .any(|(_, t)| t.active && t.task.severity == TaskSeverity::Critical)
-                    {
-                        pde.as_status_info_mut().stop()?;
-                    }
-                }
-                Ok(())
-            })
-            .await
-            .result?;
-        check_tasks.complete();
-
         Ok(())
     }
     pub async fn call_remote<RemoteContext>(
@@ -19,8 +19,8 @@ use crate::MAIN_DATA;
 use crate::context::RpcContext;
 use crate::context::config::ServerConfig;
 use crate::disk::mount::guard::{MountGuard, TmpMountGuard};
-use crate::hostname::Hostname;
-use crate::net::gateway::UpgradableListener;
+use crate::hostname::ServerHostname;
+use crate::net::gateway::WildcardListener;
 use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
 use crate::prelude::*;
 use crate::progress::FullProgressTracker;
@@ -45,13 +45,13 @@ lazy_static::lazy_static! {
 #[ts(export)]
 pub struct SetupResult {
     #[ts(type = "string")]
-    pub hostname: Hostname,
+    pub hostname: ServerHostname,
     pub root_ca: Pem<X509>,
     pub needs_restart: bool,
 }
 
 pub struct SetupContextSeed {
-    pub webserver: WebServerAcceptorSetter<UpgradableListener>,
+    pub webserver: WebServerAcceptorSetter<WildcardListener>,
     pub config: SyncMutex<ServerConfig>,
     pub disable_encryption: bool,
     pub progress: FullProgressTracker,
@@ -70,7 +70,7 @@ pub struct SetupContext(Arc<SetupContextSeed>);
 impl SetupContext {
     #[instrument(skip_all)]
     pub fn init(
-        webserver: &WebServer<UpgradableListener>,
+        webserver: &WebServer<WildcardListener>,
         config: ServerConfig,
     ) -> Result<Self, Error> {
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
@@ -8,6 +8,7 @@ use crate::prelude::*;
 use crate::{Error, PackageId};
 
 #[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct ControlParams {
@@ -45,7 +45,12 @@ impl Database {
                 .collect(),
             ssh_privkey: Pem(account.ssh_key.clone()),
             ssh_pubkeys: SshKeys::new(),
-            available_ports: AvailablePorts::new(),
+            available_ports: {
+                let mut ports = AvailablePorts::new();
+                ports.set_ssl(80, false);
+                ports.set_ssl(443, true);
+                ports
+            },
             sessions: Sessions::new(),
             notifications: Notifications::new(),
            cifs: CifsTargets::new(),
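The PR head describes `AvailablePorts` as an `IdPool` replaced by a `BTreeMap<u16, bool>` tracking SSL per allocated port, with a `try_alloc` that honors a preferred external port. A std-only sketch of that shape; the field names, fallback range, and `try_alloc` semantics here are assumptions for illustration, not the real StartOS API:

```rust
use std::collections::BTreeMap;

#[derive(Default)]
struct AvailablePorts {
    // port -> whether the binding on that port carries SSL
    allocated: BTreeMap<u16, bool>,
}

impl AvailablePorts {
    fn new() -> Self {
        Self::default()
    }
    // Mark a well-known port as taken, recording its SSL status
    // (mirrors the 80/443 seeding in the hunk above).
    fn set_ssl(&mut self, port: u16, ssl: bool) {
        self.allocated.insert(port, ssl);
    }
    // Grant the preferred port if free; otherwise fall back to the
    // first free port in an assumed dynamic range.
    fn try_alloc(&mut self, preferred: u16, ssl: bool) -> Option<u16> {
        if !self.allocated.contains_key(&preferred) {
            self.allocated.insert(preferred, ssl);
            return Some(preferred);
        }
        (1024..=u16::MAX)
            .find(|p| !self.allocated.contains_key(p))
            .map(|p| {
                self.allocated.insert(p, ssl);
                p
            })
    }
}

fn main() {
    let mut ports = AvailablePorts::new();
    ports.set_ssl(80, false);
    ports.set_ssl(443, true);
    assert_eq!(ports.try_alloc(443, true), Some(1024)); // 443 taken: falls back
    assert_eq!(ports.try_alloc(8443, true), Some(8443)); // preferred port granted
}
```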
@@ -18,7 +18,7 @@ use crate::s9pk::manifest::{LocaleString, Manifest};
 use crate::status::StatusInfo;
 use crate::util::DataUrl;
 use crate::util::serde::{Pem, is_partial_of};
-use crate::{ActionId, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
+use crate::{ActionId, GatewayId, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
 
 #[derive(Debug, Default, Deserialize, Serialize, TS)]
 #[ts(export)]
@@ -381,6 +381,10 @@ pub struct PackageDataEntry {
     pub hosts: Hosts,
     #[ts(type = "string[]")]
     pub store_exposed_dependents: Vec<JsonPointer>,
+    #[ts(type = "string | null")]
+    pub outbound_gateway: Option<GatewayId>,
+    #[serde(default)]
+    pub plugin: PackagePlugin,
 }
 impl AsRef<PackageDataEntry> for PackageDataEntry {
     fn as_ref(&self) -> &PackageDataEntry {
@@ -388,6 +392,21 @@ impl AsRef<PackageDataEntry> for PackageDataEntry {
     }
 }
 
+#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
+#[serde(rename_all = "camelCase")]
+#[model = "Model<Self>"]
+#[ts(export)]
+pub struct PackagePlugin {
+    pub url: Option<UrlPluginRegistration>,
+}
+
+#[derive(Debug, Clone, Deserialize, Serialize, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
+pub struct UrlPluginRegistration {
+    pub table_action: ActionId,
+}
+
 #[derive(Debug, Clone, Default, Deserialize, Serialize, TS)]
 #[ts(export)]
 pub struct CurrentDependencies(pub BTreeMap<PackageId, CurrentDependencyInfo>);
@@ -13,6 +13,7 @@ use openssl::hash::MessageDigest;
 use patch_db::{HasModel, Value};
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
+use url::Url;
 
 use crate::account::AccountInfo;
 use crate::db::DbAccessByKey;
@@ -20,8 +21,9 @@ use crate::db::model::Database;
 use crate::db::model::package::AllPackageData;
 use crate::net::acme::AcmeProvider;
 use crate::net::host::Host;
-use crate::net::host::binding::{AddSslOptions, BindInfo, BindOptions, NetInfo};
-use crate::net::utils::ipv6_is_local;
+use crate::net::host::binding::{
+    AddSslOptions, BindInfo, BindOptions, Bindings, DerivedAddressInfo, NetInfo,
+};
 use crate::net::vhost::AlpnInfo;
 use crate::prelude::*;
 use crate::progress::FullProgress;
@@ -57,13 +59,15 @@ impl Public {
             platform: get_platform(),
             id: account.server_id.clone(),
             version: Current::default().semver(),
-            hostname: account.hostname.no_dot_host_name(),
+            name: account.hostname.name.clone(),
+            hostname: (*account.hostname.hostname).clone(),
             last_backup: None,
             package_version_compat: Current::default().compat().clone(),
             post_init_migration_todos: BTreeMap::new(),
             network: NetworkInfo {
                 host: Host {
-                    bindings: [(
+                    bindings: Bindings(
+                        [(
                             80,
                             BindInfo {
                                 enabled: false,
@@ -82,17 +86,16 @@ impl Public {
                                 net: NetInfo {
                                     assigned_port: None,
                                     assigned_ssl_port: Some(443),
-                                    private_disabled: OrdSet::new(),
-                                    public_enabled: OrdSet::new(),
                                 },
+                                addresses: DerivedAddressInfo::default(),
                             },
                         )]
                         .into_iter()
                         .collect(),
-                    onions: account.tor_keys.iter().map(|k| k.onion_address()).collect(),
+                    ),
                     public_domains: BTreeMap::new(),
-                    private_domains: BTreeSet::new(),
-                    hostname_info: BTreeMap::new(),
+                    private_domains: BTreeMap::new(),
+                    port_forwards: BTreeSet::new(),
                 },
                 wifi: WifiInfo {
                     enabled: true,
@@ -117,6 +120,7 @@ impl Public {
|
|||||||
acme
|
acme
|
||||||
},
|
},
|
||||||
dns: Default::default(),
|
dns: Default::default(),
|
||||||
|
default_outbound: None,
|
||||||
},
|
},
|
||||||
status_info: ServerStatus {
|
status_info: ServerStatus {
|
||||||
backup_progress: None,
|
backup_progress: None,
|
||||||
@@ -141,6 +145,7 @@ impl Public {
|
|||||||
zram: true,
|
zram: true,
|
||||||
governor: None,
|
governor: None,
|
||||||
smtp: None,
|
smtp: None,
|
||||||
|
ifconfig_url: default_ifconfig_url(),
|
||||||
ram: 0,
|
ram: 0,
|
||||||
devices: Vec::new(),
|
devices: Vec::new(),
|
||||||
kiosk,
|
kiosk,
|
||||||
@@ -162,19 +167,21 @@ fn get_platform() -> InternedString {
|
|||||||
(&*PLATFORM).into()
|
(&*PLATFORM).into()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pub fn default_ifconfig_url() -> Url {
|
||||||
|
"https://ifconfig.co".parse().unwrap()
|
||||||
|
}
|
||||||
|
|
||||||
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
|
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
|
||||||
#[serde(rename_all = "camelCase")]
|
#[serde(rename_all = "camelCase")]
|
||||||
#[model = "Model<Self>"]
|
#[model = "Model<Self>"]
|
||||||
#[ts(export)]
|
#[ts(export)]
|
||||||
pub struct ServerInfo {
|
pub struct ServerInfo {
|
||||||
#[serde(default = "get_arch")]
|
#[serde(default = "get_arch")]
|
||||||
#[ts(type = "string")]
|
|
||||||
pub arch: InternedString,
|
pub arch: InternedString,
|
||||||
#[serde(default = "get_platform")]
|
#[serde(default = "get_platform")]
|
||||||
#[ts(type = "string")]
|
|
||||||
pub platform: InternedString,
|
pub platform: InternedString,
|
||||||
pub id: String,
|
pub id: String,
|
||||||
#[ts(type = "string")]
|
pub name: InternedString,
|
||||||
pub hostname: InternedString,
|
pub hostname: InternedString,
|
||||||
#[ts(type = "string")]
|
#[ts(type = "string")]
|
||||||
pub version: Version,
|
pub version: Version,
|
||||||
@@ -198,6 +205,9 @@ pub struct ServerInfo {
|
|||||||
pub zram: bool,
|
pub zram: bool,
|
||||||
pub governor: Option<Governor>,
|
pub governor: Option<Governor>,
|
||||||
pub smtp: Option<SmtpValue>,
|
pub smtp: Option<SmtpValue>,
|
||||||
|
#[serde(default = "default_ifconfig_url")]
|
||||||
|
#[ts(type = "string")]
|
||||||
|
pub ifconfig_url: Url,
|
||||||
#[ts(type = "number")]
|
#[ts(type = "number")]
|
||||||
pub ram: u64,
|
pub ram: u64,
|
||||||
pub devices: Vec<LshwDevice>,
|
pub devices: Vec<LshwDevice>,
|
||||||
@@ -220,6 +230,9 @@ pub struct NetworkInfo {
|
|||||||
pub acme: BTreeMap<AcmeProvider, AcmeSettings>,
|
pub acme: BTreeMap<AcmeProvider, AcmeSettings>,
|
||||||
#[serde(default)]
|
#[serde(default)]
|
||||||
pub dns: DnsSettings,
|
pub dns: DnsSettings,
|
||||||
|
#[serde(default)]
|
||||||
|
#[ts(type = "string | null")]
|
||||||
|
pub default_outbound: Option<GatewayId>,
|
||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
|
#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
|
||||||
@@ -239,41 +252,12 @@ pub struct DnsSettings {
|
|||||||
#[ts(export)]
|
#[ts(export)]
|
||||||
pub struct NetworkInterfaceInfo {
|
pub struct NetworkInterfaceInfo {
|
||||||
pub name: Option<InternedString>,
|
pub name: Option<InternedString>,
|
||||||
pub public: Option<bool>,
|
|
||||||
pub secure: Option<bool>,
|
pub secure: Option<bool>,
|
||||||
pub ip_info: Option<Arc<IpInfo>>,
|
pub ip_info: Option<Arc<IpInfo>>,
|
||||||
|
#[serde(default, rename = "type")]
|
||||||
|
pub gateway_type: Option<GatewayType>,
|
||||||
}
|
}
|
||||||
impl NetworkInterfaceInfo {
|
impl NetworkInterfaceInfo {
|
||||||
pub fn public(&self) -> bool {
|
|
||||||
self.public.unwrap_or_else(|| {
|
|
||||||
!self.ip_info.as_ref().map_or(true, |ip_info| {
|
|
||||||
let ip4s = ip_info
|
|
||||||
.subnets
|
|
||||||
.iter()
|
|
||||||
.filter_map(|ipnet| {
|
|
||||||
if let IpAddr::V4(ip4) = ipnet.addr() {
|
|
||||||
Some(ip4)
|
|
||||||
} else {
|
|
||||||
None
|
|
||||||
}
|
|
||||||
})
|
|
||||||
.collect::<BTreeSet<_>>();
|
|
||||||
if !ip4s.is_empty() {
|
|
||||||
return ip4s
|
|
||||||
.iter()
|
|
||||||
.all(|ip4| ip4.is_loopback() || ip4.is_private() || ip4.is_link_local());
|
|
||||||
}
|
|
||||||
ip_info.subnets.iter().all(|ipnet| {
|
|
||||||
if let IpAddr::V6(ip6) = ipnet.addr() {
|
|
||||||
ipv6_is_local(ip6)
|
|
||||||
} else {
|
|
||||||
true
|
|
||||||
}
|
|
||||||
})
|
|
||||||
})
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn secure(&self) -> bool {
|
pub fn secure(&self) -> bool {
|
||||||
self.secure.unwrap_or(false)
|
self.secure.unwrap_or(false)
|
||||||
}
|
}
|
||||||
@@ -310,6 +294,28 @@ pub enum NetworkInterfaceType {
|
|||||||
Loopback,
|
Loopback,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[derive(
|
||||||
|
Clone,
|
||||||
|
Copy,
|
||||||
|
Debug,
|
||||||
|
Default,
|
||||||
|
PartialEq,
|
||||||
|
Eq,
|
||||||
|
PartialOrd,
|
||||||
|
Ord,
|
||||||
|
Deserialize,
|
||||||
|
Serialize,
|
||||||
|
TS,
|
||||||
|
clap::ValueEnum,
|
||||||
|
)]
|
||||||
|
#[ts(export)]
|
||||||
|
#[serde(rename_all = "kebab-case")]
|
||||||
|
pub enum GatewayType {
|
||||||
|
#[default]
|
||||||
|
InboundOutbound,
|
||||||
|
OutboundOnly,
|
||||||
|
}
|
||||||
|
|
||||||
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
|
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
|
||||||
#[serde(rename_all = "camelCase")]
|
#[serde(rename_all = "camelCase")]
|
||||||
#[model = "Model<Self>"]
|
#[model = "Model<Self>"]
|
||||||
|
|||||||
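The `NetworkInterfaceInfo::public()` heuristic removed above (superseded by the user-controlled flag and per-interface `GatewayType`) treated an interface as private only when every address on it was loopback, RFC 1918 private, or link-local. A minimal standalone sketch of the IPv4 half of that check; the function name is illustrative, not part of the StartOS API:

```rust
use std::net::Ipv4Addr;

// An interface counts as private only if it has at least one IPv4 address
// and every one of them is loopback, RFC 1918 private, or link-local.
fn all_ipv4s_private(addrs: &[Ipv4Addr]) -> bool {
    !addrs.is_empty()
        && addrs
            .iter()
            .all(|ip| ip.is_loopback() || ip.is_private() || ip.is_link_local())
}

fn main() {
    // 192.168.0.0/16 is RFC 1918 private; 8.8.8.8 is globally routable.
    assert!(all_ipv4s_private(&["192.168.1.10".parse().unwrap()]));
    assert!(!all_ipv4s_private(&["8.8.8.8".parse().unwrap()]));
    println!("ok");
}
```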
@@ -45,7 +45,7 @@ impl TS for DepInfo {
         "DepInfo".into()
     }
     fn inline() -> String {
-        "{ description: string | null, optional: boolean } & MetadataSrc".into()
+        "{ description: LocaleString | null, optional: boolean } & MetadataSrc".into()
     }
     fn inline_flattened() -> String {
         Self::inline()
@@ -54,7 +54,8 @@ impl TS for DepInfo {
     where
         Self: 'static,
     {
-        v.visit::<MetadataSrc>()
+        v.visit::<MetadataSrc>();
+        v.visit::<LocaleString>();
     }
     fn output_path() -> Option<&'static std::path::Path> {
         Some(Path::new("DepInfo.ts"))
@@ -19,7 +19,7 @@ use super::mount::filesystem::block_dev::BlockDev;
 use super::mount::guard::TmpMountGuard;
 use crate::disk::OsPartitionInfo;
 use crate::disk::mount::guard::GenericMountGuard;
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
 use crate::prelude::*;
 use crate::util::Invoke;
 use crate::util::serde::IoFormat;
@@ -43,22 +43,28 @@ pub struct DiskInfo {
     pub guid: Option<InternedString>,
 }
 
-#[derive(Clone, Debug, Deserialize, Serialize)]
+#[derive(Clone, Debug, Deserialize, Serialize, ts_rs::TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct PartitionInfo {
     pub logicalname: PathBuf,
     pub label: Option<String>,
+    #[ts(type = "number")]
     pub capacity: u64,
+    #[ts(type = "number | null")]
     pub used: Option<u64>,
     pub start_os: BTreeMap<String, StartOsRecoveryInfo>,
     pub guid: Option<InternedString>,
 }
 
-#[derive(Clone, Debug, Default, Deserialize, Serialize)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize, ts_rs::TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct StartOsRecoveryInfo {
-    pub hostname: Hostname,
+    pub hostname: ServerHostname,
+    #[ts(type = "string")]
     pub version: exver::Version,
+    #[ts(type = "string")]
     pub timestamp: DateTime<Utc>,
     pub password_hash: Option<String>,
     pub wrapped_key: Option<String>,
@@ -3,6 +3,7 @@ use std::fmt::{Debug, Display};
 use axum::http::StatusCode;
 use axum::http::uri::InvalidUri;
 use color_eyre::eyre::eyre;
+use imbl_value::InternedString;
 use num_enum::TryFromPrimitive;
 use patch_db::Value;
 use rpc_toolkit::reqwest;
@@ -42,11 +43,11 @@ pub enum ErrorKind {
     ParseUrl = 19,
     DiskNotAvailable = 20,
     BlockDevice = 21,
-    InvalidOnionAddress = 22,
+    // InvalidOnionAddress = 22,
     Pack = 23,
     ValidateS9pk = 24,
     DiskCorrupted = 25, // Remove
-    Tor = 26,
+    // Tor = 26,
     ConfigGen = 27,
     ParseNumber = 28,
     Database = 29,
@@ -126,11 +127,11 @@ impl ErrorKind {
             ParseUrl => t!("error.parse-url"),
             DiskNotAvailable => t!("error.disk-not-available"),
             BlockDevice => t!("error.block-device"),
-            InvalidOnionAddress => t!("error.invalid-onion-address"),
+            // InvalidOnionAddress => t!("error.invalid-onion-address"),
             Pack => t!("error.pack"),
             ValidateS9pk => t!("error.validate-s9pk"),
             DiskCorrupted => t!("error.disk-corrupted"), // Remove
-            Tor => t!("error.tor"),
+            // Tor => t!("error.tor"),
             ConfigGen => t!("error.config-gen"),
             ParseNumber => t!("error.parse-number"),
             Database => t!("error.database"),
@@ -204,17 +205,12 @@ pub struct Error {
 
 impl Display for Error {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}: {:#}", &self.kind.as_str(), self.source)
+        write!(f, "{}: {}", &self.kind.as_str(), self.display_src())
     }
 }
 impl Debug for Error {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(
-            f,
-            "{}: {:?}",
-            &self.kind.as_str(),
-            self.debug.as_ref().unwrap_or(&self.source)
-        )
+        write!(f, "{}: {}", &self.kind.as_str(), self.display_dbg())
     }
 }
 impl Error {
@@ -235,8 +231,13 @@ impl Error {
     }
     pub fn clone_output(&self) -> Self {
         Error {
-            source: eyre!("{}", self.source),
-            debug: self.debug.as_ref().map(|e| eyre!("{e}")),
+            source: eyre!("{:#}", self.source),
+            debug: Some(
+                self.debug
+                    .as_ref()
+                    .map(|e| eyre!("{e}"))
+                    .unwrap_or_else(|| eyre!("{:?}", self.source)),
+            ),
             kind: self.kind,
             info: self.info.clone(),
             task: None,
@@ -257,6 +258,30 @@ impl Error {
         self.task.take();
         self
     }
+
+    pub fn display_src(&self) -> impl Display {
+        struct D<'a>(&'a Error);
+        impl<'a> Display for D<'a> {
+            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+                write!(f, "{:#}", self.0.source)
+            }
+        }
+        D(self)
+    }
+
+    pub fn display_dbg(&self) -> impl Display {
+        struct D<'a>(&'a Error);
+        impl<'a> Display for D<'a> {
+            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+                if let Some(debug) = &self.0.debug {
+                    write!(f, "{}", debug)
+                } else {
+                    write!(f, "{:?}", self.0.source)
+                }
+            }
+        }
+        D(self)
+    }
 }
 impl axum::response::IntoResponse for Error {
     fn into_response(self) -> axum::response::Response {
@@ -370,17 +395,6 @@ impl From<reqwest::Error> for Error {
         Error::new(e, kind)
     }
 }
-#[cfg(feature = "arti")]
-impl From<arti_client::Error> for Error {
-    fn from(e: arti_client::Error) -> Self {
-        Error::new(e, ErrorKind::Tor)
-    }
-}
-impl From<torut::control::ConnError> for Error {
-    fn from(e: torut::control::ConnError) -> Self {
-        Error::new(e, ErrorKind::Tor)
-    }
-}
 impl From<zbus::Error> for Error {
     fn from(e: zbus::Error) -> Self {
         Error::new(e, ErrorKind::DBus)
@@ -444,9 +458,11 @@ impl Debug for ErrorData {
 impl std::error::Error for ErrorData {}
 impl From<Error> for ErrorData {
     fn from(value: Error) -> Self {
+        let details = value.display_src().to_string();
+        let debug = value.display_dbg().to_string();
         Self {
-            details: value.to_string(),
-            debug: format!("{:?}", value),
+            details,
+            debug,
             info: value.info,
         }
     }
@@ -634,13 +650,10 @@ impl<T> ResultExt<T, Error> for Result<T, Error> {
     fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
         self.map_err(|e| {
             let (kind, ctx) = f(&e);
+            let ctx = InternedString::from_display(&ctx);
             let source = e.source;
-            let with_ctx = format!("{ctx}: {source}");
-            let source = source.wrap_err(with_ctx);
-            let debug = e.debug.map(|e| {
-                let with_ctx = format!("{ctx}: {e}");
-                e.wrap_err(with_ctx)
-            });
+            let source = source.wrap_err(ctx.clone());
+            let debug = e.debug.map(|e| e.wrap_err(ctx));
             Error {
                 kind,
                 source,
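The new `display_src`/`display_dbg` helpers use a borrowed-newtype pattern: the method returns `impl Display` by wrapping `&self` in a throwaway local struct, so formatting is deferred to the caller's `write!`/`to_string()` without allocating up front. A minimal sketch under assumed illustrative types (not the StartOS `Error`):

```rust
use std::fmt::{self, Display};

struct Report {
    message: String,
}

impl Report {
    // Returns a lazy formatter borrowing `self`; nothing is allocated
    // until the caller actually formats it.
    fn display_short(&self) -> impl Display + '_ {
        struct D<'a>(&'a Report);
        impl<'a> Display for D<'a> {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                write!(f, "report: {}", self.0.message)
            }
        }
        D(self)
    }
}

fn main() {
    let r = Report { message: "disk full".into() };
    assert_eq!(r.display_short().to_string(), "report: disk full");
    println!("ok");
}
```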
@@ -1,25 +1,58 @@
+use clap::Parser;
 use imbl_value::InternedString;
 use lazy_format::lazy_format;
-use rand::{Rng, rng};
+use serde::{Deserialize, Serialize};
 use tokio::process::Command;
 use tracing::instrument;
+use ts_rs::TS;
+
+use crate::context::RpcContext;
+use crate::db::model::public::ServerInfo;
+use crate::prelude::*;
 use crate::util::Invoke;
-use crate::{Error, ErrorKind};
-
-#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize)]
-pub struct Hostname(pub InternedString);
 
-lazy_static::lazy_static! {
-    static ref ADJECTIVES: Vec<String> = include_str!("./assets/adjectives.txt").lines().map(|x| x.to_string()).collect();
-    static ref NOUNS: Vec<String> = include_str!("./assets/nouns.txt").lines().map(|x| x.to_string()).collect();
-}
-impl AsRef<str> for Hostname {
-    fn as_ref(&self) -> &str {
+#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize, ts_rs::TS)]
+#[ts(type = "string")]
+pub struct ServerHostname(InternedString);
+impl std::ops::Deref for ServerHostname {
+    type Target = InternedString;
+    fn deref(&self) -> &Self::Target {
         &self.0
     }
 }
+impl AsRef<str> for ServerHostname {
+    fn as_ref(&self) -> &str {
+        &***self
+    }
+}
+
+impl ServerHostname {
+    fn validate(&self) -> Result<(), Error> {
+        if self.0.is_empty() {
+            return Err(Error::new(
+                eyre!("{}", t!("hostname.empty")),
+                ErrorKind::InvalidRequest,
+            ));
+        }
+        if let Some(c) = self
+            .0
+            .chars()
+            .find(|c| !(c.is_ascii_alphanumeric() || c == &'-') || c.is_ascii_uppercase())
+        {
+            return Err(Error::new(
+                eyre!("{}", t!("hostname.invalid-character", char = c)),
+                ErrorKind::InvalidRequest,
+            ));
+        }
+        Ok(())
+    }
+
+    pub fn new(hostname: InternedString) -> Result<Self, Error> {
+        let res = Self(hostname);
+        res.validate()?;
+        Ok(res)
+    }
+
-impl Hostname {
     pub fn lan_address(&self) -> InternedString {
         InternedString::from_display(&lazy_format!("https://{}.local", self.0))
     }
@@ -28,17 +61,135 @@ impl Hostname {
         InternedString::from_display(&lazy_format!("{}.local", self.0))
     }
 
-    pub fn no_dot_host_name(&self) -> InternedString {
-        self.0.clone()
+    pub fn load(server_info: &Model<ServerInfo>) -> Result<Self, Error> {
+        Ok(Self(server_info.as_hostname().de()?))
+    }
+
+    pub fn save(&self, server_info: &mut Model<ServerInfo>) -> Result<(), Error> {
+        server_info.as_hostname_mut().ser(&**self)
     }
 }
 
-pub fn generate_hostname() -> Hostname {
-    let mut rng = rng();
-    let adjective = &ADJECTIVES[rng.random_range(0..ADJECTIVES.len())];
-    let noun = &NOUNS[rng.random_range(0..NOUNS.len())];
-    Hostname(InternedString::from_display(&lazy_format!(
-        "{adjective}-{noun}"
+#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize, ts_rs::TS)]
+#[ts(type = "string")]
+pub struct ServerHostnameInfo {
+    pub name: InternedString,
+    pub hostname: ServerHostname,
+}
+
+lazy_static::lazy_static! {
+    static ref ADJECTIVES: Vec<String> = include_str!("./assets/adjectives.txt").lines().map(|x| x.to_string()).collect();
+    static ref NOUNS: Vec<String> = include_str!("./assets/nouns.txt").lines().map(|x| x.to_string()).collect();
+}
+impl AsRef<str> for ServerHostnameInfo {
+    fn as_ref(&self) -> &str {
+        &self.hostname
+    }
+}
+
+fn normalize(s: &str) -> InternedString {
+    let mut prev_was_dash = true;
+    let mut normalized = s
+        .chars()
+        .filter_map(|c| {
+            if c.is_alphanumeric() {
+                prev_was_dash = false;
+                Some(c.to_ascii_lowercase())
+            } else if (c == '-' || c.is_whitespace()) && !prev_was_dash {
+                prev_was_dash = true;
+                Some('-')
+            } else {
+                None
+            }
+        })
+        .collect::<String>();
+    while normalized.ends_with('-') {
+        normalized.pop();
+    }
+    if normalized.len() < 4 {
+        generate_hostname().0
+    } else {
+        normalized.into()
+    }
+}
+
+fn denormalize(s: &str) -> InternedString {
+    let mut cap = true;
+    s.chars()
+        .map(|c| {
+            if c == '-' {
+                cap = true;
+                ' '
+            } else if cap {
+                cap = false;
+                c.to_ascii_uppercase()
+            } else {
+                c
+            }
+        })
+        .collect::<String>()
+        .into()
+}
+
+impl ServerHostnameInfo {
+    pub fn new(
+        name: Option<InternedString>,
+        hostname: Option<InternedString>,
+    ) -> Result<Self, Error> {
+        Self::new_opt(name, hostname)
+            .map(|h| h.unwrap_or_else(|| ServerHostnameInfo::from_hostname(generate_hostname())))
+    }
+
+    pub fn new_opt(
+        name: Option<InternedString>,
+        hostname: Option<InternedString>,
+    ) -> Result<Option<Self>, Error> {
+        let name = name.filter(|n| !n.is_empty());
+        let hostname = hostname.filter(|h| !h.is_empty());
+        Ok(match (name, hostname) {
+            (Some(name), Some(hostname)) => Some(ServerHostnameInfo {
+                name,
+                hostname: ServerHostname::new(hostname)?,
+            }),
+            (Some(name), None) => Some(ServerHostnameInfo::from_name(name)),
+            (None, Some(hostname)) => Some(ServerHostnameInfo::from_hostname(ServerHostname::new(
+                hostname,
+            )?)),
+            (None, None) => None,
+        })
+    }
+
+    pub fn from_hostname(hostname: ServerHostname) -> Self {
+        Self {
+            name: denormalize(&**hostname),
+            hostname,
+        }
+    }
+
+    pub fn from_name(name: InternedString) -> Self {
+        Self {
+            hostname: ServerHostname(normalize(&*name)),
+            name,
+        }
+    }
+
+    pub fn load(server_info: &Model<ServerInfo>) -> Result<Self, Error> {
+        Ok(Self {
+            name: server_info.as_name().de()?,
+            hostname: ServerHostname::load(server_info)?,
+        })
+    }
+
+    pub fn save(&self, server_info: &mut Model<ServerInfo>) -> Result<(), Error> {
+        server_info.as_name_mut().ser(&self.name)?;
+        self.hostname.save(server_info)
+    }
+}
+
+pub fn generate_hostname() -> ServerHostname {
+    let num = rand::random::<u16>();
+    ServerHostname(InternedString::from_display(&lazy_format!(
+        "startos-{num:04x}"
     )))
 }
 
@@ -48,17 +199,17 @@ pub fn generate_id() -> String {
 }
 
 #[instrument(skip_all)]
-pub async fn get_current_hostname() -> Result<Hostname, Error> {
+pub async fn get_current_hostname() -> Result<InternedString, Error> {
     let out = Command::new("hostname")
         .invoke(ErrorKind::ParseSysInfo)
         .await?;
     let out_string = String::from_utf8(out)?;
-    Ok(Hostname(out_string.trim().into()))
+    Ok(out_string.trim().into())
 }
 
 #[instrument(skip_all)]
-pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
-    let hostname = &*hostname.0;
+pub async fn set_hostname(hostname: &ServerHostname) -> Result<(), Error> {
+    let hostname = &***hostname;
     Command::new("hostnamectl")
         .arg("--static")
         .arg("set-hostname")
@@ -77,7 +228,7 @@ pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
 }
 
 #[instrument(skip_all)]
-pub async fn sync_hostname(hostname: &Hostname) -> Result<(), Error> {
+pub async fn sync_hostname(hostname: &ServerHostname) -> Result<(), Error> {
     set_hostname(hostname).await?;
     Command::new("systemctl")
         .arg("restart")
@@ -86,3 +237,54 @@ pub async fn sync_hostname(hostname: &Hostname) -> Result<(), Error> {
         .await?;
     Ok(())
 }
+
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[serde(rename_all = "camelCase")]
+#[command(rename_all = "kebab-case")]
+#[ts(export)]
+pub struct SetServerHostnameParams {
+    name: Option<InternedString>,
+    hostname: Option<InternedString>,
+}
+
+pub async fn set_hostname_rpc(
+    ctx: RpcContext,
+    SetServerHostnameParams { name, hostname }: SetServerHostnameParams,
+) -> Result<(), Error> {
+    let name = name.filter(|n| !n.is_empty());
+    let hostname = hostname
+        .filter(|h| !h.is_empty())
+        .map(ServerHostname::new)
+        .transpose()?;
+    if name.is_none() && hostname.is_none() {
+        return Err(Error::new(
+            eyre!("{}", t!("hostname.must-provide-name-or-hostname")),
+            ErrorKind::InvalidRequest,
+        ));
+    };
+    let info = ctx
+        .db
+        .mutate(|db| {
+            let server_info = db.as_public_mut().as_server_info_mut();
+            if let Some(name) = name {
+                server_info.as_name_mut().ser(&name)?;
+            }
+            if let Some(hostname) = &hostname {
+                hostname.save(server_info)?;
+            }
+            ServerHostnameInfo::load(server_info)
+        })
+        .await
+        .result?;
+    ctx.account.mutate(|a| a.hostname = info.clone());
+    if let Some(h) = hostname {
+        sync_hostname(&h).await?;
+    }
+
+    Ok(())
+}
+
+#[test]
+fn test_generate_hostname() {
+    assert_eq!(dbg!(generate_hostname().0).len(), 12);
+}
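The `normalize`/`denormalize` pair above is meant to round-trip between a human-readable display name and a DNS-safe hostname. A self-contained sketch using plain `String` in place of `InternedString` and omitting the short-name fallback to `generate_hostname` (both simplifications are assumptions for the example):

```rust
// Lowercase, collapse runs of dashes/whitespace into single dashes,
// drop other characters, and trim trailing dashes.
fn normalize(s: &str) -> String {
    let mut prev_was_dash = true;
    let mut out: String = s
        .chars()
        .filter_map(|c| {
            if c.is_alphanumeric() {
                prev_was_dash = false;
                Some(c.to_ascii_lowercase())
            } else if (c == '-' || c.is_whitespace()) && !prev_was_dash {
                prev_was_dash = true;
                Some('-')
            } else {
                None
            }
        })
        .collect();
    while out.ends_with('-') {
        out.pop();
    }
    out
}

// Turn dashes back into spaces and capitalize the first letter of each word.
fn denormalize(s: &str) -> String {
    let mut cap = true;
    s.chars()
        .map(|c| {
            if c == '-' {
                cap = true;
                ' '
            } else if cap {
                cap = false;
                c.to_ascii_uppercase()
            } else {
                c
            }
        })
        .collect()
}

fn main() {
    assert_eq!(normalize("My Home Server"), "my-home-server");
    assert_eq!(denormalize("my-home-server"), "My Home Server");
    println!("ok");
}
```

Note the round trip is lossy in general (case and punctuation are discarded by `normalize`), which is why `ServerHostnameInfo` stores `name` and `hostname` as separate fields.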
@@ -18,9 +18,9 @@ use crate::context::{CliContext, InitContext, RpcContext};
 use crate::db::model::Database;
 use crate::db::model::public::ServerStatus;
 use crate::developer::OS_DEVELOPER_KEY_PATH;
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
 use crate::middleware::auth::local::LocalAuthContext;
-use crate::net::gateway::UpgradableListener;
+use crate::net::gateway::WildcardListener;
 use crate::net::net_controller::{NetController, NetService};
 use crate::net::socks::DEFAULT_SOCKS_LISTEN;
 use crate::net::utils::find_wifi_iface;
@@ -144,7 +144,7 @@ pub async fn run_script<P: AsRef<Path>>(path: P, mut progress: PhaseProgressTrac
 
 #[instrument(skip_all)]
 pub async fn init(
-    webserver: &WebServerAcceptorSetter<UpgradableListener>,
+    webserver: &WebServerAcceptorSetter<WildcardListener>,
     cfg: &ServerConfig,
     InitPhases {
         preinit,
@@ -191,15 +191,16 @@ pub async fn init(
         .arg(OS_DEVELOPER_KEY_PATH)
         .invoke(ErrorKind::Filesystem)
         .await?;
+    let hostname = ServerHostname::load(peek.as_public().as_server_info())?;
     crate::ssh::sync_keys(
-        &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
+        &hostname,
         &peek.as_private().as_ssh_privkey().de()?,
         &peek.as_private().as_ssh_pubkeys().de()?,
         SSH_DIR,
     )
     .await?;
     crate::ssh::sync_keys(
-        &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
+        &hostname,
         &peek.as_private().as_ssh_privkey().de()?,
         &Default::default(),
         "/root/.ssh",
@@ -211,14 +212,9 @@ pub async fn init(
 
     start_net.start();
     let net_ctrl = Arc::new(
-        NetController::init(
-            db.clone(),
-            &account.hostname,
-            cfg.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN),
-        )
-        .await?,
+        NetController::init(db.clone(), cfg.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN)).await?,
     );
-    webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
+    webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
     let os_net_service = net_ctrl.os_bindings().await?;
     start_net.complete();
 
@@ -177,6 +177,7 @@ pub async fn install(
|
|||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Deserialize, Serialize, TS)]
|
#[derive(Deserialize, Serialize, TS)]
|
||||||
|
#[ts(export)]
|
||||||
#[serde(rename_all = "camelCase")]
|
#[serde(rename_all = "camelCase")]
|
||||||
pub struct SideloadParams {
|
pub struct SideloadParams {
|
||||||
#[ts(skip)]
|
#[ts(skip)]
|
||||||
@@ -185,6 +186,7 @@ pub struct SideloadParams {
|
|||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Deserialize, Serialize, TS)]
|
#[derive(Deserialize, Serialize, TS)]
|
||||||
|
#[ts(export)]
|
||||||
#[serde(rename_all = "camelCase")]
|
#[serde(rename_all = "camelCase")]
|
||||||
pub struct SideloadResponse {
|
pub struct SideloadResponse {
|
||||||
pub upload: Guid,
|
pub upload: Guid,
|
||||||
@@ -284,6 +286,7 @@ pub async fn sideload(
|
|||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
|
#[derive(Debug, Clone, Deserialize, Serialize, Parser, TS)]
|
||||||
|
#[ts(export)]
|
||||||
#[serde(rename_all = "camelCase")]
|
#[serde(rename_all = "camelCase")]
|
||||||
#[command(rename_all = "kebab-case")]
|
#[command(rename_all = "kebab-case")]
|
||||||
pub struct CancelInstallParams {
|
pub struct CancelInstallParams {
|
||||||
@@ -521,6 +524,7 @@ pub async fn cli_install(
|
|||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Deserialize, Serialize, Parser, TS)]
|
#[derive(Deserialize, Serialize, Parser, TS)]
|
||||||
|
#[ts(export)]
|
||||||
#[serde(rename_all = "camelCase")]
|
#[serde(rename_all = "camelCase")]
|
||||||
#[command(rename_all = "kebab-case")]
|
#[command(rename_all = "kebab-case")]
|
||||||
pub struct UninstallParams {
|
pub struct UninstallParams {
|
||||||
|
@@ -25,6 +25,9 @@ pub fn platform_to_arch(platform: &str) -> &str {
     if let Some(arch) = platform.strip_suffix("-nonfree") {
         return arch;
     }
+    if let Some(arch) = platform.strip_suffix("-nvidia") {
+        return arch;
+    }
     match platform {
         "raspberrypi" | "rockchip64" => "aarch64",
         _ => platform,
@@ -268,6 +271,18 @@ pub fn server<C: Context>() -> ParentHandler<C> {
                 .with_about("about.display-time-uptime")
                 .with_call_remote::<CliContext>(),
         )
+        .subcommand(
+            "device-info",
+            ParentHandler::<C, WithIoFormat<Empty>>::new().root_handler(
+                from_fn_async(system::device_info)
+                    .with_display_serializable()
+                    .with_custom_display_fn(|handle, result| {
+                        system::display_device_info(handle.params, result)
+                    })
+                    .with_about("about.get-device-info")
+                    .with_call_remote::<CliContext>(),
+            ),
+        )
         .subcommand(
             "experimental",
             system::experimental::<C>().with_about("about.commands-experimental"),
@@ -377,6 +392,20 @@ pub fn server<C: Context>() -> ParentHandler<C> {
             "host",
             net::host::server_host_api::<C>().with_about("about.commands-host-system-ui"),
         )
+        .subcommand(
+            "set-hostname",
+            from_fn_async(hostname::set_hostname_rpc)
+                .no_display()
+                .with_about("about.set-hostname")
+                .with_call_remote::<CliContext>(),
+        )
+        .subcommand(
+            "set-ifconfig-url",
+            from_fn_async(system::set_ifconfig_url)
+                .no_display()
+                .with_about("about.set-ifconfig-url")
+                .with_call_remote::<CliContext>(),
+        )
         .subcommand(
             "set-keyboard",
             from_fn_async(system::set_keyboard)
@@ -548,4 +577,12 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             "host",
             net::host::host_api::<C>().with_about("about.manage-network-hosts-package"),
         )
+        .subcommand(
+            "set-outbound-gateway",
+            from_fn_async(net::gateway::set_outbound_gateway)
+                .with_metadata("sync_db", Value::Bool(true))
+                .no_display()
+                .with_about("about.set-outbound-gateway-package")
+                .with_call_remote::<CliContext>(),
+        )
 }
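The `-nvidia` branch added above mirrors the existing `-nonfree` case. The suffix-stripping dispatch can be sketched as a standalone function (body taken from the hunk; the surrounding crate is not needed):

```rust
// Map a StartOS platform string to its CPU architecture.
// Vendor suffixes ("-nonfree", "-nvidia") are stripped first; a few
// board names map to aarch64; anything else is already an arch name.
pub fn platform_to_arch(platform: &str) -> &str {
    if let Some(arch) = platform.strip_suffix("-nonfree") {
        return arch;
    }
    if let Some(arch) = platform.strip_suffix("-nvidia") {
        return arch;
    }
    match platform {
        "raspberrypi" | "rockchip64" => "aarch64",
        _ => platform,
    }
}
```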
@@ -24,6 +24,7 @@ use tokio::process::{Child, Command};
 use tokio_stream::wrappers::LinesStream;
 use tokio_tungstenite::tungstenite::Message;
 use tracing::instrument;
+use ts_rs::TS;
 
 use crate::PackageId;
 use crate::context::{CliContext, RpcContext};
@@ -109,23 +110,28 @@ async fn ws_handler(
     }
 }
 
-#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct LogResponse {
+    #[ts(as = "Vec<LogEntry>")]
     pub entries: Reversible<LogEntry>,
     start_cursor: Option<String>,
     end_cursor: Option<String>,
 }
-#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct LogFollowResponse {
     start_cursor: Option<String>,
     guid: Guid,
 }
 
-#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, TS)]
+#[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct LogEntry {
+    #[ts(type = "string")]
     timestamp: DateTime<Utc>,
     message: String,
     boot_id: String,
@@ -321,14 +327,17 @@ impl From<BootIdentifier> for String {
     }
 }
 
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export, concrete(Extra = Empty), bound = "")]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct LogsParams<Extra: FromArgMatches + Args = Empty> {
     #[command(flatten)]
     #[serde(flatten)]
+    #[ts(skip)]
     extra: Extra,
     #[arg(short = 'l', long = "limit", help = "help.arg.log-limit")]
+    #[ts(optional)]
     limit: Option<usize>,
     #[arg(
         short = 'c',
@@ -336,9 +345,11 @@ pub struct LogsParams<Extra: FromArgMatches + Args = Empty> {
         conflicts_with = "follow",
         help = "help.arg.log-cursor"
     )]
+    #[ts(optional)]
     cursor: Option<String>,
     #[arg(short = 'b', long = "boot", help = "help.arg.log-boot")]
     #[serde(default)]
+    #[ts(optional, type = "number | string")]
     boot: Option<BootIdentifier>,
     #[arg(
         short = 'B',
@@ -17,3 +17,6 @@ lxc.net.0.link = lxcbr0
 lxc.net.0.flags = up
 
 lxc.rootfs.options = rshared
+
+# Environment
+lxc.environment = LANG={lang}

@@ -174,10 +174,15 @@ impl LxcContainer {
         config: LxcConfig,
     ) -> Result<Self, Error> {
         let guid = new_guid();
+        let lang = std::env::var("LANG").unwrap_or_else(|_| "C.UTF-8".into());
         let machine_id = hex::encode(rand::random::<[u8; 16]>());
         let container_dir = Path::new(LXC_CONTAINER_DIR).join(&*guid);
         tokio::fs::create_dir_all(&container_dir).await?;
-        let config_str = format!(include_str!("./config.template"), guid = &*guid);
+        let config_str = format!(
+            include_str!("./config.template"),
+            guid = &*guid,
+            lang = &lang,
+        );
         tokio::fs::write(container_dir.join("config"), config_str).await?;
         let rootfs_dir = container_dir.join("rootfs");
         let rootfs = OverlayGuard::mount(
@@ -215,6 +220,13 @@ impl LxcContainer {
             100000,
         )
         .await?;
+        write_file_owned_atomic(
+            rootfs_dir.join("etc/default/locale"),
+            format!("LANG={lang}\n"),
+            100000,
+            100000,
+        )
+        .await?;
         Command::new("sed")
             .arg("-i")
             .arg(format!("s/LXC_NAME/{guid}/g"))
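The `format!`-based templating above works because `include_str!` expands to a string literal at compile time, so adding a `{lang}` placeholder to the template requires supplying the matching named argument at the call site. A minimal sketch with the template inlined as a literal (the template text here is illustrative, not the real `config.template`):

```rust
// `format!` requires its template to be a literal; `include_str!` satisfies
// that, and here an inline literal stands in for the template file.
// Every named placeholder ({guid}, {lang}) must have a matching argument.
fn render_config(guid: &str, lang: &str) -> String {
    format!(
        "lxc.uts.name = {guid}\nlxc.environment = LANG={lang}\n",
        guid = guid,
        lang = lang,
    )
}
```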
@@ -20,9 +20,6 @@ use crate::context::RpcContext;
 use crate::middleware::auth::DbContext;
 use crate::prelude::*;
 use crate::rpc_continuations::OpenAuthedContinuations;
-use crate::util::Invoke;
-use crate::util::io::{create_file_mod, read_file_to_string};
-use crate::util::serde::{BASE64, const_true};
 use crate::util::sync::SyncMutex;
 
 pub trait SessionAuthContext: DbContext {
@@ -71,7 +71,7 @@ impl SignatureAuthContext for RpcContext {
                 .as_network()
                 .as_host()
                 .as_private_domains()
-                .de()
+                .keys()
                 .map(|k| k.into_iter())
                 .transpose(),
         )
@@ -461,7 +461,8 @@ impl ValueParserFactory for AcmeProvider {
     }
 }
 
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct InitAcmeParams {
     #[arg(long, help = "help.arg.acme-provider")]
     pub provider: AcmeProvider,
@@ -486,7 +487,8 @@ pub async fn init(
     Ok(())
 }
 
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct RemoveAcmeParams {
     #[arg(long, help = "help.arg.acme-provider")]
     pub provider: AcmeProvider,
@@ -10,8 +10,9 @@ use color_eyre::eyre::eyre;
 use futures::{FutureExt, StreamExt, TryStreamExt};
 use hickory_server::authority::{AuthorityObject, Catalog, MessageResponseBuilder};
 use hickory_server::proto::op::{Header, ResponseCode};
-use hickory_server::proto::rr::{LowerName, Name, Record, RecordType};
-use hickory_server::resolver::config::{ResolverConfig, ResolverOpts};
+use hickory_server::proto::rr::{Name, Record, RecordType};
+use hickory_server::proto::xfer::Protocol;
+use hickory_server::resolver::config::{NameServerConfig, ResolverConfig, ResolverOpts};
 use hickory_server::server::{Request, RequestHandler, ResponseHandler, ResponseInfo};
 use hickory_server::store::forwarder::{ForwardAuthority, ForwardConfig};
 use hickory_server::{ServerFuture, resolver as hickory_resolver};
@@ -25,6 +26,7 @@ use serde::{Deserialize, Serialize};
 use tokio::net::{TcpListener, UdpSocket};
 use tokio::sync::RwLock;
 use tracing::instrument;
+use ts_rs::TS;
 
 use crate::context::{CliContext, RpcContext};
 use crate::db::model::Database;
@@ -93,7 +95,8 @@ pub fn dns_api<C: Context>() -> ParentHandler<C> {
     )
 }
 
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct QueryDnsParams {
     #[arg(help = "help.arg.fqdn")]
     pub fqdn: InternedString,
@@ -133,7 +136,8 @@ pub fn query_dns<C: Context>(
         .map_err(Error::from)
 }
 
-#[derive(Deserialize, Serialize, Parser)]
+#[derive(Deserialize, Serialize, Parser, TS)]
+#[ts(export)]
 pub struct SetStaticDnsParams {
     #[arg(help = "help.arg.dns-servers")]
     pub servers: Option<Vec<String>>,
@@ -203,6 +207,7 @@ pub async fn dump_table(
 struct ResolveMap {
     private_domains: BTreeMap<InternedString, Weak<()>>,
     services: BTreeMap<Option<PackageId>, BTreeMap<Ipv4Addr, Weak<()>>>,
+    challenges: BTreeMap<InternedString, (InternedString, Weak<()>)>,
 }
 
 pub struct DnsController {
@@ -237,22 +242,60 @@ impl Resolver {
         let mut prev = crate::util::serde::hash_serializable::<sha2::Sha256, _>(&(
             ResolverConfig::new(),
             ResolverOpts::default(),
+            Option::<std::collections::VecDeque<SocketAddr>>::None,
         ))
         .unwrap_or_default();
         loop {
-            if let Err(e) = async {
-                let mut stream = file_string_stream("/run/systemd/resolve/resolv.conf")
-                    .filter_map(|a| futures::future::ready(a.transpose()))
-                    .boxed();
-                while let Some(conf) = stream.try_next().await? {
+            let res: Result<(), Error> = async {
+                let mut file_stream =
+                    file_string_stream("/run/systemd/resolve/resolv.conf")
+                        .filter_map(|a| futures::future::ready(a.transpose()))
+                        .boxed();
+                let mut static_sub = db
+                    .subscribe(
+                        "/public/serverInfo/network/dns/staticServers"
+                            .parse()
+                            .unwrap(),
+                    )
+                    .await;
+                let mut last_config: Option<(ResolverConfig, ResolverOpts)> = None;
+                loop {
+                    let got_file = tokio::select! {
+                        res = file_stream.try_next() => {
+                            let conf = res?
+                                .ok_or_else(|| Error::new(
+                                    eyre!("resolv.conf stream ended"),
+                                    ErrorKind::Network,
+                                ))?;
                             let (config, mut opts) =
                                 hickory_resolver::system_conf::parse_resolv_conf(conf)
                                     .with_kind(ErrorKind::ParseSysInfo)?;
                             opts.timeout = Duration::from_secs(30);
+                            last_config = Some((config, opts));
+                            true
+                        }
+                        _ = static_sub.recv() => false,
+                    };
+                    let Some((ref config, ref opts)) = last_config else {
+                        continue;
+                    };
+                    let static_servers: Option<std::collections::VecDeque<SocketAddr>> = db
+                        .peek()
+                        .await
+                        .as_public()
+                        .as_server_info()
+                        .as_network()
+                        .as_dns()
+                        .as_static_servers()
+                        .de()?;
                     let hash = crate::util::serde::hash_serializable::<sha2::Sha256, _>(
-                        &(&config, &opts),
+                        &(config, opts, &static_servers),
                     )?;
-                    if hash != prev {
+                    if hash == prev {
+                        prev = hash;
+                        continue;
+                    }
+                    if got_file {
                         db.mutate(|db| {
                             db.as_public_mut()
                                 .as_server_info_mut()
@@ -271,26 +314,37 @@ impl Resolver {
                         })
                         .await
                         .result?;
-                    let auth: Vec<Arc<dyn AuthorityObject>> = vec![Arc::new(
-                        ForwardAuthority::builder_tokio(ForwardConfig {
-                            name_servers: from_value(Value::Array(
+                    }
+                    let forward_servers = if let Some(servers) = &static_servers {
+                        servers
+                            .iter()
+                            .flat_map(|addr| {
+                                [
+                                    NameServerConfig::new(*addr, Protocol::Udp),
+                                    NameServerConfig::new(*addr, Protocol::Tcp),
+                                ]
+                            })
+                            .map(|n| to_value(&n))
+                            .collect::<Result<_, Error>>()?
+                    } else {
                         config
                             .name_servers()
                             .into_iter()
                             .skip(4)
                             .map(to_value)
-                            .collect::<Result<_, Error>>()?,
-                        ))?,
-                        options: Some(opts),
+                            .collect::<Result<_, Error>>()?
+                    };
+                    let auth: Vec<Arc<dyn AuthorityObject>> = vec![Arc::new(
+                        ForwardAuthority::builder_tokio(ForwardConfig {
+                            name_servers: from_value(Value::Array(forward_servers))?,
+                            options: Some(opts.clone()),
                         })
                         .build()
                         .map_err(|e| Error::new(eyre!("{e}"), ErrorKind::Network))?,
                     )];
                     {
-                        let mut guard = tokio::time::timeout(
-                            Duration::from_secs(10),
-                            catalog.write(),
-                        )
+                        let mut guard =
+                            tokio::time::timeout(Duration::from_secs(10), catalog.write())
                                 .await
                                 .map_err(|_| {
                                     Error::new(
@@ -301,14 +355,11 @@ impl Resolver {
                         guard.upsert(Name::root().into(), auth);
                         drop(guard);
                     }
-                }
                     prev = hash;
                 }
-
-                Ok::<_, Error>(())
             }
-            .await
-            {
+            .await;
+            if let Err(e) = res {
                 tracing::error!("{e}");
                 tracing::debug!("{e:?}");
                 tokio::time::sleep(Duration::from_secs(1)).await;
@@ -399,7 +450,41 @@ impl RequestHandler for Resolver {
         match async {
             let req = request.request_info()?;
             let query = req.query;
-            if let Some(ip) = self.resolve(query.name().borrow(), req.src.ip()) {
+            let name = query.name();
+
+            if STARTOS.zone_of(name) && query.query_type() == RecordType::TXT {
+                let name_str =
+                    InternedString::intern(name.to_lowercase().to_utf8().trim_end_matches('.'));
+                if let Some(txt_value) = self.resolve.mutate(|r| {
+                    r.challenges.retain(|_, (_, weak)| weak.strong_count() > 0);
+                    r.challenges.remove(&name_str).map(|(val, _)| val)
+                }) {
+                    let mut header = Header::response_from_request(request.header());
+                    header.set_recursion_available(true);
+                    return response_handle
+                        .send_response(
+                            MessageResponseBuilder::from_message_request(&*request).build(
+                                header,
+                                &[Record::from_rdata(
+                                    query.name().to_owned().into(),
+                                    0,
+                                    hickory_server::proto::rr::RData::TXT(
+                                        hickory_server::proto::rr::rdata::TXT::new(vec![
+                                            txt_value.to_string(),
+                                        ]),
+                                    ),
+                                )],
+                                [],
+                                [],
+                                [],
+                            ),
+                        )
+                        .await
+                        .map(Some);
+                }
+            }
+
+            if let Some(ip) = self.resolve(name, req.src.ip()) {
                 match query.query_type() {
                     RecordType::A => {
                         let mut header = Header::response_from_request(request.header());
@@ -615,6 +700,34 @@ impl DnsController {
         }
     }
 
+    pub fn add_challenge(
+        &self,
+        domain: InternedString,
+        value: InternedString,
+    ) -> Result<Arc<()>, Error> {
+        if let Some(resolve) = Weak::upgrade(&self.resolve) {
+            resolve.mutate(|writable| {
+                let entry = writable
+                    .challenges
+                    .entry(domain)
+                    .or_insert_with(|| (value.clone(), Weak::new()));
+                let rc = if let Some(rc) = Weak::upgrade(&entry.1) {
+                    rc
+                } else {
+                    let new = Arc::new(());
+                    *entry = (value, Arc::downgrade(&new));
+                    new
+                };
+                Ok(rc)
+            })
+        } else {
+            Err(Error::new(
+                eyre!("{}", t!("net.dns.server-thread-exited")),
+                crate::ErrorKind::Network,
+            ))
+        }
+    }
+
     pub fn gc_private_domains<'a, BK: Ord + 'a>(
         &self,
         domains: impl IntoIterator<Item = &'a BK> + 'a,
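The `add_challenge` addition above hands the caller an `Arc<()>` guard and stores only a `Weak<()>` alongside the TXT value; once every clone of the guard is dropped, the challenge entry is garbage-collected on the next lookup. A minimal std-only sketch of that guard pattern (types simplified to `String`; this is not the real `ResolveMap`):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Weak};

// domain -> (TXT value, liveness guard). The entry is live only while
// some caller still holds the Arc returned from `add`.
struct Challenges(BTreeMap<String, (String, Weak<()>)>);

impl Challenges {
    fn add(&mut self, domain: String, value: String) -> Arc<()> {
        let guard = Arc::new(());
        self.0.insert(domain, (value, Arc::downgrade(&guard)));
        guard
    }

    fn take_live(&mut self, domain: &str) -> Option<String> {
        // GC entries whose guards were dropped, then pop the requested one.
        self.0.retain(|_, (_, weak)| weak.strong_count() > 0);
        self.0.remove(domain).map(|(val, _)| val)
    }
}
```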
@@ -4,44 +4,90 @@ use std::sync::{Arc, Weak};
 use std::time::Duration;
 
 use futures::channel::oneshot;
-use id_pool::IdPool;
 use iddqd::{IdOrdItem, IdOrdMap};
 use imbl::OrdMap;
+use ipnet::{IpNet, Ipv4Net};
+use rand::Rng;
 use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tokio::process::Command;
 use tokio::sync::mpsc;
 
-use crate::GatewayId;
 use crate::context::{CliContext, RpcContext};
 use crate::db::model::public::NetworkInterfaceInfo;
-use crate::net::gateway::{DynInterfaceFilter, InterfaceFilter};
 use crate::prelude::*;
 use crate::util::Invoke;
 use crate::util::future::NonDetachingJoinHandle;
 use crate::util::serde::{HandlerExtSerde, display_serializable};
 use crate::util::sync::Watch;
+use crate::{GatewayId, HOST_IP};
 
 pub const START9_BRIDGE_IFACE: &str = "lxcbr0";
-pub const FIRST_DYNAMIC_PRIVATE_PORT: u16 = 49152;
+const EPHEMERAL_PORT_START: u16 = 49152;
+// vhost.rs:89 — not allowed: <=1024, >=32768, 5355, 5432, 9050, 6010, 9051, 5353
+const RESTRICTED_PORTS: &[u16] = &[5353, 5355, 5432, 6010, 9050, 9051];
+
+fn is_restricted(port: u16) -> bool {
+    port <= 1024 || RESTRICTED_PORTS.contains(&port)
+}
+
+#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
+pub struct ForwardRequirements {
+    pub public_gateways: BTreeSet<GatewayId>,
+    pub private_ips: BTreeSet<IpAddr>,
+    pub secure: bool,
+}
+
+impl std::fmt::Display for ForwardRequirements {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(
+            f,
+            "ForwardRequirements {{ public: {:?}, private: {:?}, secure: {} }}",
+            self.public_gateways, self.private_ips, self.secure
+        )
+    }
+}
+
 #[derive(Debug, Deserialize, Serialize)]
-pub struct AvailablePorts(IdPool);
+pub struct AvailablePorts(BTreeMap<u16, bool>);
 impl AvailablePorts {
     pub fn new() -> Self {
-        Self(IdPool::new_ranged(FIRST_DYNAMIC_PRIVATE_PORT..u16::MAX))
+        Self(BTreeMap::new())
     }
-    pub fn alloc(&mut self) -> Result<u16, Error> {
-        self.0.request_id().ok_or_else(|| {
-            Error::new(
+    pub fn alloc(&mut self, ssl: bool) -> Result<u16, Error> {
+        let mut rng = rand::rng();
+        for _ in 0..1000 {
+            let port = rng.random_range(EPHEMERAL_PORT_START..u16::MAX);
+            if !self.0.contains_key(&port) {
+                self.0.insert(port, ssl);
+                return Ok(port);
+            }
+        }
+        Err(Error::new(
             eyre!("{}", t!("net.forward.no-dynamic-ports-available")),
             ErrorKind::Network,
-            )
-        })
+        ))
+    }
+
+    /// Try to allocate a specific port. Returns Some(port) if available, None if taken/restricted.
+    pub fn try_alloc(&mut self, port: u16, ssl: bool) -> Option<u16> {
+        if is_restricted(port) || self.0.contains_key(&port) {
+            return None;
+        }
+        self.0.insert(port, ssl);
+        Some(port)
+    }
+
+    pub fn set_ssl(&mut self, port: u16, ssl: bool) {
+        self.0.insert(port, ssl);
+    }
+
+    /// Returns whether a given allocated port is SSL.
+    pub fn is_ssl(&self, port: u16) -> bool {
+        self.0.get(&port).copied().unwrap_or(false)
     }
     pub fn free(&mut self, ports: impl IntoIterator<Item = u16>) {
         for port in ports {
-            self.0.return_id(port).unwrap_or_default();
+            self.0.remove(&port);
         }
     }
 }
|
|||||||
}
|
}
|
||||||
|
|
||||||
let mut table = Table::new();
|
let mut table = Table::new();
|
||||||
table.add_row(row![bc => "FROM", "TO", "FILTER"]);
|
table.add_row(row![bc => "FROM", "TO", "REQS"]);
|
||||||
|
|
||||||
for (external, target) in res.0 {
|
for (external, target) in res.0 {
|
||||||
table.add_row(row![external, target.target, target.filter]);
|
table.add_row(row![external, target.target, target.reqs]);
|
||||||
}
|
}
|
||||||
|
|
||||||
table.print_tty(false)?;
|
table.print_tty(false)?;
|
||||||
@@ -79,6 +125,7 @@ struct ForwardMapping {
|
|||||||
source: SocketAddrV4,
|
source: SocketAddrV4,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
|
src_filter: Option<IpNet>,
|
||||||
rc: Weak<()>,
|
rc: Weak<()>,
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -93,9 +140,10 @@ impl PortForwardState {
|
|||||||
source: SocketAddrV4,
|
source: SocketAddrV4,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
|
src_filter: Option<IpNet>,
|
||||||
) -> Result<Arc<()>, Error> {
|
) -> Result<Arc<()>, Error> {
|
||||||
if let Some(existing) = self.mappings.get_mut(&source) {
|
if let Some(existing) = self.mappings.get_mut(&source) {
|
||||||
if existing.target == target {
|
if existing.target == target && existing.src_filter == src_filter {
|
||||||
if let Some(existing_rc) = existing.rc.upgrade() {
|
if let Some(existing_rc) = existing.rc.upgrade() {
|
||||||
return Ok(existing_rc);
|
return Ok(existing_rc);
|
||||||
} else {
|
} else {
|
||||||
@@ -104,21 +152,28 @@ impl PortForwardState {
|
|||||||
return Ok(rc);
|
return Ok(rc);
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
// Different target, need to remove old and add new
|
// Different target or src_filter, need to remove old and add new
|
||||||
if let Some(mapping) = self.mappings.remove(&source) {
|
if let Some(mapping) = self.mappings.remove(&source) {
|
||||||
unforward(mapping.source, mapping.target, mapping.target_prefix).await?;
|
unforward(
|
||||||
|
mapping.source,
|
||||||
|
mapping.target,
|
||||||
|
mapping.target_prefix,
|
||||||
|
mapping.src_filter.as_ref(),
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
let rc = Arc::new(());
|
let rc = Arc::new(());
|
||||||
forward(source, target, target_prefix).await?;
|
forward(source, target, target_prefix, src_filter.as_ref()).await?;
|
||||||
self.mappings.insert(
|
self.mappings.insert(
|
||||||
source,
|
source,
|
||||||
ForwardMapping {
|
ForwardMapping {
|
||||||
source,
|
source,
|
||||||
target,
|
target,
|
||||||
target_prefix,
|
target_prefix,
|
||||||
|
src_filter,
|
||||||
rc: Arc::downgrade(&rc),
|
rc: Arc::downgrade(&rc),
|
||||||
},
|
},
|
||||||
);
|
);
|
||||||
@@ -136,7 +191,13 @@ impl PortForwardState {
|
|||||||
|
|
||||||
for source in to_remove {
|
for source in to_remove {
|
||||||
if let Some(mapping) = self.mappings.remove(&source) {
|
if let Some(mapping) = self.mappings.remove(&source) {
|
||||||
unforward(mapping.source, mapping.target, mapping.target_prefix).await?;
|
unforward(
|
||||||
|
mapping.source,
|
||||||
|
mapping.target,
|
||||||
|
mapping.target_prefix,
|
||||||
|
mapping.src_filter.as_ref(),
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
Ok(())
|
Ok(())
|
||||||
@@ -157,7 +218,12 @@ impl Drop for PortForwardState {
|
|||||||
let mappings = std::mem::take(&mut self.mappings);
|
let mappings = std::mem::take(&mut self.mappings);
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
for (_, mapping) in mappings {
|
for (_, mapping) in mappings {
|
||||||
unforward(mapping.source, mapping.target, mapping.target_prefix)
|
unforward(
|
||||||
|
mapping.source,
|
||||||
|
mapping.target,
|
||||||
|
mapping.target_prefix,
|
||||||
|
mapping.src_filter.as_ref(),
|
||||||
|
)
|
||||||
.await
|
.await
|
||||||
.log_err();
|
.log_err();
|
||||||
}
|
}
|
||||||
@@ -171,6 +237,7 @@ enum PortForwardCommand {
|
|||||||
source: SocketAddrV4,
|
source: SocketAddrV4,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
|
src_filter: Option<IpNet>,
|
||||||
respond: oneshot::Sender<Result<Arc<()>, Error>>,
|
respond: oneshot::Sender<Result<Arc<()>, Error>>,
|
||||||
},
|
},
|
||||||
Gc {
|
Gc {
|
||||||
@@ -191,7 +258,13 @@ pub async fn add_iptables_rule(nat: bool, undo: bool, args: &[&str]) -> Result<(
|
|||||||
if nat {
|
if nat {
|
||||||
cmd.arg("-t").arg("nat");
|
cmd.arg("-t").arg("nat");
|
||||||
}
|
}
|
||||||
if undo != !cmd.arg("-C").args(args).status().await?.success() {
|
let exists = cmd
|
||||||
|
.arg("-C")
|
||||||
|
.args(args)
|
||||||
|
.invoke(ErrorKind::Network)
|
||||||
|
.await
|
||||||
|
.is_ok();
|
||||||
|
if undo != !exists {
|
||||||
let mut cmd = Command::new("iptables");
|
let mut cmd = Command::new("iptables");
|
||||||
if nat {
|
if nat {
|
||||||
cmd.arg("-t").arg("nat");
|
cmd.arg("-t").arg("nat");
|
||||||
@@ -257,9 +330,12 @@ impl PortForwardController {
|
|||||||
source,
|
source,
|
||||||
target,
|
target,
|
||||||
target_prefix,
|
target_prefix,
|
||||||
|
src_filter,
|
||||||
respond,
|
respond,
|
||||||
} => {
|
} => {
|
||||||
let result = state.add_forward(source, target, target_prefix).await;
|
let result = state
|
||||||
|
.add_forward(source, target, target_prefix, src_filter)
|
||||||
|
.await;
|
||||||
respond.send(result).ok();
|
respond.send(result).ok();
|
||||||
}
|
}
|
||||||
PortForwardCommand::Gc { respond } => {
|
PortForwardCommand::Gc { respond } => {
|
||||||
@@ -284,6 +360,7 @@ impl PortForwardController {
|
|||||||
source: SocketAddrV4,
|
source: SocketAddrV4,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
|
src_filter: Option<IpNet>,
|
||||||
) -> Result<Arc<()>, Error> {
|
) -> Result<Arc<()>, Error> {
|
||||||
let (send, recv) = oneshot::channel();
|
let (send, recv) = oneshot::channel();
|
||||||
self.req
|
self.req
|
||||||
@@ -291,6 +368,7 @@ impl PortForwardController {
|
|||||||
source,
|
source,
|
||||||
target,
|
target,
|
||||||
target_prefix,
|
target_prefix,
|
||||||
|
src_filter,
|
||||||
respond: send,
|
respond: send,
|
||||||
})
|
})
|
||||||
.map_err(err_has_exited)?;
|
.map_err(err_has_exited)?;
|
||||||
@@ -321,14 +399,14 @@ struct InterfaceForwardRequest {
|
|||||||
external: u16,
|
external: u16,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
filter: DynInterfaceFilter,
|
reqs: ForwardRequirements,
|
||||||
rc: Arc<()>,
|
rc: Arc<()>,
|
||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Clone)]
|
#[derive(Clone)]
|
||||||
struct InterfaceForwardEntry {
|
struct InterfaceForwardEntry {
|
||||||
external: u16,
|
external: u16,
|
||||||
filter: BTreeMap<DynInterfaceFilter, (SocketAddrV4, u8, Weak<()>)>,
|
targets: BTreeMap<ForwardRequirements, (SocketAddrV4, u8, Weak<()>)>,
|
||||||
// Maps source SocketAddr -> strong reference for the forward created in PortForwardController
|
// Maps source SocketAddr -> strong reference for the forward created in PortForwardController
|
||||||
forwards: BTreeMap<SocketAddrV4, Arc<()>>,
|
forwards: BTreeMap<SocketAddrV4, Arc<()>>,
|
||||||
}
|
}
|
||||||
@@ -346,7 +424,7 @@ impl InterfaceForwardEntry {
|
|||||||
fn new(external: u16) -> Self {
|
fn new(external: u16) -> Self {
|
||||||
Self {
|
Self {
|
||||||
external,
|
external,
|
||||||
filter: BTreeMap::new(),
|
targets: BTreeMap::new(),
|
||||||
forwards: BTreeMap::new(),
|
forwards: BTreeMap::new(),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -358,28 +436,37 @@ impl InterfaceForwardEntry {
|
|||||||
) -> Result<(), Error> {
|
) -> Result<(), Error> {
|
||||||
let mut keep = BTreeSet::<SocketAddrV4>::new();
|
let mut keep = BTreeSet::<SocketAddrV4>::new();
|
||||||
|
|
||||||
for (iface, info) in ip_info.iter() {
|
for (gw_id, info) in ip_info.iter() {
|
||||||
if let Some((target, target_prefix)) = self
|
|
||||||
.filter
|
|
||||||
.iter()
|
|
||||||
.filter(|(_, (_, _, rc))| rc.strong_count() > 0)
|
|
||||||
.find(|(filter, _)| filter.filter(iface, info))
|
|
||||||
.map(|(_, (target, target_prefix, _))| (*target, *target_prefix))
|
|
||||||
{
|
|
||||||
if let Some(ip_info) = &info.ip_info {
|
if let Some(ip_info) = &info.ip_info {
|
||||||
for addr in ip_info.subnets.iter().filter_map(|net| {
|
for subnet in ip_info.subnets.iter() {
|
||||||
if let IpAddr::V4(ip) = net.addr() {
|
if let IpAddr::V4(ip) = subnet.addr() {
|
||||||
Some(SocketAddrV4::new(ip, self.external))
|
let addr = SocketAddrV4::new(ip, self.external);
|
||||||
} else {
|
if keep.contains(&addr) {
|
||||||
None
|
continue;
|
||||||
}
|
}
|
||||||
}) {
|
|
||||||
|
for (reqs, (target, target_prefix, rc)) in self.targets.iter() {
|
||||||
|
if rc.strong_count() == 0 {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
if !reqs.secure && !info.secure() {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
let src_filter = if reqs.public_gateways.contains(gw_id) {
|
||||||
|
None
|
||||||
|
} else if reqs.private_ips.contains(&IpAddr::V4(ip)) {
|
||||||
|
Some(subnet.trunc())
|
||||||
|
} else {
|
||||||
|
continue;
|
||||||
|
};
|
||||||
|
|
||||||
keep.insert(addr);
|
keep.insert(addr);
|
||||||
if !self.forwards.contains_key(&addr) {
|
let fwd_rc = port_forward
|
||||||
let rc = port_forward
|
.add_forward(addr, *target, *target_prefix, src_filter)
|
||||||
.add_forward(addr, target, target_prefix)
|
|
||||||
.await?;
|
.await?;
|
||||||
self.forwards.insert(addr, rc);
|
self.forwards.insert(addr, fwd_rc);
|
||||||
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -398,7 +485,7 @@ impl InterfaceForwardEntry {
|
|||||||
external,
|
external,
|
||||||
target,
|
target,
|
||||||
target_prefix,
|
target_prefix,
|
||||||
filter,
|
reqs,
|
||||||
mut rc,
|
mut rc,
|
||||||
}: InterfaceForwardRequest,
|
}: InterfaceForwardRequest,
|
||||||
ip_info: &OrdMap<GatewayId, NetworkInterfaceInfo>,
|
ip_info: &OrdMap<GatewayId, NetworkInterfaceInfo>,
|
||||||
@@ -412,8 +499,8 @@ impl InterfaceForwardEntry {
|
|||||||
}
|
}
|
||||||
|
|
||||||
let entry = self
|
let entry = self
|
||||||
.filter
|
.targets
|
||||||
.entry(filter)
|
.entry(reqs)
|
||||||
.or_insert_with(|| (target, target_prefix, Arc::downgrade(&rc)));
|
.or_insert_with(|| (target, target_prefix, Arc::downgrade(&rc)));
|
||||||
if entry.0 != target {
|
if entry.0 != target {
|
||||||
entry.0 = target;
|
entry.0 = target;
|
||||||
@@ -436,7 +523,7 @@ impl InterfaceForwardEntry {
|
|||||||
ip_info: &OrdMap<GatewayId, NetworkInterfaceInfo>,
|
ip_info: &OrdMap<GatewayId, NetworkInterfaceInfo>,
|
||||||
port_forward: &PortForwardController,
|
port_forward: &PortForwardController,
|
||||||
) -> Result<(), Error> {
|
) -> Result<(), Error> {
|
||||||
self.filter.retain(|_, (_, _, rc)| rc.strong_count() > 0);
|
self.targets.retain(|_, (_, _, rc)| rc.strong_count() > 0);
|
||||||
|
|
||||||
self.update(ip_info, port_forward).await
|
self.update(ip_info, port_forward).await
|
||||||
}
|
}
|
||||||
@@ -495,7 +582,7 @@ pub struct ForwardTable(pub BTreeMap<u16, ForwardTarget>);
|
|||||||
pub struct ForwardTarget {
|
pub struct ForwardTarget {
|
||||||
pub target: SocketAddrV4,
|
pub target: SocketAddrV4,
|
||||||
pub target_prefix: u8,
|
pub target_prefix: u8,
|
||||||
pub filter: String,
|
pub reqs: String,
|
||||||
}
|
}
|
||||||
|
|
||||||
impl From<&InterfaceForwardState> for ForwardTable {
|
impl From<&InterfaceForwardState> for ForwardTable {
|
||||||
@@ -506,16 +593,16 @@ impl From<&InterfaceForwardState> for ForwardTable {
|
|||||||
.iter()
|
.iter()
|
||||||
.flat_map(|entry| {
|
.flat_map(|entry| {
|
||||||
entry
|
entry
|
||||||
.filter
|
.targets
|
||||||
.iter()
|
.iter()
|
||||||
.filter(|(_, (_, _, rc))| rc.strong_count() > 0)
|
.filter(|(_, (_, _, rc))| rc.strong_count() > 0)
|
||||||
.map(|(filter, (target, target_prefix, _))| {
|
.map(|(reqs, (target, target_prefix, _))| {
|
||||||
(
|
(
|
||||||
entry.external,
|
entry.external,
|
||||||
ForwardTarget {
|
ForwardTarget {
|
||||||
target: *target,
|
target: *target,
|
||||||
target_prefix: *target_prefix,
|
target_prefix: *target_prefix,
|
||||||
filter: format!("{:#?}", filter),
|
reqs: format!("{reqs}"),
|
||||||
},
|
},
|
||||||
)
|
)
|
||||||
})
|
})
|
||||||
@@ -534,16 +621,6 @@ enum InterfaceForwardCommand {
|
|||||||
DumpTable(oneshot::Sender<ForwardTable>),
|
DumpTable(oneshot::Sender<ForwardTable>),
|
||||||
}
|
}
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test() {
|
|
||||||
use crate::net::gateway::SecureFilter;
|
|
||||||
|
|
||||||
assert_ne!(
|
|
||||||
false.into_dyn(),
|
|
||||||
SecureFilter { secure: false }.into_dyn().into_dyn()
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
pub struct InterfacePortForwardController {
|
pub struct InterfacePortForwardController {
|
||||||
req: mpsc::UnboundedSender<InterfaceForwardCommand>,
|
req: mpsc::UnboundedSender<InterfaceForwardCommand>,
|
||||||
_thread: NonDetachingJoinHandle<()>,
|
_thread: NonDetachingJoinHandle<()>,
|
||||||
@@ -593,7 +670,7 @@ impl InterfacePortForwardController {
|
|||||||
pub async fn add(
|
pub async fn add(
|
||||||
&self,
|
&self,
|
||||||
external: u16,
|
external: u16,
|
||||||
filter: DynInterfaceFilter,
|
reqs: ForwardRequirements,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
) -> Result<Arc<()>, Error> {
|
) -> Result<Arc<()>, Error> {
|
||||||
@@ -605,7 +682,7 @@ impl InterfacePortForwardController {
|
|||||||
external,
|
external,
|
||||||
target,
|
target,
|
||||||
target_prefix,
|
target_prefix,
|
||||||
filter,
|
reqs,
|
||||||
rc,
|
rc,
|
||||||
},
|
},
|
||||||
send,
|
send,
|
||||||
@@ -637,15 +714,25 @@ async fn forward(
|
|||||||
source: SocketAddrV4,
|
source: SocketAddrV4,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
|
src_filter: Option<&IpNet>,
|
||||||
) -> Result<(), Error> {
|
) -> Result<(), Error> {
|
||||||
Command::new("/usr/lib/startos/scripts/forward-port")
|
let mut cmd = Command::new("/usr/lib/startos/scripts/forward-port");
|
||||||
.env("sip", source.ip().to_string())
|
cmd.env("sip", source.ip().to_string())
|
||||||
.env("dip", target.ip().to_string())
|
.env("dip", target.ip().to_string())
|
||||||
.env("dprefix", target_prefix.to_string())
|
.env("dprefix", target_prefix.to_string())
|
||||||
.env("sport", source.port().to_string())
|
.env("sport", source.port().to_string())
|
||||||
.env("dport", target.port().to_string())
|
.env("dport", target.port().to_string())
|
||||||
.invoke(ErrorKind::Network)
|
.env(
|
||||||
.await?;
|
"bridge_subnet",
|
||||||
|
Ipv4Net::new(HOST_IP.into(), 24)
|
||||||
|
.with_kind(ErrorKind::ParseNetAddress)?
|
||||||
|
.trunc()
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
if let Some(subnet) = src_filter {
|
||||||
|
cmd.env("src_subnet", subnet.to_string());
|
||||||
|
}
|
||||||
|
cmd.invoke(ErrorKind::Network).await?;
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -653,15 +740,18 @@ async fn unforward(
|
|||||||
source: SocketAddrV4,
|
source: SocketAddrV4,
|
||||||
target: SocketAddrV4,
|
target: SocketAddrV4,
|
||||||
target_prefix: u8,
|
target_prefix: u8,
|
||||||
|
src_filter: Option<&IpNet>,
|
||||||
) -> Result<(), Error> {
|
) -> Result<(), Error> {
|
||||||
Command::new("/usr/lib/startos/scripts/forward-port")
|
let mut cmd = Command::new("/usr/lib/startos/scripts/forward-port");
|
||||||
.env("UNDO", "1")
|
cmd.env("UNDO", "1")
|
||||||
.env("sip", source.ip().to_string())
|
.env("sip", source.ip().to_string())
|
||||||
.env("dip", target.ip().to_string())
|
.env("dip", target.ip().to_string())
|
||||||
.env("dprefix", target_prefix.to_string())
|
.env("dprefix", target_prefix.to_string())
|
||||||
.env("sport", source.port().to_string())
|
.env("sport", source.port().to_string())
|
||||||
.env("dport", target.port().to_string())
|
.env("dport", target.port().to_string());
|
||||||
.invoke(ErrorKind::Network)
|
if let Some(subnet) = src_filter {
|
||||||
.await?;
|
cmd.env("src_subnet", subnet.to_string());
|
||||||
|
}
|
||||||
|
cmd.invoke(ErrorKind::Network).await?;
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|||||||